WorldWideScience

Sample records for sequential blocking model

  1. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  2. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  3. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  4. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  5. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that policy for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
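
    A toy illustration of the probabilistic multivariate idea (a sketch, not the authors' method): parameters of a small made-up two-state MDP are drawn repeatedly from assumed Beta distributions, the MDP is re-solved each time by value iteration, and the share of draws in which the base-case policy remains optimal serves as a rough analogue of the confidence summarized by a policy acceptability curve. All states, rewards, and priors below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def build_mdp(p_hh_wait, p_hh_treat, p_rec_wait, p_rec_treat):
        """Toy 2-state ('healthy', 'sick'), 2-action ('wait', 'treat') MDP."""
        # P[s, a, s']: transition probabilities
        P = np.array([[[p_hh_wait,  1 - p_hh_wait],
                       [p_hh_treat, 1 - p_hh_treat]],
                      [[p_rec_wait,  1 - p_rec_wait],
                       [p_rec_treat, 1 - p_rec_treat]]])
        # R[s, a]: per-cycle reward (e.g. QALYs net of treatment burden); values are made up
        R = np.array([[1.0, 0.7],
                      [0.3, 0.1]])
        return P, R

    def optimal_policy(P, R, gamma=0.97, iters=500):
        """Value iteration; returns the optimal action for each state."""
        V = np.zeros(R.shape[0])
        for _ in range(iters):
            Q = R + gamma * (P @ V)        # Q[s, a]
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    # base case: prior means of the (hypothetical) Beta-distributed parameters
    base = optimal_policy(*build_mdp(0.8, 0.9, 0.1, 0.4))

    # probabilistic multivariate sensitivity analysis: re-solve under joint uncertainty
    agree, n_draws = 0, 2000
    for _ in range(n_draws):
        draw = rng.beta([80, 90, 10, 40], [20, 10, 90, 60])
        agree += np.array_equal(optimal_policy(*build_mdp(*draw)), base)

    print(f"share of draws in which the base-case policy stays optimal: {agree / n_draws:.2f}")
    ```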

  6. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  7. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  8. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
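
    A minimal numerical sketch of the SCM's core prediction, assuming that in a simultaneous encounter the option with the shorter (independently sampled) single-option latency is chosen; the log-normal latency distributions and parameter values below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # training-phase latencies (s) to respond to each option when met alone;
    # log-normal shapes and parameter values are purely illustrative
    lat_A = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=500)   # shorter programmed delay
    lat_B = rng.lognormal(mean=np.log(2.5), sigma=0.4, size=500)   # longer programmed delay

    # SCM-style prediction: in a simultaneous A-B encounter the option that would
    # have been responded to sooner wins, so predicted preference for A is
    # P(latency_A < latency_B) under the single-option latency distributions.
    n_sim = 100_000
    pref_A = (rng.choice(lat_A, n_sim) < rng.choice(lat_B, n_sim)).mean()
    print(f"predicted preference for A in simultaneous A-B trials: {pref_A:.2f}")
    ```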

  9. Sequential and Parallel Attack Tree Modelling

    NARCIS (Netherlands)

    Arnold, Florian; Guck, Dennis; Kumar, Rajesh; Stoelinga, Mariëlle Ida Antoinette; Koornneef, Floor; van Gulijk, Coen

    The intricacy of socio-technical systems requires a careful planning and utilisation of security resources to ensure uninterrupted, secure and reliable services. Even though many studies have been conducted to understand and model the behaviour of a potential attacker, the detection of crucial

  10. Models of sequential decision making in consumer lending

    OpenAIRE

    Kanshukan Rajaratnam; Peter A. Beling; George A. Overstreet

    2016-01-01

    In this paper, we introduce models of sequential decision making in consumer lending. From the definition of adverse selection in static lending models, we show that homogenous borrowers take-up offers at different instances of time when faced with a sequence of loan offers. We postulate that bounded rationality and diverse decision heuristics used by consumers drive the decisions they make about credit offers. Under that postulate, we show how observation of early decisions in a seq...

  11. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference

  12. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic...... safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety...

  13. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman...... corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links...

  14. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. Spring-block Model for Barkhausen Noise

    International Nuclear Information System (INIS)

    Kovacs, K.; Brechet, Y.; Neda, Z.

    2005-01-01

    A simple mechanical spring-block model is used for studying Barkhausen noise (BN). The model incorporates the generally accepted physics of domain wall movement and pinning. Computer simulations on this model reproduce the main features of the hysteresis loop and Barkhausen jumps. The statistics of the obtained Barkhausen jumps follow several scaling laws, in qualitative agreement with experimental results. The model consists of a one-dimensional frictional spring-block system. The blocks model the Bloch-walls that separate inversely oriented magnetic domains, and springs correspond to the magnetized regions. Three types of realistic forces are modelled with this system: 1. the force resulting from the magnetic energy of the neighboring domains in an external magnetic field (modelled by forces having alternating orientations and acting directly on the blocks); 2. the force resulting from the magnetic self-energy of each domain (modelled by the elastic forces of the springs); 3. the pinning forces acting on the domain walls (modelled by position dependent static friction acting on blocks). The dynamics of the system is governed by searching for equilibrium: one particular domain wall can jump to the next pinning center if the resultant of forces 1. and 2. is greater than the pinning force. The external magnetic field is successively increased (or decreased) and the system is relaxed to mechanical equilibrium. During the simulations we monitor the variation of the magnetization, focusing on the shape of the hysteresis loop, power spectrum, jump size (avalanche size) distribution, signal duration distribution, signal area distribution. The simulated shape of the hysteresis loops fulfills all the requirements for real magnetization phenomena. The power spectrum indicates different behavior in the low (1/f noise) and high (white noise) frequency region. All the relevant distribution functions show scaling behavior over several decades of magnitude with a naturally
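
    A heavily simplified sketch of a quasi-static 1-D frictional spring-block chain in this spirit: an alternating field force and nearest-neighbour elastic forces act on the blocks, a block hops only when the net force exceeds its static-friction (pinning) threshold, and the total displacement triggered by each small field increment is recorded as a Barkhausen jump. Parameter values and the relaxation rule are arbitrary stand-ins, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 200                                   # number of domain walls (blocks)
    k = 1.0                                   # spring constant (domain self-energy term)
    pin = rng.uniform(0.5, 1.5, n)            # quenched static-friction (pinning) thresholds
    sign = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)   # alternating field coupling
    x = np.zeros(n)                           # wall displacements

    fields = np.linspace(0.0, 3.0, 600)       # slowly ramped external field
    magnetization, avalanches = [], []

    for H in fields:
        jump = 0.0
        while True:                           # relax to mechanical equilibrium
            elastic = k * (np.roll(x, 1) - 2 * x + np.roll(x, -1))
            force = sign * H + elastic        # field force + elastic force on each wall
            unstable = np.abs(force) > pin    # walls whose pinning is overcome
            if not unstable.any():
                break
            step = 0.05 * np.sign(force) * unstable   # small hop towards the net force
            x += step
            jump += np.abs(step).sum()
        magnetization.append(float((sign * x).sum()))
        if jump > 0:
            avalanches.append(jump)           # Barkhausen-jump sizes along the loop branch

    print(f"{len(avalanches)} avalanches, largest {max(avalanches):.2f}, "
          f"final magnetization {magnetization[-1]:.2f}")
    ```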

  16. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as sequential context effect as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between previous perceptual behavior and diagnostic decisions and current decisions. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
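
    A small sketch of the Minkowski-Bouligand box-counting estimate mentioned above, applied to a synthetic scanpath (a 2-D random walk standing in for gaze coordinates); box sizes and the toy data are illustrative only.

    ```python
    import numpy as np

    def box_counting_dimension(points, box_sizes=(1/4, 1/8, 1/16, 1/32, 1/64)):
        """Estimate the Minkowski-Bouligand (box-counting) dimension of a 2-D point set."""
        pts = np.asarray(points, dtype=float)
        pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)      # normalise to the unit square
        counts = []
        for eps in box_sizes:
            boxes = np.floor(pts / eps).astype(int)               # box index of every point
            counts.append(len(np.unique(boxes, axis=0)))          # occupied boxes at this scale
        # slope of log N(eps) against log(1/eps) estimates the fractal dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
        return slope

    # toy "scanpath": a 2-D random walk standing in for recorded gaze coordinates
    rng = np.random.default_rng(1)
    scanpath = np.cumsum(rng.normal(size=(5000, 2)), axis=0)
    print(f"estimated fractal dimension: {box_counting_dimension(scanpath):.2f}")
    ```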

  17. Modelling of multi-block data

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Svinning, K.

    2006-01-01

    Here is presented a unified approach to modelling multi-block regression data. The starting point is a partition of the data X into L data blocks, X = (X1, X2, ..., XL), and the data Y into M data blocks, Y = (Y1, Y2, ..., YM). The methods of linear regression, X -> Y, are extended to the case...... of a linear relationship between each Xi and Yj, Xi -> Yj. A modelling strategy is used to decide if the residual Xi should take part in the modelling of one or more Yj's. At each step the procedure of finding score vectors is based on well-defined optimisation procedures. The principle of optimisation...... is that the score vectors should give the sizes of the resulting Yj loading vectors as large as possible. The partitions of X and Y are independent of each other. The choice of Yj can be Xj, Yi = Xi, thus including the possibility of modelling X -> Xi, i = 1, ..., L. It is shown how...

  18. Sequential Logic Model Deciphers Dynamic Transcriptional Control of Gene Expressions

    Science.gov (United States)

    Yeo, Zhen Xuan; Wong, Sum Thai; Arjunan, Satya Nanda Vel; Piras, Vincent; Tomita, Masaru; Selvarajoo, Kumar; Giuliani, Alessandro; Tsuchiya, Masa

    2007-01-01

    Background Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factors activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. Methodology Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. Principal Findings SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during transition from specification to differentiation in wild type endo16 expression profile, SLM reveals three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest detailed mechanism of R switch activity where indirect dependency occurs in between UI activity and R switch during specification to differentiation stage. Conclusions/Significance The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet providing rich biological

  19. Sequential logic model deciphers dynamic transcriptional control of gene expressions.

    Directory of Open Access Journals (Sweden)

    Zhen Xuan Yeo

    BACKGROUND: Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factors activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. METHODOLOGY: Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. PRINCIPAL FINDINGS: SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during transition from specification to differentiation in wild type endo16 expression profile, SLM reveals three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest detailed mechanism of R switch activity where indirect dependency occurs in between UI activity and R switch during specification to differentiation stage. CONCLUSIONS/SIGNIFICANCE: The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet

  20. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
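
    The PHMM itself is not reproduced here, but the kind of computation it builds on can be sketched with a generic discrete hidden Markov model scored by the forward algorithm; the states, observation levels, and all probabilities below are invented.

    ```python
    import numpy as np

    # hidden "popularity regime" states vs. observed weekly chart levels (0=low, 1=mid, 2=high);
    # all probabilities below are invented for illustration
    A  = np.array([[0.70, 0.25, 0.05],        # state-transition matrix
                   [0.20, 0.60, 0.20],
                   [0.05, 0.25, 0.70]])
    B  = np.array([[0.80, 0.15, 0.05],        # emission matrix: P(observed level | state)
                   [0.15, 0.70, 0.15],
                   [0.05, 0.15, 0.80]])
    pi = np.array([0.50, 0.30, 0.20])         # initial state distribution

    def forward_loglik(obs):
        """Scaled forward algorithm: log-likelihood of an observation sequence."""
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    steady_riser = [0, 0, 1, 1, 1, 2, 2, 2]    # app that climbs the charts
    erratic      = [0, 2, 0, 2, 0, 2, 0, 2]    # implausible jumps under this toy model
    print(forward_loglik(steady_riser), forward_loglik(erratic))
    ```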

  1. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    One may state the absence of progress in earthquake prediction studies. Short-term prediction (with a diurnal period, localisation also being predicted) has practical meaning. Failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring gives the basis for the notion of the geological medium as an open block dissipative system with limit energy saturation. The variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more expressed in the faults. In the background state, small blocks of the fault medium produce the sliding of great blocks in the faults. But for considerable variations of the ascending gas streams the formation of bound chains of small blocks is possible, so that a bound state of great blocks may result (an earthquake source). Recently, using these notions, we proposed a dynamical earthquake source model based on a generalized chain of non-linear bound oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns its inhomogeneity and different external actions imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered. The latter permitted study of the FPU return (return to the initial state). Probabilistic properties in quasi-periodic movement were found. The chain decay problem due to non-linearity and external perturbations was posed. The thresholds and the dependence of the lifetime of the chain are studied. Great fluctuations of lifetimes are discovered. In the present paper a rigorous consideration of the inhomogeneous chain including dissipation is given. For the strong dissipation case, when the oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising

  2. Block models and personalized PageRank.

    Science.gov (United States)

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-03

    Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
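
    A small sketch of the setting described above: a two-block stochastic block model is sampled, and personalized PageRank is computed by power iteration with restart to a seed set drawn from one block. The generic teleport parameter alpha = 0.15 used here is illustrative, not the block-model-dependent value derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # two-block stochastic block model: dense within blocks, sparse between them
    n, p_in, p_out = 200, 0.10, 0.01
    block = np.repeat([0, 1], n // 2)
    prob = np.where(block[:, None] == block[None, :], p_in, p_out)
    A = np.triu(rng.random((n, n)) < prob, 1)
    A = (A | A.T).astype(float)

    deg = np.maximum(A.sum(axis=1), 1.0)
    W = A / deg[:, None]                        # row-stochastic random-walk matrix

    def personalized_pagerank(W, seeds, alpha=0.15, iters=200):
        """Power iteration for PageRank with restart to the seed set."""
        restart = np.zeros(W.shape[0])
        restart[seeds] = 1.0 / len(seeds)
        p = restart.copy()
        for _ in range(iters):
            p = alpha * restart + (1 - alpha) * (p @ W)
        return p

    scores = personalized_pagerank(W, seeds=np.arange(10))      # seed nodes from block 0
    top = np.argsort(-scores)[:n // 2]
    print("fraction of top-ranked nodes from the seed community:", (block[top] == 0).mean())
    ```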

  3. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
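
    As a stand-in for the paper's multi-stage maximum-likelihood and message-passing algorithms (which are not reproduced here), the following sketch recovers two planted communities with a spectral initialisation followed by greedy local refinement; graph sizes and probabilities are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # sample a two-community stochastic block model
    n, p_in, p_out = 300, 0.08, 0.01
    truth = np.repeat([0, 1], n // 2)
    prob = np.where(truth[:, None] == truth[None, :], p_in, p_out)
    A = np.triu(rng.random((n, n)) < prob, 1)
    A = (A | A.T).astype(float)

    # spectral initialisation: sign of the eigenvector of the second-largest eigenvalue
    vals, vecs = np.linalg.eigh(A)
    labels = (vecs[:, -2] > 0).astype(int)

    # greedy local refinement: each node joins the community holding most of its neighbours
    for _ in range(10):
        for i in range(n):
            votes = np.bincount(labels[A[i] > 0], minlength=2)
            if votes.sum() > 0:
                labels[i] = votes.argmax()

    accuracy = max((labels == truth).mean(), (labels != truth).mean())   # up to label swap
    print(f"community recovery accuracy: {accuracy:.2f}")
    ```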

  4. Modeling the building blocks of biodiversity.

    Directory of Open Access Journals (Sweden)

    Lucas N Joppa

    BACKGROUND: Networks of single interaction types, such as plant-pollinator mutualisms, are biodiversity's "building blocks". Yet, the structure of mutualistic and antagonistic networks differs, leaving no unified modeling framework across biodiversity's component pieces. METHODS/PRINCIPAL FINDINGS: We use a one-dimensional "niche model" to predict antagonistic and mutualistic species interactions, finding that accuracy decreases with the size of the network. We show that properties of the modeled network structure closely approximate empirical properties even where individual interactions are poorly predicted. Further, some aspects of the structure of the niche space were consistently different between network classes. CONCLUSIONS/SIGNIFICANCE: These novel results reveal fundamental differences between the ability to predict ecologically important features of the overall structure of a network and the ability to predict pair-wise species interactions.
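
    The sketch below implements the classic one-dimensional niche model of Williams and Martinez as an illustration of the kind of model referred to above; it is not necessarily the exact formulation fitted in the paper, and the species richness and connectance values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    S, C = 30, 0.15                            # species richness and target connectance (made up)

    niche = rng.uniform(size=S)                # each species gets a niche value in [0, 1]
    # feeding-range widths drawn so expected connectance is roughly C (classic niche model)
    width = niche * rng.beta(1.0, 1.0 / (2 * C) - 1.0, size=S)
    # range centres placed uniformly between half the width and the species' own niche value
    centre = rng.uniform(width / 2, niche)

    low, high = centre - width / 2, centre + width / 2
    # predicted interaction: i consumes (or interacts with) j if j's niche value is in i's range
    A = (niche[None, :] >= low[:, None]) & (niche[None, :] <= high[:, None])
    print(f"realised connectance: {A.mean():.3f}  (target {C})")
    ```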

  5. Evaluation of transversus abdominis plane block for renal transplant recipients - A meta-analysis and trial sequential analysis of published studies.

    Science.gov (United States)

    Singh, Preet Mohinder; Borle, Anuradha; Makkar, Jeetinder Kaur; Trisha, Aanjan; Sinha, Aashish

    2018-01-01

    Patients undergoing renal transplant (RT) have altered drug/opioid pharmacokinetics. Transversus abdominis plane (TAP) block in renal transplant recipients has been recently evaluated for analgesic and opioid-sparing potential by many trials. The studies comparing TAP block to conventional analgesic regimens for RT were searched. Comparisons were made for total opioids consumed (as morphine-equivalents) during the first postoperative 24 h (primary objective), the intraoperative period, and the immediate postoperative period. Pain scores and postoperative nausea-vomiting (PONV) were also evaluated. Trial sequential analysis (TSA) was used to quantify the strength of the analysis. Ten trials with 258 and 237 patients in the control and TAP-block groups, respectively, were included. TAP block decreased the 24-h opioid consumption (reported in 9 trials) by 14.61 ± 4.34 mg (a reduction of 42.7%, random effects, P […]). […] consumption also decreased by 2.06 ± 0.63 mg (a reduction of 27.8%) (random effects, P […]). […] consumption in RT recipients. Persistent and better pain control is achieved when TAP block is used. Benefits of TAP block extend beyond the analgesic actions alone, as it also decreases the 24-h incidence of postoperative nausea and vomiting. The technique of the block needs standardization for RT recipients.

  6. Use of sequential diagnostic pain blocks in a patient of posttraumatic complex regional pain syndrome-not otherwise specified complicated by myofascial trigger points and thoracolumbar pain syndrome

    Directory of Open Access Journals (Sweden)

    Kailash Kothari

    2017-01-01

    We present a case of posttraumatic lower-limb complex regional pain syndrome – not otherwise specified (CRPS-NOS). As it was not treated in the acute phase, the pain became chronic and was complicated by myofascial and thoracolumbar pain syndrome. This case posed a diagnostic challenge. We used sequential diagnostic pain blocks, step by step, to identify the pain generators and successfully treat the patient: T12, L1 and L2 facet joint blocks, a lumbar sympathetic block for CRPS-NOS, and trigger point injection with dry needling for myofascial pain syndrome. This case highlights the fact that additional pain generators unrelated to the original pain may complicate the presentation. Identifying these pain generators requires out-of-the-box thinking and a high index of suspicion.

  7. Sequential Path Model for Grain Yield in Soybean

    Directory of Open Access Journals (Sweden)

    Mohammad SEDGHI

    2010-09-01

    This study was performed to determine physiological traits that affect soybean's grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was done in order to evaluate associations among grain yield and related traits by ordering the various variables in first-, second- and third-order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44 and 47% of the variation in grain yield based on the 2004, 2005, and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length and intercepted radiation at the middle layer of the canopy were placed in the third-order path. From the results it was concluded that number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.

  8. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems.

    Science.gov (United States)

    Harari, Colin M; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T; Lubner, Meghan G; Hinshaw, J Louis; Ziemlewicz, Timothy; Brace, Christopher L

    2016-01-01

    To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. On average, ablations created by using the simultaneous power delivery technique were larger than those with the sequential technique (P […]). Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas (P […]). […] simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015.

  9. Modeling two-phase ferroelectric composites by sequential laminates

    International Nuclear Information System (INIS)

    Idiart, Martín I

    2014-01-01

    Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)

  10. A ligand exchange strategy for one-pot sequential synthesis of (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

    Science.gov (United States)

    Zhang, Zhichao; Ye, Zhibin

    2012-08-18

    Upon the addition of an equimolar amount of 2,2'-bipyridine, a cationic Pd-diimine complex capable of facilitating "living" ethylene polymerization is switched to catalyze "living" alternating copolymerization of 4-tert-butylstyrene and CO. This unique chemistry is thus employed to synthesize a range of well-defined treelike (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

  11. Sequential SPECT/CT imaging starting with stress SPECT in patients with left bundle branch block suspected for coronary artery disease

    Energy Technology Data Exchange (ETDEWEB)

    Engbers, Elsemiek M.; Mouden, Mohamed [Isala, Department of Cardiology, Zwolle (Netherlands); Isala, Department of Nuclear Medicine, Zwolle (Netherlands); Timmer, Jorik R.; Ottervanger, Jan Paul [Isala, Department of Cardiology, Zwolle (Netherlands); Knollema, Siert; Jager, Pieter L. [Isala, Department of Nuclear Medicine, Zwolle (Netherlands)

    2017-01-15

    To investigate the impact of left bundle branch block (LBBB) on sequential single photon emission computed tomography (SPECT)/ CT imaging starting with stress-first SPECT. Consecutive symptomatic low- to intermediate-risk patients without a history of coronary artery disease (CAD) referred for SPECT/CT were included from an observational registry. If stress SPECT was abnormal, additional rest SPECT and, if feasible, coronary CT angiography (CCTA) were acquired. Of the 5,018 patients, 218 (4.3 %) demonstrated LBBB. Patients with LBBB were slightly older than patients without LBBB (65±12 vs. 61±11 years, p<0.001). Stress SPECT was more frequently abnormal in patients with LBBB (82 % vs. 46 %, p<0.001). After reviewing stress and rest images, SPECT was normal in 43 % of the patients with LBBB, compared to 77 % of the patients without LBBB (p<0.001). Sixty-four of the 124 patients with LBBB and abnormal stress-rest SPECT underwent CCTA (52 %), which could exclude obstructive CAD in 46 of the patients (72 %). Sequential SPECT/CT imaging starting with stress SPECT is not the optimal imaging protocol in patients with LBBB, as the majority of these patients have potentially false-positive stress SPECT. First-line testing using CCTA may be more appropriate in low- to intermediate-risk patients with LBBB. (orig.)

  12. RADIOMETRIC BLOCK ADJUSTMENT AND DIGITAL RADIOMETRIC MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    A. Pros

    2013-05-01

    In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In the paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmosphere radiative transfer parameters, and ground reflectances and BRDFs at the radiometric tie points are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated showing no radiometric differences at the seam lines.

  13. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains, embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental signal, from which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  14. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  15. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2014-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  16. A sequential model for the structure of health care utilization.

    NARCIS (Netherlands)

    Herrmann, W.J.; Haarmann, A.; Baerheim, A.

    2017-01-01

    Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic

  17. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    checking (BMC) and inductive reasoning, it is verified that the generated model instance satisfies the generated safety properties. Using this method, we are able to verify the safety properties for model instances corresponding to railway networks of industrial size. Experiments show that BMC is also...

  18. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

    The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
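
    A compact sketch of the general idea: memory strengths are drawn from unequal-variance Gaussian target and lure distributions, transformed to a log-likelihood ratio, and fed as the drift rate of a simple Euler-simulated diffusion decision process. The exact transformation here (not the paper's linear approximation), the parameter values, and the simulation settings are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # unequal-variance Gaussian memory-strength distributions (values are illustrative)
    mu_t, sd_t = 1.0, 1.25      # targets ("old" items)
    mu_l, sd_l = 0.0, 1.00      # lures  ("new" items)

    def log_likelihood_ratio(x):
        """log[ p(x | target) / p(x | lure) ] for Gaussian strength distributions."""
        log_t = -0.5 * ((x - mu_t) / sd_t) ** 2 - np.log(sd_t)
        log_l = -0.5 * ((x - mu_l) / sd_l) ** 2 - np.log(sd_l)
        return log_t - log_l

    def diffusion_trial(drift, threshold=1.5, noise=1.0, dt=0.001, max_t=5.0):
        """One diffusion-decision trial driven by the given drift; returns (choice, RT)."""
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold and t < max_t:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return ("old" if evidence > 0 else "new"), t

    # simulate a block of target trials: strength -> log-likelihood-ratio drift -> DDM
    choices, rts = zip(*(diffusion_trial(log_likelihood_ratio(s))
                         for s in rng.normal(mu_t, sd_t, size=200)))
    print("hit rate:", np.mean([c == "old" for c in choices]),
          "mean RT:", round(np.mean(rts), 3))
    ```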

  19. RF building block modeling: optimization and synthesis

    NARCIS (Netherlands)

    Cheng, W.

    2012-01-01

    For circuit designers it is desirable to have relatively simple RF circuit models that do give decent estimation accuracy and provide sufficient understanding of circuits. Chapter 2 in this thesis shows a general weak nonlinearity model that meets these demands. Using a method that is related to

  20. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
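
    A minimal sketch of sequential firing-rate dynamics of the Lotka-Volterra type referred to above: asymmetric inhibition between groups produces a repeating sequence of transiently dominant groups. The network size, coupling values, and noise level are invented, and the Bayesian decoding stage of the paper is not included.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    N = 6                                       # groups of projection neurons
    rho = np.full((N, N), 2.0)                  # strong mutual inhibition ...
    np.fill_diagonal(rho, 1.0)
    for i in range(N):
        rho[(i + 1) % N, i] = 0.5               # ... weakened along one direction -> cyclic sequence

    stim = np.ones(N)                           # odor-driven excitation (uniform here)
    x = rng.uniform(0.01, 0.1, N)               # small random initial firing rates
    dt, steps = 0.01, 20000
    trace = np.empty((steps, N))

    for t in range(steps):
        # generalised Lotka-Volterra dynamics with a little noise to keep switching regular
        dx = x * (stim - rho @ x)
        x = np.clip(x + dt * dx + 1e-4 * np.sqrt(dt) * rng.standard_normal(N), 1e-9, None)
        trace[t] = x

    # which group dominates in each of 20 successive epochs -> a repeating sequence
    epoch_winners = trace.reshape(20, -1, N).mean(axis=1).argmax(axis=1)
    print("dominant group per epoch:", epoch_winners)
    ```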

  1. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  2. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  3. Internet of Things building blocks and business models

    CERN Document Server

    Hussain, Fatima

    2017-01-01

    This book describes the building blocks and introductory business models for the Internet of Things (IoT). The author provides an overview of the entire IoT architecture and its constituent layers, followed by a detailed description of each block. Various interconnecting technologies and sensors are discussed in the context of IoT networks. In addition, the concepts of Big Data and Fog Computing are presented and characterized with respect to the data generated by versatile IoT applications. A smart parking system and context-aware services are presented as a hybrid model of cloud and Fog. Afterwards, various IoT applications and their respective business models are discussed. Finally, the author summarizes the IoT building blocks, identifies research issues in each, and suggests potential research projects worth pursuing.

  4. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
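
    A small sketch of the lag-1 ingredient of event-based lag sequential analysis: transition counts between coded gaze events are compared with chance expectations via adjusted residuals (z-scores). The coding scheme and the random event stream below are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # toy coded event stream (one code per second): where the clinician's gaze rests
    # 0 = patient, 1 = chart/computer, 2 = elsewhere; the stream here is random
    events = rng.choice(3, size=400, p=[0.5, 0.3, 0.2])

    K = 3
    counts = np.zeros((K, K))
    for a, b in zip(events[:-1], events[1:]):          # lag-1 transition counts
        counts[a, b] += 1

    n = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    expected = row * col / n
    # Haberman adjusted residuals: large positive z marks "given A, then B" sequences
    # occurring more often than chance, the core statistic of lag sequential analysis
    z = (counts - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
    print(np.round(z, 2))
    ```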

  5. Non-Markovianity in the collision model with environmental block

    Science.gov (United States)

    Jin, Jiasen; Yu, Chang-shui

    2018-05-01

    We present an extended collision model to simulate the dynamics of an open quantum system. In our model, the unit to represent the environment is, instead of a single particle, a block which consists of a number of environment particles. The introduced blocks enable us to study the effects of different strategies of system–environment interactions and states of the blocks on the non-Markovianities. We demonstrate our idea in the Gaussian channels of an all-optical system and derive a necessary and sufficient condition of non-Markovianity for such channels. Moreover, we show the equivalence of our criterion to the non-Markovian quantum jump in the simulation of the pure damping process of a single-mode field. We also show that, in the strategy where the system collides with the environmental particles of each block in a certain order, the non-Markovianity of the channel is affected by the size of the block and the embedded entanglement, and that heating and squeezing the vacuum environmental state quantitatively enhance the non-Markovianity.

  6. Mathematical Model for the Sequential Action of Radiation and Heat on Yeast Cells

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Lee, Yun Jong; Kim, Su Hyoun; Nili, Mohammad; Zhurakovskaya, Galina P.; Petin, Vladislav G.

    2009-01-01

    It is well known that the synergistic interaction of hyperthermia with ionizing radiation and other agents is widely used in hyperthermic oncology. Interaction between two agents may be considered as synergistic or antagonistic when the effect produced is greater or smaller than the sum of the two single responses. It has long been considered that the mechanism of synergistic interaction of hyperthermia and ionizing radiation may be brought about by an inhibition of the repair from sublethal and potentially lethal damage at the cellular level. The inhibition of the recovery process after combined treatments cannot be considered as a reason for the synergy, but rather would be the expected and predicted consequence of the production of irreversible damage. On this basis, a simple mathematical model of the synergistic interaction of two agents acting simultaneously has been proposed. However, the model has not been applied to predict the degree of interaction of heat and ionizing radiation after their sequential action. Extension of the model to the sequential treatment of heat and ionizing radiation seems to be of interest for theoretical and practical reasons. Thus, the purpose of the present work is to suggest the simplest mathematical model which would be able to account for the results obtained and currently available experimental information on the sequential action of radiation and heat.

  7. Development and sensitivity analysis of a fully-kinetic model of sequential reductive dechlorination in subsurface

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen

    2010-01-01

    Chlorinated hydrocarbons originating from point sources are amongst the most prevalent contaminants of ground water and often represent a serious threat to groundwater-based drinking water resources. Natural attenuation of contaminant plumes can play a major role in contaminated site management, and natural degradation of chlorinated solvents frequently occurs in the subsurface through sequential reductive dechlorination. However, the occurrence and performance of natural sequential reductive dechlorination strongly depend on environmental factors such as redox conditions, presence of fermenting organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvent degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data.
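
    The core of such a model is a chain of Monod-type rate laws in which each daughter product is generated by the preceding dechlorination step. The sketch below integrates a deliberately simplified PCE -> TCE -> cDCE -> VC -> ethene chain with assumed, illustrative parameters; the published model additionally tracks hydrogen, fermentation and competing processes:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sequential reductive dechlorination PCE -> TCE -> cDCE -> VC -> ethene, modelled
# with simple Monod kinetics. Parameter values are assumed and purely illustrative.
VMAX = np.array([2.0, 1.5, 1.0, 0.5])    # maximum degradation rates [umol/L/day]
KS = np.array([5.0, 5.0, 10.0, 10.0])    # half-saturation constants [umol/L]

def dechlorination(t, c):
    pce, tce, dce, vc, eth = c
    parents = np.array([pce, tce, dce, vc])
    r = VMAX * parents / (KS + parents)  # Monod rate of each dechlorination step
    return [-r[0], r[0] - r[1], r[1] - r[2], r[2] - r[3], r[3]]

sol = solve_ivp(dechlorination, (0.0, 60.0), [50.0, 0.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 60.0, 7))
print(np.round(sol.y, 1))                # species concentrations over time
```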

  8. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    OpenAIRE

    Ciptono, Wakhid Slamet

    2006-01-01

    This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may ...

  9. [Mathematical modeling of synergistic interaction of sequential thermoradiation action on mammalian cells].

    Science.gov (United States)

    Belkina, S V; Semkina, M A; Kritskiĭ, R O; Petin, V G

    2010-01-01

    Data obtained by other authors for mammalian cells treated by sequential action of ionizing radiation and hyperthermia were used to estimate the dependence of synergistic enhancement ratio on the ratio of damages induced by these agents. Experimental results were described and interpreted by means of the mathematical model of synergism in accordance with which the synergism is expected to result from the additional lethal damage arising from the interaction of sublesions induced by both agents.

  10. The importance of examining movements within the US health care system: sequential logit modeling

    Directory of Open Access Journals (Sweden)

    Lee Chioun

    2010-09-01

    Background: Utilization of specialty care may not be a discrete, isolated behavior but rather, a behavior of sequential movements within the health care system. Although patients may often visit their primary care physician and receive a referral before utilizing specialty care, prior studies have underestimated the importance of accounting for these sequential movements. Methods: The sample included 6,772 adults aged 18 years and older who participated in the 2001 Survey on Disparities in Quality of Care, sponsored by the Commonwealth Fund. A sequential logit model was used to account for movement in all stages of utilization: use of any health services (i.e., first stage), having a perceived need for specialty care (i.e., second stage), and utilization of specialty care (i.e., third stage). In the sequential logit model, all stages are nested within the previous stage. Results: Gender, race/ethnicity, education and poor health had significant explanatory effects with regard to use of any health services and having a perceived need for specialty care; however, racial/ethnic, gender, and educational disparities were not present in utilization of specialty care. After controlling for use of any health services and having a perceived need for specialty care, inability to pay for specialty care via income (AOR = 1.334, CI = 1.10 to 1.62) or health insurance (unstable insurance: AOR = 0.26, CI = 0.14 to 0.48; no insurance: AOR = 0.12, CI = 0.07 to 0.20) were significant barriers to utilization of specialty care. Conclusions: Use of a sequential logit model to examine utilization of specialty care resulted in a detailed representation of utilization behaviors and patient characteristics that impact these behaviors at all stages within the health care system. After controlling for sequential movements within the health care system, the biggest barrier to utilizing specialty care is the inability to pay, while racial, gender, and educational disparities do not persist at this final stage.
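
    Operationally, a sequential logit can be estimated as a series of ordinary logistic regressions, each fitted only to the subsample that passed the previous stage. A minimal sketch with synthetic data (the covariates and coefficients are hypothetical, not the survey variables used in the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic covariates (e.g., income score, insured indicator) - illustrative only.
n = 2000
X = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n)])

# Simulate the three nested stages: any use -> perceived need -> specialty use.
p1 = 1 / (1 + np.exp(-(0.5 + 0.3 * X[:, 0] + 0.8 * X[:, 1])))
stage1 = rng.random(n) < p1
p2 = 1 / (1 + np.exp(-(-0.2 + 0.4 * X[:, 0])))
stage2 = stage1 & (rng.random(n) < p2)
p3 = 1 / (1 + np.exp(-(-0.5 + 0.2 * X[:, 0] + 1.0 * X[:, 1])))
stage3 = stage2 & (rng.random(n) < p3)

# Sequential logit: each stage is a logistic regression fitted only on the
# subsample that passed the previous stage ("nested within the previous stage").
models = []
at_risk = np.ones(n, dtype=bool)
for outcome in (stage1, stage2, stage3):
    m = LogisticRegression().fit(X[at_risk], outcome[at_risk])
    models.append(m)
    at_risk = outcome            # only those who passed move on to the next stage

for i, m in enumerate(models, 1):
    print(f"stage {i} coefficients:", m.coef_.round(2))
```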

  11. Employees’ Perceptions of Corporate Social Responsibility and Job Performance: A Sequential Mediation Model

    Directory of Open Access Journals (Sweden)

    Inyong Shin

    2016-05-01

    In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.

  12. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
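
    The two binding operators compared in the study can be illustrated in a few lines of NumPy; the vectors, dimensionality and decoding checks below are illustrative rather than a reproduction of the reported simulations:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 1024                                   # dimensionality of the vectors

def rand_vec():
    return rng.normal(0, 1 / np.sqrt(d), d)

a, b = rand_vec(), rand_vec()

# Binding by circular convolution (holographic reduced representations).
bound = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
a_inv = np.concatenate(([a[0]], a[:0:-1]))          # approximate inverse (involution)
decoded = np.real(np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(bound)))
print("convolution decode ~ b:", round(np.corrcoef(decoded, b)[0, 1], 2))

# Binding by random permutation: represent "a followed by b" as perm(a) + b.
perm = rng.permutation(d)
trace = a[perm] + b
print("corr(a, perm(a)):      ", round(np.corrcoef(a, a[perm])[0, 1], 2))   # order is encoded
print("corr(a, unperm(trace)):", round(np.corrcoef(a, trace[np.argsort(perm)])[0, 1], 2))
```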

  13. Quantifying private benefits of control from a structural model of block trades

    NARCIS (Netherlands)

    Albuquerque, R.; Schroth, E.

    2009-01-01

    We study the determinants of private benefits of control in negotiated block transactions. We estimate the block pricing model in Burkart, Gromb, and Panunzi (2000) explicitly accounting for both block premia and block discounts in the data. The evidence suggests that the occurrence of a block

  14. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    Science.gov (United States)

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than the ones obtained with models previously published. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
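
    The sequential updating idea can be illustrated with a deliberately simplified one-compartment model and a basic importance-resampling particle update; all parameter values and measurements below are invented for illustration and do not reproduce the published population model:

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_conc(t, dose, cl, v):
    """One-compartment model after an IV bolus: C(t) = (dose / V) * exp(-(CL/V) * t)."""
    return dose / v * np.exp(-cl / v * t)

# Prior particles for the individual parameters (assumed lognormal priors).
n_particles = 5000
cl = rng.lognormal(mean=np.log(0.008), sigma=0.4, size=n_particles)   # clearance [L/h]
v = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n_particles)      # volume [L]
weights = np.full(n_particles, 1.0 / n_particles)

dose, sigma_obs = 10.0, 1.5   # mg and residual error [mg/L] - illustrative values
observations = [(12.0, 11.0), (36.0, 9.5), (72.0, 8.0)]   # (time [h], measured [mg/L])

for t_obs, c_obs in observations:
    # Reweight particles by the likelihood of the newly collected measurement.
    pred = predicted_conc(t_obs, dose, cl, v)
    weights *= np.exp(-0.5 * ((c_obs - pred) / sigma_obs) ** 2)
    weights /= weights.sum()
    # Resample to avoid degeneracy, then report the updated prediction at 96 h.
    idx = rng.choice(n_particles, n_particles, p=weights)
    cl, v, weights = cl[idx], v[idx], np.full(n_particles, 1.0 / n_particles)
    print(f"after t={t_obs:5.1f} h: predicted C(96 h) = "
          f"{np.mean(predicted_conc(96.0, dose, cl, v)):.2f} mg/L")
```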

  15. Coarse-grained modeling of hybrid block copolymer system

    Science.gov (United States)

    Su, Yongrui

    This thesis comprises three major projects of my research. In the first project, I proposed a nanoparticle model and combined it with the Theoretically Informed Coarse Grained (TICG) model for pure polymer systems and the grand canonical slip springs model developed in our group to build a new model for entangled nanocomposites. With Molecular Dynamics (MD) simulation, I studied the mechanical properties of the nanocomposites, for example the influence of nanoparticle size and volume fraction on entanglements, the diffusion of polymers and nanoparticles, and the influence of nanoparticle size and volume fraction on viscosity, among others. We found that the addition of small-size nanoparticles reduces the viscosity of the nanocomposites, which is contrary to what Einstein predicted a century ago. However, when the particle size increases to micrometers, the Einstein prediction is recovered. From our simulation, we believe that small-size nanoparticles can decrease the entanglements of nanocomposites more effectively than larger particles. The free volume effect introduced by small-size nanoparticles also helps decrease the viscosity of the whole system. In the second project, I combined the Ohta-Kawasaki (OK) model [3] and the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) to optimize block copolymer blend self-assembly in the hole-shrink process. The aim is to predict the optimal composition and the optimal surface energy to direct the block copolymer blend self-assembly process in the confined hole. After optimization in the OK model, we calibrated the optimal results with the more reliable TICG model and obtained the same morphology. By comparing different optimization processes, we found that homopolymers which are composed of the same monomers as either block of the block copolymer can form a perfect perforated hole and might have better performance than the pure block copolymer, while homopolymers which are composed of third-party monomers

  16. Mathematical modeling and simulation of nanopore blocking by precipitation

    KAUST Repository

    Wolfram, M-T

    2010-10-29

    High surface charges of polymer pore walls and applied electric fields can lead to the formation and subsequent dissolution of precipitates in nanopores. These precipitates block the pore, leading to current fluctuations. We present an extended Poisson-Nernst-Planck system which includes chemical reactions of precipitation and dissolution. We discuss the mathematical modeling and present 2D numerical simulations. © 2010 IOP Publishing Ltd.

  17. Redundancy and blocking in the spatial domain: A connectionist model

    Directory of Open Access Journals (Sweden)

    I. P. L. Mc Laren

    2002-01-01

    How can the observations of spatial blocking (Rodrigo, Chamizo, McLaren & Mackintosh, 1997) and cue redundancy (O’Keefe and Conway, 1978) be reconciled within the framework provided by an error-correcting, connectionist account of spatial navigation? I show that an implementation of McLaren’s (1995) better beta model can serve this purpose, and examine some of the implications for spatial learning and memory.

  18. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (first considering only parameter uncertainty and then considering parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicated that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  19. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model reasonably captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in cold and warm seasons and account for much of Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  20. A sequential threshold cure model for genetic analysis of time-to-event data

    DEFF Research Database (Denmark)

    Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.

    2011-01-01

    In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study, survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection......

  1. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    Science.gov (United States)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Applications of data assimilation techniques have been widely used to improve upon the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
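
    The kernel smoothing step referred to above is commonly implemented as a shrinkage-plus-jitter perturbation of the parameter particles (in the spirit of Liu and West), which preserves the first two moments of the parameter cloud while avoiding sample degeneracy. A minimal sketch, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_smooth(theta, weights, a=0.98):
    """Kernel smoothing of parameter particles: shrink each particle towards the
    weighted mean and add small Gaussian jitter, so the overall mean and variance
    of the parameter cloud are (approximately) preserved."""
    mean = np.average(theta, weights=weights, axis=0)
    var = np.average((theta - mean) ** 2, weights=weights, axis=0)
    h2 = 1.0 - a ** 2                        # kernel variance factor
    shrunk = a * theta + (1 - a) * mean      # shrinkage towards the mean
    return shrunk + rng.normal(scale=np.sqrt(h2 * var), size=theta.shape)

# Example: a cloud of 1-D parameter particles with uniform weights.
theta = rng.normal(2.0, 0.5, size=(1000, 1))
w = np.full(1000, 1e-3)
smoothed = kernel_smooth(theta, w)
print(round(theta.mean(), 3), round(smoothed.mean(), 3))   # means agree closely
print(round(theta.var(), 3), round(smoothed.var(), 3))     # variances agree closely
```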

  2. The constructive backlash dissipate effect model for concrete blocks

    International Nuclear Information System (INIS)

    Tepes-Onea Florin

    2004-01-01

    From a physical point of view, damping represents the process by which the seismic excitation energy of the soil is taken up through internal absorption, friction between existing layers, and cracks in rocky foundations. Generally, in the dynamic analysis of heavy dams a viscous damping, proportional to the deformation rate, is considered. The damping can be evaluated on an experimental basis or from measurements under ambient environmental conditions; the latter yield higher values of the damping parameters. This can be explained by the influence of local factors that cannot easily be modelled, such as the treatment of the joints (backlash), the characteristics of the foundation ground and the concrete technology. These represent atypical dissipative phenomena. The excitation level (a real seism versus an experimental excitation) also has a major influence. The present work aims to establish the influence of the dissipative effect of the backlash (joints) on concrete blocks. Finite-element modelling of the backlash makes this possible, studying different situations such as the effect of friction, the effect of cohesion, and seismic action in varying directions with the same 0.4 g accelerogram. The studied blocks have the same dimensions, the relative displacement being obtained by modifying the foundation stiffness under the two parts of the block. (author)

  3. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

    This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs, and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 individual usable questionnaires were returned and thus qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducts structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met goodness-of-fit criteria, thus the interpretation of the sequential models fits with the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual

  4. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying the approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.

  5. Behavioral Modeling of WSN MAC Layer Security Attacks: A Sequential UML Approach

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    is the vulnerability to security attacks/threats. The performance and behavior of a WSN are vastly affected by such attacks. In order to be able to better address the vulnerabilities of WSNs in terms of security, it is important to understand the behavior of the attacks. This paper addresses the behavioral modeling...... of medium access control (MAC) security attacks in WSNs. The MAC layer is responsible for energy consumption, delay and channel utilization of the network and attacks on this layer can introduce significant degradation of the individual sensor nodes due to energy drain and in performance due to delays....... The behavioral modeling of attacks will be beneficial for designing efficient and secure MAC layer protocols. The security attacks are modeled using a sequential diagram approach of Unified Modeling Language (UML). Further, a new attack definition, specific to hybrid MAC mechanisms, is proposed....

  6. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces quite satisfactorily the experimental data, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict optimal medium renewal time and composition.

  7. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter...... that show that the method produces accurate, robust and rapid segmentations in a cross sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.......This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case...

  8. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
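
    As a rough illustration of the EM idea (with a naive mean-field E-step standing in for belief propagation, and a planted two-block network in place of real data), the following sketch alternates soft assignment updates with re-estimation of the block parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def em_sbm(A, K=2, n_iter=50):
    """EM for a stochastic block model with a naive mean-field E-step
    (a simplification of the belief-propagation E-step discussed above)."""
    n = A.shape[0]
    Q = rng.dirichlet(np.ones(K), size=n)            # soft node-to-block assignments
    for _ in range(n_iter):
        # M-step: mixing proportions and block connection probabilities.
        pi = np.clip(Q.mean(axis=0), 1e-12, 1.0)
        num = Q.T @ A @ Q
        tot = np.outer(Q.sum(axis=0), Q.sum(axis=0)) - Q.T @ Q
        P = np.clip(num / tot, 1e-6, 1 - 1e-6)
        # E-step: mean-field update of the soft assignments.
        non_edges = (1.0 - A - np.eye(n)) @ Q
        log_q = np.log(pi) + (A @ Q) @ np.log(P).T + non_edges @ np.log(1.0 - P).T
        log_q -= log_q.max(axis=1, keepdims=True)
        Q = np.exp(log_q)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q, P, pi

# Planted two-block network: dense within blocks, sparse between them.
n, labels = 120, np.repeat([0, 1], 60)
probs = np.where(labels[:, None] == labels[None, :], 0.25, 0.05)
A = np.triu(rng.random((n, n)) < probs, k=1).astype(float)
A = A + A.T                                          # symmetric, zero diagonal

Q, P, pi = em_sbm(A)
print("estimated block connectivities:\n", P.round(3))
print("recovered block sizes:", np.bincount(Q.argmax(axis=1), minlength=2))
```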

  9. Exposure assessment of mobile phone base station radiation in an outdoor environment using sequential surrogate modeling.

    Science.gov (United States)

    Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom

    2013-05-01

    Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km(2) for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
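
    The sequential-design loop itself is simple: fit a surrogate to the measurements collected so far, then measure next wherever the surrogate is most uncertain. A one-dimensional toy sketch using a Gaussian-process surrogate (the field function, kernel and parameters are illustrative, not those of the study):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def field(x):
    """Stand-in for an exposimeter measurement at location x (illustrative only)."""
    return np.sin(3 * x) + 0.5 * np.cos(7 * x) + rng.normal(0, 0.05)

# Candidate measurement locations along a (1-D) street segment.
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

# Start from a handful of measurements, then sequentially measure wherever the
# surrogate model is most uncertain (maximum predicted standard deviation).
X = candidates[[10, 100, 190]]
y = np.array([field(x[0]) for x in X])
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]
    X = np.vstack([X, nxt])
    y = np.append(y, field(nxt[0]))

print("measurement locations chosen:", np.sort(X.ravel()).round(2))
```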

  10. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing

    2016-02-23

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially the medical images of brain scans, it is highly significant to precisely identify and predict the potential progression of AD based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective in simulating and predicting the disease progression compared with the existing methods.

  11. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing; Wang, Su; Zhu, Jia; Zhang, Xiangliang

    2016-01-01

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially the medical images of brain scans, it is highly significant to precisely identify and predict the potential progression of AD based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective in simulating and predicting the disease progression compared with the existing methods.

  12. Clustering network layers with the strata multilayer stochastic block model.

    Science.gov (United States)

    Stanley, Natalie; Shai, Saray; Taylor, Dane; Mucha, Peter J

    2016-01-01

    Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the "strata multilayer stochastic block model" (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called "strata", which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project.

  13. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model and its use in probabilistic risk assessments (PRAs), depending on the available data, is discussed. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect in PRAs

  14. Block of GABA(A) receptor ion channel by penicillin: electrophysiological and modeling insights toward the mechanism.

    Science.gov (United States)

    Rossokhin, Alexey V; Sharonova, Irina N; Bukanova, Julia V; Kolbaev, Sergey N; Skrebitsky, Vladimir G

    2014-11-01

    GABA(A) receptors (GABA(A)R) mainly mediate fast inhibitory neurotransmission in the central nervous system. Different classes of modulators target GABA(A)R properties. Penicillin G (PNG) belongs to the class of noncompetitive antagonists blocking the open GABA(A)R and is a prototype of β-lactam antibiotics. In this study, we combined electrophysiological and modeling approaches to investigate the peculiarities of PNG blockade of GABA-activated currents recorded from isolated rat Purkinje cells and to predict the PNG binding site. Whole-cell patch-clamp recording and a fast application system were used in the electrophysiological experiments. PNG block developed after channel activation and increased with membrane depolarization, suggesting that the ligand binds within the open channel pore. PNG blocked the stationary component of GABA-activated currents in a concentration-dependent manner with an IC50 value of 1.12 mM at -70 mV. The termination of GABA and PNG co-application was followed by a transient tail current. Protection of the tail current from bicuculline block and the dependence of its kinetic parameters on agonist affinity suggest that PNG acts as a sequential open channel blocker that prevents agonist dissociation while the channel remains blocked. We built the GABA(A)R models based on nAChR and GLIC structures and performed an unbiased systematic search of the PNG binding site. Monte-Carlo energy minimization was used to find the lowest energy binding modes. We have shown that PNG binds close to the intracellular vestibule. In both models the maximum contribution to the energy of ligand-receptor interactions came from residues located at the level of the 2', 6' and 9' rings formed by the bundle of M2 transmembrane segments, indicating that these residues most likely participate in PNG binding. The predicted structural models support the described mechanism of PNG block. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Block Pickard Models for Two-Dimensional Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2009-01-01

    In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines...... for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and a minimum distance 3 constraint by defining super symbols on blocks of binary symbols....

  16. Tyre tread-block friction: modelling, simulation and experimental validation

    Science.gov (United States)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motor cycles, cars, busses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which allows effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls to be taken into account, little is known about the friction between tread-block elements and road. This is particularly obvious in the case when snow, ice, water or a third-body layer are present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  17. Validation studies on indexed sequential modeling for the Colorado River Basin

    International Nuclear Information System (INIS)

    Labadie, J.W.; Fontane, D.G.; Salas, J.D.; Ouarda, T.

    1991-01-01

    This paper reports on a method called indexed sequential modeling (ISM) that has been developed by the Western Area Power Administration to estimate reliable levels of project dependable power capacity (PDC) and has been applied to several federal hydro systems in the Western U.S. The validity of ISM in relation to more commonly accepted stochastic modeling approaches is analyzed by applying it to the Colorado River Basin using the Colorado River Simulation System (CRSS) developed by the U.S. Bureau of Reclamation. Performance of ISM is compared with results from input of stochastically generated data using the LAST Applied Stochastic Techniques Package. Results indicate that output generated from ISM synthetically generated sequences displays an acceptable correspondence with results obtained from the final convergent stochastically generated hydrology for the Colorado River Basin

  18. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
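
    The key computation in such an approach is scoring an observed choice sequence against candidate strategy models, which the forward algorithm provides. A minimal sketch with made-up two-state HMMs standing in for the stereotyped strategies:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of an observation sequence under
    an HMM (initial probabilities pi, transition matrix A, emission matrix B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical "strategies" for a binary decision tree task, expressed as HMMs
# over the observed choices (0 = left, 1 = right). All numbers are illustrative.
pi = np.array([0.5, 0.5])
A_explore = np.array([[0.5, 0.5], [0.5, 0.5]])       # random exploration
A_exploit = np.array([[0.9, 0.1], [0.1, 0.9]])       # sticky, learned routine
B = np.array([[0.9, 0.1], [0.1, 0.9]])               # state -> choice emissions

choices = [0, 0, 0, 0, 1, 1, 1, 1, 1]                 # an observed choice sequence
print("log P(choices | explore):", round(forward_loglik(choices, pi, A_explore, B), 2))
print("log P(choices | exploit):", round(forward_loglik(choices, pi, A_exploit, B), 2))
```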

  19. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  20. Ecohydrologic process modeling of mountain block groundwater recharge.

    Science.gov (United States)

    Magruder, Ian A; Woessner, William W; Running, Steve W

    2009-01-01

    Regional mountain block recharge (MBR) is a key component of alluvial basin aquifer systems typical of the western United States. Yet neither water scientists nor resource managers have a commonly available and reasonably invoked quantitative method to constrain MBR rates. Recent advances in landscape-scale ecohydrologic process modeling offer the possibility that meteorological data and land surface physical and vegetative conditions can be used to generate estimates of MBR. A water balance was generated for a temperate 24,600-ha mountain watershed, elevation 1565 to 3207 m, using the ecosystem process model Biome-BGC (BioGeochemical Cycles) (Running and Hunt 1993). Input data included remotely sensed landscape information and climate data generated with the Mountain Climate Simulator (MT-CLIM) (Running et al. 1987). Estimated mean annual MBR flux into the crystalline bedrock terrain is 99,000 m(3) /d, or approximately 19% of annual precipitation for the 2003 water year. Controls on MBR predictions include evapotranspiration (radiation limited in wet years and moisture limited in dry years), soil properties, vegetative ecotones (significant at lower elevations), and snowmelt (dominant recharge process). The ecohydrologic model is also used to investigate how climatic and vegetative controls influence recharge dynamics within three elevation zones. The ecohydrologic model proves useful for investigating controls on recharge to mountain blocks as a function of climate and vegetation. Future efforts will need to investigate the uncertainty in the modeled water balance by incorporating an advanced understanding of mountain recharge processes, an ability to simulate those processes at varying scales, and independent approaches to calibrating MBR estimates. Copyright © 2009 The Author(s). Journal compilation © 2009 National Ground Water Association.

  1. Mind-to-mind heteroclinic coordination: Model of sequential episodic memory initiation

    Science.gov (United States)

    Afraimovich, V. S.; Zaks, M. A.; Rabinovich, M. I.

    2018-05-01

    Retrieval of episodic memory is a dynamical process in the large scale brain networks. In social groups, the neural patterns, associated with specific events directly experienced by single members, are encoded, recalled, and shared by all participants. Here, we construct and study the dynamical model for the formation and maintaining of episodic memory in small ensembles of interacting minds. We prove that the unconventional dynamical attractor of this process—the nonsmooth heteroclinic torus—is structurally stable within the Lotka-Volterra-like sets of equations. Dynamics on this torus combines the absence of chaos with asymptotic instability of every separate trajectory; its adequate quantitative characteristics are length-related Lyapunov exponents. Variation of the coupling strength between the participants results in different types of sequential switching between metastable states; we interpret them as stages in formation and modification of the episodic memory.

  2. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...... generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency...... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes.

  3. Modelling sequential Biosphere systems under Climate change for radioactive waste disposal. Project BIOCLIM

    International Nuclear Information System (INIS)

    Texier, D.; Degnan, P.; Loutre, M.F.; Lemaitre, G.; Paillard, D.; Thorne, M.

    2000-01-01

    The BIOCLIM project (Modelling Sequential Biosphere systems under Climate change for Radioactive Waste Disposal) is part of the EURATOM fifth European framework programme. The project was launched in October 2000 for a three-year period. It is coordinated by ANDRA, the French national radioactive waste management agency. The project brings together a number of European radioactive waste management organisations that have national responsibilities for the safe disposal of radioactive wastes, and several highly experienced climate research teams. Waste management organisations involved are: NIREX (UK), GRS (Germany), ENRESA (Spain), NRI (Czech Republic) and ANDRA (France). Climate research teams involved are: LSCE (CEA/CNRS, France), CIEMAT (Spain), UPMETSIMM (Spain), UCL/ASTR (Belgium) and CRU (UEA, UK). The Environmental Agency for England and Wales provides a regulatory perspective. The consulting company Enviros Consulting (UK) assists ANDRA by contributing to both the administrative and scientific aspects of the project. This paper describes the project and progress to date. (authors)

  4. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that the thus-generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
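
    The difference between the classical (parallel, perfectly synchronous) update and a sequential update can be made concrete with a one-dimensional totalistic rule; the rule and update ordering below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

def totalistic_rule(left, centre, right):
    """A simple totalistic update: a cell becomes 1 iff exactly one or two of the
    three cells in its neighbourhood are 1 (illustrative rule only)."""
    s = left + centre + right
    return ((s == 1) | (s == 2)).astype(int)

def step_parallel(state):
    """Classical CA: all cells are updated simultaneously from the old state."""
    return totalistic_rule(np.roll(state, 1), state, np.roll(state, -1))

def step_sequential(state, order):
    """Sequential CA: cells are updated one at a time in the given order, each
    update seeing the effects of the updates already performed in this sweep."""
    state = state.copy()
    n = len(state)
    for i in order:
        state[i] = totalistic_rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
    return state

init = rng.integers(0, 2, 16)
print("initial   :", init)
print("parallel  :", step_parallel(init))
print("sequential:", step_sequential(init, order=range(16)))
```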

  5. Modelling a multi-crystal detector block for PET

    International Nuclear Information System (INIS)

    Carroll, L.R.; Nutt, R.; Casey, M.

    1985-01-01

    A simple mathematical model describes the performance of a modular detector ''block'' which is a key component in an advanced, high-resolution PET Scanner. Each block contains 32 small bismuth germanate (BGO) crystals coupled to four photomultiplier tubes (PMTs) through a coded light pipe. At each PMT cathode, the charge released for 511 keV coincidence events may be characterized as Poisson random variables in which the variance grows as the mean of the observed current. Given the light from BGO, one must: arrange the best coding - the distribution of light to the four PMTs; specify an optimum decoding scheme for choosing the correct crystal location from a noisy ensemble of PMT currents; and estimate the average probability of error. The statistical fluctuation or ''noise'' becomes decoupled from the ''signal'' and can be regarded as independent, additive components with zero mean and unit variance. Moreover, the envelope of the transformed noise distribution approximates very closely a normal (Gaussian) distribution with variance = 1. Specifying the coding and decoding strategy becomes a problem of signalling through a channel corrupted by additive, white, Gaussian noise; a classic problem long since solved within the context of Communication Engineering using geometry: i.e. distance, volume, angle, inner product, etc., in a linear space of higher dimension
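
    Under the Gaussian approximation described above, maximum-likelihood decoding reduces to a nearest-neighbour search among the expected (variance-stabilised) PMT response patterns. A toy sketch with a handful of hypothetical code vectors, not the actual light-sharing code of the scanner:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical light-sharing code: expected mean PMT charges (in photoelectrons)
# for a few crystals (the real block uses 32 crystals read out by 4 PMTs).
codes = np.array([
    [120, 40, 40, 10],
    [90, 70, 40, 10],
    [70, 90, 40, 10],
    [40, 120, 40, 10],
    [40, 40, 120, 10],
])

def decode(observed):
    """Maximum-likelihood decoding under the Gaussian approximation: after a
    variance-stabilising square-root transform the noise has ~unit variance,
    so ML decoding reduces to nearest-neighbour search among the code vectors."""
    d = 2 * np.sqrt(observed) - 2 * np.sqrt(codes)     # transformed differences
    return int(np.argmin((d ** 2).sum(axis=1)))

true_crystal = 2
observed = rng.poisson(codes[true_crystal])            # Poisson charge fluctuations
print("decoded crystal:", decode(observed), "(true:", true_crystal, ")")
```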

  6. Methamphetamine-alcohol interactions in murine models of sequential and simultaneous oral drug-taking.

    Science.gov (United States)

    Fultz, Elissa K; Martin, Douglas L; Hudson, Courtney N; Kippin, Tod E; Szumlinski, Karen K

    2017-08-01

    A high degree of co-morbidity exists between methamphetamine (MA) addiction and alcohol use disorders and both sequential and simultaneous MA-alcohol mixing increases risk for co-abuse. As little preclinical work has focused on the biobehavioral interactions between MA and alcohol within the context of drug-taking behavior, we employed simple murine models of voluntary oral drug consumption to examine how prior histories of either MA- or alcohol-taking influence the intake of the other drug. In one study, mice with a 10-day history of binge alcohol-drinking [5,10, 20 and 40% (v/v); 2h/day] were trained to self-administer oral MA in an operant-conditioning paradigm (10-40mg/L). In a second study, mice with a 10-day history of limited-access oral MA-drinking (5, 10, 20 and 40mg/L; 2h/day) were presented with alcohol (5-40% v/v; 2h/day) and then a choice between solutions of 20% alcohol, 10mg/L MA or their mix. Under operant-conditioning procedures, alcohol-drinking mice exhibited less MA reinforcement overall, than water controls. However, when drug availability was not behaviorally-contingent, alcohol-drinking mice consumed more MA and exhibited greater preference for the 10mg/L MA solution than drug-naïve and combination drug-experienced mice. Conversely, prior MA-drinking history increased alcohol intake across a range of alcohol concentrations. These exploratory studies indicate the feasibility of employing procedurally simple murine models of sequential and simultaneous oral MA-alcohol mixing of relevance to advancing our biobehavioral understanding of MA-alcohol co-abuse. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    textabstractObjective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  8. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  9. AUTOMATIC 3D BUILDING MODEL GENERATION FROM LIDAR AND IMAGE DATA USING SEQUENTIAL MINIMUM BOUNDING RECTANGLE

    Directory of Open Access Journals (Sweden)

    E. Kwak

    2012-07-01

    A digital building model is an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of using single-source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources to obtain 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most commonly used data processing strategies are data-driven and model-driven ones. Theoretically one can model any shape of building using data-driven approaches, but practically this leaves the question of how to impose constraints and set the rules during the generation process. Due to the complexity of implementing data-driven approaches, model-based approaches draw the attention of researchers. However, the major drawback of model-based approaches is that the establishment of representative models involves a manual process that requires human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and to sequentially adjust them to combine the advantages of image and LiDAR datasets.

  10. Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.

    Science.gov (United States)

    Weinreb, Gabriel; Lentz, Barry R

    2007-06-01

    We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state ((A) membrane contact) through two sequential intermediate states (I(1) and I(2)) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I(1)), but content mixing signal was also observed in this state for most systems. The second intermediate state (I(2)) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but

  11. Rectangular amplitudes, conformal blocks, and applications to loop models

    Energy Technology Data Exchange (ETDEWEB)

    Bondesan, Roberto, E-mail: roberto.bondesan@cea.fr [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Jacobsen, Jesper L. [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris (France); Saleur, Hubert [Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Physics Department, USC, Los Angeles, CA 90089-0484 (United States)

    2013-02-21

    In this paper we continue the investigation of partition functions of critical systems on a rectangle initiated in [R. Bondesan, et al., Nucl. Phys. B 862 (2012) 553-575]. Here we develop a general formalism of rectangle boundary states using conformal field theory, adapted to describe geometries supporting different boundary conditions. We discuss the computation of rectangular amplitudes and their modular properties, presenting explicit results for the case of free theories. In the second part of the paper we focus on applications to loop models, discussing in detail lattice discretizations using both numerical and analytical calculations. These results allow us to interpret conformal blocks geometrically, and as an application we derive new probability formulas for self-avoiding walks.

  12. Block-based approach to modelling of granulated fertilizers' quality

    DEFF Research Database (Denmark)

    Kohonen, J.; Reinikainen, S. P.; Høskuldsson, Agnar

    2009-01-01

    Fertilizer manufacturing is a customer-driven industry, where the quality of a product is a key factor in order to survive the competition. However, measuring the most important feature of granulated fertilizers, flowability, is tedious, time-consuming and thus expensive. Flowability can be defined through testing the flow rate with, e.g., a seed drill. Besides the chemical composition, flowability can be considered one of the most important characteristics. There are numerous factors affecting the flowability of a granulated fertilizer, several of them related to the particle size distribution. The goals are to find a reliable model for flowability using these data, to find the most important variables, and to identify the effect of blocks on the quality.

  13. Distributed data access in the sequential access model at the D0 experiment at Fermilab

    International Nuclear Information System (INIS)

    Terekhov, Igor; White, Victoria

    2000-01-01

    The authors present the Sequential Access Model (SAM), which is the data handling system for D0, one of two primary High Energy Experiments at Fermilab. During the next several years, the D0 experiment will store a total of about 1 PByte of data, including raw detector data and data processed at various levels. The design of SAM is not specific to the D0 experiment and carries few assumptions about the underlying mass storage level; its ideas are applicable to any sequential data access. By definition, in the sequential access mode a user application needs to process a stream of data, by accessing each data unit exactly once, the order of data units in the stream being irrelevant. The units of data are laid out sequentially in files. The adopted model allows for significant optimizations of system performance, decrease of user file latency and increase of overall throughput. In particular, caching is done with the knowledge of all the files needed in the near future, defined as all the files of the already running or submitted jobs. The bulk of the data is stored in files on tape in the mass storage system (MSS) called Enstore[2] and also developed at Fermilab. (The tape drives are served by an ADIC AML/2 Automated Tape Library). At any given time, SAM has a small fraction of the data cached on disk for processing. In the present paper, the authors discuss how data is delivered onto disk and how it is accessed by user applications. They will concentrate on data retrieval (consumption) from the MSS; when SAM is used for storing of data, the mechanisms are rather symmetrical. All of the data managed by SAM is cataloged in great detail in a relational database (ORACLE). The database also serves as the persistency mechanism for the SAM servers described in this paper. Any client or server in the SAM system which needs to store or retrieve information from the database does so through the interfaces of a CORBA-based database server. The users (physicists) use the

  14. A 2d Block Model For Landslide Simulation: An Application To The 1963 Vajont Case

    Science.gov (United States)

    Tinti, S.; Zaniboni, F.; Manucci, A.; Bortolucci, E.

    A 2D block model to study the motion of a sliding mass is presented. The slide is partitioned into a matrix of blocks whose bases are quadrilaterals. The blocks move on a specified sliding surface and follow a trajectory that is computed by the model. The forces acting on the blocks are gravity, basal friction, buoyancy in case of underwater motion, and interaction with neighbouring blocks. At any time step, the position of the blocks on the sliding surface is determined in curvilinear (local) co-ordinates by computing the position of the vertices of the quadrilaterals and the position of the block centre of mass. Mathematically, the topology of the system is invariant during the motion, which means that the number of blocks is constant and that each block always has the same neighbours. Physically, this means that blocks are allowed to change form, but not to penetrate into each other, coalesce, or split. The change of form is compensated by a change of height, under the computational assumption that the block volume is constant during motion: consequently, lateral expansion or contraction yields height reduction or increment of the blocks, respectively. This model is superior to the analogous 1D model, in which the mass is partitioned into a chain of interacting blocks. 1D models require the a priori specification of the sliding path, that is of the trajectory of the blocks, which the 2D block model supplies as one of its outputs. In continuation of previous studies on the catastrophic slide of Vajont that occurred in 1963 in northern Italy and caused more than 2000 victims, the 2D block model has been applied to the Vajont case. The results are compared to the outcome of the 1D model and, more importantly, to the observational data concerning the deposit position and morphology. The agreement between simulation and data is found to be quite good.
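
    The force balance described above can be illustrated with a deliberately reduced sketch: a single block's centre of mass moving along a planar slope under gravity and Coulomb basal friction. The slope angle, friction coefficient and time step are invented, and the matrix-of-blocks bookkeeping, buoyancy and inter-block interactions of the actual model are omitted.

```python
import math

def slide_block(theta_deg=30.0, mu=0.4, dt=0.01, t_end=10.0, g=9.81):
    """Centre-of-mass motion of a single block on a planar slope:
    down-slope gravity component minus Coulomb basal friction."""
    theta = math.radians(theta_deg)
    drive = g * math.sin(theta)          # down-slope gravitational acceleration
    frict = mu * g * math.cos(theta)     # maximum friction deceleration
    s, v = 0.0, 0.0                      # curvilinear position and velocity
    t = 0.0
    while t < t_end:
        a = drive - frict if (v > 0.0 or drive > frict) else 0.0
        v = max(v + a * dt, 0.0)         # friction never drives the block uphill
        s += v * dt
        t += dt
    return s, v

print(slide_block())   # runout distance (m) and speed (m/s) after 10 s
```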

  15. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  16. Patient specific dynamic geometric models from sequential volumetric time series image data.

    Science.gov (United States)

    Cameron, B M; Robb, R A

    2004-01-01

    Generating patient specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the metasurface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface and allows the computation of the time point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
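
    The stochastic EnKF analysis step mentioned above can be sketched in a few lines. The following is a generic textbook-style update (perturbed observations, ensemble-estimated covariances), not the W3RA/GRACE implementation; the observation operator, ensemble size and error statistics are placeholder assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state) model states (e.g. water storage per layer)
    obs:      (n_obs,) observation vector (e.g. a TWS anomaly)
    H:        (n_obs, n_state) linear observation operator
    """
    X = ensemble
    A = X - X.mean(axis=0)                        # state anomalies
    Y = A @ H.T                                   # observation-space anomalies
    n = X.shape[0]
    P_yy = Y.T @ Y / (n - 1) + np.diag(np.full(len(obs), obs_err_std**2))
    P_xy = A.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    # Perturb observations (the "stochastic" EnKF variant named in the abstract).
    obs_pert = obs + rng.normal(0.0, obs_err_std, size=(n, len(obs)))
    return X + (obs_pert - X @ H.T) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(size=(30, 5))                    # 30 members, 5 storage states
H = np.ones((1, 5))                               # observed TWS = sum of storages (assumed)
print(enkf_update(ens, np.array([2.0]), H, 0.5, rng).mean(axis=0))
```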

  18. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  19. Sequential assimilation of multi-mission dynamical topography into a global finite-element ocean model

    Directory of Open Access Journals (Sweden)

    S. Skachko

    2008-12-01

    This study focuses on an accurate estimation of ocean circulation via assimilation of satellite measurements of ocean dynamical topography into the global finite-element ocean model (FEOM). The dynamical topography data are derived from a complex analysis of multi-mission altimetry data combined with a referenced earth geoid. The assimilation is split into two parts. First, the mean dynamic topography is adjusted. To this end an adiabatic pressure correction method is used which reduces model divergence from the real evolution. Second, a sequential assimilation technique is applied to improve the representation of thermodynamical processes by assimilating the time varying dynamic topography. A method is used according to which the temperature and salinity are updated following the vertical structure of the first baroclinic mode. It is shown that the method leads to a partially successful assimilation approach reducing the rms difference between the model and data from 16 cm to 2 cm. This improvement of the mean state is accompanied by significant improvement of temporal variability in our analysis. However, it remains suboptimal, showing a tendency in the forecast phase of returning toward a free run without data assimilation. Both the mean difference and standard deviation of the difference between the forecast and observation data are reduced as the result of assimilation.

  20. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    Science.gov (United States)

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.

  1. Prospectivity Modeling of Karstic Groundwater Using a Sequential Exploration Approach in Tepal Area, Iran

    Science.gov (United States)

    Sharifi, Fereydoun; Arab-Amiri, Ali Reza; Kamkar-Rouhani, Abolghasem; Yousefi, Mahyar; Davoodabadi-Farahani, Meysam

    2017-09-01

    The purpose of this study is water prospectivity modeling (WPM) for recognizing karstic water-bearing zones using analyses of geo-exploration data in the Kal-Qorno valley, located in the Tepal area, north of Iran. For this, a sequential exploration method was applied to geo-evidential data to delineate target areas for further exploration. In this regard, two major exploration phases, at regional and local scales, were performed. In the first phase, indicator geological features, structures and lithological units, were used to model groundwater prospectivity at a regional scale. In this phase, for karstic WPM, fuzzy lithological and structural evidence layers were generated and combined using fuzzy operators. After generating target areas using WPM, in the second phase geophysical surveys including gravimetry and geoelectrical resistivity were carried out on the recognized high-potential zones as local-scale exploration. Finally, the results of the geophysical analyses in the second phase were used to select suitable drilling locations to access and extract karstic groundwater in the study area.
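
    A minimal sketch of the evidence-combination step, assuming two hypothetical fuzzy layers and standard fuzzy operators (min, max, and the gamma operator); the actual layers and operator choices used in the study may differ.

```python
import numpy as np

# Hypothetical fuzzy evidence layers on a small grid (values in [0, 1]):
# membership to "favourable lithology" and "proximity to karst-prone structures".
litho  = np.array([[0.9, 0.6], [0.2, 0.8]])
struct = np.array([[0.7, 0.4], [0.5, 0.9]])

fuzzy_and = np.minimum(litho, struct)        # conservative combination
fuzzy_or  = np.maximum(litho, struct)        # optimistic combination

gamma = 0.7                                  # gamma operator trades off the two extremes
algebraic_sum  = 1 - (1 - litho) * (1 - struct)
algebraic_prod = litho * struct
prospectivity  = algebraic_sum**gamma * algebraic_prod**(1 - gamma)

print(prospectivity)   # high values flag targets for the local-scale geophysical phase
```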

  2. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
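
    A rough sketch of how such a sequential (three-path) indirect effect can be estimated with ordinary least-squares path regressions and a percentile bootstrap; the data here are simulated and the variable names are placeholders, so this is not the authors' analysis pipeline (which used structural equation modeling).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 351
# Simulated standardised scores (illustrative only).
wfc = rng.normal(size=n)                           # work-family conflict
na  = 0.4 * wfc + rng.normal(size=n)               # negative affect
ps  = 0.5 * na + rng.normal(size=n)                # perceived stress
mh  = -0.3 * ps - 0.1 * wfc + rng.normal(size=n)   # mental health

def path(y, X):
    """OLS slopes: regress y on X (with intercept), return non-intercept coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def indirect(idx):
    a = path(na[idx], wfc[idx])[0]                                        # WFC -> negative affect
    b = path(ps[idx], np.column_stack([na[idx], wfc[idx]]))[0]            # NA -> perceived stress
    c = path(mh[idx], np.column_stack([ps[idx], na[idx], wfc[idx]]))[0]   # stress -> mental health
    return a * b * c                                                      # sequential indirect effect

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
print("point estimate:", indirect(np.arange(n)))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```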

  3. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  4. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Directory of Open Access Journals (Sweden)

    Shiyi Zhou

    2018-04-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  5. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
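
    The reliability indices named above can be illustrated with a toy sequential Monte Carlo loop: sample many realisations of hourly supply and demand, then count shortfall hours (loss of load probability) and sum the unserved energy (expected energy unserved). The demand shape, wind model and outage probability below are invented placeholders, not the GB network data used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
hours, samples = 168, 500            # one midwinter week, many sampled weeks

def simulate_week():
    """Toy hourly balance: wind + thermal (with random outages) vs demand (MW)."""
    demand  = 50_000 + 5_000 * np.sin(np.arange(hours) * 2 * np.pi / 24)
    wind    = 10_000 * rng.random(hours)                  # variable wind output
    thermal = 48_000 * rng.binomial(1, 0.95, size=hours)  # crude forced-outage model
    return demand - (wind + thermal)                      # positive = shortfall

shortfalls = np.array([simulate_week() for _ in range(samples)])
lolp = np.mean(shortfalls > 0)                            # loss of load probability
eens = np.mean(np.clip(shortfalls, 0, None).sum(axis=1))  # MWh unserved per week
print(f"LOLP = {lolp:.4f}, EENS = {eens:.0f} MWh per week")
```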

  6. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    Science.gov (United States)

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and a coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of the zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissue relative to the amounts of reagents injected, considerably greater than acid alone with either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price, with one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  7. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical...

  8. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical...

  9. Normative personality trait development in adulthood: A 6-year cohort-sequential growth model.

    Science.gov (United States)

    Milojev, Petar; Sibley, Chris G

    2017-03-01

    The present study investigated patterns of normative change in personality traits across the adult life span (19 through 74 years of age). We examined change in extraversion, agreeableness, conscientiousness, neuroticism, openness to experience and honesty-humility using data from the first 6 annual waves of the New Zealand Attitudes and Values Study (N = 10,416; 61.1% female, average age = 49.46). We present a cohort-sequential latent growth model assessing patterns of mean-level change due to both aging and cohort effects. Extraversion decreased as people aged, with the most pronounced declines occurring in young adulthood, and then again in old age. Agreeableness, indexed with a measure focusing on empathy, decreased in young adulthood and remained relatively unchanged thereafter. Conscientiousness increased among young adults then leveled off and remained fairly consistent for the rest of the adult life span. Neuroticism and openness to experience decreased as people aged. However, the models suggest that these latter effects may also be partially due to cohort differences, as older people showed lower levels of neuroticism and openness to experience more generally. Honesty-humility showed a pronounced and consistent increase across the adult life span. These analyses of large-scale longitudinal national probability panel data indicate that different dimensions of personality follow distinct developmental processes throughout adulthood. Our findings also highlight the importance of young adulthood (up to about the age of 30) in personality trait development, as well as continuing change throughout the adult life span. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  11. Effect of Nonsteroidal Anti-inflammatory Drug as an Oral Premedication on the Anesthetic Success of Inferior Alveolar Nerve Block in Treatment of Irreversible Pulpitis: A Systematic Review with Meta-analysis and Trial Sequential Analysis.

    Science.gov (United States)

    Nagendrababu, Venkateshbabu; Pulikkotil, Shaju Jacob; Veettil, Sajesh K; Teerawattanapong, Nattawat; Setzer, Frank C

    2018-06-01

    Successful anesthesia with an inferior alveolar nerve block (IANB) is imperative for treating patients with irreversible pulpitis in mandibular teeth. This systematic review assessed the efficacy of nonsteroidal anti-inflammatory drugs (NSAIDs) as oral premedications on the success of IANBs in irreversible pulpitis. Three databases were searched to identify randomized clinical trials (RCTs) published up until September 2017. Retrieved RCTs were evaluated using the revised Cochrane Risk of Bias Tool. The primary efficacy outcome of interest was the success rate of IANB anesthesia. Meta-analytic estimates (risk ratios [RRs] with 95% confidence intervals [CIs]) were obtained using a random-effects model, and publication bias was assessed using funnel plot analysis. Random errors were evaluated with trial sequential analyses (TSA), and the quality of evidence was appraised using a Grading of Recommendations, Assessment, Development and Evaluation approach. Thirteen RCTs (N = 1034) were included. Eight studies had a low risk of bias. Statistical analysis of good-quality RCTs showed a significant beneficial effect of any NSAID in increasing the anesthetic success of IANBs compared with placebo (RR = 1.92; 95% CI, 1.55-2.38). Subgroup analyses showed a similar beneficial effect for ibuprofen, diclofenac, and ketorolac (RR = 1.83 [95% CI, 1.43-2.35], RR = 2.56 [95% CI, 1.46-4.50], and RR = 2.07 [95% CI, 1.47-2.90], respectively). Dose-dependent ibuprofen >400 mg/d (RR = 1.85; 95% CI, 1.39-2.45) was shown to be effective; however, ibuprofen ≤400 mg/d showed no association (RR = 1.78; 95% CI, 0.90-3.55). TSA confirmed conclusive evidence for a beneficial effect of NSAIDs for IANB premedication. The Grading of Recommendations, Assessment, Development and Evaluation approach did not reveal any concerns regarding the quality of the results. Oral premedication with NSAIDs and ibuprofen (>400 mg/d) increased the anesthetic success of IANBs in patients with irreversible
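
    For readers unfamiliar with the pooling step, the sketch below shows a DerSimonian-Laird random-effects meta-analysis of log risk ratios, the type of estimate reported above; the per-trial counts are made-up placeholders, not the thirteen included RCTs.

```python
import numpy as np

# Hypothetical per-trial counts (successes/total in NSAID and placebo arms);
# placeholders only, not the trials analysed in the review.
nsaid   = [(30, 50), (22, 40), (35, 60)]
placebo = [(15, 50), (12, 40), (20, 60)]

log_rr, var = [], []
for (e1, n1), (e0, n0) in zip(nsaid, placebo):
    log_rr.append(np.log((e1 / n1) / (e0 / n0)))
    var.append(1/e1 - 1/n1 + 1/e0 - 1/n0)          # variance of the log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

# DerSimonian-Laird estimate of the between-trial variance (tau^2).
w = 1 / var
fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - fixed) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)                            # random-effects weights
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
rr, lo, hi = np.exp(pooled), np.exp(pooled - 1.96*se), np.exp(pooled + 1.96*se)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```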

  12. From spinning conformal blocks to matrix Calogero-Sutherland models

    Science.gov (United States)

    Schomerus, Volker; Sobko, Evgeny

    2018-04-01

    In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.

  13. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Guiseppe, Cavaliere; Rahbæk, Anders; Taylor, A.M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice....

  14. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    OpenAIRE

    Shiyi Zhou; Shu Da; Heng Guo; Xichao Zhang

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relat...

  15. Modeling of IPMC cantilever’s displacements and blocking forces

    Czech Academy of Sciences Publication Activity Database

    Vokoun, David; He, Q.; Heller, Luděk; Yu, M.; Dai, Z.

    2015-01-01

    Roč. 12, č. 1 (2015), s. 142-151 ISSN 1672-6529 R&D Projects: GA ČR GB14-36566G Institutional support: RVO:68378271 Keywords : ionic polymer metal composite * actuator * blocking force * finite element method Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.466, year: 2015

  16. Modelling of composite concrete block pavement systems applying a cohesive zone model

    DEFF Research Database (Denmark)

    Skar, Asmus; Poulsen, Peter Noe

    This paper presents a numerical analysis of the fracture behaviour of the cement bound base material in composite concrete block pavement systems, using a cohesive zone model. The functionality of the proposed model is tested on experimental and numerical investigations of beam bending tests....... The pavement is modelled as a simple slab on grade structure and parameters influencing the response, such as analysis technique, geometry and material parameters are studied. Moreover, the analysis is extended to a real scale example, modelling the pavement as a three-layered structure. It is found...... block pavements. It is envisaged that the methodology implemented in this study can be extended and thereby contribute to the ongoing development of rational failure criteria that can replace the empirical formulas currently used in pavement engineering....

  17. Blocking layer modeling for temperature analysis of electron transfer ...

    African Journals Online (AJOL)

    In this article, we simulate thermal effects on the electron transfer rate from three quantum dots CdSe, CdS and CdTe to three metal oxides TiO2, SnO2 and ZnO2 in the presence of four blocking layers ZnS, ZnO, TiO2 and Al2O3, in a porous quantum dot sensitized solar cell (QDSSC) structure, using Marcus theory.

  18. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...... the sensitive fraction of the commensal flora.Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were...... (how frequently antibiotics are alternated in a sequential treatment) of the two drugs was dependent upon the order in which the two drugs were used.Conclusion: Sequential treatment was more effective in preventing the growth of resistant strains when compared to the combination treatment. The cycling...
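
    The modeling idea, competing sensitive and resistant strains whose net growth rates depend on the current drug concentrations, can be sketched as a toy simulation comparing an alternating (sequential) schedule with constant combination dosing. All parameters below are invented; which schedule suppresses the resistant strain better depends entirely on those parameters, so the sketch illustrates the model structure only, not the study's quantitative findings.

```python
import numpy as np

def inhibition(conc, emax=1.5, ec50=1.0):
    """Concentration-dependent reduction of the net growth rate (Emax model)."""
    return emax * conc / (conc + ec50)

def simulate(schedule, r0=0.7, days=10, dt=0.01):
    """schedule(t) -> (conc_A, conc_B); returns final sensitive/resistant loads."""
    s, r = 1e6, 1e2                  # initial CFU: sensitive and resistant strain
    for t in np.arange(0, days, dt):
        a, b = schedule(t)
        # The sensitive strain is inhibited by both drugs; the resistant strain
        # is assumed resistant to drug A and therefore inhibited only by drug B.
        ds = r0 - inhibition(a) - inhibition(b)
        dr = r0 - inhibition(b)
        s = max(s * np.exp(ds * dt), 1.0)
        r = max(r * np.exp(dr * dt), 1.0)
    return s, r

sequential = lambda t: (2.0, 0.0) if int(t) % 2 == 0 else (0.0, 2.0)   # alternate daily
combination = lambda t: (1.0, 1.0)                                     # both drugs together
print("sequential :", simulate(sequential))
print("combination:", simulate(combination))
```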

  19. A casemix model for estimating the impact of hospital access block on the emergency department.

    Science.gov (United States)

    Stuart, Peter

    2004-06-01

    To determine the ED activity and costs resulting from access block. A casemix model (AWOOS) was developed to measure activity due to access block. Using data from four hospitals between 1998 and 2002, ED activity was measured using the urgency and disposition group (UDG) casemix model and the AWOOS model with the purpose of determining the change in ED activity due to access block. Whilst the mean length of stay in ED (admitted patients) increased by 93% between 1998 and 2002, mean UDG activity increased by 0.63% compared to a mean increase in AWOOS activity of 24.5%. The 23.9% difference between UDG and AWOOS activity represents the (unmeasured) increase in ED activity and costs for the period 1998-2002 resulting from access block. The UDG system significantly underestimates the activity in EDs experiencing marked access block.

  20. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    Science.gov (United States)

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
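
    A small sketch of the Rasch-style setup assumed above: the probability of a utilitarian response is a logistic function of Ability minus Difficulty, so at A = D both responses are equally likely and response probability alone cannot explain an RT difference there. The numbers are illustrative only.

```python
import numpy as np

def p_utilitarian(ability, difficulty):
    """Rasch-style model: probability of a utilitarian response."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# At the point where A = D the two responses are equally likely, so any RT
# difference observed there cannot be attributed to response probability.
for a_minus_d in (-2.0, 0.0, 2.0):
    print(f"A - D = {a_minus_d:+.1f}  ->  P(utilitarian) = "
          f"{p_utilitarian(a_minus_d, 0.0):.2f}")
```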

  1. Blocking Radial Diffusion in a Double-Waved Hamiltonian Model

    International Nuclear Information System (INIS)

    Martins, Caroline G L; De Carvalho, R Egydio; Marcus, F A; Caldas, I L

    2011-01-01

    A non-twist Hamiltonian system perturbed by two waves with particular wave numbers can present Robust Tori, barriers created by the vanishing of the perturbing Hamiltonian at some defined positions. When Robust Tori exist, any trajectory in phase space passing close to them is blocked by emergent invariant curves that prevent the chaotic transport. We analyze the breaking up of the Robust Tori as well as the dependence of the transport on the wave numbers and on the wave amplitudes. Moreover, we report the chaotic web formation in the phase space and how this pattern influences the transport.

  2. Completely random measures for modelling block-structured sparse networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2016-01-01

    Many statistical methods for network data parameterize the edge-probability by attributing latent traits to the vertices such as block structure and assume exchangeability in the sense of the Aldous-Hoover representation theorem. Empirical studies of networks indicate that many real-world networks...... have a power-law distribution of the vertices which in turn implies the number of edges scale slower than quadratically in the number of vertices. These assumptions are fundamentally irreconcilable as the Aldous-Hoover theorem implies quadratic scaling of the number of edges. Recently Caron and Fox...

  3. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance...... is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  4. Sequential radioimmunotherapy with 177Lu- and 211At-labeled monoclonal antibody BR96 in a syngeneic rat colon carcinoma model

    DEFF Research Database (Denmark)

    Eriksson, Sophie E; Elgström, Erika; Bäck, Tom

    2014-01-01

    for small, established tumors. A combination of such radionuclides may be successful in regimens of radioimmunotherapy. In this study, rats were treated by sequential administration of first a 177Lu-labeled antibody, followed by a 211At-labeled antibody 25 days later. METHODS: Rats bearing solid colon...... carcinoma tumors were treated with 400 MBq/kg body weight 177Lu-BR96. After 25 days, three groups of animals were given either 5 or 10 MBq/kg body weight of 211At-BR96 simultaneously with or without a blocking agent reducing halogen uptake in normal tissues. Control animals were not given any 211At-BR96....... The rats suffered from reversible myelotoxicity after treatment. CONCLUSIONS: Sequential administration of 177Lu-BR96 and 211At-BR96 resulted in tolerable toxicity providing halogen blocking but did not enhance the therapeutic effect....

  5. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  6. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    Science.gov (United States)

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedures. The accuracy and effectiveness of the schemes are assessed in the examples; the computation results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
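
    As a hedged illustration of the kind of computation a fractional operator block performs, the sketch below implements the Grünwald-Letnikov approximation of a fractional-order derivative in Python and checks it against a known closed form; it is not the Simulink block described in the paper.

```python
import numpy as np

def gl_fractional_derivative(y, alpha, dt):
    """Grünwald-Letnikov approximation of the order-alpha derivative of sampled y(t)."""
    n = len(y)
    # Recursive binomial weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.array([np.dot(w[:k + 1], y[k::-1]) for k in range(n)])
    return d / dt**alpha

t = np.linspace(0, 1, 201)
dt = t[1] - t[0]
# The 0.5-order derivative of f(t) = t equals 2*sqrt(t/pi); compare numerically.
approx = gl_fractional_derivative(t, 0.5, dt)
exact = 2 * np.sqrt(t / np.pi)
print("max error:", np.max(np.abs(approx - exact)))
```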

  7. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-11-24

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of
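
    A minimal sketch of the stochastic block model idea: sample a two-community graph from within- and between-block edge probabilities and recover the communities, here with a simple spectral split rather than the scalable algorithm proposed in the paper. All sizes and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
sizes = [50, 50]                      # two communities
p_in, p_out = 0.3, 0.02               # within/between-block edge probabilities

labels = np.repeat([0, 1], sizes)
n = labels.size
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, k=1)
A = (A | A.T).astype(float)           # symmetric adjacency, no self-loops

# Simple spectral recovery: the sign of the second eigenvector of the adjacency
# matrix splits the two blocks (not the scalable algorithm of the paper).
vals, vecs = np.linalg.eigh(A)
guess = (vecs[:, -2] > 0).astype(int)
acc = max(np.mean(guess == labels), np.mean(guess != labels))
print("community recovery accuracy:", acc)
```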

  8. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  9. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    International Nuclear Information System (INIS)

    Chen, W.-Y.; Tsai, J.-W.; Ju, Y.-R.; Liao, C.-M.

    2010-01-01

    The purpose of this paper was to use a quantitative systems-level approach employing a biotic ligand model-based threshold damage model to examine the physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and triggering mechanisms by carrying out a series of modeling experiments in which we used periodic pulses and sine waves as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for the freshwater clam. We found that tilapia increase bioenergetic costs to maintain acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns also has implications for optimizing species growth, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of the pulsed and fluctuating arsenic exposures.

  10. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    Energy Technology Data Exchange (ETDEWEB)

    Chen, W.-Y. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Tsai, J.-W. [Institute of Ecology and Evolutionary Ecology, China Medical University, Taichung 40402, Taiwan (China); Ju, Y.-R. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Liao, C.-M., E-mail: cmliao@ntu.edu.t [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China)

    2010-05-15

    The purpose of this paper was to use a quantitative systems-level approach, employing a biotic ligand model-based threshold damage model, to examine the physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and its triggering mechanisms by carrying out a series of modeling experiments in which periodic pulses and sine waves were used as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for the freshwater clam. We found that tilapia increase bioenergetic costs to maintain acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns also has implications for optimizing species growth, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of pulsed and fluctuating arsenic exposures.

  11. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  12. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    Science.gov (United States)

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In addition, we introduce an adaptive "learning rate" that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.
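
    The adaptive "learning rate" idea described above can be illustrated with a toy online classifier that keeps one running mean per class and blends old parameters with each new observation. This is only a hedged sketch of the general principle, not the HDP-HMM model of the paper; all names and values are our own.

```python
import numpy as np

class OnlineClassMeans:
    """Toy online classifier: one running mean per class, updated with a
    learning rate that balances retaining old parameters against adapting
    to new observations; unseen class labels are added on the fly."""

    def __init__(self):
        self.means = {}                      # class label -> mean vector

    def update(self, x, label, eta):
        # eta near 0 retains the previous parameters, eta near 1 adapts fast
        x = np.asarray(x, dtype=float)
        if label not in self.means:
            self.means[label] = x.copy()
        else:
            self.means[label] = (1.0 - eta) * self.means[label] + eta * x

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

model = OnlineClassMeans()
model.update([0.1, 0.2], label="walk", eta=0.3)
model.update([2.0, 1.8], label="run", eta=0.3)
print(model.classify([0.0, 0.3]))            # -> "walk"
```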

  13. A Partial Proportional Odds Model for Pedestrian Crashes at Mid-Blocks in Melbourne Metropolitan Area

    Directory of Open Access Journals (Sweden)

    Toran Pour Alireza

    2016-01-01

    Full Text Available Pedestrian crashes account for 11% of all reported traffic crashes in Melbourne metropolitan area between 2004 and 2013. There are very limited studies on pedestrian accidents at mid-blocks. Mid-block crashes account for about 46% of the total pedestrian crashes in Melbourne metropolitan area. Meanwhile, about 50% of all pedestrian fatalities occur at mid-blocks. In this research, the Partial Proportional Odds (PPO) model is applied to examine vehicle-pedestrian crash severity at mid-blocks in Melbourne metropolitan area. The PPO model is a logistic regression model that allows the covariates that meet the proportional odds assumption to affect different crash severity levels with the same magnitude, whereas the covariates that do not meet the proportional odds assumption can have different effects on different severity levels. In this research, vehicle-pedestrian crashes at mid-blocks are analysed for the first time. In addition, some factors such as the distance of crashes to public transport stops, average road slope and some social characteristics are considered for the first time in developing the model. Results of the PPO model show that speed limit, light condition, pedestrian age and gender, and vehicle type are the most significant factors that influence vehicle-pedestrian crash severity at mid-blocks.
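
    A hedged sketch of the probability structure assumed by a partial proportional odds model is given below: covariates satisfying the proportional odds assumption share one coefficient vector across all cut-points, while the remaining covariates receive cut-point-specific coefficients. The variable names and numbers are illustrative and are not taken from the Melbourne data.

```python
import numpy as np

def ppo_category_probs(x_po, x_npo, beta, gammas, taus):
    """Category probabilities for an ordinal response under a partial
    proportional odds model.
    x_po  : covariates satisfying proportional odds (one beta for all cuts)
    x_npo : covariates violating it (one gamma vector per cut-point)
    taus  : J-1 cut-points for J ordered severity levels"""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Cumulative probabilities P(Y <= j), j = 1..J-1
    cum = np.array([sigmoid(taus[j] - x_po @ beta - x_npo @ gammas[j])
                    for j in range(len(taus))])
    cum = np.concatenate([cum, [1.0]])
    return np.diff(np.concatenate([[0.0], cum]))   # P(Y = j) for each level

# Three severity levels, one proportional-odds covariate and one non-PO covariate
probs = ppo_category_probs(x_po=np.array([60.0]), x_npo=np.array([1.0]),
                           beta=np.array([0.02]),
                           gammas=[np.array([0.5]), np.array([1.2])],
                           taus=np.array([1.5, 3.0]))
```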

  14. Tibetan Microblog Emotional Analysis Based on Sequential Model in Online Social Platforms

    Directory of Open Access Journals (Sweden)

    Lirong Qiu

    2017-01-01

    Full Text Available With the development of microblogs, selling and buying now take place on online social platforms such as Sina Weibo and Wechat. Besides Mandarin, the Tibetan language is also used to describe products and customers' opinions. In this paper, we are interested in analyzing the emotions of Tibetan microblogs, which are helpful for understanding the opinions and product reviews of Tibetan customers. This is challenging, since existing studies have paid little attention to the Tibetan language. Our key idea is to express Tibetan microblogs as vectors and then classify them. To represent microblogs more fully, we select two kinds of features: sequential features and semantic features. In addition, our experimental results on the Sina Weibo dataset clearly demonstrate the effectiveness of feature selection and the efficiency of our classification method.

  15. Excitation block in a nerve fibre model owing to potassium-dependent changes in myelin resistance.

    Science.gov (United States)

    Brazhe, A R; Maksimov, G V; Mosekilde, E; Sosnovtseva, O V

    2011-02-06

    The myelinated nerve fibre is formed by an axon and Schwann cells or oligodendrocytes that sheath the axon by winding around it in tight myelin layers. Repetitive stimulation of a fibre is known to result in accumulation of extracellular potassium ions, especially between the axon and the myelin. Uptake of potassium leads to Schwann cell swelling and myelin restructuring that impacts the electrical properties of the myelin. In order to further understand the dynamic interaction that takes place between the myelin and the axon, we have modelled submyelin potassium accumulation and related changes in myelin resistance during prolonged high-frequency stimulation. We predict that potassium-mediated decrease in myelin resistance leads to a functional excitation block with various patterns of altered spike trains. The patterns are found to depend on stimulation frequency and amplitude and to range from no block (less than 100 Hz) to a complete block (greater than 500 Hz). The transitional patterns include intermittent periodic block with interleaved spiking and non-spiking intervals of different relative duration as well as an unstable regime with chaotic switching between the spiking and non-spiking states. Intermittent conduction blocks are accompanied by oscillations of extracellular potassium. The mechanism of conductance block based on myelin restructuring complements the already known and modelled block via hyperpolarization mediated by the axonal sodium pump and potassium depolarization.

  16. DETERMINATION OF RESOLUTION LIMITS OF ELECTRICAL TOMOGRAPHY ON THE BLOCK MODEL IN A HOMOGENOUS ENVIRONMENT BY MEANS OF ELECTRICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Franjo Šumanovac

    2007-12-01

    Full Text Available The block model in a homogeneous environment can generally serve to represent several geological situations: changes of facies, changes in rock compactness and fragmentation, underground cavities, bauxite deposits, etc. Therefore, the potential of the electrical tomography method to detect a block of increased resistivity in a homogeneous low-resistivity environment was tested. The ability to detect a block depends on the depth of the block, on the ratio between the block resistivity and that of the surrounding environment, and on the survey geometry, i.e. the electrode array. Analyses were therefore carried out for the electrode arrays most frequently used in such investigations: the Wenner, Wenner-Schlumberger, dipole-dipole and pole-pole arrays. For each array, the maximum depth at which a block can be detected was analyzed relative to the ratio between the block resistivity and that of the parent rock. The results are shown in two-dimensional graphs, where the ratio between the block resistivity and the environment is plotted on the X-axis and the resolution depth on the Y-axis, after which curves defining the resolution limits were drawn. These graphs have a practical use, since they enable fast and simple determination of the applicability of the method to a specific geological model.

  17. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
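
    To make the data assimilation step concrete, here is a minimal stochastic Ensemble Kalman Filter analysis step on an augmented ensemble that stacks states (e.g. heads) and parameters (e.g. log-conductivities). It is a generic sketch under our own assumptions, not the authors' groundwater model or their on-line algorithm; all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, obs_operator, obs_var):
    """One stochastic EnKF analysis step on an augmented ensemble whose
    columns are members and whose rows stack states and parameters."""
    n_var, n_ens = ensemble.shape
    Y = obs_operator @ ensemble            # predicted observations per member
    X_mean = ensemble.mean(axis=1, keepdims=True)
    Y_mean = Y.mean(axis=1, keepdims=True)
    Xp, Yp = ensemble - X_mean, Y - Y_mean
    cov_xy = Xp @ Yp.T / (n_ens - 1)
    cov_yy = Yp @ Yp.T / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = cov_xy @ np.linalg.inv(cov_yy)     # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_ens))
    return ensemble + K @ (perturbed - Y)

# 3 heads + 1 log-conductivity, 50 members, one head observed directly
ens = rng.normal(size=(4, 50))
H = np.array([[1.0, 0.0, 0.0, 0.0]])
ens = enkf_update(ens, obs=np.array([0.8]), obs_operator=H, obs_var=0.01)
```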

  18. Direct methods and residue type specific isotope labeling in NMR structure determination and model-driven sequential assignment

    International Nuclear Information System (INIS)

    Schedlbauer, Andreas; Auer, Renate; Ledolter, Karin; Tollinger, Martin; Kloiber, Karin; Lichtenecker, Roman; Ruedisser, Simon; Hommel, Ulrich; Schmid, Walther; Konrat, Robert; Kontaxis, Georg

    2008-01-01

    Direct methods in NMR based structure determination start from an unassigned ensemble of unconnected gaseous hydrogen atoms. Under favorable conditions they can produce low resolution structures of proteins. Usually a prohibitively large number of NOEs is required to solve a protein structure ab initio, but even with a much smaller set of distance restraints low resolution models can be obtained which resemble a protein fold. One problem is that at such low resolution and in the absence of a force field it is impossible to distinguish the correct protein fold from its mirror image. In a hybrid approach these ambiguous models have the potential to aid in the process of sequential backbone chemical shift assignment when 13Cβ and 13C' shifts are not available for sensitivity reasons. Regardless of the overall fold they enhance the information content of the NOE spectra. These, combined with residue specific labeling and minimal triple-resonance data using 13Cα connectivity, can provide almost complete sequential assignment. Strategies for residue type specific labeling with customized isotope labeling patterns are of great advantage in this context. Furthermore, this approach is to some extent error-tolerant with respect to data incompleteness, limited precision of the peak picking, and structural errors caused by misassignment of NOEs.

  19. Modelling the sequential geographical exploitation and potential collapse of marine fisheries through economic globalization, climate change and management alternatives

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2011-07-01

    Full Text Available Global marine fisheries production has reached a maximum and may even be declining. Underlying this trend is a well-understood sequence of development, overexploitation, depletion and in some instances collapse of individual fish stocks, a pattern that can sequentially link geographically distant populations. Ineffective governance, economic considerations and climate impacts are often responsible for this sequence, although the relative contribution of each factor is contentious. In this paper we use a global bioeconomic model to explore the synergistic effects of climate variability, economic pressures and management measures in causing or avoiding this sequence. The model shows how a combination of climate-induced variability in the underlying fish population production, particular patterns of demand for fish products and inadequate management is capable of driving the world’s fisheries into development, overexploitation, collapse and recovery phases consistent with observations. Furthermore, it demonstrates how a sequential pattern of overexploitation can emerge as an endogenous property of the interaction between regional environmental fluctuations and a globalized trade system. This situation is avoidable through adaptive management measures that ensure the sustainability of regional production systems in the face of increasing global environmental change and markets. It is concluded that global management measures are needed to ensure that global food supply from marine products is optimized while protecting long-term ecosystem services across the world’s oceans.

  20. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  1. Sequential fragmentation of Pleistocene forests in an East Africa biodiversity hotspot: chameleons as a model to track forest history.

    Directory of Open Access Journals (Sweden)

    G John Measey

    Full Text Available The Eastern Arc Mountains (EAM) is an example of naturally fragmented tropical forests, which contain one of the highest known concentrations of endemic plants and vertebrates. Numerous paleo-climatic studies have not provided direct evidence for ancient presence of Pleistocene forests, particularly in the regions in which savannah presently occurs. Knowledge of the last period when forests connected EAM would provide a sound basis for hypothesis testing of vicariance and dispersal models of speciation. Dated phylogenies have revealed complex patterns throughout EAM, so we investigated divergence times of forest fauna on four montane isolates in close proximity to determine whether forest break-up was most likely to have been simultaneous or sequential, using population genetics of a forest restricted arboreal chameleon, Kinyongia boehmei. We used mitochondrial and nuclear genetic sequence data and mutation rates from a fossil-calibrated phylogeny to estimate divergence times between montane isolates using a coalescent approach. We found that chameleons on all mountains are most likely to have diverged sequentially within the Pleistocene from 0.93-0.59 Ma (95% HPD 0.22-1.84 Ma). In addition, post-hoc tests on chameleons on the largest montane isolate suggest a population expansion ∼182 Ka. Sequential divergence is most likely to have occurred after the last of three wet periods within the arid Plio-Pleistocene era, but was not correlated with inter-montane distance. We speculate that forest connection persisted due to riparian corridors regardless of proximity, highlighting their importance in the region's historic dispersal events. The population expansion coincides with nearby volcanic activity, which may also explain the relative paucity of the Taita's endemic fauna. Our study shows that forest chameleons are an apposite group to track forest fragmentation, with the inference that forest extended between some EAM during the Pleistocene

  2. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window of the reference image is measured by minimizing the Extended Mahalanobis distance between the clusters of the mixtures. Experiments performed on sequences of real images gave good results, with PSNR gains reaching 3 dB.
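
    The block matching step itself can be sketched as an exhaustive search over a window of the reference frame. For brevity the sketch below scores candidates with a plain sum of squared differences rather than the paper's Extended Mahalanobis distance between fitted Gaussian mixtures, and the test frames are synthetic.

```python
import numpy as np

def match_block(cur, ref, top, left, bsize, radius):
    """Find the displacement of the block cur[top:top+bsize, left:left+bsize]
    within a (2*radius+1)^2 search window of the reference frame, minimizing
    the sum of squared differences (stand-in for the mixture-based distance)."""
    block = cur[top:top + bsize, left:left + bsize].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(float)
            cost = np.sum((block - cand) ** 2)
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))       # synthetic global motion
print(match_block(cur, ref, top=16, left=16, bsize=8, radius=4))  # -> (-2, 3)
```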

  3. Physical and theoretical modeling of rock slopes against block-flexure toppling failure

    Directory of Open Access Journals (Sweden)

    Mehdi Amini

    2015-12-01

    Full Text Available Block-flexure is the most common mode of toppling failure in natural and excavated rock slopes. In such failures, some rock blocks break due to tensile stresses and some overturn under their own weight, and then all of them topple together. In this paper, first, a brief review of previous studies on toppling failures is presented. Then, the physical and mechanical properties of the experimental modeling materials are summarized. Next, the physical modeling results for rock slopes with the potential for block-flexure toppling failure are explained, and a new analytical solution is proposed for the stability analysis of such slopes. The results of this method are compared with the outcomes of the experiments. The comparative studies show that the proposed analytical approach is appropriate for the stability analysis of rock slopes against block-flexure toppling failure. Finally, a real case study is used for the practical verification of the suggested method.

  4. Traffic Modelling for Moving-Block Train Control System

    International Nuclear Information System (INIS)

    Tang Tao; Li Keping

    2007-01-01

    This paper presents a new cellular automaton (CA) model for train control system simulation. In the proposed CA model, the driver reactions to train movements are captured by a set of update rules. The space-time diagram of traffic flow and the trajectory of train movement are used to obtain insight into the characteristic behavior of railway traffic flow. A number of simulation results demonstrate that the proposed CA model can be successfully used for the simulation of railway traffic. Not only can the characteristic behavior of railway traffic flow be reproduced, but the simulated values of the minimum time headway are also close to the theoretical values.
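
    A minimal cellular automaton update of the Nagel-Schreckenberg type is sketched below to illustrate how driver reactions and headway constraints can be encoded as update rules. It is a generic illustration, not the moving-block rule set of the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ca_step(positions, speeds, track_len, v_max, p_slow):
    """One update of a simple CA: accelerate, brake to keep a safe headway to
    the train ahead, apply a random slowdown, then move (periodic track)."""
    order = np.argsort(positions)
    pos, vel = positions[order], speeds[order]
    n = len(pos)
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % track_len   # free cells ahead
        vel[i] = min(vel[i] + 1, v_max)                      # accelerate
        vel[i] = min(vel[i], gap)                            # keep headway
        if vel[i] > 0 and rng.random() < p_slow:             # driver reaction
            vel[i] -= 1
    pos = (pos + vel) % track_len
    return pos, vel

positions = np.array([0, 20, 45, 80])
speeds = np.zeros(4, dtype=int)
for _ in range(100):
    positions, speeds = ca_step(positions, speeds, track_len=100, v_max=5, p_slow=0.1)
```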

  5. Spatial distribution of block falls using volumetric GIS-decision-tree models

    Science.gov (United States)

    Abdallah, C.

    2010-10-01

    Block falls are considered a significant form of surficial instability, contributing to land and socio-economic losses through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block-fall measurements. Once pruned, it explained 50% of the variability in block-fall volumes using just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block-fall maps with different classes, ranging from nil (no block falls) to more than 4000 m3. The two maps match fairly well, with a coincidence value of 45%; both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple and may also be applied to other areas (the choice of un-pruned or pruned model depends on the availability of terrain parameters in a given area).
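
    A hedged sketch of the underlying technique (regression trees with and without pruning) is shown below using scikit-learn; the features and block-fall volumes are synthetic placeholders, not the Lebanese terrain data, and the pruning strength is arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)

# Hypothetical stand-ins for terrain parameters and block-fall volumes
X = rng.random((500, 4))                 # e.g. encoded lithology, slope, soil, land use
y = 4000 * X[:, 1] * rng.random(500)     # synthetic volumes in cubic metres

unpruned = DecisionTreeRegressor(random_state=0).fit(X, y)

# Cost-complexity pruning: a larger ccp_alpha gives a smaller, more general tree
pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=5000.0).fit(X, y)

print(unpruned.tree_.node_count, pruned.tree_.node_count)
print("R^2 (unpruned):", unpruned.score(X, y))
print("R^2 (pruned):  ", pruned.score(X, y))
```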

  6. Sequential super-stereotypy of an instinctive fixed action pattern in hyper-dopaminergic mutant mice: a model of obsessive compulsive disorder and Tourette's

    Directory of Open Access Journals (Sweden)

    Houchard Kimberly R

    2005-02-01

    Full Text Available Abstract Background Excessive sequential stereotypy of behavioral patterns (sequential super-stereotypy) in Tourette's syndrome and obsessive compulsive disorder (OCD) is thought to involve dysfunction in nigrostriatal dopamine systems. In sequential super-stereotypy, patients become trapped in overly rigid sequential patterns of action, language, or thought. Some instinctive behavioral patterns of animals, such as the syntactic grooming chain pattern of rodents, have sufficiently complex and stereotyped serial structure to detect potential production of overly-rigid sequential patterns. A syntactic grooming chain is a fixed action pattern that serially links up to 25 grooming movements into 4 predictable phases that follow 1 syntactic rule. New mutant mouse models allow gene-based manipulation of brain function relevant to sequential patterns, but no current animal model of spontaneous OCD-like behaviors has so far been reported to exhibit sequential super-stereotypy in the sense of a whole complex serial pattern that becomes stronger and excessively rigid. Here we used a hyper-dopaminergic mutant mouse to examine whether an OCD-like behavioral sequence in animals shows sequential super-stereotypy. Knockdown mutation of the dopamine transporter gene (DAT) causes extracellular dopamine levels in the neostriatum of these adult mutant mice to rise to 170% of wild-type control levels. Results We found that the serial pattern of this instinctive behavioral sequence becomes strengthened as an entire entity in hyper-dopaminergic mutants, and more resistant to interruption. Hyper-dopaminergic mutant mice have stronger and more rigid syntactic grooming chain patterns than wild-type control mice. Mutants showed sequential super-stereotypy in the sense of having more stereotyped and predictable syntactic grooming sequences, and were also more likely to resist disruption of the pattern en route, by returning after a disruption to complete the pattern from the

  7. An electromagnetic model for post-wall waveguide building blocks

    NARCIS (Netherlands)

    Coenen, T.J.; Bekers, D.J.; Tauritz, J.L.; Vliet, van F.E.

    2010-01-01

    During the past five years, dielectric and metallic post-wall waveguides (PWWGs) have been analyzed at TNO Defence, Security and Safety, using both an integral equation approach and a modal approach. The model developed focuses on TEn0 modes, facilitating the analysis of infinitely long, straight

  8. Markovian Building Blocks for Individual-Based Modelling

    DEFF Research Database (Denmark)

    Nilsson, Lars Anders Fredrik

    2007-01-01

    previous exposure to Markov chains in continuous time (see e.g. Grimmett and Stirzaker, 2001)). Markovian arrival processes are very general point processes that are relatively easy to analyse. They have, so far, been largely unknown to the ecological modelling community. The article C deals...

  9. Stripe patterns in a model for block polymers

    NARCIS (Netherlands)

    Peletier, M.A.; Veneroni, M.

    2009-01-01

    We consider a pattern-forming system in two space dimensions defined by an energy Gε. The functional Gε models strong phase separation in AB diblock copolymer melts, and patterns are represented by {0, 1}-valued functions; the values 0 and 1 correspond to the A and B phases. The parameter ε is the

  10. Stripe patterns in a model for block polymers

    NARCIS (Netherlands)

    Peletier, M.A.; Veneroni, M.

    2010-01-01

    We consider a pattern-forming system in two space dimensions defined by an energy Gε. The functional Gε models strong phase separation in AB diblock copolymer melts, and patterns are represented by {0, 1}-valued functions; the values 0 and 1 correspond to the A and B phases. The parameter ε is the

  11. Percutaneous sciatic nerve block with tramadol induces analgesia and motor blockade in two animal pain models

    International Nuclear Information System (INIS)

    Sousa, A.M.; Ashmawi, H.A.; Costa, L.S.; Posso, I.P.; Slullitel, A.

    2011-01-01

    Local anesthetic efficacy of tramadol has been reported following intradermal application. Our aim was to investigate the effect of perineural tramadol as the sole analgesic in two pain models. Male Wistar rats (280-380 g; N = 5/group) were used in these experiments. A neurostimulation-guided sciatic nerve block was performed and 2% lidocaine or tramadol (1.25 and 5 mg) was perineurally injected in two different animal pain models. In the flinching behavior test, the number of flinches was evaluated and in the plantar incision model, mechanical and heat thresholds were measured. Motor effects of lidocaine and tramadol were quantified and a motor block score elaborated. Tramadol, 1.25 mg, completely blocked the first and reduced the second phase of the flinching behavior test. In the plantar incision model, tramadol (1.25 mg) increased both paw withdrawal latency in response to radiant heat (8.3 ± 1.1, 12.7 ± 1.8, 8.4 ± 0.8, and 11.1 ± 3.3 s) and mechanical threshold in response to von Frey filaments (459 ± 82.8, 447.5 ± 91.7, 320.1 ± 120, 126.43 ± 92.8 mN) at 5, 15, 30, and 60 min, respectively. Sham block or contralateral sciatic nerve block did not differ from perineural saline injection throughout the study in either model. The effect of tramadol was not antagonized by intraperitoneal naloxone. High dose tramadol (5 mg) blocked motor function as well as 2% lidocaine. In conclusion, tramadol blocks nociception and motor function in vivo similar to local anesthetics

  12. Percutaneous sciatic nerve block with tramadol induces analgesia and motor blockade in two animal pain models

    Directory of Open Access Journals (Sweden)

    A.M. Sousa

    2012-02-01

    Full Text Available Local anesthetic efficacy of tramadol has been reported following intradermal application. Our aim was to investigate the effect of perineural tramadol as the sole analgesic in two pain models. Male Wistar rats (280-380 g; N = 5/group) were used in these experiments. A neurostimulation-guided sciatic nerve block was performed and 2% lidocaine or tramadol (1.25 and 5 mg) was perineurally injected in two different animal pain models. In the flinching behavior test, the number of flinches was evaluated and in the plantar incision model, mechanical and heat thresholds were measured. Motor effects of lidocaine and tramadol were quantified and a motor block score elaborated. Tramadol, 1.25 mg, completely blocked the first and reduced the second phase of the flinching behavior test. In the plantar incision model, tramadol (1.25 mg) increased both paw withdrawal latency in response to radiant heat (8.3 ± 1.1, 12.7 ± 1.8, 8.4 ± 0.8, and 11.1 ± 3.3 s) and mechanical threshold in response to von Frey filaments (459 ± 82.8, 447.5 ± 91.7, 320.1 ± 120, 126.43 ± 92.8 mN) at 5, 15, 30, and 60 min, respectively. Sham block or contralateral sciatic nerve block did not differ from perineural saline injection throughout the study in either model. The effect of tramadol was not antagonized by intraperitoneal naloxone. High dose tramadol (5 mg) blocked motor function as well as 2% lidocaine. In conclusion, tramadol blocks nociception and motor function in vivo similar to local anesthetics.

  13. Three-dimensional classical-ensemble modeling of non-sequential double ionization

    International Nuclear Information System (INIS)

    Haan, S.L.; Breen, L.; Tannor, D.; Panfili, R.; Ho, Phay J.; Eberly, J.H.

    2005-01-01

    Full text: We have been using 1d ensembles of classical two-electron atoms to simulate helium atoms that are exposed to pulses of intense laser radiation. In this talk we discuss the challenges in setting up a 3d classical ensemble that can mimic the quantum ground state of helium. We then report studies in which each one of 500,000 two-electron trajectories is followed in 3d through a ten-cycle (25 fs) 780 nm laser pulse. We examine double-ionization yield for various intensities, finding the familiar knee structure. We consider the momentum spread of outcoming electrons in directions both parallel and perpendicular to the direction of laser polarization, and find results that are consistent with experiment. We examine individual trajectories and recollision processes that lead to double ionization, considering the best phases of the laser cycle for recollision events and looking at the possible time delay between recollision and emergence. We consider also the number of recollision events, and find that multiple recollisions are common in the classical ensemble. We investigate which collisional processes lead to various final electron momenta. We conclude with comments regarding the ability of classical mechanics to describe non-sequential double ionization, and a quick summary of similarities and differences between 1d and 3d classical double ionization using energy-trajectory comparisons. Refs. 3 (author)

  14. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    2006-01-01

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for representing control flow in process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa.

  15. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  16. Optical Fibres in the Modeling of Translucent Concrete Blocks

    OpenAIRE

    M.N.V.Padma Bhushan, D.Johnson, Md. Afzal Basheer Pasha And Ms. K. Prasanthi

    2013-01-01

    Translucent concrete is a concrete-based material with light-transmissive properties, obtained by embedding light-transmitting optical elements, such as optical fibres, in it. Light is conducted through the stone from one end to the other. This results in a certain light pattern on the other surface, depending on the fibre structure. Optical fibres transmit light so effectively that there is virtually no loss of light conducted through the fibres. Our paper deals with the modelling of such translucent or ...

  17. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
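
    The generalized Lotka-Volterra dynamics mentioned in the abstract can be integrated numerically as sketched below. The connectivity matrix chosen here is a classic asymmetric-inhibition example that produces sequential switching between modes, not the binding heteroclinic network of the paper; all values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def glv(t, x, sigma, rho):
    """Generalized Lotka-Volterra: dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j)."""
    return x * (sigma - rho @ x)

sigma = np.array([1.0, 1.2, 1.1])
# Asymmetric inhibition of this kind can produce sequential switching of activity
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
x0 = np.array([0.9, 0.05, 0.05])
sol = solve_ivp(glv, (0.0, 200.0), x0, args=(sigma, rho), dense_output=True)
```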

  18. Modeling and Simulation of Out of Step Blocking Relay for

    Directory of Open Access Journals (Sweden)

    Ahmed A. Al Adwani

    2013-05-01

    Full Text Available This paper investigates the effect of power swings on the performance of a distance protection relay installed on HV/EHV transmission lines, as well as on power system stability. A conventional distance relay cannot operate properly under transient stability conditions; it therefore mal-operates, which adversely affects its trip signals. To overcome this problem, an Out Of Step (OOS) relay has been modeled and simulated to work jointly with the distance relay, supervising and controlling its trip signals. The OOS setting characteristics are based on the concentric polygon scheme, which detects power swings under transient stability conditions. The study was modeled and simulated using Matlab/Simulink software. Two relays were implemented and tested with two equivalent networks connected at the line ends. The results showed the effectiveness and reliability of this approach in controlling the distance relay response under transient stability conditions, and indicated the possibility of detecting faults that occur during power swings.

  19. A controlled human malaria infection model enabling evaluation of transmission-blocking interventions

    NARCIS (Netherlands)

    Collins, K.A.; Wang, C.Y.; Adams, M.; Mitchell, H.; Rampton, M.; Elliott, S.; Reuling, I.J.; Bousema, T.; Sauerwein, R.; Chalon, S.; Mohrle, J.J.; McCarthy, J.S.

    2018-01-01

    BACKGROUND: Drugs and vaccines that can interrupt the transmission of Plasmodium falciparum will be important for malaria control and elimination. However, models for early clinical evaluation of candidate transmission-blocking interventions are currently unavailable. Here, we describe a new model

  20. Influence of blocking on Northern European and Western Russian heatwaves in large climate model ensembles

    Science.gov (United States)

    Schaller, N.; Sillmann, J.; Anstey, J.; Fischer, E. M.; Grams, C. M.; Russo, S.

    2018-05-01

    Better preparedness for summer heatwaves could mitigate their adverse effects on society. This can potentially be attained through an increased understanding of the relationship between heatwaves and one of their main dynamical drivers, atmospheric blocking. In the 1979–2015 period, we find that there is a significant correlation between summer heatwave magnitudes and the number of days influenced by atmospheric blocking in Northern Europe and Western Russia. Using three large global climate model ensembles, we find similar correlations, indicating that these three models are able to represent the relationship between extreme temperature and atmospheric blocking, despite having biases in their simulation of individual climate variables such as temperature or geopotential height. Our results emphasize the need to use large ensembles of different global climate models as single realizations do not always capture this relationship. The three large ensembles further suggest that the relationship between summer heatwaves and atmospheric blocking will not change in the future. This could be used to statistically model heatwaves with atmospheric blocking as a covariate and aid decision-makers in planning disaster risk reduction and adaptation to climate change.

  1. Kinetic Modeling of Synthetic Wastewater Treatment by the Moving-bed Sequential Continuous-inflow Reactor (MSCR)

    Directory of Open Access Journals (Sweden)

    Mohammadreza Khani

    2016-11-01

    Full Text Available The objective of the present study was to conduct kinetic modeling of a Moving-bed Sequential Continuous-inflow Reactor (MSCR) and to develop its best prediction model. For this purpose, an MSCR consisting of an aerobic-anoxic pilot of 50 L and an anaerobic pilot of 20 L was prepared. The MSCR was fed a variety of organic loads and operated at different hydraulic retention times (HRTs), using synthetic wastewater at input COD concentrations of 300 to 1000 mg/L with HRTs of 2 to 5 h. Based on the results and the best system operating conditions, the highest COD removal (98.6%) was obtained at COD = 500 mg/L. The three well-known first-order, second-order, and Stover-Kincannon models were used for the kinetic modeling of the reactor. Based on the kinetic analysis of organic removal, the Stover-Kincannon model was chosen for the kinetic modeling of the moving-bed biofilm. Given its advantageous properties in the satisfactory prediction of organic removal at different organic loads, this model is recommended for the design and operation of MSCR systems.

  2. A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems

    Science.gov (United States)

    Gambolati, Giuseppe; Teatini, Pietro

    1996-01-01

    A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
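
    The block iterative strategy described above is mathematically a block Gauss-Seidel sweep, optionally accelerated by overrelaxation. The sketch below shows the generic iteration for a small block-partitioned linear system; it is not the authors' finite element code, and the matrices are illustrative.

```python
import numpy as np

def block_gauss_seidel(blocks, rhs, omega=1.0, tol=1e-10, max_iter=500):
    """Solve a block-partitioned system A x = b by sweeping over the blocks.
    blocks[i][j] is the (i, j) sub-matrix of A; rhs[i] is the i-th sub-vector
    of b. omega > 1 turns the sweep into block overrelaxation (SOR)."""
    x = [np.zeros_like(b) for b in rhs]
    for _ in range(max_iter):
        delta = 0.0
        for i in range(len(rhs)):
            # Residual for block i using the newest available values of the others
            r = rhs[i] - sum(blocks[i][j] @ x[j] for j in range(len(rhs)) if j != i)
            x_new = np.linalg.solve(blocks[i][i], r)
            delta = max(delta, np.max(np.abs(x_new - x[i])))
            x[i] = (1.0 - omega) * x[i] + omega * x_new
        if delta < tol:
            break
    return np.concatenate(x)

# Two coupled 2x2 blocks (diagonally dominant, so the sweep converges)
A11 = np.array([[4.0, 1.0], [1.0, 4.0]]); A22 = np.array([[5.0, 1.0], [1.0, 5.0]])
A12 = np.array([[0.5, 0.0], [0.0, 0.5]]); A21 = A12.T
x = block_gauss_seidel([[A11, A12], [A21, A22]],
                       [np.array([1.0, 2.0]), np.array([3.0, 4.0])], omega=1.2)
```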

  3. Application of blocking diagnosis methods to general circulation models. Part I: a novel detection scheme

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)

    2010-12-15

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to devising an objective scheme easily applicable
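
    As a point of reference for the absolute-flow ingredient of such indices, the sketch below flags longitudes where the meridional 500 hPa height gradient reverses south of a central latitude while remaining strongly negative to the north, in the spirit of classical reversal indices. It is not the hybrid index proposed in the paper; the latitudes, thresholds, and toy field are our own choices.

```python
import numpy as np

def blocked_longitudes(z500, lats, lons, phi_n=78.75, phi_0=60.0, phi_s=41.25):
    """Return a boolean mask over longitudes where the southern height gradient
    reverses (GHGS > 0) while the northern gradient stays strongly negative,
    the classical signature of a blocked westerly flow."""
    def nearest(lat):
        return int(np.argmin(np.abs(lats - lat)))
    jn, j0, js = nearest(phi_n), nearest(phi_0), nearest(phi_s)
    ghgs = (z500[j0, :] - z500[js, :]) / (lats[j0] - lats[js])
    ghgn = (z500[jn, :] - z500[j0, :]) / (lats[jn] - lats[j0])
    return (ghgs > 0.0) & (ghgn < -10.0)   # thresholds in gpm per degree latitude

# Toy field: 2.5-degree grid with a ridge-like anomaly centred near 60N, 20E
lats = np.arange(0.0, 90.1, 2.5)
lons = np.arange(0.0, 360.0, 2.5)
z500 = 5600.0 - 4.0 * lats[:, None] + np.zeros((lats.size, lons.size))
z500[np.ix_(np.abs(lats - 60) < 10, np.abs(lons - 20) < 15)] += 300.0
mask = blocked_longitudes(z500, lats, lons)
```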

  4. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
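
    The sequential sampling idea, accumulating noisy evidence until a decision criterion is reached, can be sketched as a simple random walk with absorbing bounds. This is a generic illustration under our own parameter choices, not the specific model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def sequential_sample(drift, criterion, noise_sd, max_steps=10_000):
    """Accumulate noisy evidence samples until the running total crosses
    +criterion ('conflict') or -criterion ('no conflict').
    A wider criterion trades longer decision times for higher accuracy."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.normal(0.0, noise_sd)
        if evidence >= criterion:
            return "conflict", step
        if evidence <= -criterion:
            return "no conflict", step
    return "undecided", max_steps

# Simulate many trials where the true state is 'conflict' (positive drift)
outcomes = [sequential_sample(drift=0.05, criterion=2.0, noise_sd=1.0)
            for _ in range(1000)]
accuracy = np.mean([d == "conflict" for d, _ in outcomes])
mean_rt = np.mean([t for _, t in outcomes])
```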

  5. Valuation model of exploratory blocks; Modelo de valoracao de blocos exploratorios

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Thiago Neves de; Sartori, Vanderlei [Agencia Nacional do Petroleo, Gas Natural e Biocombustiveis (ANP), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Last year marked 10 years since the promulgation of the Brazilian Petroleum Act. This act regulated the sector of exploration and production of oil and natural gas in Brazil, enabling these activities to be granted to private or state companies, preceded by a bidding round. Since 1998, ANP has been conducting these bidding rounds, judging offers on the following criteria: Minimum Exploration Program, Local Content and Signature Bonus. The objective of this article is to present a valuation model for the blocks on offer, estimating the monetary value of each block. (author)

  6. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa. We also present two case studies of applying our strategies.

  7. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    Directory of Open Access Journals (Sweden)

    D. Herckenrath

    2013-10-01

    Full Text Available Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to a SHI these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden.

  8. A variational EM method for pole-zero modeling of speech with mixed block sparse and Gaussian excitation

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The modeling of speech can be used for speech synthesis and speech recognition. We present a speech analysis method based on pole-zero modeling of speech with mixed block sparse and Gaussian excitation. By using a pole-zero model, instead of the all-pole model, a better spectral fitting can be expected. Moreover, motivated by the block sparse glottal flow excitation during voiced speech and the white noise excitation for unvoiced speech, we model the excitation sequence as a combination of block sparse signals and white noise. A variational EM (VEM) method is proposed for estimating ... in reconstructing the block sparse excitation.

  9. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by the Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  10. Sequential change in T2* values of cartilage, meniscus, and subchondral bone marrow in a rat model of knee osteoarthritis.

    Directory of Open Access Journals (Sweden)

    Ping-Huei Tsai

    Full Text Available BACKGROUND: There is an emerging interest in using magnetic resonance imaging (MRI) T2* measurement for the evaluation of degenerative cartilage in osteoarthritis (OA). However, relatively few studies have addressed OA-related changes in adjacent knee structures. This study used MRI T2* measurement to investigate sequential changes in knee cartilage, meniscus, and subchondral bone marrow in a rat OA model induced by anterior cruciate ligament transection (ACLX). MATERIALS AND METHODS: Eighteen male Sprague Dawley rats were randomly separated into three groups (n = 6 each group). Group 1 was the normal control group. Groups 2 and 3 received ACLX and sham-ACLX, respectively, of the right knee. T2* values were measured in the knee cartilage, the meniscus, and femoral subchondral bone marrow of all rats at 0, 4, 13, and 18 weeks after surgery. RESULTS: Cartilage T2* values were significantly higher at 4, 13, and 18 weeks postoperatively in rats of the ACLX group than in rats of the control and sham groups (p<0.001). In the ACLX group (compared to the sham and control groups), T2* values increased significantly first in the posterior horn of the medial meniscus at 4 weeks (p = 0.001), then in the anterior horn of the medial meniscus at 13 weeks (p<0.001), and began to increase significantly in the femoral subchondral bone marrow at 13 weeks (p = 0.043). CONCLUSION: Quantitative MR T2* measurements of OA-related tissues are feasible. Sequential changes in T2* over time in cartilage, meniscus, and subchondral bone marrow were documented. This information could be potentially useful for in vivo monitoring of disease progression.

  11. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
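
    A minimal sequential importance sampling/resampling (bootstrap particle filter) loop for count data is sketched below. It omits the kernel smoothing of parameters used in the paper and assumes an illustrative log-normal growth process and Poisson observation model of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(6)

def sisr_filter(counts, n_particles=2000, growth=1.02, proc_sd=0.05, init=100.0):
    """Bootstrap particle filter (sequential importance sampling/resampling)
    for a log-normal population process observed through Poisson counts."""
    particles = init * np.exp(rng.normal(0.0, 0.2, n_particles))
    estimates = []
    for y in counts:
        # Propagate through the stochastic population process
        particles = particles * growth * np.exp(rng.normal(0.0, proc_sd, n_particles))
        # Importance weights from the Poisson observation model
        logw = y * np.log(particles) - particles   # log-likelihood up to a constant
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

observed = np.array([102, 110, 108, 121, 118, 126])
print(sisr_filter(observed))
```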

  12. Analogue modelling of microcontinent formation: a case study from the Danakil Block, southern Red Sea

    Science.gov (United States)

    Molnar, Nicolas; Cruden, Alexander; Betts, Peter

    2017-04-01

    The kinematic evolution of the Danakil Block is well constrained, but the processes responsible for the formation of an isolated continental segment around 13 Ma ago with an independent pole of rotation are still a matter of debate. We performed three-dimensional analogue experiments of rotational continental extension containing a pre-existing linear weakness zone in the lithospheric mantle to investigate the formation of the Red Sea, including the Danakil Block. We imposed a rotational extensional boundary condition that simulates the progressive anticlockwise rotation of the Arabian Plate with respect to the Nubia Plate over the last 13-15 Ma, and we simulated the presence of a narrow thermal anomaly related to the northward channelling of the Afar plume by varying the viscosity of the model lithospheric mantle. The results from experiments containing a linear zone of weakness oriented at low angles with respect to the rift axis show that early stages of deformation are characterised by the development of two rift sub-parallel compartments that delimit an intra-rift block in the vicinity of the weak lithosphere boundary zone, which are analogous to the two rift branches that confine the Danakil Block in the southern Red Sea. The imposed rotational boundary condition creates a displacement gradient along the intra-rift block and prevents the nucleation of the early rift compartments to the north of the block, enhancing the formation of an independently rotating intra-rift segment. Comparison with geodetic data supports our modelling results, which are also in agreement with the "crank-arm" model of Sichler (1980. La biellette Danakile: un modèle pour l'évolution géodynamique de l'Afar. Bull. la Société Géologique Fr. 22, 925-933). Additional analogue models of i) orthogonal extension with an identical lithospheric mantle weakness and ii) rotational extension with a homogeneous lithosphere (i.e., no lithospheric mantle weakness) show no evidence of developing

  13. Local persistence and blocking in the two-dimensional blume-capel model

    OpenAIRE

    Silva, Roberto da; Dahmen, S. R.

    2004-01-01

    In this paper we study the local persistence of the two-dimensional Blume-Capel model by extending the concept of Glauber dynamics. We verify that for any value of the ratio alpha = D/J between anisotropy D and exchange J the persistence shows power-law behavior. In particular, for certain values of alpha we observe the occurrence of blocking.

  14. Kalman-filter model for determining block and trickle SNM losses

    International Nuclear Information System (INIS)

    Barlow, R.E.; Durst, M.J.; Smiriga, N.G.

    1982-07-01

    This paper describes an integrated decision procedure for deciding whether a diversion of SNM has occurred. Two possible types of diversion are considered: a block loss during a single time period and a cumulative trickle loss over several time periods. The methodology used is based on a compound Kalman filter model. Numerical examples illustrate our approach.
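
    The following sketch illustrates the general idea of detecting block versus trickle losses with a Kalman filter on material-balance data; the two-state formulation, noise variances, and decision thresholds are illustrative assumptions, not the compound filter described in the report.

      import numpy as np

      def kalman_material_balance(balances, q_inv=0.01, q_loss=0.0025, r_meas=1.0):
          """Two-state Kalman filter: x = [inventory bias, trickle-loss rate].
          A block loss shows up as one large standardised innovation; a trickle loss
          as a persistently nonzero loss-rate estimate. Variances are illustrative."""
          x = np.zeros(2)
          P = np.eye(2)
          F = np.array([[1.0, 1.0],          # the bias accumulates the loss rate each period
                        [0.0, 1.0]])
          Q = np.diag([q_inv, q_loss])
          H = np.array([[1.0, 0.0]])
          innovations = []
          for z in balances:                 # z: measured material balance each period
              # Predict
              x = F @ x
              P = F @ P @ F.T + Q
              # Update
              S = H @ P @ H.T + r_meas
              K = P @ H.T / S
              nu = z - (H @ x)[0]            # innovation
              innovations.append(nu / np.sqrt(S[0, 0]))
              x = x + (K * nu).ravel()
              P = (np.eye(2) - K @ H) @ P
          return x, np.array(innovations)

      # A standardised innovation beyond roughly 3 would flag a block loss;
      # a drifting loss-rate state would flag a cumulative trickle loss.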

  15. Mixed logit model of intended residential mobility in renovated historical blocks in China

    NARCIS (Netherlands)

    Jiang, W.; Timmermans, H.J.P.; Li, H.; Feng, T.

    2016-01-01

    Using data from 8 historical blocks in China, the influence of socio-demographic characteristics and residential satisfaction on intended residential mobility is analysed. The results of a mixed logit model indicate that higher residential satisfaction will lead to a lower intention to move house,

  16. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize the independence of each tier from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method; Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis
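
    A schematic of the tiered calibration idea (each sub-model fitted against its own satellite constraint before the next is tuned) is sketched below; the toy sub-models, observations, and cost functions are placeholders, not Biome-BGC.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def calibrate_tier(simulate, observations, bounds):
          """Fit one sub-model parameter by minimising RMSE against its satellite product."""
          def cost(p):
              return np.sqrt(np.mean((simulate(p) - observations) ** 2))
          return minimize_scalar(cost, bounds=bounds, method="bounded").x

      # Placeholder daily "observations" standing in for MODIS snow cover, SVR-upscaled ET and GPP
      days = np.arange(365)
      snow_obs = np.maximum(0, 50 - 0.3 * days)
      et_obs = 2 + np.sin(2 * np.pi * days / 365)
      gpp_obs = 5 + 3 * np.sin(2 * np.pi * days / 365)

      # Each tier is fixed before the next one is calibrated (sequential optimization)
      melt_rate = calibrate_tier(lambda p: np.maximum(0, 50 - p * days), snow_obs, (0.05, 1.0))
      et_scale = calibrate_tier(lambda p: p * (2 + np.sin(2 * np.pi * days / 365)), et_obs, (0.1, 3.0))
      gpp_scale = calibrate_tier(lambda p: p * (5 + 3 * np.sin(2 * np.pi * days / 365)), gpp_obs, (0.1, 3.0))
      print(melt_rate, et_scale, gpp_scale)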

  17. On thermal vibration effects in diffusion model calculations of blocking dips

    International Nuclear Information System (INIS)

    Fuschini, E.; Ugozzoni, A.

    1983-01-01

    In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermally vibrating lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking into account effects of multiple scattering under blocking conditions.

  18. Modeling of Activated Sludge Process Using Sequential Adaptive Neuro-fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Mahsa Vajedi

    2014-10-01

    Full Text Available In this study, an adaptive neuro-fuzzy inference system (ANFIS) has been applied to model the activated sludge wastewater treatment process of the Mobin petrochemical company. The correlation coefficients between the input variables and the output variable were calculated to determine the input with the highest influence on the output (the quality of the outlet flow) in order to compare three neuro-fuzzy structures with different numbers of parameters. The predictions of the neuro-fuzzy models were compared with those of multilayer artificial neural network models with similar structure. The comparison indicated that both methods resulted in flexible, robust and effective models for the activated sludge system. Moreover, the root mean square errors for the neuro-fuzzy and neural network models were 5.14 and 6.59, respectively, which means the former is the superior method.

  19. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    Science.gov (United States)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of freedom in terms of structure, there is a possibility of deriving novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Secondly, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
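
    A generic sequential linear programming loop with an adaptively relaxed or tightened move limit is sketched below; the quadratic test objective and the update factors are illustrative and do not represent the paper's magnetic-field formulation.

      import numpy as np
      from scipy.optimize import linprog

      def slp_move_limit(f, grad, x0, ml=0.2, iters=30):
          """Sequential linear programming with an adaptive move limit."""
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          for _ in range(iters):
              g = grad(x)
              # Linearised subproblem: minimise g.d subject to |d_i| <= move limit
              res = linprog(c=g, bounds=[(-ml, ml)] * x.size, method="highs")
              x_new = x + res.x
              f_new = f(x_new)
              if f_new < fx:            # accept the step and relax the move limit
                  x, fx, ml = x_new, f_new, min(ml * 1.3, 1.0)
              else:                     # reject the step and tighten the move limit
                  ml *= 0.5
          return x, fx

      # Toy objective standing in for the (negated) stored magnetic energy
      f = lambda x: np.sum((x - np.array([0.3, -0.7])) ** 2)
      grad = lambda x: 2 * (x - np.array([0.3, -0.7]))
      print(slp_move_limit(f, grad, np.zeros(2)))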

  20. The economic, environmental and public health impacts of new power plants: a sequential inter industry model integrated with GIS data

    Energy Technology Data Exchange (ETDEWEB)

    Avelino, Andre F.T.; Hewings, Geoffrey J.D.; Guilhoto, Joaquim J.M. [Universidade de Sao Paulo (FEA/USP), SE (Brazil). Fac. de Administracao e Contabilidade

    2010-07-01

    The electrical sector is responsible for a considerable amount of greenhouse gas emissions worldwide, but it is also the sector on which modern society depends the most for the maintenance of quality of life as well as the functioning of economic and social activities. Invariably, even CO2 emission-free power plants have some indirect environmental impacts due to the economic effects they produce during their life cycle (construction, O&M and decommissioning). Thus, sustainability issues should always be considered in energy planning, by evaluating the balance of positive/negative externalities on different areas of the country. This study aims to introduce a social-environmental economic model, based on a Regional Sequential Inter industry Model (SIM) integrated with geoprocessing data, in order to identify economic, pollution and public health impacts at the state level for energy planning analysis. The model is based on the Impact Pathway Approach Methodology, using geoprocessing to locate social-environmental variables for dispersion and health evaluations. The final goal is to provide an auxiliary tool for policy makers to assess energy planning scenarios in Brazil. (author)

  1. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.
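
    A schematic receding-horizon loop with an online-refitted surrogate is sketched below to make the idea concrete; the plant response, the ridge-regression surrogate (a simple stand-in for the RVM), and the candidate control grid are all assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def plant(u, lam_prev):
          """Unknown engine lambda response (illustrative stand-in)."""
          return 0.7 * lam_prev + 0.3 * (0.8 + 0.4 * u) + rng.normal(0, 0.005)

      def fit_surrogate(X, y, ridge=1e-3):
          """Online-refitted linear surrogate of the lambda dynamics."""
          A = np.c_[X, np.ones(len(X))]
          return np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ y)

      lam, target, u = 1.05, 1.0, 0.5
      X, y = [], []
      candidates = np.linspace(0.0, 1.0, 41)            # candidate control inputs
      for k in range(100):
          lam_next = plant(u, lam)
          X.append([u, lam]); y.append(lam_next)
          lam = lam_next
          if len(X) > 5:
              w = fit_surrogate(np.array(X[-50:]), np.array(y[-50:]))   # sliding-window update
              preds = np.c_[candidates, np.full_like(candidates, lam), np.ones_like(candidates)] @ w
              u = candidates[np.argmin((preds - target) ** 2)]          # one-step-ahead MPC choice
      print(round(lam, 3), round(u, 3))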

  2. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) incurs a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability estimate depends on the accuracy of the surrogate model of the constraint boundaries in surrogate-model-based RBDO. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate the constraint boundaries by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.
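
    The sketch below illustrates the underlying idea of using a kriging (Gaussian process) model's mean and variance to place new samples near a constraint boundary; the acquisition rule and the test constraint are simplified assumptions and not the ECBS criterion itself, which additionally concentrates points near the RBDO optimum.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(2)
      g = lambda X: X[:, 0] ** 2 + X[:, 1] - 1.5       # true constraint; g(x) = 0 is the boundary

      X = rng.uniform(-2, 2, (12, 2))                  # initial design
      y = g(X)
      cand = rng.uniform(-2, 2, (2000, 2))             # candidate pool

      for _ in range(10):                              # sequential infill of the design
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
          mu, sd = gp.predict(cand, return_std=True)
          # Prefer candidates whose predicted constraint value is near zero but still uncertain
          score = np.abs(mu) - sd
          x_new = cand[np.argmin(score)]
          X = np.vstack([X, x_new])
          y = np.append(y, g(x_new[None, :]))
      print(len(X), "samples; last infill point:", x_new)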

  3. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.; Hoteit, Ibrahim; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A.; Schumacher, M.; Pattiaratchi, C.

    2017-01-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques

  4. Application of Combined Cake Filtration-Complete Blocking Model to Ultrafiltration of Skim Milk

    Directory of Open Access Journals (Sweden)

    Mansoor Kazemimoghadam

    2017-10-01

    Full Text Available Membrane ultrafiltration (UF) is widely used in dairy industry processes such as milk concentration and dehydration. The limiting factor of UF systems is fouling, which is defined as the precipitation of solutes in the form of a cake layer on the surface of the membrane. In this study, the combined cake filtration-complete blocking model was compared to the cake filtration mechanism for flux data obtained during ultrafiltration of skim milk at constant flow rate. The resistance data were also modeled using the cake filtration model and the standard blocking model. The effect of different trans-membrane pressures and temperatures on flux decline was then investigated. Based on the results obtained here, the combined complete blocking-cake formation model was in excellent agreement with experimental data. The cake filtration model also provided good data fits and can be applied to solutions whose solutes tend to accumulate on the surface of the membrane in the form of a cake layer. With increasing pressure, the differences between the model and experimental data increased.
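
    For reference, the classical constant-pressure fouling laws (complete blocking: exponential flux decay; cake filtration: linear growth of 1/J^2) can be fitted to flux-decline data as sketched below; the synthetic data are illustrative and the form of the paper's combined model is not reproduced here, only the fitting workflow.

      import numpy as np
      from scipy.optimize import curve_fit

      # Classical constant-pressure fouling laws (flux J versus time t)
      complete_blocking = lambda t, J0, kb: J0 * np.exp(-kb * t)                    # ln J linear in t
      cake_filtration = lambda t, J0, kc: 1.0 / np.sqrt(1.0 / J0 ** 2 + kc * t)     # 1/J^2 linear in t

      # Synthetic UF flux-decline data (illustrative)
      t = np.linspace(0, 3600, 60)
      J_obs = (1.0 / np.sqrt(1.0 + 0.002 * t)) * np.exp(-5e-5 * t)
      J_obs = J_obs + np.random.default_rng(3).normal(0, 0.005, t.size)

      for name, model in [("complete blocking", complete_blocking), ("cake filtration", cake_filtration)]:
          popt, _ = curve_fit(model, t, J_obs, p0=(1.0, 1e-4), maxfev=10000)
          rmse = np.sqrt(np.mean((model(t, *popt) - J_obs) ** 2))
          print(f"{name}: parameters {popt}, RMSE {rmse:.4f}")
      # A combined model superposes both resistances and is fitted in exactly the same way.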

  5. Modelling of the Vajont rockslide displacements by delayed plasticity of interacting sliding blocks

    Science.gov (United States)

    Castellanza, riccardo; Hedge, Amarnath; Crosta, Giovanni; di Prisco, Claudio; Frigerio, Gabriele

    2015-04-01

    In order to model complex sliding masses subject to continuous slow movements related to water table fluctuations it is convenient to: i) model the time-dependent mechanical behaviour of the materials by means of a viscous-plastic constitutive law; ii) assume the water table fluctuation as the main input inducing displacement acceleration; iii) consider the 3D constraints while maintaining a level of simplicity that allows implementation into an EWS (Early Warning System) for risk management. In this work a 1D pseudo-dynamic visco-plastic model (Secondi et al. 2011), based on Perzyna's delayed plasticity theory, is applied. The sliding mass is considered as a rigid block subject to its self-weight, inertial forces and seepage forces varying with time. All non-linearities are lumped in a thin layer positioned between the rigid block and the stable bedrock. The mechanical response of this interface is assumed to be visco-plastic. The viscous nucleus is assumed to be of the exponential type, so that irreversible strains develop for both positive and negative values of the yield function; the sliding mass is discretized into blocks to cope with complex rockslide geometries; the friction angle is assumed to decrease with strain rate following a strain-rate law (Dieterich-Ruina law). To validate the improvements introduced in this paper, the simulation of the displacements of the Vajont rockslide from 1960 to the failure, which occurred on October 9th, 1963, is performed. It will be shown that, in its modified version, the model satisfactorily fits the Vajont pre-collapse displacements triggered by the fluctuation of the Vajont lake level and the associated groundwater level. The model is able to follow the critical acceleration of the motion with a minimal change in friction properties. The discretization into interacting sliding blocks confirms its suitability to model the complex 3D rockslide behaviour. We are currently implementing a multi-block model capable of including

  6. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    Science.gov (United States)

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). A total of 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over a simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.
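
    A bare-bones discrete event simulation of one patient moving through a drug sequence is sketched below to show the event-queue mechanics; the times-to-event, utilities and QALY bookkeeping are placeholders, not the published model.

      import heapq, random

      random.seed(4)
      DRUGS = ["NSAID-1", "NSAID-2", "NSAID-3", "NSAID-4", "NSAID-5", "TNFi-1", "TNFi-2"]  # strategy 2

      def simulate_patient(strategy, horizon_months=840):
          """Discrete event simulation: events are (time, kind); drugs are tried sequentially."""
          events, t, drug_idx, qaly = [], 0.0, 0, 0.0
          utility = 0.6                                      # placeholder baseline utility
          heapq.heappush(events, (random.expovariate(1 / 24.0), "drug_failure"))
          heapq.heappush(events, (random.expovariate(1 / 480.0), "death"))
          while events:
              t_next, kind = heapq.heappop(events)
              if t_next > horizon_months:
                  break
              qaly += utility * (t_next - t) / 12.0          # accumulate QALYs between events
              t = t_next
              if kind == "death":
                  break
              drug_idx += 1                                  # switch to the next drug in the sequence
              if drug_idx >= len(strategy):
                  utility = 0.4                              # no options left: lower utility
              else:
                  utility = 0.7 if strategy[drug_idx].startswith("TNFi") else 0.6
                  heapq.heappush(events, (t + random.expovariate(1 / 24.0), "drug_failure"))
          return qaly

      print(sum(simulate_patient(DRUGS) for _ in range(1000)) / 1000, "mean QALYs (toy)")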

  7. Effectiveness of maritime safety control in different navigation zones using a spatial sequential DEA model: Yangtze River case.

    Science.gov (United States)

    Wu, Bing; Wang, Yang; Zhang, Jinfen; Savan, Emanuel Emil; Yan, Xinping

    2015-08-01

    This paper aims to analyze the effectiveness of maritime safety control from the perspective of safety level along the Yangtze River with special considerations for navigational environments. The influencing variables of maritime safety are reviewed, including ship condition, maritime regulatory system, human reliability and navigational environment. Because the former three variables are generally assumed to be of the same level of safety, this paper focuses on studying the impact of navigational environments on the level of safety in different waterways. An improved data envelopment analysis (DEA) model is proposed by treating the navigational environment factors as inputs and ship accident data as outputs. Moreover, because the traditional DEA model cannot provide an overall ranking of different decision making units (DMUs), the spatial sequential frontiers and grey relational analysis are incorporated into the DEA model to facilitate a refined assessment. Based on the empirical study results, the proposed model is able to solve the problem of information missing in the prior models and evaluate the level of safety with a better accuracy. The results of the proposed DEA model are further compared with an evidential reasoning (ER) method, which has been widely used for level of safety evaluations. A sensitivity analysis is also conducted to better understand the relationship between the variation of navigational environments and level of safety. The sensitivity analysis shows that the level of safety varies in terms of traffic flow. It indicates that appropriate traffic control measures should be adopted for different waterways to improve their safety. This paper presents a practical method of conducting maritime level of safety assessments under dynamic navigational environment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Comparison of Nursing Student and Instructor Preferences for Block and Nonblock Clinical Models.

    Science.gov (United States)

    Rohatinsky, Noelle; Chachula, Kathryn; Sedgwick, Monique; Press, Madeline M; Compton, Roslyn M; Lane, Brenda

    2017-10-05

    Clinical experiences are the hallmark of prelicensure nursing programs and assist students with applying nursing theory into practice. The literature is limited with respect to nursing student and instructor preferences for type of clinical model to facilitate student learning. This article explores these perceptions in the nursing programs of 5 universities located in 4 Western Canadian provinces. Findings support the use of both nonblock and block clinical models throughout nursing education programs.

  9. A dynamic general disequilibrium model of a sequential monetary production economy

    International Nuclear Information System (INIS)

    Raberto, Marco; Teglio, Andrea; Cincotti, Silvano

    2006-01-01

    A discrete, deterministic, economic model, based on the framework of non-Walrasian or disequilibrium economics, is presented. The main feature of this approach is the presence of non-clearing markets, where not all demands and supplies are satisfied and some agents may be rationed. The model is characterized by three agents (i.e., a representative firm, a representative consumer, and a central bank), three commodities (i.e., goods, labour and money, each homogeneous) and three markets for their exchange. The imbalance between demand and supply in each market determines the dynamics of price, nominal wage and nominal interest rate. The central bank provides the money supply according to an operating target interest rate that is fixed according to Taylor's rule. The model has been studied by means of computer simulations. Results pointed out the presence of business cycles that can be controlled by proper policies of the central bank.

  10. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    Science.gov (United States)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems, e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation of LiFePO4 batteries are the flat characteristic of the open-circuit voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method compared with estimation using an Extended Kalman filter.
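
    A compact sequential Monte Carlo sketch of the dual-estimation idea (augmenting the SOC state with a slowly varying internal resistance) is given below; the OCV curve, noise levels and cell parameters are illustrative assumptions, not the paper's cell model.

      import numpy as np

      rng = np.random.default_rng(5)
      ocv = lambda soc: 3.2 + 0.7 * soc + 0.05 * np.tanh(10 * (soc - 0.5))   # flat-ish OCV curve (illustrative)

      def smc_dual(currents, voltages, dt=1.0, Q=2.0 * 3600, n=3000):
          soc = rng.uniform(0.2, 1.0, n)                 # state particles (SOC)
          R = rng.uniform(0.01, 0.1, n)                  # parameter particles (internal resistance)
          for I, V in zip(currents, voltages):
              soc = np.clip(soc - I * dt / Q + rng.normal(0, 1e-4, n), 0, 1)   # coulomb counting
              R = np.abs(R + rng.normal(0, 1e-5, n))                           # slow random-walk parameter
              V_pred = ocv(soc) - I * R
              logw = -0.5 * ((V - V_pred) / 0.01) ** 2
              w = np.exp(logw - logw.max()); w /= w.sum()
              idx = rng.choice(n, n, p=w)                # resample states and parameters together
              soc, R = soc[idx], R[idx]
          return soc.mean(), R.mean()

      # Toy data: constant 1 A discharge from a cell with true R = 0.05 ohm
      t = np.arange(0, 600, 1.0)
      true_soc = 0.9 - t / (2.0 * 3600)
      V_meas = ocv(true_soc) - 1.0 * 0.05 + rng.normal(0, 0.01, t.size)
      print(smc_dual(np.ones_like(t), V_meas))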

  11. Effect of anaerobic digestion on sequential pyrolysis kinetics of organic solid wastes using thermogravimetric analysis and distributed activation energy model.

    Science.gov (United States)

    Li, Xiaowei; Mei, Qingqing; Dai, Xiaohu; Ding, Guoji

    2017-03-01

    Thermogravimetric analysis, a Gaussian-fit-peak model (GFPM), and a distributed activation energy model (DAEM) were first used to explore the effect of anaerobic digestion on the sequential pyrolysis kinetics of four organic solid wastes (OSW). Results showed that the OSW weight loss mainly occurred in the second pyrolysis stage, which relates to organic matter decomposition. Compared with the raw substrate, the weight loss of the corresponding digestate was lower in the range of 180-550°C but higher in 550-900°C. GFPM analysis revealed that organic components volatilized at peak temperatures of 188-263, 373-401 and 420-462°C had a faster degradation rate than those at 274-327°C during anaerobic digestion. DAEM analysis showed that anaerobic digestion had differing effects on the activation energy of the four OSW pyrolyses, possibly because of their different organic compositions. Further investigation of the specific organic fractions, i.e., protein-like and carbohydrate-like groups, is required to confirm this assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...
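
    To make the screening step concrete, a from-scratch Morris elementary-effects sketch (random one-at-a-time trajectories summarised by mu* and sigma) is shown below; the placeholder Monod-type rate function and parameter bounds are assumptions for illustration, not the dechlorination model.

      import numpy as np

      rng = np.random.default_rng(6)

      def model(p):
          """Placeholder rate function standing in for the dechlorination model output."""
          mu_max, Ks, sulfate, ferm = p
          return mu_max * 5.0 / (Ks + 5.0) * (1.0 / (1.0 + sulfate)) * ferm

      def morris(model, bounds, r=20, delta=0.25):
          """Morris elementary effects from r random one-at-a-time trajectories."""
          k = len(bounds)
          effects = np.zeros((r, k))
          scale = lambda u: np.array([lo + (hi - lo) * ui for ui, (lo, hi) in zip(u, bounds)])
          for i in range(r):
              x = rng.uniform(0, 1 - delta, k)           # base point in the unit hypercube
              for j in rng.permutation(k):               # perturb one factor at a time
                  x_step = x.copy(); x_step[j] += delta
                  effects[i, j] = (model(scale(x_step)) - model(scale(x))) / delta
                  x = x_step
          return np.abs(effects).mean(axis=0), effects.std(axis=0)   # mu*, sigma

      bounds = [(0.1, 2.0), (0.5, 20.0), (0.0, 5.0), (0.1, 1.0)]     # mu_max, Ks, sulfate, fermentation factor
      print(morris(model, bounds))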

  13. A model of type 2 diabetes in the guinea pig using sequential diet-induced glucose intolerance and streptozotocin treatment

    Science.gov (United States)

    Ackart, David F.; Richardson, Michael A.; DiLisio, James E.; Pulford, Bruce; Basaraba, Randall J.

    2017-01-01

    ABSTRACT Type 2 diabetes is a leading cause of morbidity and mortality among noncommunicable diseases, and additional animal models that more closely replicate the pathogenesis of human type 2 diabetes are needed. The goal of this study was to develop a model of type 2 diabetes in guinea pigs, in which diet-induced glucose intolerance precedes β-cell cytotoxicity, two processes that are crucial to the development of human type 2 diabetes. Guinea pigs developed impaired glucose tolerance after 8 weeks of feeding on a high-fat, high-carbohydrate diet, as determined by oral glucose challenge. Diet-induced glucose intolerance was accompanied by β-cell hyperplasia, compensatory hyperinsulinemia, and dyslipidemia with hepatocellular steatosis. Streptozotocin (STZ) treatment alone was ineffective at inducing diabetic hyperglycemia in guinea pigs, which failed to develop sustained glucose intolerance or fasting hyperglycemia and returned to euglycemia within 21 days after treatment. However, when high-fat, high-carbohydrate diet-fed guinea pigs were treated with STZ, glucose intolerance and fasting hyperglycemia persisted beyond 21 days post-STZ treatment. Guinea pigs with diet-induced glucose intolerance subsequently treated with STZ demonstrated an insulin-secretory capacity consistent with insulin-independent diabetes. This insulin-independent state was confirmed by response to oral antihyperglycemic drugs, metformin and glipizide, which resolved glucose intolerance and extended survival compared with guinea pigs with uncontrolled diabetes. In this study, we have developed a model of sequential glucose intolerance and β-cell loss, through high-fat, high-carbohydrate diet and extensive optimization of STZ treatment in the guinea pig, which closely resembles human type 2 diabetes. This model will prove useful in the study of insulin-independent diabetes pathogenesis with or without comorbidities, where the guinea pig serves as a relevant model species. PMID:28093504

  14. A model of type 2 diabetes in the guinea pig using sequential diet-induced glucose intolerance and streptozotocin treatment.

    Science.gov (United States)

    Podell, Brendan K; Ackart, David F; Richardson, Michael A; DiLisio, James E; Pulford, Bruce; Basaraba, Randall J

    2017-02-01

    Type 2 diabetes is a leading cause of morbidity and mortality among noncommunicable diseases, and additional animal models that more closely replicate the pathogenesis of human type 2 diabetes are needed. The goal of this study was to develop a model of type 2 diabetes in guinea pigs, in which diet-induced glucose intolerance precedes β-cell cytotoxicity, two processes that are crucial to the development of human type 2 diabetes. Guinea pigs developed impaired glucose tolerance after 8 weeks of feeding on a high-fat, high-carbohydrate diet, as determined by oral glucose challenge. Diet-induced glucose intolerance was accompanied by β-cell hyperplasia, compensatory hyperinsulinemia, and dyslipidemia with hepatocellular steatosis. Streptozotocin (STZ) treatment alone was ineffective at inducing diabetic hyperglycemia in guinea pigs, which failed to develop sustained glucose intolerance or fasting hyperglycemia and returned to euglycemia within 21 days after treatment. However, when high-fat, high-carbohydrate diet-fed guinea pigs were treated with STZ, glucose intolerance and fasting hyperglycemia persisted beyond 21 days post-STZ treatment. Guinea pigs with diet-induced glucose intolerance subsequently treated with STZ demonstrated an insulin-secretory capacity consistent with insulin-independent diabetes. This insulin-independent state was confirmed by response to oral antihyperglycemic drugs, metformin and glipizide, which resolved glucose intolerance and extended survival compared with guinea pigs with uncontrolled diabetes. In this study, we have developed a model of sequential glucose intolerance and β-cell loss, through high-fat, high-carbohydrate diet and extensive optimization of STZ treatment in the guinea pig, which closely resembles human type 2 diabetes. This model will prove useful in the study of insulin-independent diabetes pathogenesis with or without comorbidities, where the guinea pig serves as a relevant model species. © 2017. Published by

  15. Slip-spring model of entangled rod-coil block copolymers

    Science.gov (United States)

    Wang, Muzhou; Likhtman, Alexei E.; Olsen, Bradley D.

    2015-03-01

    Understanding the dynamics of rod-coil block copolymers is important for optimal design of functional nanostructured materials for organic electronics and biomaterials. Recently, we proposed a reptation theory of entangled rod-coil block copolymers, predicting the relaxation mechanisms of activated reptation and arm retraction that slow rod-coil dynamics relative to coil and rod homopolymers, respectively. In this work, we introduce a coarse-grained slip-spring model of rod-coil block copolymers to further explore these mechanisms. First, parameters of the coarse-grained model are tuned to match previous molecular dynamics simulation results for coils, rods, and block copolymers. For activated reptation, rod-coil copolymers are shown to disfavor configurations where the rod occupies curved portions of the entanglement tube of randomly varying curvature created by the coil ends. The effect of these barriers on diffusion is quantitatively captured by considering one-dimensional motion along an entanglement tube with a rough free energy potential. Finally, we analyze the crossover between the two mechanisms. The resulting dynamics from both mechanisms acting in combination is faster than from each one individually.

  16. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    Science.gov (United States)

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected using (1) a simple inverse modelling technique comparing simulated and observed LAI and (2) a more complex method consisting of integrating observed LAI into ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The evaluation of the MaxAWC estimates from both methods is done using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations between Bag and GY are found for up to 36 and 53 % of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.

  17. Block spins and chirality in Heisenberg model on Kagome and triangular lattices

    International Nuclear Information System (INIS)

    Subrahmanyam, V.

    1994-01-01

    The spin-1/2 Heisenberg model (HM) is investigated using a block-spin renormalization approach on Kagome and triangular lattices. In both cases, after coarse graining the triangles on original lattice and truncation of the Hilbert space to the triangular ground state subspace, HM reduces to an effective model on a triangular lattice in terms of the triangular-block degrees of freedom viz. the spin and the chirality quantum numbers. The chirality part of the effective Hamiltonian captures the essential difference between the two lattices. It is seen that simple eigenstates can be constructed for the effective model whose energies serve as upper bounds on the exact ground state energy of HM, and chiral ordered variational states have high energies compared to the other variational states. (author). 12 refs, 2 figs

  18. Anthropogenic Changes in Mid-latitude Storm and Blocking Activities from Observations and Climate Models

    Science.gov (United States)

    Li, D.

    2017-12-01

    Fingerprints of anthropogenic climate change can be most readily detected in the high latitudes of Northern Hemisphere, where temperature has been rising faster than the rest of the globe and sea ice cover has shrunk dramatically over recent decades. Reducing the meridional temperature gradient, this amplified warming over the high latitudes influences weather in the middle latitudes by modulating the jet stream, storms, and atmospheric blocking activities. Whether observational records have revealed significant changes in mid-latitude storms and blocking activities, however, has remained a subject of much debate. Buried deep in strong year-to-year variations, the long-term dynamic responses of the atmosphere are more difficult to identify, compared with its thermodynamic responses. Variabilities of decadal and longer timescales further obscure any trends diagnosed from satellite observations, which are often shorter than 40 years. Here, new metrics reflecting storm and blocking activities are developed using surface air temperature and pressure records, and their variations and long-term trends are examined. This approach gives an inkling of the changes in storm and blocking activities since the Industrial Revolution in regions with abundant long-term observational records, e.g. Europe and North America. The relationship between Atlantic Multi-decadal Oscillation and variations in storm and blocking activities across the Atlantic is also scrutinized. The connection between observed centennial trends and anthropogenic forcings is investigated using a hierarchy of numerical tools, from highly idealized to fully coupled atmosphere-ocean models. Pre-industrial control simulations and a set of large ensemble simulations forced by increased CO2 are analyzed to evaluate the range of natural variabilities, which paves the way to singling out significant anthropogenic changes from observational records, as well as predicting future changes in mid-latitude storm and

  19. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
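
    The two ingredients named above, the Youden index read off an ROC curve and the moving differences between sequentially fitted parameters, can be computed as in the sketch below; the synthetic scores and the use of scikit-learn's roc_curve are assumptions for illustration.

      import numpy as np
      from sklearn.metrics import roc_curve, auc

      rng = np.random.default_rng(7)
      y_true = np.r_[np.zeros(300), np.ones(300)]
      y_score = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(1.2, 1.0, 300)]   # synthetic classifier output

      fpr, tpr, thresholds = roc_curve(y_true, y_score)
      youden_j = tpr - fpr                                  # Youden index at each threshold
      best = np.argmax(youden_j)
      print(f"AUC={auc(fpr, tpr):.3f}, best threshold={thresholds[best]:.3f}, J={youden_j[best]:.3f}")

      # Moving differences between sequentially fitted parameters (here: successive thresholds),
      # whose density estimate can then be visualised to track variability between fits
      moving_diff = np.diff(thresholds[1:])                 # the first threshold is an sklearn sentinel
      print(moving_diff[:5])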

  20. Sequential modelling of the effects of mass drug treatments on anopheline-mediated lymphatic filariasis infection in Papua New Guinea.

    Directory of Open Access Journals (Sweden)

    Brajendra K Singh

    Full Text Available Lymphatic filariasis (LF) has been targeted by the WHO for global eradication leading to the implementation of large-scale intervention programs based on annual mass drug administrations (MDA) worldwide. Recent work has indicated that locality-specific bio-ecological complexities affecting parasite transmission may complicate the prediction of LF extinction endpoints, casting uncertainty on the achievement of this initiative. One source of difficulty is the limited quantity and quality of data used to parameterize models of parasite transmission, implying the important need to update initially-derived parameter values. Sequential analysis of longitudinal data following annual MDAs will also be important to gaining new understanding of the persistence dynamics of LF. Here, we apply a Bayesian statistical-dynamical modelling framework that enables assimilation of information in human infection data recorded from communities in Papua New Guinea that underwent annual MDAs, into our previously developed model of parasite transmission, in order to examine these questions in LF ecology and control. Biological parameters underlying transmission obtained by fitting the model to longitudinal data remained stable throughout the study period. This enabled us to reliably reconstruct the observed baseline data in each community. Endpoint estimates also showed little variation. However, the updating procedure showed a shift towards higher and less variable values for worm kill but not for any other drug-related parameters. An intriguing finding is that the stability in key biological parameters could be disrupted by a significant reduction in the vector biting rate prevailing in a locality. Temporal invariance of biological parameters in the face of intervention perturbations indicates a robust adaptation of LF transmission to local ecological conditions. The results imply that understanding the mechanisms that underlie locally adapted transmission dynamics will

  1. Predictive modelling of survival and length of stay in critically ill patients using sequential organ failure scores.

    Science.gov (United States)

    Houthooft, Rein; Ruyssinck, Joeri; van der Herten, Joachim; Stijven, Sean; Couckuyt, Ivo; Gadeyne, Bram; Ongenae, Femke; Colpaert, Kirsten; Decruyenaere, Johan; Dhaene, Tom; De Turck, Filip

    2015-03-01

    The length of stay of critically ill patients in the intensive care unit (ICU) is an indication of patient ICU resource usage and varies considerably. Planning of postoperative ICU admissions is important as ICUs often have no nonoccupied beds available. Estimation of the ICU bed availability for the coming days is entirely based on clinical judgement by intensivists and is therefore too inaccurate. For this reason, predictive models have much potential for improving planning for ICU patient admission. Our goal is to develop and optimize models for patient survival and ICU length of stay (LOS) based on monitored ICU patient data. Furthermore, these models are compared on their use of sequential organ failure (SOFA) scores as well as underlying raw data as input features. Different machine learning techniques are trained, using a 14,480-patient dataset, both on SOFA scores as well as their underlying raw data values from the first five days after admission, in order to predict (i) the patient LOS, and (ii) the patient mortality. Furthermore, to help physicians in assessing the prediction credibility, a probabilistic model is tailored to the output of our best-performing model, assigning a belief to each patient status prediction. A two-by-two grid is built, using the classification outputs of the mortality and prolonged stay predictors to improve the patient LOS regression models. For predicting patient mortality and a prolonged stay, the best performing model is a support vector machine (SVM) with GA,D = 65.9% (area under the curve (AUC) of 0.77) and GS,L = 73.2% (AUC of 0.82). In terms of LOS regression, the best performing model is support vector regression, achieving a mean absolute error of 1.79 days and a median absolute error of 1.22 days for those patients surviving a nonprolonged stay. Using a classification grid based on the predicted patient mortality and prolonged stay allows more accurate modeling of the patient LOS. The detailed models allow to support

  2. Effects of Combined Simultaneous and Sequential Endostar and Cisplatin Treatment in a Mice Model of Gastric Cancer Peritoneal Metastases

    Directory of Open Access Journals (Sweden)

    Lin Jia

    2017-01-01

    Full Text Available Objective. Aimed to study the effects of endostar and cisplatin using an in vivo imaging system (IVIS) in a model of peritoneal metastasis of gastric cancer. Methods. NUGC-4 gastric cancer cells transfected with luciferase gene (NUGC-4-Luc) were injected i.p. into nude mice. One week later, mice were randomly injected i.p.: group 1, cisplatin (d1–3) + endostar (d4–7); group 2, endostar (d1–4) + cisplatin (d5–7); group 3, endostar + cisplatin d1, 4, and 7; group 4, saline for two weeks. One week after the final administration, mice were sacrificed. Bioluminescent data, microvessel density (MVD), and lymphatic vessel density (LVD) were analyzed. Results. Among the four groups, there were no significant differences in the weights and in the number of cancer cell photons on days 1 and 8 (P>0.05). On day 15, the numbers in groups 3 and 1 were less than that in group 2 (P<0.05). There were no significant differences in MVD number (P>0.05) or in LVD number among the four groups (P>0.05). Conclusions. IVIS® was more useful than weight, volume of ascites, and number of peritoneal nodules. The simultaneous group was superior to the sequential groups in killing cancer cells and inhibiting vascular endothelium. Cisplatin-endostar was superior to endostar-cisplatin in killing cancer cells, while the latter was superior in inhibiting peritoneal vascular endothelium.

  3. Effects of Combined Simultaneous and Sequential Endostar and Cisplatin Treatment in a Mice Model of Gastric Cancer Peritoneal Metastases.

    Science.gov (United States)

    Jia, Lin; Ren, Shuguang; Li, Tao; Wu, Jianing; Zhou, Xinliang; Zhang, Yan; Wu, Jianhua; Liu, Wei

    2017-01-01

    Objective. Aimed to study the effects of endostar and cisplatin using an in vivo imaging system (IVIS) in a model of peritoneal metastasis of gastric cancer. Methods. NUGC-4 gastric cancer cells transfected with luciferase gene (NUGC-4-Luc) were injected i.p. into nude mice. One week later, mice were randomly injected i.p.: group 1, cisplatin (d1-3) + endostar (d4-7); group 2, endostar (d1-4) + cisplatin (d5-7); group 3, endostar + cisplatin d1, 4, and 7; group 4, saline for two weeks. One week after the final administration, mice were sacrificed. Bioluminescent data, microvessel density (MVD), and lymphatic vessel density (LVD) were analyzed. Results. Among the four groups, there were no significant differences in the weights and in the number of cancer cell photons on days 1 and 8 (P > 0.05). On day 15, the numbers in groups 3 and 1 were less than that in group 2 (P < 0.05). There were no significant differences in MVD number (P > 0.05) or in LVD number among the four groups (P > 0.05). Conclusions. IVIS® was more useful than weight, volume of ascites, and number of peritoneal nodules. The simultaneous group was superior to the sequential groups in killing cancer cells and inhibiting vascular endothelium. Cisplatin-endostar was superior to endostar-cisplatin in killing cancer cells, while the latter was superior in inhibiting peritoneal vascular endothelium.

  4. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  5. Dual Binding Site and Selective Acetylcholinesterase Inhibitors Derived from Integrated Pharmacophore Models and Sequential Virtual Screening

    Directory of Open Access Journals (Sweden)

    Shikhar Gupta

    2014-01-01

    Full Text Available In this study, we have employed an in silico methodology combining double pharmacophore-based screening, molecular docking, and ADME/T filtering to identify dual binding site acetylcholinesterase inhibitors that preferentially inhibit acetylcholinesterase while also inhibiting butyrylcholinesterase, but to a lesser extent than acetylcholinesterase. 3D-pharmacophore models of AChE and BuChE enzyme inhibitors have been developed from xanthostigmine derivatives through HypoGen and validated using a test set and Fischer's randomization technique. The best acetylcholinesterase and butyrylcholinesterase inhibitor pharmacophore hypotheses, Hypo1_A and Hypo1_B, with high correlation coefficients of 0.96 and 0.94, respectively, were used as 3D queries for screening the Zinc database. The screened hits were then subjected to the ADME/T and molecular docking study to prioritise the compounds. Finally, 18 compounds were identified as potential leads against the AChE enzyme, showing good predicted activities and promising ADME/T properties.

  6. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a "chirp-like pulse" into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
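
    The matched-filter step described above is illustrated below: a linear chirp is generated, a delayed and attenuated echo is buried in noise and narrowband interference, and the delay is recovered by cross-correlation with the replica; the sample rate, delay, and noise levels are arbitrary choices for the example.

      import numpy as np
      from scipy.signal import chirp

      fs = 10_000.0                                   # sample rate (Hz), illustrative
      t = np.arange(0, 0.1, 1 / fs)
      pulse = chirp(t, f0=500, t1=t[-1], f1=2500)     # transmitted chirp replica

      delay_s = 0.237                                 # true echo delay (unknown to the receiver)
      n_total = int(0.5 * fs)
      rx = np.zeros(n_total)
      i0 = int(delay_s * fs)
      rx[i0:i0 + pulse.size] += 0.3 * pulse           # attenuated echo
      rng = np.random.default_rng(8)
      rx += 0.2 * rng.standard_normal(n_total)        # instrumentation noise
      rx += 0.1 * np.sin(2 * np.pi * 1000 * np.arange(n_total) / fs)   # narrowband interference

      corr = np.correlate(rx, pulse, mode="valid")    # matched filter: cross-correlate with the replica
      est_delay = np.argmax(np.abs(corr)) / fs
      print(f"estimated delay {est_delay * 1e3:.1f} ms vs true {delay_s * 1e3:.1f} ms")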

  7. Vehicle speed detection based on gaussian mixture model using sequential of images

    Science.gov (United States)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent Transportation Systems are one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports traffic engineering and management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows: the inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). The steps are as follows: first, we separate foreground and background using a Gaussian Mixture Model (GMM) in each frame. Then, in each frame, we calculate the location of the object and its centroid. Next, we determine the speed by computing the movement of the centroid across the sequence of frames. In the calculation of speed, we only consider frames in which the centroid is inside the predefined ROI. Finally, we transform the pixel displacement into a speed in km/hour. Validation of the system is done by comparing the speed calculated manually with that obtained by the system. Software testing shows that the system can detect vehicle speed with a highest accuracy of 97.52% and a lowest accuracy of 77.41%. Detection results from real road video footage, together with the vehicles' real speeds, are also included.
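
    A skeleton of the described pipeline, using OpenCV's MOG2 Gaussian-mixture background subtractor and centroid displacement inside an ROI, is sketched below; the video path, ROI rows, and pixel-to-metre calibration are placeholders, not the study's actual settings.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("traffic.mp4")            # placeholder video path
      fps = cap.get(cv2.CAP_PROP_FPS)
      backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)   # GMM background model
      roi = (200, 400)                                 # ROI rows (placeholder)
      metres_per_pixel = 0.05                          # placeholder calibration
      prev_cy, speeds = None, []

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = backsub.apply(frame)                  # foreground/background separation
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          m = cv2.moments(mask[roi[0]:roi[1], :], True)
          if m["m00"] > 1e3:                           # a vehicle is inside the ROI
              cy = m["m01"] / m["m00"]                 # centroid row within the ROI
              if prev_cy is not None:
                  speeds.append(abs(cy - prev_cy) * metres_per_pixel * fps * 3.6)   # km/h
              prev_cy = cy
          else:
              prev_cy = None
      cap.release()
      print("mean detected speed (km/h):", np.mean(speeds) if speeds else None)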

  8. Seasonal climate variation and caribou availability: Modeling sequential movement using satellite-relocation data

    Science.gov (United States)

    Nicolson, Craig; Berman, Matthew; West, Colin Thor; Kofinas, Gary P.; Griffith, Brad; Russell, Don; Dugan, Darcy

    2013-01-01

    Livelihood systems that depend on mobile resources must constantly adapt to change. For people living in permanent settlements, environmental changes that affect the distribution of a migratory species may reduce the availability of a primary food source, with the potential to destabilize the regional social-ecological system. Food security for Arctic indigenous peoples harvesting barren ground caribou (Rangifer tarandus granti) depends on movement patterns of migratory herds. Quantitative assessments of physical, ecological, and social effects on caribou distribution have proven difficult because of the significant interannual variability in seasonal caribou movement patterns. We developed and evaluated a modeling approach for simulating the distribution of a migratory herd throughout its annual cycle over a multiyear period. Beginning with spatial and temporal scales developed in previous studies of the Porcupine Caribou Herd of Canada and Alaska, we used satellite collar locations to compute and analyze season-by-season probabilities of movement of animals between habitat zones under two alternative weather conditions for each season. We then built a set of transition matrices from these movement probabilities, and simulated the sequence of movements across the landscape as a Markov process driven by externally imposed seasonal weather states. Statistical tests showed that the predicted distributions of caribou were consistent with observed distributions, and significantly correlated with subsistence harvest levels for three user communities. Our approach could be applied to other caribou herds and could be adapted for simulating the distribution of other ungulates and species with similarly large interannual variability in the use of their range.
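
    The core simulation step, a Markov chain over habitat zones whose transition matrix is switched by an externally imposed seasonal weather state, can be sketched as below; the three zones, two weather states, and all probabilities are invented for illustration and are not the Porcupine Caribou Herd estimates.

      import numpy as np

      rng = np.random.default_rng(9)
      ZONES = ["coastal plain", "foothills", "winter range"]

      # One transition matrix per (season, weather state); all values are illustrative
      P = {
          ("summer", "warm"): np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.2, 0.3, 0.5]]),
          ("summer", "cold"): np.array([[0.4, 0.4, 0.2], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]),
          ("winter", "warm"): np.array([[0.2, 0.3, 0.5], [0.1, 0.4, 0.5], [0.05, 0.15, 0.8]]),
          ("winter", "cold"): np.array([[0.1, 0.2, 0.7], [0.05, 0.25, 0.7], [0.02, 0.08, 0.9]]),
      }

      def simulate_years(n_years=20, start_zone=0):
          """Markov process over habitat zones driven by an external weather sequence."""
          zone, history = start_zone, []
          for year in range(n_years):
              for season in ("summer", "winter"):
                  weather = rng.choice(["warm", "cold"], p=[0.6, 0.4])   # imposed seasonal weather state
                  zone = rng.choice(3, p=P[(season, weather)][zone])
                  history.append((year, season, weather, ZONES[zone]))
          return history

      hist = simulate_years()
      availability = np.mean([z == "winter range" for _, s, _, z in hist if s == "winter"])
      print("share of winters spent on the winter range:", availability)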

  9. Research on manufacturing service behavior modeling based on block chain theory

    Science.gov (United States)

    Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu

    2018-04-01

    According to the attribute characteristics of the processing craft, manufacturing service behavior is divided into service attributes, basic attributes, process attributes and resource attributes, and an attribute information model of the manufacturing service is established. The manufacturing service behavior information is divided into public and private domains. Additionally, block chain technology is introduced, and an information model of the manufacturing service based on block chain principles is established, which addresses both the sharing and the confidentiality of processing behavior information and ensures that data are not tampered with. Based on the key-pairing verification relationship, a selective publishing mechanism for manufacturing information is established, achieving traceability of product data and guaranteeing processing quality.

  10. Seasonal Climate Variation and Caribou Availability: Modeling Sequential Movement Using Satellite-Relocation Data

    Directory of Open Access Journals (Sweden)

    Craig Nicolson

    2013-06-01

    Full Text Available Livelihood systems that depend on mobile resources must constantly adapt to change. For people living in permanent settlements, environmental changes that affect the distribution of a migratory species may reduce the availability of a primary food source, with the potential to destabilize the regional social-ecological system. Food security for Arctic indigenous peoples harvesting barren ground caribou (Rangifer tarandus granti) depends on movement patterns of migratory herds. Quantitative assessments of physical, ecological, and social effects on caribou distribution have proven difficult because of the significant interannual variability in seasonal caribou movement patterns. We developed and evaluated a modeling approach for simulating the distribution of a migratory herd throughout its annual cycle over a multiyear period. Beginning with spatial and temporal scales developed in previous studies of the Porcupine Caribou Herd of Canada and Alaska, we used satellite collar locations to compute and analyze season-by-season probabilities of movement of animals between habitat zones under two alternative weather conditions for each season. We then built a set of transition matrices from these movement probabilities, and simulated the sequence of movements across the landscape as a Markov process driven by externally imposed seasonal weather states. Statistical tests showed that the predicted distributions of caribou were consistent with observed distributions, and significantly correlated with subsistence harvest levels for three user communities. Our approach could be applied to other caribou herds and could be adapted for simulating the distribution of other ungulates and species with similarly large interannual variability in the use of their range.

  11. 3D seismic modeling and reverse‐time migration with the parallel Fourier method using non‐blocking collective communications

    KAUST Repository

    Chu, Chunlei

    2009-01-01

    The major performance bottleneck of the parallel Fourier method on distributed memory systems is the network communication cost. In this study, we investigate the potential of using non‐blocking all‐to‐all communications to solve this problem by overlapping computation and communication. We present the runtime comparison of a 3D seismic modeling problem with the Fourier method using non‐blocking and blocking calls, respectively, on a Linux cluster. The data demonstrate that a performance improvement of up to 40% can be achieved by simply changing blocking all‐to‐all communication calls to non‐blocking ones to introduce the overlapping capability. A 3D reverse‐time migration result is also presented as an extension to the modeling work based on non‐blocking collective communications.
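
    A minimal mpi4py sketch of the overlap idea, assuming an MPI-3 library so that the non-blocking collective Ialltoall is available; the buffer sizes and the local work below stand in for the actual Fourier-method kernels:

```python
# Sketch of overlapping computation with a non-blocking all-to-all (requires MPI-3).
# Buffer sizes and the local work are placeholders, not the paper's actual kernels.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()

n = 1024                                   # elements exchanged with each rank
sendbuf = np.random.rand(size * n)
recvbuf = np.empty_like(sendbuf)

req = comm.Ialltoall(sendbuf, recvbuf)     # start the global transpose communication

local = np.fft.fft(np.random.rand(n))      # independent local work while data is in flight

req.Wait()                                 # communication must complete before using recvbuf
# ... continue the Fourier-method pipeline with recvbuf ...
```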

  12. Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

    OpenAIRE

    Liang, Yuli

    2015-01-01

    This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....
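
    As an illustration only (not taken from the thesis), a block circular Toeplitz covariance matrix can be assembled by arranging blocks so that block (i, j) depends only on (j - i) mod k; the example blocks below are arbitrary symmetric matrices chosen to make the structure visible:

```python
# Illustrative construction of a block circular (block-circulant) covariance matrix:
# blocks B_0, ..., B_{k-1} are placed so that block (i, j) depends only on (j - i) mod k.
# The example blocks are arbitrary; a real model would derive them from variance components.
import numpy as np

def block_circulant(blocks):
    k = len(blocks)
    p = blocks[0].shape[0]
    out = np.zeros((k * p, k * p))
    for i in range(k):
        for j in range(k):
            out[i*p:(i+1)*p, j*p:(j+1)*p] = blocks[(j - i) % k]
    return out

B0 = np.array([[2.0, 0.5], [0.5, 2.0]])   # within-block (compound symmetry) part
B1 = np.array([[1.0, 0.2], [0.2, 1.0]])   # neighbouring blocks
B2 = B1.copy()                            # circular symmetry: lag 2 equals lag 1 for k = 3

Sigma = block_circulant([B0, B1, B2])
print(Sigma.shape)   # (6, 6)
```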

  13. Comparison of vibration test results for Atucha II NPP and large scale concrete block models

    International Nuclear Information System (INIS)

    Iizuka, S.; Konno, T.; Prato, C.A.

    2001-01-01

    In order to study the soil-structure interaction of a reactor building that could be constructed on Quaternary soil, the soil-structure interaction springs derived from full-scale vibration tests of the Atucha II NPP were compared with those derived from vibration tests of large-scale concrete block models constructed on Quaternary soil. This comparison provides case data on soil-structure interaction springs on Quaternary soil for different foundation sizes and stiffnesses. (author)

  14. A physical model of laser-assisted blocking of blood flow: I. Rectangular radiation pulses

    CSIR Research Space (South Africa)

    Zheltov, GI

    2007-03-01

    Full Text Available as to the investigation of destructive changes in these objects [1–16]. Various models were considered in these studies: a model of skin as a set of plane layers with different optical and physical properties (epidermis, dermis, blood layer) [1–9], a similar model... conditions of minimal damage to adjacent healthy tissues. The necessity of local blocking of the blood flow arises, e.g., upon dissection of tissues (stanching blood flow), upon treatment of vascular malformations (including those of diabetic origin...

  15. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Directory of Open Access Journals (Sweden)

    Matteo Pappalardo

    Full Text Available The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptors (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of hH4R structure, virtual screening campaigns are normally run on homology based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model to an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high throughput screening of the ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonists' candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and

  16. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Science.gov (United States)

    Pappalardo, Matteo; Shachaf, Nir; Basile, Livia; Milardi, Danilo; Zeidan, Mouhammed; Raiyn, Jamal; Guccione, Salvatore; Rayan, Anwar

    2014-01-01

    The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptors (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of hH4R structure, virtual screening campaigns are normally run on homology based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model to an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high throughput screening of the ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonists' candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and increase the
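
    For reference, the enrichment factor quoted above is commonly computed as the hit rate within the top-ranked fraction of the screened library divided by the hit rate in the whole library; a small helper illustrating this (not the authors' code):

```python
# Illustrative enrichment-factor calculation for a virtual screen:
#   EF = (hits in the selected top fraction / size of that fraction)
#        / (total hits / total library size)
def enrichment_factor(scores, is_active, top_fraction=0.01):
    ranked = sorted(zip(scores, is_active), key=lambda t: t[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_fraction))
    hits_top = sum(active for _, active in ranked[:n_top])
    hits_all = sum(is_active)
    return (hits_top / n_top) / (hits_all / len(ranked))

# Example usage: score a library of compounds, mark the known actives, and
# evaluate enrichment in the top 1% of the ranked list.
```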

  17. A sequential vesicle pool model with a single release sensor and a ca(2+)-dependent priming catalyst effectively explains ca(2+)-dependent properties of neurosecretion

    DEFF Research Database (Denmark)

    Walter, Alexander M; da Silva Pinheiro, Paulo César; Verhage, Matthijs

    2013-01-01

    identified. We here propose a Sequential Pool Model (SPM), assuming a novel Ca(2+)-dependent action: a Ca(2+)-dependent catalyst that accelerates both forward and reverse priming reactions. While both models account for fast fusion from the Readily-Releasable Pool (RRP) under control of synaptotagmin-1...... the simultaneous changes in release rate and amplitude seen when mutating the SNARE-complex. Finally, it can account for the loss of fast- and the persistence of slow release in the synaptotagmin-1 knockout by assuming that the RRP is depleted, leading to slow and Ca(2+)-dependent fusion from the NRP. We conclude...... that the elusive 'alternative Ca(2+) sensor' for slow release might be the upstream priming catalyst, and that a sequential model effectively explains Ca(2+)-dependent properties of secretion without assuming parallel pools or sensors....

  18. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    Science.gov (United States)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  19. Reconstruction of a Phreatic Explosion from Block Dispersion Modeling at King's Bowl, Idaho

    Science.gov (United States)

    Kobs-Nawotniak, S. E.; Sears, D. W. G.; Hughes, S. S.; Borg, C.; Sears, H.; Skok, J. R.; Elphic, R. C.; Lim, D. S. S.; Heldmann, J. L.; Haberle, C. W.; Guy, H.; Kobayashi, L.; Garry, B.; Neish, C.; Kim, K. J.

    2014-12-01

    King's Bowl (KB), located in Idaho's eastern Snake River Plain, was formed by a phreatic blast through a mostly-congealed lava lake. Blocks up to ~2 m diameter were ejected from the vent to form a ballistic ejecta blanket extending radially more than 100 m. The blocks on the western side of the KB fissure are extraordinarily well exposed, as the fine fraction was blown eastward by ambient winds during the explosion. We present preliminary modeling results using the western ballistic blocks of KB to calculate the energy of the eruption, and the water volume necessary to create the blast. This work is presented in conjunction with two other 2014 AGU conference abstracts submitted by NASA SSERVI funded FINESSE (Field Investigations to Enable Solar System Science and Exploration) team members: Hughes et al., which introduces the geology of KB and Sears et al., which discusses field observation and data trends. Results of this research are extensible to steam-driven pits on other solar system bodies, including those observed on Mars, Phobos, Deimos, and the asteroids. Over 600 blocks ranging from 0.2 to 2 m in diameter were mapped using differential GPS and measured for 3 axial lengths and vesicularity. Mass calculations were corrected using a scaling factor determined from measurements of 100 blocks at KB, coupled with targeted density measurements. The dispersed block trajectories were modeled using a fourth order Runge-Kutta solution of the equations of motion to calculate suites of possible ejection speeds and angles. The resulting characteristic vent velocities were used to calculate the kinetic energy necessary to evacuate the crater at KB; energy required for fragmentation is neglected at this time. Total mass in the kinetic energy calculations was calculated by two separate methods: 1) current volume expression of the KB crater and 2) an additive solution of the ejecta field as determined from radial transect surveys. From the kinetic energy we calculated the
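
    A stripped-down version of this kind of ballistic calculation, assuming a spherical block with quadratic air drag and integrating the equations of motion with a fourth-order Runge-Kutta step; all physical parameters below are illustrative placeholders, not the study's calibrated values:

```python
# Illustrative ballistic-block trajectory with quadratic drag, integrated with RK4.
# Densities, drag coefficient, and launch parameters are placeholders, not the study's values.
import numpy as np

RHO_AIR, CD, G = 1.0, 0.8, 9.81          # assumed air density (kg/m^3), drag coeff., gravity

def derivs(state, radius, rho_block):
    x, z, vx, vz = state
    m = rho_block * 4/3 * np.pi * radius**3
    area = np.pi * radius**2
    v = np.hypot(vx, vz)
    drag = 0.5 * RHO_AIR * CD * area * v
    return np.array([vx, vz, -drag * vx / m, -G - drag * vz / m])

def range_of_block(v0, angle_deg, radius=0.5, rho_block=2800.0, dt=0.01):
    a = np.radians(angle_deg)
    state = np.array([0.0, 0.0, v0 * np.cos(a), v0 * np.sin(a)])
    while True:
        k1 = derivs(state, radius, rho_block)
        k2 = derivs(state + 0.5 * dt * k1, radius, rho_block)
        k3 = derivs(state + 0.5 * dt * k2, radius, rho_block)
        k4 = derivs(state + dt * k3, radius, rho_block)
        state = state + dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
        if state[1] < 0 and state[3] < 0:      # block has returned to ground level
            return state[0]

# Sweep ejection speeds to see which carry a 1 m diameter block ~100 m from the vent.
for v0 in (20, 40, 60):
    print(v0, round(range_of_block(v0, 45.0), 1))
```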

  20. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It seems useful to include «human capital» as a factor in a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models distinguish the labor force only by level of education, while other factors remain unaccounted for. Among them are health status and the level of cultural development, which influence productivity as well as gross product reproduction. Inclusion of a human capital block in the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.
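
    The core of any input-output calculation is the Leontief relation x = (I - A)^(-1) y; a toy two-sector example is sketched below (the coefficients are invented for illustration, and the human capital block of the actual DIOM is not represented here):

```python
# Toy static Leontief input-output calculation: gross output x needed to meet final
# demand y given technical coefficients A.  All numbers are purely illustrative.
import numpy as np

A = np.array([[0.2, 0.3],      # inputs of sector 1 per unit output of sectors 1 and 2
              [0.1, 0.4]])     # inputs of sector 2 per unit output of sectors 1 and 2
y = np.array([100.0, 50.0])    # final demand by sector

x = np.linalg.solve(np.eye(2) - A, y)   # x = (I - A)^{-1} y
print(x.round(2))
```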

  1. Nerve Blocks

    Science.gov (United States)

    A nerve block is an injection to ... What is a Nerve Block? A nerve block is an anesthetic and/ ...

  2. Sequential Management of Commercial Rosewood (Aniba rosaeodora Ducke) Plantations in Central Amazonia: Seeking Sustainable Models for Essential Oil Production

    Directory of Open Access Journals (Sweden)

    Pedro Medrado Krainovic

    2017-11-01

    Full Text Available Rosewood (Aniba rosaeodora Ducke) is an endangered tree that produces essential oil of high commercial value. However, technical-scientific knowledge about its cultivation is scarce, and studies are needed to examine the viability of management. The current study evaluated rosewood aboveground biomass management, measuring the export of nutrients resulting from harvesting and testing sustainable management models. The crowns of 36 rosewood trees were pruned and 108 trees were cut at 50 cm above the soil in two regions in Central Amazonia. Post-harvest performance of sprouting shoots was evaluated; the shoots were then pruned so that the development of two, three or all shoots was permitted. Nutrient stock was estimated as the product of mass and nutrient concentration, which allowed nutritional replacement to be estimated. Pruning allowed regrowth of 40.11% of the initial mass, whereas cutting allowed regrowth of only 1.45%. Chemical attributes of the regrowth biomass differed significantly from those prior to management, and regrowth was significantly correlated with the reserves in root tissues and with the pre-management status of the individual tree. Managing the sprouts resulted in significantly larger growth increments and may provide a form of management that can viably be adopted. Sequential management of biomass resulted in high nutrient exports, and the amount of fertilizer needed for replenishment depended on the intensity and frequency of cropping. Compared with cutting the tree, pruning the canopy reduces the amount of fertilizer required for replenishment by 44%, decreasing to 26.37% in the second rotation. The knowledge generated contributes to making this silvicultural practice ecologically and economically viable.

  3. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers

    Directory of Open Access Journals (Sweden)

    Mi Tian

    2018-01-01

    Full Text Available Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Types of model showed no significant effect in children’s block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then the 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from 1st to 2nd year of preschool, but not from second to third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or working in a large group. It is suggested that future studies should examine other modalities rather than the visual one, diversify the samples and adopt a longitudinal investigation.

  4. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers.

    Science.gov (United States)

    Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui

    2018-01-01

    Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers ( N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Types of model showed no significant effect in children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then the 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from 1st to 2nd year of preschool, but not from second to third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or working in a large group. It is suggested that future studies should examine other modalities rather than the visual one, diversify the samples and adopt a longitudinal investigation.

  5. An enhanced model for minimizing fuel consumption under block-queuing in a drive-through service system

    Energy Technology Data Exchange (ETDEWEB)

    Reilly, C.H.; Berglin, J. [University of Central Florida, Orlando, FL (United States). Dept. of Industrial Engineering and Management Systems

    2004-05-01

    We present a new model for determining the optimal block-size under block-queuing in a simple, single-channel queue at a drive-through service facility. With block-queuing, a queue is partitioned into an active section and a passive section, where drivers are asked to turn off their engines until the active section clears. Our model prescribes a block-size, i.e., a maximum number of vehicles in the active section, which minimizes the expected amount of fuel consumed in the queue. It can assess the effects of the traffic intensity, the service-time variance, and the proportion of compliant drivers in the passive section on the optimal block-size and on fuel consumption in the queue. (author)

  6. The Use of a Block Diagram Simulation Language for Rapid Model Prototyping

    Science.gov (United States)

    Whitlow, Johnathan E.; Engrand, Peter

    1996-01-01

    The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models developed into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.

  7. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
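
    A small sketch of the kind of Beta-Binomial update described, assuming a uniform Beta(1, 1) prior on theta; the prior choice and the 0.95 decision threshold are illustrative:

```python
# Beta-Binomial posterior for theta = P(new code beats old in a randomly chosen experiment).
# The uniform Beta(1,1) prior and the 0.95 confidence threshold are illustrative choices.
from scipy import stats

def posterior_confidence(n_better, n_total, a0=1.0, b0=1.0):
    """Posterior P(theta > 1/2) and posterior sd after n_better successes in n_total trials."""
    a = a0 + n_better
    b = b0 + (n_total - n_better)
    return 1.0 - stats.beta.cdf(0.5, a, b), stats.beta.std(a, b)

conf, sd = posterior_confidence(n_better=14, n_total=20)
print(f"P(theta > 0.5) = {conf:.3f}, posterior sd = {sd:.3f}")
# Declare 'improvement observed' if, e.g., P(theta > 0.5) exceeds 0.95; otherwise fall back
# on the posterior standard deviation as a 'plan B' metric, as suggested in the abstract.
```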

  8. Intramolecular structures in a single copolymer chain consisting of flexible and semiflexible blocks: Monte Carlo simulation of a lattice model

    International Nuclear Information System (INIS)

    Martemyanova, Julia A; Ivanov, Victor A; Paul, Wolfgang

    2014-01-01

    We study conformational properties of a single multiblock copolymer chain consisting of flexible and semiflexible blocks. Monomer units of different blocks are equivalent in the sense of the volume interaction potential, but the intramolecular bending potential between successive bonds along the chain is different. We consider a single flexible-semiflexible regular multiblock copolymer chain with equal content of flexible and semiflexible units and vary the length of the blocks and the stiffness parameter. We perform flat histogram type Monte Carlo simulations based on the Wang-Landau approach and employ the bond fluctuation lattice model. We present here our data on different non-trivial globular morphologies which we have obtained in our model for different values of the block length and the stiffness parameter. We demonstrate that the collapse can occur in one or in two stages depending on the values of both these parameters and discuss the role of the inhomogeneity of intraglobular distributions of monomer units of both flexible and semiflexible blocks. For short block length and/or large stiffness the collapse occurs in two stages, because it goes through intermediate (meta-)stable structures, like a dumbbell shaped conformation. In such conformations the semiflexible blocks form a cylinder-like core, and the flexible blocks form two domains at both ends of such a cylinder. For long block length and/or small stiffness the collapse occurs in one stage, and in typical conformations the flexible blocks form a spherical core of a globule while the semiflexible blocks are located on the surface and wrap around this core.

  9. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  10. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Peter; Byegaard, Johan [Geosigma AB, Uppsala (Sweden); Dershowitz, Bill; Doe, Thomas [Golder Associates Inc., Redmond, WA (United States); Hermanson, Jan [Golder Associates AB (Sweden); Meier, Peter [ANDRA, Chatenay-Malabry (France); Tullborg, Eva-Lena [Terralogica AB (Sweden); Winberg, Anders (ed.) [Conterra AB, Partille (Sweden)

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of interactivity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes was used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  11. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    International Nuclear Information System (INIS)

    Andersson, Peter; Byegaard, Johan; Dershowitz, Bill; Doe, Thomas; Hermanson, Jan; Meier, Peter; Tullborg, Eva-Lena; Winberg, Anders

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of interactivity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes was used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  12. Modular transformations of conformal blocks in WZW models on Riemann surfaces of higher genus

    International Nuclear Information System (INIS)

    Miao Li; Ming Yu.

    1989-05-01

    We derive the modular transformations for conformal blocks in Wess-Zumino-Witten models on Riemann surfaces of higher genus. The basic ingredient consists of using the Chern-Simons theory developed by Witten. We find that the modular transformations generated by Dehn twists are linear combinations of Wilson line operators, which can be expressed in terms of braiding matrices. It can also be shown that modular transformation matrices for g > 0 Riemann surfaces depend only on those for g ≤ 3. (author). 13 refs, 15 figs

  13. Excitation block in a nerve fibre model owing to potassium-dependent changes in myelin resistance

    DEFF Research Database (Denmark)

    Brazhe, Alexey; Maksimov, G. V.; Mosekilde, Erik

    2011-01-01

    . Uptake of potassium leads to Schwann cell swelling and myelin restructuring that impacts the electrical properties of the myelin. In order to further understand the dynamic interaction that takes place between the myelin and the axon, we have modelled submyelin potassium accumulation and related changes...... in myelin resistance during prolonged high-frequency stimulation. We predict that potassium-mediated decrease in myelin resistance leads to a functional excitation block with various patterns of altered spike trains. The patterns are found to depend on stimulation frequency and amplitude and to range from...

  14. Depth geological model building: application to the 3D high resolution 'ANDRA' seismic block

    International Nuclear Information System (INIS)

    Mari, J.L.; Yven, B.

    2012-01-01

    Document available in extended abstract form only. 3D seismic blocks and logging data, mainly acoustic and density logs, are often used for geological model building in time. The geological model must then be converted from time to depth. A geostatistical approach for the time-to-depth conversion of seismic horizons is used in many geo-modelling projects. From a geostatistical point of view, the time-to-depth conversion of seismic horizons is a classical estimation problem involving one or more secondary variables. The Bayesian approach [1] provides an excellent estimator which is more general than traditional kriging with external drift(s) and fits the needs of time-to-depth conversion of seismic horizons very well. The time-to-depth conversion of the selected seismic horizons is used to compute a time-to-depth conversion model at the time sampling rate (1 ms). The 3D depth conversion model allows the computation of an interval velocity block, which is compared with the acoustic impedance block to estimate a density block as a quality control. Unrealistic density values are edited, and the interval velocity block as well as the depth conversion model is updated. The proposed procedure has been applied to a 3D data set. The dataset comes from a high-resolution 3D seismic survey recorded in France at the boundary of the Meuse and Haute-Marne departments, in the vicinity of the Andra Center (National radioactive waste management Agency). The 3D design is a cross spread. The active spread is composed of 12 receiver lines with 120 stations each. The source lines are perpendicular to the receiver lines. The receiver and source line spacings are 80 m and 120 m, respectively. The receiver and source point spacings are 20 m. The source is a Vibroseis source generating a signal in the 14-140 Hz frequency bandwidth. The bin size is 10 x 10 m2. The nominal fold is 60. A conventional seismic processing sequence was applied to the data set. It includes amplitude recovery, deconvolution and wave

  15. Modeling and Optimization of Compressive Strength of Hollow Sandcrete Block with Rice Husk Ash Admixture

    Directory of Open Access Journals (Sweden)

    2016-11-01

    Full Text Available The paper reports an investigation into model development and optimization of the compressive strength of 55/45 to 70/30 cement/Rice Husk Ash (RHA) hollow sandcrete blocks. The low cost and local availability of RHA, a pozzolanic material, make it ripe for exploitation. The study applies Scheffe's optimization approach to obtain a mathematical model of the form f(x_i1, x_i2, x_i3, x_i4), where the x_i are proportions of the concrete components, viz: cement, RHA, sand and water. Scheffe's experimental design techniques are followed to mould various hollow block samples measuring 450 mm x 225 mm x 150 mm, which are tested for 28-day strength. The task involved experimentation and design, applying the second-order polynomial characterization process of the simplex lattice method. The model adequacy is checked using the control factors. Finally, software is prepared to handle the design computation process: it takes the desired property of the mix and generates the optimal mix ratios; reversibly, any mix ratio can be specified and the attainable strength obtained.
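
    For illustration, a Scheffe second-order (quadratic) mixture polynomial for four components has the form y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j; a minimal least-squares fit of such a model is sketched below, with fabricated placeholder mix proportions and strengths rather than the paper's measurements:

```python
# Scheffe second-order mixture model for 4 components:
#   y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j,   with sum_i x_i = 1.
# The design points and responses below are placeholders, not the paper's data.
import numpy as np
from itertools import combinations

def scheffe_design_matrix(X):
    cols = [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

X = np.array([                      # proportions of cement, RHA, sand, water (rows sum to 1)
    [0.30, 0.05, 0.50, 0.15],
    [0.28, 0.07, 0.50, 0.15],
    [0.26, 0.09, 0.50, 0.15],
    [0.30, 0.05, 0.48, 0.17],
    [0.28, 0.07, 0.48, 0.17],
    [0.26, 0.09, 0.48, 0.17],
    [0.30, 0.07, 0.47, 0.16],
    [0.27, 0.06, 0.51, 0.16],
    [0.29, 0.08, 0.49, 0.14],
    [0.25, 0.08, 0.51, 0.16],
])
y = np.array([3.1, 2.9, 2.6, 3.0, 2.8, 2.5, 2.9, 3.0, 2.8, 2.6])   # 28-day strength (N/mm^2)

coeffs, *_ = np.linalg.lstsq(scheffe_design_matrix(X), y, rcond=None)
print(coeffs.round(2))    # 4 linear + 6 interaction coefficients
```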

  16. Modeling of block copolymer dry etching for directed self-assembly lithography

    Science.gov (United States)

    Belete, Zelalem; Baer, Eberhard; Erdmann, Andreas

    2018-03-01

    Directed self-assembly (DSA) of block copolymers (BCP) is a promising alternative technology to overcome the limits of patterning for the semiconductor industry. DSA exploits the self-assembling property of BCPs for nano-scale manufacturing and to repair defects in patterns created during photolithography. After self-assembly of BCPs, to transfer the created pattern to the underlying substrate, selective etching of PMMA (poly (methyl methacrylate)) relative to PS (polystyrene) is required. However, the etch process to transfer the self-assembled "fingerprint" DSA patterns to the underlying layer is still a challenge. Combined experimental and modelling studies increase understanding of plasma interaction with BCP materials during the etch process and support the development of selective processes that form well-defined patterns. In this paper, a simple model based on a generic surface model has been developed, and an investigation to understand the etch behavior of PS-b-PMMA for Ar and Ar/O2 plasma chemistries has been conducted. The implemented model is calibrated for etch rates and etch profiles with literature data to extract parameters and conduct simulations. In order to understand the effect of the plasma on the block copolymers, the etch model was first calibrated for polystyrene (PS) and poly (methyl methacrylate) (PMMA) homopolymers. After calibration of the model with the homopolymer etch rates, a full Monte-Carlo simulation was conducted, and simulation results are compared with critical-dimension (CD) and selectivity measurements of the etch profile. In addition, etch simulations for a lamellar pattern have been demonstrated using the implemented model.

  17. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  18. Evaluation of Effective thermal conductivity models on the prismatic fuel block of a Very High Temperature Reactor by CFD analysis

    International Nuclear Information System (INIS)

    Shin, Dong-Ho; Cho, Hyoung-Kyu; Tak, Nam-Il; Park, Goon-Cherl

    2014-01-01

    Effective thermal conductivity models that can be used to analyze the heat transfer phenomena of a prismatic fuel block were evaluated by CFD analysis. In the accident condition of a VHTR, when forced convection is lost, heat flows in the radial direction through the hexagonal fuel blocks, which contain a large number of coolant holes and fuel compacts. Because of the complex geometry of the fuel block and the radiation heat transfer, a detailed heat transfer computation on the fuel block requires excessive computational resources and is therefore not appropriate for a lumped-parameter code. A system code such as GAMMA+ instead adopts an effective thermal conductivity model. Despite the complexity of the heat transfer modes, accurate analysis of the heat transfer in the fuel block is necessary, since it is directly relevant to the integrity of the nuclear fuel embedded in the block. To achieve accurate analysis of these complex heat transfer modes with limited computing resources, credible effective thermal conductivity (ETC) models, in which the effects of all heat transfer modes are lumped, are necessary. In this study, various ETC models were introduced and evaluated with CFD calculations. The Maxwell-based model was estimated to be the most pertinent among the ETC models considered. (author)
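
    One common form of the Maxwell (Maxwell-Eucken) effective conductivity for a continuous matrix with dilute spherical inclusions is sketched below; whether this exact form matches the Maxwell-based model evaluated in the paper is an assumption, and the input values are illustrative:

```python
# Maxwell-Eucken effective thermal conductivity for spherical inclusions (volume fraction phi)
# dispersed in a continuous matrix.  Input values are illustrative, not the paper's.
def maxwell_eucken(k_matrix, k_disp, phi):
    num = k_disp + 2.0 * k_matrix + 2.0 * phi * (k_disp - k_matrix)
    den = k_disp + 2.0 * k_matrix - phi * (k_disp - k_matrix)
    return k_matrix * num / den

# Example: a graphite-like matrix (~30 W/m.K) with low-conductivity inclusions (~0.2 W/m.K)
# occupying 15% of the block volume (numbers chosen only to exercise the formula).
print(round(maxwell_eucken(30.0, 0.2, 0.15), 2))
```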

  19. Ricci time in the Lemaître-Tolman model and the block universe

    Science.gov (United States)

    Elmahalawy, Yasser; Hellaby, Charles; Ellis, George F. R.

    2015-10-01

    It is common to think of our universe according to the "block universe" concept, which says that spacetime consists of many "stacked" three-surfaces, labelled by some kind of proper time, S. Standard ideas do not distinguish past and future, but Ellis' "evolving block universe" tries to make a fundamental distinction. One proposal for this proper time is the proper time measured along the timelike Ricci eigenlines, starting from the big bang. This work investigates the shape of the "Ricci time" surfaces relative to the null surfaces. We use the Lemaître-Tolman metric as our inhomogeneous spacetime model, and we find the necessary and sufficient conditions for these constant-S surfaces to be spacelike or timelike. Furthermore, we look at the effect of strong gravity domains by determining the location of timelike S regions relative to apparent horizons. We find that constant Ricci time surfaces are always spacelike near the big bang, while at late times (near the crunch or the extreme far future), they are only timelike under special circumstances. At intermediate times, timelike S regions are common unless the variation of the bang time is restricted. The regions where these surfaces become timelike are often adjacent to apparent horizons, but always outside them; in particular, timelike S regions do not occur inside the horizons of black-hole-like models.

  20. Simulation modeling analysis of sequential relations among therapeutic alliance, symptoms, and adherence to child-centered play therapy between a child with autism spectrum disorder and two therapists.

    Science.gov (United States)

    Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura

    2017-07-01

    This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.

  1. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  2. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    Science.gov (United States)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
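
    A bare-bones non-intrusive PCE fitted by ordinary least squares for a single standard-normal input, using probabilists' Hermite polynomials; the "simulation model" here is a stand-in function, and the polynomial degree and sample counts are arbitrary:

```python
# Non-intrusive polynomial chaos expansion (OLS variant) for one standard-normal input.
# The "simulation model" f is a stand-in; in MAPOD it would be the physics-based NDT model.
import numpy as np
from numpy.polynomial import hermite_e as He

def f(x):                       # placeholder for the expensive measurement simulation
    return np.exp(0.3 * x) + 0.1 * x**2

deg, n_train = 6, 50
rng = np.random.default_rng(0)

x_train = rng.standard_normal(n_train)
Psi = He.hermevander(x_train, deg)              # probabilists' Hermite basis, shape (n, deg+1)
coeffs, *_ = np.linalg.lstsq(Psi, f(x_train), rcond=None)

# Cheap surrogate evaluations propagate the input uncertainty to output statistics.
x_mc = rng.standard_normal(100_000)
y_surrogate = He.hermevander(x_mc, deg) @ coeffs
print(y_surrogate.mean(), y_surrogate.std())
```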

  3. Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization

    KAUST Repository

    Peng, Chengbin

    2014-12-01

    Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges, respectively. When certain nodes are highly connected with each other, those nodes form a cluster, which is called a community in our context. It is usually assumed that each node belongs to one community only, but evidence from biology and social networks reveals that communities often overlap with each other. In other words, one node may belong to multiple communities. In light of that, mixed membership stochastic block models (MMB) have been developed to model networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in and out connections and one probability matrix. When the probability of connections between nodes in different communities is significantly small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithms can detect overlapping communities regardless of whether the number of communities is known. Experiments show that our algorithm can achieve better community detection performance than the traditional NMF algorithm. © 2014 IEEE.
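
    A minimal sketch of reading soft (overlapping) community memberships from a non-negative factorization of an adjacency matrix; the toy graph and the use of scikit-learn's generic NMF are illustrative stand-ins for the constrained algorithm proposed in the paper:

```python
# Toy overlapping-community detection: factor the adjacency matrix A ~ W H with NMF
# and read soft memberships from W.  This generic NMF is a stand-in for the paper's
# constrained formulation, and the small graph is invented for illustration.
import numpy as np
from sklearn.decomposition import NMF

A = np.array([                 # two overlapping triangles sharing node 2
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(A)                        # node-by-community weights
memberships = W / W.sum(axis=1, keepdims=True)    # normalize rows to mixed memberships
print(memberships.round(2))                       # node 2 should load on both communities
```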

  4. A New Equivalent Statistical Damage Constitutive Model on Rock Block Mixed Up with Fluid Inclusions

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2018-01-01

    Full Text Available So far, there are few studies concerning the effect of closed “fluid inclusions” on the macroscopic constitutive relation of deep rock. The fluid-matrix element (FME) is defined based on the rock element in the statistical damage model. The properties of the FME are related to the size of the inclusions, the fluid properties, and the pore pressure. Using the FME, the equivalent elastic modulus of a rock block containing fluid inclusions is obtained with Eshelby inclusion theory and the double M-T homogenization method. The new statistical damage model of rock is established on this equivalent elastic modulus. Besides, porosity and confining pressure are important influencing factors of the model. The model reflects the initial damage (void and fluid inclusion) and the macroscopic deformation law of rock, which is an improvement of the traditional statistical damage model. Additionally, the model is not only consistent with rock damage experiment data and three-axis compression experiment data of rock containing pore water, but also describes the locked-in stress experiment in rock-like material. It is a new fundamental study of the constitutive relation of locked-in stress in deep rock mass.

  5. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    Science.gov (United States)

    Molnar, P.

    2012-04-01

    Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse lead to complex nonlinear threshold-driven dynamics in sediment transport, which have been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics in sediment input. The hypothesis is that this state, in which sediment transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically-driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focussing on the magnification of intermittency and
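
    A highly simplified 1-d cellular-automaton sketch in the spirit described above: grains are injected upstream, accumulate behind blocked cells until a local threshold is exceeded, and then travel a random distance downstream; the threshold, geometry and travel-distance law are invented for illustration, not calibrated to any stream:

```python
# Minimal 1-d "grain blocking" sandpile: cells hold grain counts; when a cell exceeds a
# threshold the pile collapses and its grains travel a random distance downstream.
# Threshold, reach length, and the travel-distance law are illustrative, not calibrated.
import numpy as np

rng = np.random.default_rng(1)
n_cells, threshold, n_steps = 100, 4, 20000
bed = np.zeros(n_cells, dtype=int)
outflux = []                                  # grains leaving the reach per time step

for _ in range(n_steps):
    bed[0] += 1                               # steady upstream sediment input
    out = 0
    unstable = np.flatnonzero(bed >= threshold)
    while unstable.size:
        i = unstable[0]
        moved, bed[i] = bed[i], 0             # step collapse releases the stored grains
        for _ in range(moved):
            j = i + rng.geometric(0.3)        # random travel distance downstream
            if j >= n_cells:
                out += 1                      # grain exits the reach
            else:
                bed[j] += 1
        unstable = np.flatnonzero(bed >= threshold)
    outflux.append(out)

# Intermittent, heavy-tailed output despite constant input:
print(np.mean(outflux), np.max(outflux))
```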

  6. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  7. Slip-Size Distribution and Self-Organized Criticality in Block-Spring Models with Quenched Randomness

    Science.gov (United States)

    Sakaguchi, Hidetsugu; Kadowaki, Shuntaro

    2017-07-01

    We study slowly pulled block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If external forces are slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws. The exponent is close to that in the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated in cases of Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.

  8. Meso-scale Modeling of Block Copolymers Self-Assembly in Casting Solutions for Membrane Manufacture

    KAUST Repository

    Moreno Chaparro, Nicolas

    2016-05-01

    Isoporous membranes manufactured from diblock copolymer are successfully produced at laboratory scale under controlled conditions. Because of the complex phenomena involved, membrane preparation requires trial and error methodologies to find the optimal conditions, leading to a considerable demand of resources. Experimental insights demonstrate that the self-assembly of the block copolymers in solution has an effect on the final membrane structure. Nevertheless, the complete understanding of these multi-scale phenomena is elusive. Herein we use the coarse-grained method Dissipative Particle Dynamics to study the self-assembly of block copolymers that are used for the preparation of the membranes. To simulate representative time and length scales, we introduce a framework for model reduction of polymer chain representations for dissipative particle dynamics, which preserves the properties governing the phase equilibria. We reduce the number of degrees of freedom by accounting for the correlation between beads in fine-grained models via power laws and the consistent scaling of the simulation parameters. The coarse-graining models are consistent with the experimental evidence, showing a morphological transition of the aggregates as the polymer concentration and solvent affinity change. We show that hexagonal packing of the micelles can occur in solution within different windows of polymer concentration depending on the solvent affinity. However, the shape and size dispersion of the micelles determine the characteristic arrangement. We describe the order of crew-cut micelles using a rigid-sphere approximation and propose different phase parameters that characterize the emergence of monodisperse-spherical micelles in solution. Additionally, we investigate the effect of blending asymmetric diblock copolymers (AB/AC) over the properties of the membranes. We observe that the co-assembly mechanism localizes the AC molecules at the interface of A and B domains, and induces

  9. Sequential Prediction of Literacy Achievement for Specific Learning Disabilities Contrasting in Impaired Levels of Language in Grades 4 to 9.

    Science.gov (United States)

    Sanders, Elizabeth A; Berninger, Virginia W; Abbott, Robert D

    Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in Grades 4 through 9 ( N = 103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n = 25), word reading and spelling (dyslexia, n = 60), or oral and written language (oral and written language learning disabilities, n = 18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A five-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word-form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of variance. Orthographic word-form coding uniquely predicted nearly every measure, whereas attention switching uniquely predicted only reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and oral and written language learning disabilities were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed.
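    The five-block sequential regression described above amounts to fitting nested models and reporting the incremental variance explained by each block. The sketch below illustrates that idea with hypothetical predictor blocks and simulated data; the block names, sample size and effect sizes are illustrative assumptions, not the study's actual variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def sequential_r2(y, blocks):
        """Fit nested OLS models, adding one predictor block at a time,
        and return the incremental R^2 contributed by each block."""
        results = []
        X = np.empty((len(y), 0))
        prev_r2 = 0.0
        for name, Xb in blocks:
            X = np.hstack([X, Xb])
            r2 = LinearRegression().fit(X, y).score(X, y)
            results.append((name, r2 - prev_r2))
            prev_r2 = r2
        return results

    # Hypothetical example: three predictor blocks, one literacy outcome.
    rng = np.random.default_rng(1)
    n = 103
    translation = rng.normal(size=(n, 2))   # Block 1: cognitive-linguistic translation
    word_form   = rng.normal(size=(n, 3))   # Block 2: word-form coding
    attention   = rng.normal(size=(n, 2))   # Block 3: supervisory attention
    y = translation @ [0.5, 0.3] + word_form @ [0.4, 0.2, 0.1] + rng.normal(scale=1.0, size=n)

    for block, delta in sequential_r2(y, [("translation", translation),
                                          ("word-form", word_form),
                                          ("attention", attention)]):
        print(f"{block}: delta R^2 = {delta:.3f}")
    ```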

  10. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as Akaike Information Criterion....... The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms......-based method to over-estimate the co-integration rank in relatively small sample sizes....

  11. The structure and evolution of galacto-detonation waves - Some analytic results in sequential star formation models of spiral galaxies

    Science.gov (United States)

    Cowie, L. L.; Rybicki, G. B.

    1982-01-01

    Waves of star formation in a uniform, differentially rotating disk galaxy are treated analytically as a propagating detonation wave front. It is shown that if single solitary waves could be excited, they would evolve asymptotically to one of two stable spiral forms, each of which rotates with a fixed pattern speed. Simple numerical solutions confirm these results. However, the pattern of waves that develops naturally from an initially localized disturbance is more complex and dies out within a few rotation periods. These results suggest a conclusive observational test for deciding whether sequential star formation is an important determinant of spiral structure in some class of galaxies.

  12. Final report of the TRUE Block Scale project. 3. Modelling of flow and transport

    Energy Technology Data Exchange (ETDEWEB)

    Poteri, Antti [VTT Processes, Helsinki (Finland); Billaux, Daniel [Itasca Consultants SA, Ecully (France); Dershowitz, William [Golder Associates Inc., Redmond, WA (United States); Gomez-Hernandez, J. Jaime [Univ. Politecnica de Valencia (Spain). Dept. of Hydraulic and Environmental Engineering; Cvetkovic, Vladimir [Royal Inst. of Tech., Stockholm (Sweden). Div. of Water Resources Engineering; Hautojaervi, Aimo [Posiva Oy, Olkiluoto (Finland); Holton, David [Serco Assurance, Harwell (United Kingdom); Medina, Agustin [UPC, Barcelona (Spain); Winberg, Anders (ed.) [Conterra AB, Uppsala (Sweden)

    2002-12-01

    A series of tracer experiments were performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update the descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single-hole and cross-hole) were performed. Based on the characterisation data, a hydro-structural model of the investigated rock volume was constructed, including deterministic structures, a stochastic background fracture population and their material properties. In addition, a generic microstructural conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was also preceded by the formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydro-structural model, the effects of heterogeneity and block-scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones

  13. Final report of the TRUE Block Scale project. 3. Modelling of flow and transport

    International Nuclear Information System (INIS)

    Poteri, Antti; Billaux, Daniel; Dershowitz, William; Gomez-Hernandez, J. Jaime; Holton, David; Medina, Agustin; Winberg, Anders

    2002-12-01

    A series of tracer experiments were performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update the descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single-hole and cross-hole) were performed. Based on the characterisation data, a hydro-structural model of the investigated rock volume was constructed, including deterministic structures, a stochastic background fracture population and their material properties. In addition, a generic microstructural conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was also preceded by the formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydro-structural model, the effects of heterogeneity and block-scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of number of different types of immobile zones included

  14. Detection and localization of change points in temporal networks with the aid of stochastic block models

    Science.gov (United States)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  15. Towards a phenomena-based model assessment: The Case of Blocking over Europe

    Science.gov (United States)

    Jury, Martin W.; Barriopedro, David

    2016-04-01

    Atmospheric blocking (AB) is a main phenomenon influencing future climate change in Europe. Results of Global Circulation Models (GCMs) state with medium confidence that the frequency of AB over the Northern Hemisphere will not increase, while AB-related regional changes in Europe are uncertain, especially in connection with AB intensity and persistence. Here, we present results of a study connecting GCMs' ability to reproduce AB patterns with their ability to correctly reproduce near-surface temperature (tas) and precipitation (pr). The method detects AB by localizing high-pressure systems between 55°N and 65°N using geopotential height gradients at the 500 hPa level (zg500). Daily fields of tas and pr are connected to the results of the AB detection over continental Europe. The AB detection method accounts for AB frequency, duration and intensity, thereby allowing a detailed comparison of AB representations in GCMs. Furthermore, the number of AB episodes, average AB duration, longitudinal extension and longitudinal propagation are taken into account. The AB detection is applied to zg500 fields of 3 reanalyses (ERA40, JRA55 and NCEP/NCAR) and 10 CMIP5 GCMs between 1961 and 1990 over the Atlantic and over Europe. Most of the evaluated models underrepresent the spatial distribution of annual blocking days over Europe. This is also the case on seasonal timescales, with the largest underestimations during winter and only some overestimations during summer. There are indications that biases in the representation of AB are connected to overall GCM biases concerning the representation of surface fields, especially when taking into account the seasonal as well as localized characteristics of the AB representation and the surface biases.

  16. Statistical Examination of the Resolution of a Block-Scale Urban Drainage Model

    Science.gov (United States)

    Goldstein, A.; Montalto, F. A.; Digiovanni, K. A.

    2009-12-01

    Stormwater drainage models are utilized by cities in order to plan retention systems, prevent combined sewer overflows and design for development. These models aggregate subcatchments and ignore small pipelines, providing a coarse representation of a sewer network. This study evaluates the importance of resolution by comparing two models developed at a neighborhood scale for predicting the total quantity and peak flow of runoff against observed runoff measured at the site. The low- and high-resolution models were designed for a 2.6 ha block in the Bronx, NYC, in the EPA Stormwater Management Model (SWMM), using a single catchment and separate subcatchments based on surface cover, respectively. The surface covers represented included sidewalks, street, buildings, and backyards. Characteristics for physical surfaces and the infrastructure in the high-resolution model were determined from site visits, sewer pipe maps, aerial photographs, and GIS data-sets provided by the NYC Department of City Planning. Since the low-resolution model was depicted at a coarser scale, generalizations were made about the overall average characteristics of the catchment. Rainfall and runoff data were monitored over a four-month period during the summer rainy season. A total of 53 rainfall events were recorded, but only 29 storms produced significant amounts of runoff to be evaluated in the simulations. To determine which model was more accurate at predicting the observed runoff, three characteristics for each storm were compared: peak runoff, total runoff, and time to peak. Two statistical tests were used to determine the significance of the results: the percent difference for each storm and the overall chi-squared goodness-of-fit distribution for both the low- and high-resolution models. These tests evaluate whether there is a statistical difference depending on the resolution of the stormwater model. The scale of representation is being evaluated because it could have a profound impact on
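    As a rough illustration of the per-storm comparison described above, the sketch below computes percent differences and a chi-squared goodness-of-fit statistic for simulated versus observed peak runoff; the storm values and the exact statistical formulation are assumptions for demonstration, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import chi2 as chi2_dist

    def percent_difference(simulated, observed):
        """Per-storm percent difference between simulated and observed runoff."""
        simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
        return 100.0 * (simulated - observed) / observed

    # Hypothetical peak-runoff values (L/s) for five storms.
    observed = np.array([12.0, 5.5, 20.1, 8.3, 15.0])
    models = {"low-res":  np.array([10.2, 6.8, 16.5, 9.9, 12.1]),
              "high-res": np.array([11.5, 5.9, 19.0, 8.8, 14.2])}

    for label, sim in models.items():
        pd = percent_difference(sim, observed)
        chi2_stat = np.sum((sim - observed) ** 2 / observed)   # goodness-of-fit statistic
        p_value = chi2_dist.sf(chi2_stat, df=len(observed) - 1)
        print(f"{label}: mean |% diff| = {np.abs(pd).mean():.1f}%, "
              f"chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")
    ```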

  17. Mechanical behavior analysis of small-scale modeling of ceramic block masonry structures: geometries effect

    Directory of Open Access Journals (Sweden)

    E. Rizzatti

    Full Text Available This paper presents the experimental results of a research program on ceramic block masonry under compression. Four different block geometries were investigated. Two of them had circular hollows with different net areas. The third had two rectangular hollows, and the last had rectangular hollows and double central webs. The prisms and walls were built with two mortar types, 1:1:6 (I) and 1:0.5:4 (II) (proportions by volume of cement:lime:sand). One-third scale blocks were used to test blocks, prisms and walls in compression. It was possible to conclude that the block with double central webs gave better compressive strength results, proving to be more efficient. The mortar did not influence the compressive strength of the prisms and walls.

  18. A blocked takeover in the Polish power sector: A model-based analysis

    International Nuclear Information System (INIS)

    Kamiński, Jacek

    2014-01-01

    As the President of the Office of Competition and Consumer Protection refused to approve a government initiated takeover in the Polish power sector and the Court of Competition and Consumer Protection did not make a ruling on that case, the takeover was finally prohibited. In this context, the main aim of this paper is to carry out a quantitative analysis of the impact of the takeover in question on electricity prices and quantities, consumer and producer surpluses, dead weight loss and emissions. The scope of the study covers the Polish power generation sector and the analysis was carried out for 2009. A game theory-based electricity market equilibrium model developed for Poland was applied. The model includes several country-specific conditions, such as a coal-based power generation fuel-mix, a large share of biomass co-combustion, etc. For the sake of clarity, only four scenarios are assumed. The paper concludes that the declared synergy savings did not compensate for the increase in dead weight loss and the transfer of surplus from consumers to producers caused by increased market power. - Highlights: • A takeover blocked by the President of the Office of Competition and Consumer Protection was analysed. • A game theory-based model of the Polish wholesale electricity market was applied. • The impact of the takeover on electricity prices and generation levels, surplus transfers and dead weight loss was estimated. • The results were compared with the declared synergy savings

  19. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm was originated in the first author's PhD thesis. The term of U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design the control systems with smooth nonlinear plants/processes described by polynomial models. For analysing the feasibility and effectiveness, sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for the readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for the U-model-based research from intuitive/heuristic stage to rigour/formal/comprehensive studies.

  20. Block effect on HCV infection by HMGB1 released from virus-infected cells: An insight from mathematical modeling

    Science.gov (United States)

    Wang, Wei; Ma, Wanbiao

    2018-06-01

    The nuclear protein high-mobility group box 1 (HMGB1) can have an active role in deoxyribonucleic acid (DNA) organization and the regulation of transcription. Based on the new findings from a recent experimental study, the blocking effect on HCV infection by HMGB1 released from virus-infected cells is investigated using a diffusive model for viral infection dynamics. In the model, the diffusion of the virus depends not only on its concentration gradient, but also on the concentration of HMGB1. The basic reproduction number, threshold dynamics, stability properties of the steady states, travelling wave solutions, and spreading speed for the proposed model are studied. We show that the HMGB1-induced blocking of HCV infection slows the spread of virus compared with random diffusion only. Numerically, it is shown that a high concentration of HMGB1 can block the spread of virus and this confirms, not only qualitatively but also quantitatively, the experimental result.

  1. Effects of a combined parent-student alcohol prevention program on intermediate factors and adolescents' drinking behavior: A sequential mediation model.

    Science.gov (United States)

    Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M

    2015-08-01

    Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: an increase in strict parental rule setting and adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial included 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents, randomized over 4 conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. The outcome measure was the amount of weekly drinking measured at ages 12 to 15: a baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent interventions on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is one of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need to involve parents in youth alcohol prevention programs, and to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved.

  2. Gravity Data Analysis and Modelling for Basin Sedimen of Eastern Java Blocks

    International Nuclear Information System (INIS)

    Khoirunnia, Luthfia

    2016-01-01

    The study of the Eastern Java Basin was conducted by 3D modelling of the subsurface structure using gravity anomalies. The aims of this research are to describe and model in 3D the sedimentary basin system of the Eastern Java Blocks based on gravity anomalies. The model construction was performed by an inversion technique applying the Singular Value Decomposition (SVD) method and Occam optimization. The projection method used the equivalent central mass technique of Dampney with a height of 5.5 km and a data error of 1.84 × 10⁻¹⁷. Separation of the residual anomaly from the complete Bouguer anomaly on a flat plane was done using upward continuation; this process uses the principle of a low-pass filter, which passes low frequencies. The sedimentary basin appears at depths of 0.2 km to 1.4 km, as shown by the low anomaly in the area and by the appearance of the basin in the 3D model. The inversion with Occam optimization has an error of 1.2%, and the SVD inversion has an error of 11%. The sedimentary basin is dominant in Probolinggo and, partially, in Besuki and Lumajang. Its formation is due to tectonic processes in which the tectonic evolution of the material occurs without significant lateral shift (the so-called otokton model), accompanied by the formation of a basin that follows the development of the subduction system in a semi-concentric pattern. The sediments are dominated by volcanic deposits resulting from volcanism events; pyroclastic volcanic sediments generally form during explosive degassing of volcanic magma. (paper)

  3. Numerical modeling of block structure dynamics: Application to the Vrancea region and study of earthquakes sequences in the synthetic catalogs

    International Nuclear Information System (INIS)

    Soloviev, A.A.; Vorobieva, I.A.

    1995-08-01

    A seismically active region is represented as a system of absolutely rigid blocks divided by infinitely thin plane faults. The interaction of the blocks along the fault planes and with the underlying medium is viscous-elastic. The system of blocks moves as a consequence of the prescribed motion of boundary blocks and of the underlying medium. When, for some part of a fault plane, the stress surpasses a certain strength level, a stress drop (a 'failure') occurs; it can cause failures in other parts of the fault planes. The failures are considered as earthquakes. As a result of the numerical simulation a synthetic earthquake catalogue is produced. This procedure is applied to the numerical modeling of the dynamics of the block structure approximating the tectonic structure of the Vrancea region. Through numerical experiments, values of the model parameters were obtained which supplied the synthetic earthquake catalog with a spatial distribution of epicenters close to the real distribution of earthquake epicenters in the Vrancea region. The frequency-magnitude relations (Gutenberg-Richter curves) obtained for the synthetic and real catalogs have some common features. The sequences of earthquakes arising in the model are studied for some artificial structures. It is found that 'foreshocks', 'main shocks', and 'aftershocks' can be detected among the earthquakes forming the sequences. The features of aftershocks, foreshocks, and catalogs of main shocks are analysed. (author). 5 refs, 12 figs, 16 tabs

  4. THE USE OF SIMULINK BLOCK DIAGRAM TO SOLVE MATHEMATICAL MODELS AND CONTROL EQUATIONS

    Directory of Open Access Journals (Sweden)

    N.M. Ghasem

    2003-12-01

    Full Text Available In this paper, a Simulink block diagram is used to solve a model consisting of a set of ordinary differential and algebraic equations to control the temperature inside a simple stirred tank heater. The flexibility of the Simulink block diagram gives students a better understanding of control systems. Simulink also allows the solution of mathematical models and easy visualization of the system variables. A polyethylene fluidized bed reactor is considered as an industrial example, and the effect of the Proportional, Integral and Derivative control policy is presented for comparison.
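    For readers without Simulink, the same kind of model can be integrated with any ODE solver. The sketch below is a minimal Python analogue of a stirred tank heater under PI temperature control; the energy-balance form, all parameter values and the controller tuning are illustrative assumptions rather than the paper's model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters for a simple stirred tank heater energy balance:
    #   V * dT/dt = F * (T_in - T) + Q / (rho * cp)
    V, F = 1.0, 0.05            # tank volume (m^3), volumetric flow (m^3/s)
    rho, cp = 1000.0, 4184.0    # water density (kg/m^3) and heat capacity (J/kg/K)
    T_in, T_set = 290.0, 330.0  # inlet and set-point temperatures (K)
    Kc, tau_i = 5.0e4, 50.0     # PI controller gain and integral time (assumed values)

    def tank(t, y):
        T, integral_error = y
        error = T_set - T
        Q = Kc * (error + integral_error / tau_i)   # PI heat input (W)
        Q = max(Q, 0.0)                             # the heater cannot cool
        dTdt = (F * (T_in - T) + Q / (rho * cp)) / V
        return [dTdt, error]

    sol = solve_ivp(tank, (0.0, 2000.0), [T_in, 0.0], max_step=1.0)
    print(f"final temperature: {sol.y[0, -1]:.1f} K (set point {T_set} K)")
    ```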

  5. Automation of block assignment planning using a diagram-based scenario modeling method

    Directory of Open Access Journals (Sweden)

    Hwang In Hyuck

    2014-03-01

    Full Text Available Most shipbuilding scheduling research so far has focused on the load level on the dock plan. This is because the dock is the least extendable resource in shipyards, and its overloading is difficult to resolve. However, once dock scheduling is completed, making a plan that makes the best use of the rest of the resources in the shipyard to minimize any additional cost is also important. Block assignment planning is one of the midterm planning tasks; it assigns a block to the facility (factory/shop or surface plate) that will actually manufacture the block according to the block characteristics and current situation of the facility. It is one of the most heavily loaded midterm planning tasks and is carried out manually by experienced workers. In this study, a method of representing the block assignment rules using a diagram was suggested through analysis of the existing manual process. A block allocation program was developed which automated the block assignment process according to the rules represented by the diagram. The planning scenario was validated through a case study that compared the manual assignment and two automated block assignment results.

  6. Automation of block assignment planning using a diagram-based scenario modeling method

    Directory of Open Access Journals (Sweden)

    In Hyuck Hwang

    2014-03-01

    Full Text Available Most shipbuilding scheduling research so far has focused on the load level on the dock plan. This is because the dock is the least extendable resource in shipyards, and its overloading is difficult to resolve. However, once dock scheduling is completed, making a plan that makes the best use of the rest of the resources in the shipyard to minimize any additional cost is also important. Block assignment planning is one of the midterm planning tasks; it assigns a block to the facility (factory/shop or surface plate) that will actually manufacture the block according to the block characteristics and current situation of the facility. It is one of the most heavily loaded midterm planning tasks and is carried out manually by experienced workers. In this study, a method of representing the block assignment rules using a diagram was suggested through analysis of the existing manual process. A block allocation program was developed which automated the block assignment process according to the rules represented by the diagram. The planning scenario was validated through a case study that compared the manual assignment and two automated block assignment results.

  7. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  8. Lubrication pressure and fractional viscous damping effects on the spring-block model of earthquakes

    Science.gov (United States)

    Tanekou, G. B.; Fogang, C. F.; Kengne, R.; Pelap, F. B.

    2018-04-01

    We examine the dynamical behaviours of the "single mass-spring" model for earthquakes considering lubrication pressure effects on pre-existing faults and viscous fractional damping. The lubrication pressure supports a part of the load, thereby reducing the normal stress and the associated friction across the gap. During the co-seismic phase, all of the strain accumulated during the inter-seismic duration does not recover; a fraction of this strain remains as a result of viscous relaxation. Viscous damping friction makes it possible to study rocks at depth possessing visco-elastic behaviours. At increasing depths, rock deformation gradually transitions from brittle to ductile. The fractional derivative is based on the properties of rocks, including information about previous deformation events ( i.e., the so-called memory effect). Increasing the fractional derivative can extend or delay the transition from stick-slip oscillation to a stable equilibrium state and even suppress it. For the single block model, the interactions of the introduced lubrication pressure and viscous damping are found to give rise to oscillation death, which corresponds to aseismic fault behaviour. Our result shows that the earthquake occurrence increases with increases in both the damping coefficient and the lubrication pressure. We have also revealed that the accumulation of large stresses can be controlled via artificial lubrication.

  9. Crustal block motion model and interplate coupling along Ecuador-Colombia trench based on GNSS observation network

    Science.gov (United States)

    Ito, T.; Mora-Páez, H.; Peláez-Gaviria, J. R.; Kimura, H.; Sagiya, T.

    2017-12-01

    Introduction: The Ecuador-Colombia trench is located at the boundary between the South American, Nazca and Caribbean plates. This region is very complex, with subduction of the Caribbean and Nazca plates and collision between Panama and the northern part of the Andes. Previous large earthquakes occurred along the subduction boundary of the Nazca plate, such as those in 1906 (M8.8) and 1979 (M8.2), and earthquakes have also occurred inland. It is therefore important to evaluate earthquake potential in order to prepare for the damage of a large earthquake in the near future. GNSS observation: In the last decade, a GNSS observation network called GEORED, operated by the Servicio Geológico Colombiano, was established in Colombia for research on crustal deformation. GEORED consisted of 60 continuous GNSS observation sites as of 2017 (Mora et al., 2017). The sampling interval of most GNSS sites is 30 seconds. These GNSS data were processed by PPP processing using the GIPSY-OASIS II software, so GEORED can provide a detailed crustal deformation map for the whole of Colombia. In addition, we use 100 GNSS data at the Ecuador-Peru region (Nocquet et al., 2014). Method: We developed a crustal block motion model based on the crustal deformation derived from the GNSS observations. Our model considers block motion, with pole locations and angular velocities, and the interplate coupling between block boundaries, including the subduction interface between the South American plate and the Nazca plate. Our estimation of crustal block motion and interplate coupling coefficients is based on an MCMC method, and each estimated parameter is obtained as a probability density function (PDF). Result: We tested 11 crustal block models based on geological data, such as active fault traces at the surface. The optimal number of crustal blocks, based on geological and geodetic data and selected using AIC, is 11. We use the optimal block motion model. And also, we estimate

  10. Geologic characterization of fractures as an aid to hydrologic modeling of the SCV block at the Stripa mine

    International Nuclear Information System (INIS)

    Martel, S.

    1992-04-01

    A series of hydrologic tests have been conducted at the Stripa research mine in Sweden to develop hydrologic characterization techniques for rock masses in which fractures form the primary flow paths. The structural studies reported here were conducted to aid in the hydrologic examination of a cubic block of granite with dimensions of 150 m on a side. This block (the SCV block) is located between the 310- and 460-m depth levels at the Stripa mine. This report describes and interprets the fracture system geology at Stripa as revealed in drift exposures, checks the interpretive model against borehole records and discusses the hydrologic implication of the model, and examines the likely effects of stress redistribution around a drift (the Validation drift) on inflow to the drift along a prominent fracture zone. (72 refs.) (au)

  11. Designing and Implementing Service Learning Projects in an Introductory Oceanography Course Using the ``8-Block Model''

    Science.gov (United States)

    Laine, E. P.; Field, C.

    2010-12-01

    The Campus Compact for New Hampshire (Gordon, 2003) introduced a practical model for designing service-learning exercises or components for new or existing courses. They divided the design and implementation process into eight concrete areas, the “8-Block Model”. Their goal was to demystify the design process of service learning courses by breaking it down into interconnected components. These components include: project design, community partner relations, the problem statement, building community in the classroom, building student capacity, project management, assessment of learning, and reflection and connections. The project design component of the “8-Block Model” asks that the service performed be consistent with the learning goals of the course. For science courses students carry out their work as a way of learning science and the process of science, not solely for the sake of service. Their work supports the goals of a community partner and the community partner poses research problems for the class in a letter on their letterhead. Linking student work to important problems in the community effectively engages students and encourages them to work at more sophisticated levels than usually seen in introductory science classes. Using team-building techniques, the classroom becomes a safe, secure learning environment that encourages sharing and experimentation. Targeted lectures, labs, and demonstrations build the capacity of students to do their research. Behind the scenes project management ensures student success. Learning is assessed using a variety of tools, including graded classroom presentations, poster sessions, and presentations and reports to community partners. Finally, students reflect upon their work and make connections between their research and its importance to the well being of the community. Over the past 10 years, we have used this approach to design and continually modify an introductory oceanography course for majors and non

  12. In Vitro Model for Predicting the Protective Effect of Ultraviolet-Blocking Contact Lens in Human Corneal Epithelial Cells.

    Science.gov (United States)

    Abengózar-Vela, Antonio; Arroyo, Cristina; Reinoso, Roberto; Enríquez-de-Salamanca, Amalia; Corell, Alfredo; González-García, María Jesús

    2015-01-01

    To develop an in vitro method to determine the protective effect of UV-blocking contact lenses (CLs) in human corneal epithelial (HCE) cells exposed to UV-B radiation. SV-40-transformed HCE cells were covered with non-UV-blocking CL, UV-blocking CL or not covered, and exposed to UV-B radiation. As control, HCE cells were covered with both types of CLs or not covered, but not exposed to UV-B radiation. Cell viability at 24, 48 and 72 h, after UV-B exposure and removing CLs, was determined by alamarBlue(®) assay. Percentage of live, dead and apoptotic cells was also assessed by flow cytometry after 24 h of UV-B exposure. Intracellular reactive oxygen species (ROS) production after 1 h of exposure was assessed using the dye H(2)DCF-DA. Cell viability significantly decreased, apoptotic cells and intracellular ROS production significantly increased when UVB-exposed cells were covered with non-UV-blocking CL or not covered compared to non-irradiated cells. When cells were covered with UV-blocking CL, cell viability significantly increased and apoptotic cells and intracellular ROS production did not increase compared to exposed cells. UV-B radiation induces cell death by apoptosis, increases ROS production and decreases viable cells. UV-blocking CL is able to avoid these effects increasing cell viability and protecting HCE cells from apoptosis and ROS production induced by UV-B radiation. This in vitro model is an alternative to in vivo methods to determine the protective effect of UV-blocking ophthalmic biomaterials because it is a quicker, cheaper and reliable model that avoids the use of animals.

  13. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  14. Application of sequential fragmentation/transport theory to deposits of 1723 and 1963-65 eruptions of Volcan Irazu, Costa Rica: positive dispersion case and fractal model

    International Nuclear Information System (INIS)

    Brenes, Jose; Alvarado, Guillermo E.

    2013-01-01

    The theory of Fragmentation and Sequential Transport (FST) was applied to the granulometric analyses of the deposits from the 1723 and 1963-65 eruptions of Volcan Irazu. An appreciable number of cases of positive dispersion was found, associated in the literature with aggregation processes. A new fractal dimension defined in this research is shown to be the product of secondary fragmentation, and this new dimension is applied in the analyses of the 1723 and 1963-65 eruptions. A fractal model of volcanic activity is formulated for the first time, incorporating the Hurst coefficient and the exponent of the power law. Values of dissidence near zero have been indicators of an effusive process, such as lava pools. The results derived from the model agree with field observations. (author) [es

  15. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
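    As a toy illustration of the random sequential adsorption idea, the sketch below fills a unit box with axis-aligned squares (a 2-D, fixed-orientation simplification of the cube packings studied in the paper); the box size, square side and number of attempts are arbitrary assumptions.

    ```python
    import numpy as np

    def rsa_squares(box=1.0, side=0.1, max_tries=50_000, seed=0):
        """Random sequential adsorption of equal, axis-aligned squares in a box.

        A trial square is accepted only if it overlaps no previously placed square;
        rejected trials are simply discarded, as in standard RSA.
        """
        rng = np.random.default_rng(seed)
        placed = []
        for _ in range(max_tries):
            x, y = rng.random(2) * (box - side)
            # Two axis-aligned squares of side s overlap iff |dx| < s and |dy| < s.
            if all(abs(x - px) >= side or abs(y - py) >= side for px, py in placed):
                placed.append((x, y))
        coverage = len(placed) * side ** 2 / box ** 2
        return placed, coverage

    squares, phi = rsa_squares()
    print(f"{len(squares)} squares placed, packing fraction ~ {phi:.3f}")
    ```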

  16. Blocking beta 2-adrenergic receptor inhibits dendrite ramification in a mouse model of Alzheimer's disease.

    Science.gov (United States)

    Wu, Qin; Sun, Jin-Xia; Song, Xiang-He; Wang, Jing; Xiong, Cun-Quan; Teng, Fei-Xiang; Gao, Cui-Xiang

    2017-09-01

    Dendrite ramification affects synaptic strength and plays a crucial role in memory. Previous studies revealed a correlation between beta 2-adrenergic receptor dysfunction and Alzheimer's disease (AD), although the mechanism involved is still poorly understood. The current study investigated the potential effect of the selective β 2 -adrenergic receptor antagonist, ICI 118551 (ICI), on Aβ deposits and AD-related cognitive impairment. Morris water maze test results demonstrated that the performance of AD-transgenic (TG) mice treated with ICI (AD-TG/ICI) was significantly poorer compared with NaCl-treated AD-TG mice (AD-TG/NaCl), suggesting that β 2 -adrenergic receptor blockage by ICI might reduce the learning and memory abilities of mice. Golgi staining and immunohistochemical staining revealed that blockage of the β 2 -adrenergic receptor by ICI treatment decreased the number of dendritic branches, and ICI treatment in AD-TG mice decreased the expression of hippocampal synaptophysin and synapsin 1. Western blot assay results showed that the blockage of β 2 -adrenergic receptor increased amyloid-β accumulation by downregulating hippocampal α-secretase activity and increasing the phosphorylation of amyloid precursor protein. These findings suggest that blocking the β 2 -adrenergic receptor inhibits dendrite ramification of hippocampal neurons in a mouse model of AD.

  17. Multilayer Stochastic Block Models Reveal the Multilayer Structure of Complex Networks

    Directory of Open Access Journals (Sweden)

    Toni Vallès-Català

    2016-03-01

    Full Text Available In complex systems, the network of interactions we observe between systems components is the aggregate of the interactions that occur through different mechanisms or layers. Recent studies reveal that the existence of multiple interaction layers can have a dramatic impact in the dynamical processes occurring on these systems. However, these studies assume that the interactions between systems components in each one of the layers are known, while typically for real-world systems we do not have that information. Here, we address the issue of uncovering the different interaction layers from aggregate data by introducing multilayer stochastic block models (SBMs, a generalization of single-layer SBMs that considers different mechanisms of layer aggregation. First, we find the complete probabilistic solution to the problem of finding the optimal multilayer SBM for a given aggregate-observed network. Because this solution is computationally intractable, we propose an approximation that enables us to verify that multilayer SBMs are more predictive of network structure in real-world complex systems.
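    To make the notion of layer aggregation concrete, the sketch below samples two single-layer stochastic block models with different community structure and aggregates them with a logical OR; the block sizes, connection probabilities and the choice of OR aggregation are illustrative assumptions, not the inference procedure of the paper.

    ```python
    import numpy as np

    def sample_sbm(block_sizes, p_matrix, rng):
        """Sample an undirected stochastic block model adjacency matrix."""
        labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
        n = labels.size
        probs = p_matrix[labels[:, None], labels[None, :]]
        upper = np.triu(rng.random((n, n)) < probs, k=1)
        return (upper | upper.T).astype(int), labels

    rng = np.random.default_rng(0)
    sizes = [50, 50]
    # Two layers with different community structure (assumed example parameters).
    layer1, labels = sample_sbm(sizes, np.array([[0.20, 0.02], [0.02, 0.20]]), rng)
    layer2, _      = sample_sbm(sizes, np.array([[0.05, 0.15], [0.15, 0.05]]), rng)

    # "OR" aggregation: an edge is observed if it exists in at least one layer,
    # one of the layer-aggregation mechanisms a multilayer SBM can encode.
    aggregate = ((layer1 + layer2) > 0).astype(int)
    print("edges per layer:", layer1.sum() // 2, layer2.sum() // 2,
          "| aggregate:", aggregate.sum() // 2)
    ```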

  18. Coarse-grained molecular dynamics modeling of the kinetics of lamellar block copolymer defect annealing

    Science.gov (United States)

    Peters, Andrew J.; Lawson, Richard A.; Nation, Benjamin D.; Ludovice, Peter J.; Henderson, Clifford L.

    2016-01-01

    State-of-the-art block copolymer (BCP)-directed self-assembly (DSA) methods still yield defect densities orders of magnitude higher than is necessary in semiconductor fabrication, despite free-energy calculations that suggest equilibrium defect densities are much lower than is necessary for economic fabrication. This disparity suggests that the main problem may lie in the kinetics of defect removal. This work uses a coarse-grained model to study the rates, pathways, and dependencies of healing a common defect to give insight into the fundamental processes that control defect healing and to give guidance on optimal process conditions for BCP-DSA. It is found that bulk simulations yield an exponential drop in defect heal rate above χN ~ 30. Thin films show no change in rate associated with the energy barrier below χN ~ 50, significantly higher than the χN values found previously for self-consistent field theory studies that neglect fluctuations. Above χN ~ 50, the simulations show an increase in energy barrier scaling with 1/2 to 1/3 of that of the bulk systems. This is because thin films always begin healing at the free interface or the BCP-underlayer interface, where the increased A-B contact area associated with the transition state is minimized, while infinitely thick films cannot begin healing at an interface.

  19. Universal phase transition in community detectability under a stochastic block model.

    Science.gov (United States)

    Chen, Pin-Yu; Hero, Alfred O

    2015-03-01

    We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p1 p2), where p_i (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.

  20. Sliding contact on the interface of elastic body and rigid surface using a single block Burridge-Knopoff model

    Science.gov (United States)

    Amireghbali, A.; Coker, D.

    2018-01-01

    Burridge and Knopoff proposed a mass-spring model to explore interface dynamics along a fault during an earthquake. The Burridge-Knopoff (BK) model is composed of a series of blocks of equal mass connected to each other by springs of the same stiffness. The blocks are also attached to a rigid driver via another set of springs that pulls them at a constant velocity against a rigid substrate. Burridge and Knopoff studied the interface dynamics for a special case with ten blocks and a specific set of fault properties. In our study, the effects of Coulomb and rate-and-state-dependent friction laws on the dynamics of a single-block BK model are investigated. The model dynamics are formulated as a system of coupled nonlinear ordinary differential equations in state-space form, which lends itself to numerical integration methods, e.g. the Runge-Kutta procedure. The results show that the rate-and-state-dependent friction law has the potential to trigger dynamic patterns that differ from those under the Coulomb law.
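    A minimal numerical sketch of a single-block spring-slider in state-space form, integrated with classical fourth-order Runge-Kutta, is given below. The parameter values are arbitrary, and the friction is a smooth velocity-weakening approximation rather than the exact Coulomb or rate-and-state laws analysed in the paper.

    ```python
    import numpy as np

    # Single-block spring-slider ("Burridge-Knopoff with one block") sketch.
    m, k, v_drive = 1.0, 1.0, 0.01            # mass, spring stiffness, driver velocity
    N, mu_s, mu_d, v_c = 1.0, 0.6, 0.3, 0.05  # normal force, static/dynamic friction, weakening scale

    def friction(v):
        """Velocity-weakening friction with a smooth (tanh) regularization of sign(v)."""
        mu = mu_d + (mu_s - mu_d) / (1.0 + abs(v) / v_c)
        return -np.tanh(v / 1e-3) * mu * N

    def rhs(t, y):
        x, v = y
        spring = k * (v_drive * t - x)        # elastic pull from the moving driver
        return np.array([v, (spring + friction(v)) / m])

    def rk4(y0, t_end, dt):
        """Classical fourth-order Runge-Kutta integration of the state-space model."""
        t, y, out = 0.0, np.array(y0, float), []
        while t < t_end:
            k1 = rhs(t, y)
            k2 = rhs(t + dt / 2, y + dt / 2 * k1)
            k3 = rhs(t + dt / 2, y + dt / 2 * k2)
            k4 = rhs(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
            out.append((t, *y))
        return np.array(out)

    trajectory = rk4([0.0, 0.0], t_end=500.0, dt=0.01)
    print("max slip velocity:", trajectory[:, 2].max())
    ```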

  1. Modelling and Analysis of the Excavation Phase by the Theory of Blocks Method of Tunnel 4 Kherrata Gorge, Algeria

    Science.gov (United States)

    Boukarm, Riadh; Houam, Abdelkader; Fredj, Mohammed; Boucif, Rima

    2017-12-01

    The aim of our work is to check stability during tunnel excavation work in the rock mass of Kherrata, connecting the cities of Bejaia and Setif. The characterization methods based on the Q system (Barton's method) and the RMR (Bieniawski classification) allowed us to conclude that the quality of the rock mass is average in limestone and poor in fractured limestone. Modelling of the excavation phase using the block theory method (UNWEDGE software), with parameters taken from the recommendations of the classifications, allowed us to check stability and to conclude that the use of geomechanical classifications and block theory can be considered reliable in preliminary design.

  2. Tectonic drivers of the Wrangell block: Insights on fore-arc sliver processes from 3-D geodynamic models of Alaska

    Science.gov (United States)

    Haynie, K. L.; Jadamec, M. A.

    2017-07-01

    Intracontinental shear zones can play a key role in understanding how plate convergence is manifested in the upper plate in regions of oblique subduction. However, the relative role of the driving forces from the subducting plate and the resisting force from within intracontinental shear zones is not well understood. Results from high-resolution, geographically referenced, instantaneous 3-D geodynamic models of flat slab subduction at the oblique convergent margin of Alaska are presented. These models investigate how viscosity and length of the Denali fault intracontinental shear zone as well as coupling along the plate boundary interface modulate motion of the Wrangell block fore-arc sliver and slip across the Denali fault. Models with a weak Denali fault (1017 Pa s) and strong plate coupling (1021 Pa s) were found to produce the fastest motions of the Wrangell block (˜10 mm/yr). The 3-D models predict along-strike variation in motion along the Denali fault, changing from dextral strike-slip motion in the eastern segment to oblique convergence toward the fault apex. Models further show that the flat slab drives oblique motion of the Wrangell block and contributes to 20% (models with a short fault) and 28% (models with a long fault) of the observed Quaternary slip rates along the Denali fault. The 3-D models provide insight into the general processes of fore-arc sliver mechanics and also offer a 3-D framework for interpreting hazards in regions of flat slab subduction.

  3. 3-D crustal-scale gravity model of the San Rafael Block and Payenia volcanic province in Mendoza, Argentina

    Directory of Open Access Journals (Sweden)

    Daniel Richarte

    2018-01-01

    Based on gravimetric and magnetic data, together with isostatic and elastic thickness analyses, we modeled the crustal structure of the area. The information obtained has allowed us to understand the crust where the SRB and the Payenia volcanic province are located. Bouguer anomalies indicate that the SRB presents higher densities north of Cerro Nevado, and Moho calculations suggest depths for this block between 40 and 50 km. Determinations of elastic thickness indicate that the crust supporting the San Rafael Block has values of approximately 10 km, which is enough to support the block loading. However, in the Payenia region, elastic thickness values are close to zero due to the regional temperature increase.

  4. Stratigraphic model deposit Ofi Inf SDZ-2X A1, Jun in block in Orinoco Oil belt

    International Nuclear Information System (INIS)

    Martinez, E.; Sandoval, D.

    2010-01-01

    This work concerns the stratigraphic model of the Ofi Inf SDZ-2X A1 deposit, Junin block, in the Orinoco Oil Belt. The model was based on a chronostratigraphic interpretation, and the correlation between the main and secondary surfaces was defined. The wells of the study area pass through Cambrian, Cretaceous and Miocene sediments. The Miocene is the most interesting for the study because of the stratigraphic surfaces and sand bodies present.

  5. A new epidemic modeling approach: Multi-regions discrete-time model with travel-blocking vicinity optimal control strategy.

    Science.gov (United States)

    Zakary, Omar; Rachik, Mostafa; Elmouki, Ilias

    2017-08-01

    First, we devise in this paper a multi-region discrete-time model which describes the spatial-temporal spread of an epidemic that starts in one region and enters regions connected with their neighbors by any kind of anthropological movement. We assume homogeneous Susceptible-Infected-Removed (SIR) populations, and we consider in our simulations a grid of colored cells, which represents the whole domain affected by the epidemic, where each cell can represent a sub-domain or region. Second, in order to minimize the number of infected individuals in one region, we propose an optimal control approach based on a travel-blocking vicinity strategy which aims to control only one cell by restricting the movements of infected people coming from all neighboring cells. Thus, we show the influence of the optimal control approach on the controlled cell. We should also note that the cellular modeling approach we propose here can also describe the infection dynamics of regions which are not necessarily attached to one another, even if no empty space can be seen between cells. The theoretical method we follow for the characterization of the travel-blocking optimal controls is based on a discrete version of Pontryagin's maximum principle, while the numerical approach applied to the multi-point boundary value problems we obtain here is based on discrete progressive-regressive iterative schemes. We illustrate our modeling and control approaches by giving an example of 100 regions.
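    The multi-region discrete-time SIR dynamics with travel coupling can be sketched in a few lines. The example below uses a hypothetical three-region chain, assumed transmission and recovery rates, and a crude on/off travel-blocking control; it only illustrates the flavour of the model, not the Pontryagin-based optimal control of the paper.

    ```python
    import numpy as np

    def simulate(steps, beta, gamma, travel, blocked=()):
        """Discrete-time SIR dynamics for regions coupled by a travel matrix.

        travel[i, j] is the fraction of region j's infected that mix into region i
        each step; pairs in `blocked` have their travel link cut, mimicking a
        travel-blocking vicinity control (illustrative only).
        """
        n = travel.shape[0]
        S = np.full(n, 0.99)
        I = np.zeros(n); I[0] = 0.01          # the epidemic starts in region 0
        R = np.zeros(n)
        W = travel.copy()
        for i, j in blocked:
            W[i, j] = 0.0
        for _ in range(steps):
            imported = W @ I                   # infection pressure from neighbours
            new_inf = np.minimum(beta * S * (I + imported), S)
            new_rec = gamma * I
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        return S, I, R

    # Hypothetical 3-region chain: 0 <-> 1 <-> 2
    travel = np.array([[0.00, 0.05, 0.00],
                       [0.05, 0.00, 0.05],
                       [0.00, 0.05, 0.00]])
    _, I_free, _ = simulate(100, beta=0.4, gamma=0.1, travel=travel)
    _, I_ctrl, _ = simulate(100, beta=0.4, gamma=0.1, travel=travel,
                            blocked=[(2, 1), (1, 0)])   # shield region 2's vicinity
    print("infected without control:", np.round(I_free, 3))
    print("infected with travel blocking:", np.round(I_ctrl, 3))
    ```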

  6. Metformin blocks progression of obesity-activated thyroid cancer in a mouse model.

    Science.gov (United States)

    Park, Jeongwon; Kim, Won Gu; Zhao, Li; Enomoto, Keisuke; Willingham, Mark; Cheng, Sheue-Yann

    2016-06-07

    Compelling epidemiologic evidence indicates that obesity is associated with a high risk of human malignancies, including thyroid cancer. We previously demonstrated that a high fat diet (HFD) effectively induces the obese phenotype in a mouse model of aggressive follicular thyroid cancer (ThrbPV/PVPten+/-mice). We showed that HFD promotes cancer progression through aberrant activation of the leptin-JAK2-STAT3 signaling pathway. HFD-promoted thyroid cancer progression allowed us to test other molecular targets for therapeutic opportunity for obesity-induced thyroid cancer. Metformin is a widely used drug to treat patients with type II diabetes. It has been shown to reduce incidences of neoplastic diseases and cancer mortality in type II diabetes patients. The present study aimed to test whether metformin could be a therapeutic for obesity-activated thyroid cancer. ThrbPV/PVPten+/-mice were fed HFD together with metformin or vehicle-only, as controls, for 20 weeks. While HFD-ThrbPV/PVPten+/-mice had shorter survival than LFD-treated mice, metformin had no effects on the survival of HFD-ThrbPV/PVPten+/-mice. Remarkably, metformin markedly decreased occurrence of capsular invasion and completely blocked vascular invasion and anaplasia in HFD-ThrbPV/PVPten+/-mice without affecting thyroid tumor growth. The impeded cancer progression was due to the inhibitory effect of metformin on STAT3-ERK-vimentin and fibronectin-integrin signaling to decrease tumor cell invasion and de-differentiation. The present studies provide additional molecular evidence to support the link between obesity and thyroid cancer risk. Importantly, our findings suggest that metformin could be used as an adjuvant in combination with antiproliferative modalities to improve the outcome of patients with obesity-activated thyroid cancer.

  7. Solid images for geostructural mapping and key block modeling of rock discontinuities

    Science.gov (United States)

    Assali, Pierre; Grussenmeyer, Pierre; Villemin, Thierry; Pollet, Nicolas; Viguier, Flavien

    2016-04-01

    Rock mass characterization is obviously a key element in rock fall hazard analysis. Managing risk and determining the most suitable reinforcement method require a proper understanding of the rock mass under consideration. Description of discontinuity sets is therefore a crucial first step in the reinforcement design process. The field survey is then followed by structural modeling in order to extrapolate the data collected at the rock surface to the inner part of the massif. Traditional compass surveys and manual observations can undoubtedly be surpassed by dense 3D data such as LiDAR or photogrammetric point clouds. However, although the acquisition phase is fast and highly automated, managing, handling and exploiting such a large amount of collected data is an arduous task, especially for non-specialist users. In this study, we propose a combined approach using both 3D point clouds (from LiDAR or image matching) and 2D digital images, gathered into the concept of a 'solid image'. This product combines the advantages of classical true-color 2D digital images, accessibility and interpretability, with the particular strengths of dense 3D point clouds, i.e. geometrical completeness and accuracy. The solid image can be considered as the information support for carrying out a digital survey at the surface of the outcrop without being affected by traditional deficiencies (lack of data and sampling difficulties due to inaccessible areas, safety risks in steep sectors, etc.). The computational tools presented in this paper have been implemented in a standalone software package with a graphical user interface helping operators complete a digital geostructural survey and analysis. 3D coordinate extraction, 3D distance and area measurement, planar best fit for discontinuity orientation, directional roughness profiles, block size estimation, and other tools have been tested on a calcareous quarry in the French Alps.

  8. Modelling and sequential simulation of multi-tubular metallic membrane and techno-economics of a hydrogen production process employing thin-layer membrane reactor

    KAUST Repository

    Shafiee, Alireza

    2016-09-24

    A theoretical model for a multi-tubular palladium-based membrane is proposed in this paper and validated against experimental data for two different sized membrane modules that operate at high temperatures. The model is used in a sequential simulation format to describe and analyse pure hydrogen and hydrogen binary mixture separations, and then extended to simulate an industrial scale membrane unit. This model is used as a sub-routine within an ASPEN Plus model to simulate a membrane reactor in a steam reforming hydrogen production plant. A techno-economic analysis is then conducted using the validated model for a plant producing 300 TPD of hydrogen. The plant utilises a thin (2.5 μm), defect-free and selective layer (Pd75Ag25 alloy) membrane reactor. The economic sensitivity analysis proves useful in finding the optimum operating condition that achieves the minimum hydrogen production cost at the break-even point. A hydrogen production cost of 1.98 $/kg is estimated, while the cost of the thin-layer selective membrane is found to constitute 29% of the total process capital cost. These results indicate the competitiveness of this thin-layer membrane process against conventional methods of hydrogen production. © 2016 Hydrogen Energy Publications LLC

  9. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  10. Epidural block

    Science.gov (United States)

    Epidural block - pregnancy (MedlinePlus patient instructions: //medlineplus.gov/ency/patientinstructions/000484.htm). An epidural block is a numbing medicine given by injection (shot) ...

  11. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  12. Entanglement of two blocks of spins in the critical Ising model

    Science.gov (United States)

    Facchi, P.; Florio, G.; Invernizzi, C.; Pascazio, S.

    2008-11-01

    We compute the entropy of entanglement of two blocks of L spins at a distance d in the ground state of an Ising chain in an external transverse magnetic field. We numerically study the von Neumann entropy for different values of the transverse field. At the critical point we obtain analytical results for blocks of size L=1 and 2. In the general case, the critical entropy is shown to be additive when d→∞. Finally, based on simple arguments, we derive an expression for the entropy at the critical point as a function of both L and d. This formula is in excellent agreement with numerical results.

  13. The SKY Model of Limited BlockChain in an App Ecosystem

    OpenAIRE

    Hegadekatti, Kartik; S G, Yatish; T J, Satish

    2016-01-01

    Mobile App based market is rapidly becoming popular. As such, it is an opportunity to bring hassle-free transactions to people’s mobile phones. But the multi-billion dollar App market pays a great amount of money in transaction costs and banking services. This paper provides a solution by integrating BlockChain technology with Mobile-App based economy. We first describe the various concepts involved in BlockChain and App technology. Then we deliberate on how the two can be brought together wi...

  14. PSI-BOIL, a building block towards the multi-scale modeling of flow boiling phenomena

    International Nuclear Information System (INIS)

    Niceno, Bojan; Andreani, Michele; Prasser, Horst-Michael

    2008-01-01

    Full text of publication follows: In this work we report the current status of the Swiss project Multi-scale Modeling Analysis (MSMA), jointly financed by PSI and Swissnuclear. The project aims at addressing the multi-scale (down to nano-scale) modelling of convective boiling phenomena, and the development of physically-based closure laws for the physical scales appropriate to the problem considered, to be used within Computational Fluid Dynamics (CFD) codes. The final goal is to construct a new computational tool, called Parallel Simulator of Boiling phenomena (PSI-BOIL), for the direct simulation of processes all the way down to the small scales of interest, and an improved CFD code for the mechanistic prediction of two-phase flow and heat transfer in the fuel rod bundle of a nuclear reactor. An improved understanding of the physics of boiling will be gained from the theoretical work as well as from novel small- and medium-scale experiments targeted to assist the development of closure laws. PSI-BOIL is a computer program designed for efficient simulation of turbulent fluid flow and heat transfer phenomena in simple geometries. Turbulence is simulated directly (DNS) and the code's efficiency plays a vital role in a successful simulation. With high performance as one of the main prerequisites, PSI-BOIL is tailored to be as efficient a tool as possible, relying on well-established numerical techniques and sacrificing all features which are not essential for the success of this project and which might slow down the solution procedure. The governing equations are discretized in space with an orthogonal staggered finite volume method. Time discretization is performed with a projection method, the most obvious and most widely used choice for DNS. Systems of linearized equations, stemming from the discretization of the governing equations, are solved with the Additive Correction Multigrid (ACM) method. Two distinguishing features of PSI-BOIL are the possibility to

  15. Population Blocks.

    Science.gov (United States)

    Smith, Martin H.

    1992-01-01

    Describes an educational game called "Population Blocks" that is designed to illustrate the concept of exponential growth of the human population and some potential effects of overpopulation. The game material consists of wooden blocks; 18 blocks are painted green (representing land), 7 are painted blue (representing water); and the remaining…

  16. Realizing block planning concepts in make-and-pack production using MILP modelling and SAP APO

    DEFF Research Database (Denmark)

    Günther, H.O.; Grunow, M.; Neuhaus, U.

    2006-01-01

    of a major producer of hair dyes as a case study. We present two different implementations of the block planning concept. One utilizes the Production Planning/Detailed Scheduling module of the SAP APO© software. The other approach is based on a mixed-integer linear programming formulation. In contrast...

  17. Fatigue life prediction in composites using progressive damage modelling under block and spectrum loading

    DEFF Research Database (Denmark)

    Passipoularidis, Vaggelis; Philippidis, T.P.; Brøndsted, Povl

    2010-01-01

    series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a Wind Turbine Rotor...

  18. Enrichment with Wood Blocks Does Not Affect Toxicity Assessment in an Exploratory Toxicology Model Using Sprague–Dawley Rats

    Science.gov (United States)

    Ditewig, Amy C; Bratcher, Natalie A; Davila, Donna R; Dayton, Brian D; Ebert, Paige; Lesuisse, Philippe; Liguori, Michael J; Wetter, Jill M; Yang, Hyuna; Buck, Wayne R

    2014-01-01

    Environmental enrichment in rodents may improve animal well-being but can affect neurologic development, immune system function, and aging. We tested the hypothesis that wood block enrichment affects the interpretation of traditional and transcriptomic endpoints in an exploratory toxicology testing model using a well-characterized reference compound, cyclophosphamide. ANOVA was performed to distinguish the effects of wood block enrichment from those of 40 mg/kg cyclophosphamide treatment. Biologically relevant and statistically significant effects of wood block enrichment occurred only for body weight gain. ANOVA demonstrated the expected effects of cyclophosphamide on food consumption, spleen weight, and hematology. According to transcriptomic endpoints, cyclophosphamide induced fewer changes in gene expression in liver than in spleen. Splenic transcriptomic pathways affected by cyclophosphamide included: iron homeostasis; vascular tissue angiotensin system; hepatic stellate cell activation and fibrosis; complement activation; TGFβ-induced hypertrophy and fibrosis; monocytes, macrophages, and atherosclerosis; and platelet activation. Changes in these pathways due to cyclophosphamide treatment were consistent with bone marrow toxicity regardless of enrichment. In a second study, neither enrichment nor type of cage flooring altered body weight or food consumption over a 28-d period after the first week. In conclusion, wood block enrichment did not interfere with a typical exploratory toxicology study; the effects of ingested wood on drug level kinetics may require further consideration. PMID:24827566

  19. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  20. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  1. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    Science.gov (United States)

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. A CTDIvol identical to that of the initial 4-slice MDCT was applied. Data were reconstructed using filtered backward projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and in the nucleus caudatus, with signal-to-noise ratios (SNR) being subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher than that of the 4-slice MDCT sequential acquisitions. Objective image quality was significantly higher for spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p*Bonferroni < 0.05). Spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.
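
    The objective image-quality metric used above can be stated in one line. A minimal sketch, with the ROI mask left to the user rather than the study's exact white-matter and nucleus caudatus regions:

        # Signal-to-noise ratio in a region of interest (ROI): mean(ROI) / std(ROI).
        import numpy as np

        def roi_snr(image, mask):
            roi = image[mask]
            return roi.mean() / roi.std(ddof=1)

        # Relative SNR improvement of the spiral versus the sequential acquisition:
        # improvement_percent = 100 * (roi_snr(spiral_img, roi) / roi_snr(seq_img, roi) - 1)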

  2. Morphing the feature-based multi-blocks of normative/healthy vertebral geometries to scoliosis vertebral geometries: development of personalized finite element models.

    Science.gov (United States)

    Hadagali, Prasannaah; Peters, James R; Balasubramanian, Sriram

    2018-03-12

    Personalized Finite Element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral mesh. It is tedious to manually construct multi-blocks for large number of geometries on an individual basis to develop personalized FE models. Mesh-morphing method mitigates the aforementioned tediousness in meshing personalized geometries every time, but leads to element warping and loss of geometrical data. Such issues increase in magnitude when normative spine FE model is morphed to scoliosis-affected spinal geometry. The only way to bypass the issue of hex-mesh distortion or loss of geometry as a result of morphing is to rely on manually constructing the multi-blocks for scoliosis-affected spine geometry of each individual, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Time taken was 3 months to construct the multi-blocks for normative spine and less than a day for scoliosis. Efforts taken to construct multi-blocks on personalized scoliosis spinal geometries are significantly reduced by morphing existing multi-blocks.

  3. MODELING STRATEGIES FOR THE ANALYSIS OF EXPERIMENTS IN AUGMENTED BLOCK DESIGN IN CLONAL TESTS OF Eucalyptus spp.

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo Rodrigues Prado

    2013-08-01

    Full Text Available http://dx.doi.org/10.5902/1980509810546 The objective of this work was to compare experimental analysis strategies when there is a large number of clones and a reduced number of seedlings to be evaluated. Data on girth at breast height from two seasons of evaluation, 30 and 90 months, from a clonal test of Eucalyptus were analyzed at three locations. The experiments were carried out in the augmented block design with 400 regular clones distributed in 20 blocks and with four common clones (controls). Each plot consisted of five plants spaced 3 x 3 meters. Individual statistical analyses were carried out by season and location, a combined analysis by location at each season, and a combined analysis involving the three locations and the two seasons. Each analysis was carried out according to two models: augmented design (AD) and one-way classification (OWC). The variance components, the heritability, the Spearman's rank correlation and the coincidence indexes in the clone selection under the two models were estimated. It was found that the augmented block design and the one-way classification provide similar results in Eucalyptus clone evaluation. The coincidence indexes between the two models in the clone selection were, in general, high, reaching 100% in the combined analyses by location at 90 months. The Spearman's rank

  4. Modelling radiation exposure in homes from siporex blocks by using exhalation rates of radon

    Directory of Open Access Journals (Sweden)

    Nikolić Mladen D.

    2015-01-01

    Full Text Available Building materials are the second major source of indoor radon, after soil. The contribution of building materials to the indoor radon level depends upon the radium content and exhalation rates, which can be used as a primary index for radon levels in dwellings. This paper presents the results of using the experimentally determined exhalation rates of siporex blocks and concrete plates to assess the radiation exposure in dwellings built of siporex blocks. The annual doses in rooms have been estimated depending on the established modes of ventilation. A realistic scenario was created to predict the annual effective dose for an old person, a housewife, a student, and an employed tenant who live in the same apartment, spending different periods of time in it. The results indicate the crucial importance of good ventilation of the living space.

  5. Numerical simulations of tests masonry walls from ceramic block using a detailed finite element model

    Directory of Open Access Journals (Sweden)

    V. Salajka

    2017-01-01

    Full Text Available This article deals with an analysis of the behaviour of brick ceramic walls. The behaviour of the walls was analysed experimentally in order to obtain their bearing capacity under static loading and their seismic resistance. Simultaneously, numerical simulations of the experiments were carried out in order to obtain additional information on the behaviour of masonry walls made of ceramic blocks. The results of the geometrically and materially nonlinear computations were compared to the results of the performed tests.

  6. Conformal blocks in Virasoro and W theories: Duality and the Calogero-Sutherland model

    International Nuclear Information System (INIS)

    Estienne, Benoit; Pasquier, Vincent; Santachiara, Raoul; Serban, Didina

    2012-01-01

    We study the properties of the conformal blocks of the conformal field theories with Virasoro or W-extended symmetry. When the conformal blocks contain only second-order degenerate fields, the conformal blocks obey second order differential equations and they can be interpreted as ground-state wave functions of a trigonometric Calogero-Sutherland Hamiltonian with non-trivial braiding properties. A generalized duality property relates the two types of second order degenerate fields. By studying this duality we found that the excited states of the Calogero-Sutherland Hamiltonian are characterized by two partitions, or in the case of WA_{k-1} theories by k partitions. By extending the conformal field theories under consideration by a u(1) field, we find that we can put in correspondence the states in the Hilbert space of the extended CFT with the excited non-polynomial eigenstates of the Calogero-Sutherland Hamiltonian. When the action of the Calogero-Sutherland integrals of motion is translated on the Hilbert space, they become identical to the integrals of motion recently discovered by Alba, Fateev, Litvinov and Tarnopolsky in Liouville theory in the context of the AGT conjecture. Upon bosonization, these integrals of motion can be expressed as a sum of two, or in general k, bosonic Calogero-Sutherland Hamiltonians coupled by an interaction term with a triangular structure. For special values of the coupling constant, the conformal blocks can be expressed in terms of Jack polynomials with pairing properties, and they give electron wave functions for special Fractional Quantum Hall states.

  7. Evaluation of simplified two source model for relative electron output factor of irregular block shape

    International Nuclear Information System (INIS)

    Lo, Y. E.; Yi, B. Y.; Ahn, S. D.; Kim, J. H.; Lee, S. W.; Choi, E. K.

    2002-01-01

    A practical algorithm that calculates the relative output factor (ROF) for irregularly shaped electron fields has been developed, and its accuracy and effectiveness have been evaluated by comparing measurements and calculation results for irregular fields used in the clinic. The algorithm assumes that the electron dose can be expressed as the sum of the primary source component and the component scattered from the shielding block. The primary source is assumed to have a Gaussian distribution, while the scattered component follows the inverse square law. Depth and angular dependencies of the primary and scattered components are ignored to maximize practicality by reducing the number of parameters in the algorithm. The electron dose can thus be calculated with three parameters: the effective source distance, the variance of the primary source, and the scattering power of the block. The coefficients are obtained from square-block measurements and confirmed against rectangular and irregularly shaped fields. The results showed less than 1.5% difference between calculation and measurement. The algorithm proved practical, since the full parameter set can be acquired with a minimum of measurements while generating accurate results within the clinically acceptable range
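
    The two-source structure of the algorithm lends itself to a short numerical sketch. The code below is a hedged illustration, not the authors' formulation: the primary term is taken as the integral of a Gaussian planar source over a rectangular open field, the block-scatter term scales with the blocked area through an inverse-square factor, and the parameter names and values (sigma, ssd_eff, k_block) are assumptions.

        # Hedged sketch of a two-source relative output factor (ROF) model.
        import numpy as np
        from math import erf

        def primary(width, length, sigma):
            """Fraction of a 2D Gaussian source (std sigma, cm) inside a width x length field."""
            g = lambda a: erf(a / (np.sqrt(2.0) * sigma))
            return g(width / 2.0) * g(length / 2.0)

        def output_factor(width, length, sigma, ssd_eff, k_block, ref_field=(10.0, 10.0)):
            def dose(w, l):
                blocked = ref_field[0] * ref_field[1] - w * l    # area shielded by the block
                scatter = k_block * blocked / ssd_eff ** 2       # inverse-square block scatter
                return primary(w, l, sigma) + scatter
            return dose(width, length) / dose(*ref_field)

        # e.g. a 4 x 6 cm cut-out relative to a 10 x 10 cm open reference field
        print(output_factor(4.0, 6.0, sigma=1.2, ssd_eff=85.0, k_block=0.05))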

  8. Charge and angular distributions as well as sequential decay and γ-ray emission in heavy ion collisions viewed in the light of the diffusion model

    International Nuclear Information System (INIS)

    Moretto, L.G.

    1977-08-01

    The hierarchy of the collective relaxation times in heavy ion reactions is briefly reviewed. An improved diffusion model is introduced and applied to interpret the fragment Z and angular distributions for some typical reactions. The equilibrium in the neutron-to-proton ratio as well as the sharing of the excitation energy between fragments is studied by a coincidence method which leads to the measurement of the charge, mass and mean number of nucleons emitted by each fragment. The ultimate fate of the dissipated energy is determined by measuring the atomic number of two coincident fragments, thus obtaining the missing charge as a function of bombarding energy and the Q of the reaction. The sequential fission probability of the heavy recoil is established as a function of the Z and kinetic energy of the light partner. The out-of-plane angular distribution of the fission fragments is correlated with the fissionability and interpreted in terms of various sources of angular momentum misalignment. The γ-ray multiplicities and the γ-ray angular distributions associated with deep inelastic events are discussed in terms of the angular momentum transfer and in terms of the diffusion model

  9. Quantum Chemical Examination of the Sequential Halogen Incorporation Scheme for the Modeling of Speciation of I/Br/Cl-Containing Trihalomethanes.

    Science.gov (United States)

    Zhang, Chenyang; Li, Maodong; Han, Xuze; Yan, Mingquan

    2018-02-20

    The recently developed three-step ternary halogenation model interprets the incorporation of chlorine, bromine, and iodine ions into natural organic matter (NOM) and the formation of iodine-, bromine-, and chlorine-containing trihalomethanes (THMs) based on the competition of iodine, bromine, and chlorine species at each node of the halogenation sequence. This competition is accounted for using the dimensionless ratios (denoted as γ) of the kinetic rates of reactions of the initial attack sites or halogenated intermediates with chlorine, bromine, and iodine ions. However, correlations between the model predictions and mechanistic aspects of the incorporation of halogen species need to be ascertained in more detail. In this study, quantum chemistry calculations were first used to probe the formation mechanism of 10 species of Cl-/Br-/I- THMs. The HOMO energy (E_HOMO) of each mono-, bi-, or trihalomethane was calculated by the B3LYP method in the Gaussian 09 software. Linear correlations were found between the logarithms of the experimentally determined kinetic preference coefficients γ reported in prior research and the differences of E_HOMO values between brominated/iodinated and chlorinated halomethanes. One notable exception from this trend was that observed for the incorporation of iodine into mono- and di-iodinated intermediates. These observations confirm the three-step halogen incorporation sequence and the factor γ in the statistical model. The combined use of quantum chemistry calculations and the ternary sequential halogenation model provides new insight into the microscopic nature of NOM-halogen interactions and the trends seen in the behavior of the γ factors incorporated in the THM speciation models.
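
    The reported linear correlation can be reproduced mechanically with an ordinary least-squares fit of log(γ) against the E_HOMO differences. The numbers below are placeholders, not values from the study:

        # Illustrative fit only: log(gamma) versus dE_HOMO with placeholder data.
        import numpy as np

        delta_e_homo = np.array([0.10, 0.21, 0.33, 0.47, 0.58])   # eV, placeholder
        log_gamma    = np.array([0.8, 1.6, 2.5, 3.4, 4.1])        # placeholder

        slope, intercept = np.polyfit(delta_e_homo, log_gamma, 1)
        r = np.corrcoef(delta_e_homo, log_gamma)[0, 1]
        print(f"log(gamma) ~ {slope:.2f} * dE_HOMO + {intercept:.2f}, r = {r:.3f}")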

  10. High resolution 2D numerical models from rift to break-up: Crustal hyper-extension, Margin asymmetry, Sequential faulting

    Science.gov (United States)

    Brune, Sascha; Heine, Christian; Pérez-Gussinyé, Marta; Sobolev, Stephan

    2013-04-01

    Numerical modelling is a powerful tool to integrate a multitude of geological and geophysical data while addressing fundamental questions of passive margin formation such as the occurrence of crustal hyper-extension, (a-)symmetries between conjugate margin pairs, and the sometimes significant structural differences between adjacent margin segments. This study utilises knowledge gathered from two key examples of non-magmatic, asymmetric, conjugate margin pairs, i.e. Iberia-Newfoundland and Southern Africa-Brazil, where many published seismic lines provide solid knowledge of individual margin geometry. While both margins involve crustal hyper-extension, it is much more pronounced in the South Atlantic. We investigate the evolution of these two margin pairs by carefully constraining our models with a detailed plate kinematic history, laboratory-based rheology, and melt fraction evaluation of mantle upwelling. Our experiments are consistent with observed fault patterns, crustal thickness, and basin stratigraphy. We conduct 2D thermomechanical rift models using the finite element code SLIM3D that operates with nonlinear stress- and temperature-dependent elasto-visco-plastic rheology, with parameters provided by laboratory experiments on major crustal and upper mantle rocks. In our models we also calculate the melt fraction within the upwelling asthenosphere, which allows us to control whether the model indeed corresponds to the non-magmatic margin type or not. Our modelling highlights two processes as fundamental for the formation of hyper-extension and margin asymmetry at non-magmatic margins: (1) strain hardening in the rift center due to cooling of upwelling mantle material, and (2) the formation of a weak crustal domain adjacent to the rift center caused by localized viscous strain softening and heat transfer from the mantle. Simultaneous activity of both processes promotes lateral rift migration in a continuous way that generates a wide layer of hyper-extended crust on

  11. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
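
    The irreversible filling and jamming described here are easy to reproduce numerically. A minimal sketch for dimers on a one-dimensional lattice, whose jamming coverage approaches the classical value 1 - exp(-2) ≈ 0.8647:

        # Random sequential adsorption (RSA) of dimers on a 1D lattice: deposition
        # attempts in random order are rejected if either site is occupied; the
        # process ends in a jammed state.
        import numpy as np

        def rsa_dimers(n_sites, rng):
            occupied = np.zeros(n_sites, dtype=bool)
            for i in rng.permutation(n_sites - 1):        # random order of attempts
                if not occupied[i] and not occupied[i + 1]:
                    occupied[i] = occupied[i + 1] = True
            return occupied.mean()                         # coverage of the jammed state

        rng = np.random.default_rng(1)
        coverage = np.mean([rsa_dimers(10_000, rng) for _ in range(20)])
        print(f"jamming coverage ~ {coverage:.4f}  (exact: {1 - np.exp(-2):.4f})")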

  12. Blocked edges on Eulerian maps and mobiles: application to spanning trees, hard particles and the Ising model

    International Nuclear Information System (INIS)

    Bouttier, J; Francesco, P Di; Guitter, E

    2007-01-01

    We introduce Eulerian maps with blocked edges as a general way to implement statistical matter models on random maps by a modification of intrinsic distances. We show how to code these dressed maps by means of mobiles, i.e. decorated trees with labelled vertices, leading to a closed system of recursion relations for their generating functions. We discuss particular solvable cases in detail, as well as various applications of our method to several statistical systems such as spanning trees on quadrangulations, mutually excluding particles on Eulerian triangulations or the Ising model on quadrangulations

  13. A tribo-mechanical analysis of PVA-based building-blocks for implementation in a 2-layered skin model.

    Science.gov (United States)

    Morales Hurtado, M; de Vries, E G; Zeng, X; van der Heide, E

    2016-09-01

    Poly(vinyl) alcohol hydrogel (PVA) is a well-known polymer widely used in the medical field due to its biocompatibility and easy manufacturing. In this work, the tribo-mechanical properties of PVA-based blocks are studied to evaluate their suitability as part of a structure simulating the length-scale dependence of human skin. Thus, blocks of pure PVA and PVA mixed with cellulose (PVA-Cel) were synthesised via freezing/thawing cycles and their mechanical properties were determined by Dynamic Mechanical Analysis (DMA) and creep tests. The dynamic tests yielded elastic moduli of 38 and 50 kPa for the PVA and PVA-Cel, respectively. The fitting of the creep compliance tests to the SLS model confirmed the viscoelastic behaviour of the samples, with retardation times of 23 and 16 seconds for the PVA and PVA-Cel, respectively. Micro-indentation tests were also performed and the results indicated elastic moduli in the same range as the dynamic tests. Specifically, values between 45-55 and 56-81 kPa were obtained for the PVA and PVA-Cel samples, respectively. The tribological results indicated friction coefficients of 0.55 at low forces for the PVA, decreasing to 0.13 at higher forces. The PVA-Cel blocks showed lower friction even at low forces, with values between 0.2 and 0.07. The implementation of these building blocks in the design of a 2-layered skin model (2LSM) is also presented in this work. The 2LSM was stamped with four different textures and their surface properties were evaluated. The hydration of the 2LSM was also evaluated with a corneometer and the results indicated a gradient of hydration comparable to that of human skin. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  15. Predictability of blocking

    International Nuclear Information System (INIS)

    Tosi, E.; Ruti, P.; Tibaldi, S.; D'Andrea, F.

    1994-01-01

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested

  16. Investigation of local load effect on damping characteristics of synchronous generator using transfer-function block-diagram model

    Directory of Open Access Journals (Sweden)

    Pichai Aree

    2005-07-01

    Full Text Available The transfer-function block-diagram model of a single-machine infinite-bus power system has been a popular analytical tool amongst power engineers for explaining and assessing synchronous generator dynamic behaviors. In previous studies, the effects of local load together with the damper circuit on generator damping have not been addressed, because neither of them was integrated into this model. Since the model only accounts for the generator main field circuit, it may not always yield a realistic damping assessment due to the lack of damper circuit representation. This paper presents an extended transfer-function block-diagram model, which includes one of the q-axis damper circuits as well as local load. This allows a more realistic investigation of the local load effect on generator damping. The extended model is applied to assess the generator dynamic performance. The results show that the damping power components, derived mostly from the q-axis damper and the field circuits, can be improved depending on the local load. The frequency response method is employed to carry out the fundamental analysis.

  17. Sequential modelling of ICRF wave near RF fields and asymptotic RF sheaths description for AUG ICRF antennas

    Directory of Open Access Journals (Sweden)

    Jacquot Jonathan

    2017-01-01

    Full Text Available A sequence of simulations is performed with RAPLICASOL and SSWICH to compare two AUG ICRF antennas. RAPLICASOL outputs have been used as input to SSWICH-SW for the AUG ICRF antennas. Using parallel electric field maps and the scattering matrix produced by RAPLICASOL, SSWICH-SW, reduced to its asymptotic part, is able to produce a 2D radial/poloidal map of the DC plasma potential accounting for the antenna input settings (total power, power balance, phasing). Two antenna models are compared: a 2-strap antenna and a 3-strap antenna. The 2D DC potential structures are correlated with structures of the parallel electric field map for different phasing and power balance settings. The overall DC plasma potential on the 3-strap antenna is lower due to better global RF current compensation. Spatial proximity between regions of high RF electric field and regions where high DC plasma potentials are observed is an important factor for sheath rectification.

  18. Physical Model of Laser-Assisted Blocking of Blood Flow: II. Pulse Modulation of Radiation

    CSIR Research Space (South Africa)

    Zheltov, GI

    2007-03-01

    Full Text Available and Spectroscopy, 2007, Vol. 102, No. 3, pp. 475–477. © Pleiades Publishing, Ltd., 2007. Original Russian Text © G.I. Zheltov, L.G. Astafyeva, A. Karsten, 2007, published in Optika i Spektroskopiya, 2007, Vol. 102, No. 3, pp. 524–526. 475 INTRODUCTION... This study is a continuation of our preceding inves- tigation [1], where we considered the mechanism of blocking blood flow under laser irradiation and assumed that the experimentally observed contraction of blood vessels [2] is a consequence...

  19. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Full Text Available Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing dependent plasticity (STDP, which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.

  20. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
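
    The core step of the approach, mapping symmetric positive definite covariance matrices into a vector space with the matrix logarithm, can be sketched compactly. The feature choice and regularisation below are assumptions for illustration; the incremental subspace learning and the block-division appearance model are not reproduced.

        # Log-Euclidean mapping of covariance descriptors (illustrative sketch).
        import numpy as np
        from scipy.linalg import logm

        def covariance_descriptor(features):
            """features: (n_pixels, d) array -> (d, d) SPD covariance matrix."""
            c = np.cov(features, rowvar=False)
            return c + 1e-6 * np.eye(c.shape[0])              # keep the matrix SPD

        def log_euclidean_vector(cov):
            """Map an SPD matrix to a Euclidean vector via the matrix logarithm."""
            l = logm(cov).real
            iu = np.triu_indices(l.shape[0])
            w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))   # weights preserving the norm
            return l[iu] * w

        # Two image patches (placeholder random features) become two vectors whose
        # Euclidean distance equals the log-Euclidean distance of the matrices.
        rng = np.random.default_rng(0)
        v1 = log_euclidean_vector(covariance_descriptor(rng.normal(size=(500, 5))))
        v2 = log_euclidean_vector(covariance_descriptor(rng.normal(size=(500, 5))))
        print(np.linalg.norm(v1 - v2))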

  1. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  2. Sequential crystallization and morphology of triple crystalline biodegradable PEO-b-PCL-b-PLLA triblock terpolymers

    KAUST Repository

    Palacios, Jordana; Mugica, Agurtzane; Zubitur, Manuela; Iturrospe, Amaia; Arbe, A.; Liu, Guoming; Wang, Dujin; Zhao, Junpeng; Hadjichristidis, Nikolaos; Muller, Alejandro

    2016-01-01

    The sequential crystallization of poly(ethylene oxide)-b-poly(e-caprolactone)-b-poly(L-lactide) (PEO-b-PCL-b-PLLA) triblock terpolymers, in which the three blocks are able to crystallize separately and sequentially from the melt, is presented. Two

  3. A study of the 1963 Vajont landslide zonation by means of Lagrangian block modelling

    Science.gov (United States)

    Zaniboni, Filippo; Ausilia Paparo, Maria; Tinti, Stefano

    2017-04-01

    The 1963 landslide that detached from Mt. Toc (North-East Italy) and crashed into the underlying Vajont reservoir caused a huge wave that killed over 2000 people; it is a well-known event that has been extensively and deeply investigated. Recently, studies appeared in the literature suggesting that the landslide dynamics can be explained in terms of a zonation of the moving mass. In this work, additional support for the zonation hypothesis is provided by focusing on the friction coefficient of the sliding surface, which is one of the chief parameters influencing the slide motion. Numerical simulations of the Vajont slide found in the literature assumed a homogeneous value of the friction coefficient. We have systematically investigated a set of heterogeneous configurations. More specifically, we have divided the sliding surface into a number N of zones and let the corresponding friction coefficients vary in the range 0-0.5. For each configuration we have run the numerical simulation via the Lagrangian block-based code UBO-BLOCK2 and evaluated the goodness of the configuration by computing the misfit between the observed and the simulated deposits. The number of simulations required by this approach increases exponentially with the number N of zones. The main finding of this research is that a 4-sector zonation provides the best results in terms of deposit misfit. The zones can be roughly described as west-downhill (WD), west-uphill (WU), east-downhill (ED) and east-uphill (EU). It is found that the motion is mainly determined by friction in zones WD and EU, that the friction coefficient in zone WD is remarkably smaller than in zone EU, and that the misfit is rather insensitive to the values of the friction coefficients in zones WU and ED.
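
    The configuration search described above amounts to a grid search over per-zone friction coefficients scored by the deposit misfit. The sketch below only illustrates the mechanics of that search; the quadratic stand-in replaces the actual UBO-BLOCK2 run and misfit computation, which are not reproduced, and all numbers are placeholders.

        # Grid search over per-zone friction coefficients (illustrative only).
        import itertools
        import numpy as np

        N_ZONES = 4                                  # e.g. WD, WU, ED, EU
        CANDIDATES = np.arange(0.0, 0.55, 0.05)      # friction coefficients 0 to 0.5

        def run_and_score(frictions):
            """Placeholder for a UBO-BLOCK2 simulation followed by the deposit
            misfit; a toy quadratic stand-in so that the sketch executes."""
            toy_target = np.array([0.05, 0.25, 0.30, 0.45])
            return float(np.sum((np.asarray(frictions) - toy_target) ** 2))

        best = min(itertools.product(CANDIDATES, repeat=N_ZONES), key=run_and_score)
        print("best (WD, WU, ED, EU) frictions:", best)
        # the number of runs grows as len(CANDIDATES) ** N_ZONES, i.e. exponentially in N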

  4. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in the hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of the creation probabilities of anti-s s to anti-u u (anti-d d) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  5. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    Science.gov (United States)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. Preventive replacement time affects the reliability of the components, thus the number of system failures encountered and its downtime expenses. On the other hand, spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously for the selection of the optimal maintenance policy. The purpose of this research was to develop a bi-objective model that will be used to determine the preventive replacement time for three maintenance policies (age, block good as new, block bad as old) with consideration of spare parts’ availability. It was suggested to use a weighted comprehensive criterion method with two objectives, i.e. cost and availability. The model was tested with a typical numerical example. The results of the model demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios and taking into account preferences with respect to contradicting objectives such as cost and availability.
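
    For the age-replacement policy the two objectives can be written in closed form and combined with a weighted criterion, which is enough to illustrate the trade-off. The sketch below assumes Weibull lifetimes and arbitrary cost and downtime values (the failure downtime standing in for the spare-part lead time); it is not the paper's exact model.

        # Weighted bi-objective choice of the preventive replacement age T
        # (illustrative sketch with assumed parameter values).
        import numpy as np
        from scipy.integrate import quad

        beta, eta = 2.5, 1000.0        # Weibull shape / scale (hours), assumed
        c_p, c_f = 200.0, 1500.0       # preventive / failure replacement costs, assumed
        d_p, d_f = 8.0, 72.0           # downtimes; d_f includes the spare lead time

        R = lambda t: np.exp(-(t / eta) ** beta)          # survival function

        def objectives(T):
            up = quad(R, 0.0, T)[0]                       # expected uptime per cycle
            f = 1.0 - R(T)                                # probability of failure before T
            cost_rate = (c_p * R(T) + c_f * f) / up       # long-run cost per unit time
            availability = up / (up + d_p * R(T) + d_f * f)
            return cost_rate, availability

        Ts = np.linspace(100.0, 3000.0, 200)
        cost, avail = np.array([objectives(T) for T in Ts]).T
        w1, w2 = 0.5, 0.5                                 # decision-maker weights
        score = w1 * (cost - cost.min()) / (cost.max() - cost.min()) \
              + w2 * (1.0 - (avail - avail.min()) / (avail.max() - avail.min()))
        print("suggested preventive age T* =", Ts[score.argmin()])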

  6. Membrane fouling mechanism of biofilm-membrane bioreactor (BF-MBR): Pore blocking model and membrane cleaning.

    Science.gov (United States)

    Zheng, Yi; Zhang, Wenxiang; Tang, Bing; Ding, Jie; Zheng, Yi; Zhang, Zhien

    2018-02-01

    The biofilm membrane bioreactor (BF-MBR) is considered an important wastewater treatment technology that incorporates the advantages of both the biofilm and the MBR process and can alleviate membrane fouling compared with the conventional activated sludge MBR. To be efficient, however, it requires the establishment of proper methods for the assessment of membrane fouling. Four Hermia membrane blocking models were adopted to quantify and evaluate the membrane fouling of the BF-MBR. The experiments were conducted under various operational conditions, including membrane types, agitation speeds and transmembrane pressure (TMP). Good agreement between the cake formation model and the experimental data was found, confirming the validity of the Hermia models for assessing the membrane fouling of BF-MBR and indicating that a cake layer deposits on the membrane. Moreover, the influences of membrane type, agitation speed and transmembrane pressure on the Hermia pore blocking coefficient of the cake layer were investigated. In addition, the permeability recovery after membrane cleaning under various operational conditions was studied. This work confirms that, unlike the conventional activated sludge MBR, the BF-MBR exhibits a low degree of membrane fouling and a higher membrane permeability recovery after cleaning. Copyright © 2017 Elsevier Ltd. All rights reserved.
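
    The model-discrimination step can be illustrated with the linearized forms of the Hermia blocking laws commonly used for constant-pressure filtration. The flux record below is synthetic, and the comparison simply asks which transform of the flux is most linear in time.

        # Fit the four linearized Hermia blocking laws to a flux-decline record and
        # rank them by R^2 (placeholder data; not measurements from the study).
        import numpy as np

        def r2(y, yhat):
            ss_res = np.sum((y - yhat) ** 2)
            ss_tot = np.sum((y - y.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def fit_hermia(t, J):
            transforms = {                       # commonly used linearized forms
                "complete blocking":     np.log(J),
                "standard blocking":     J ** -0.5,
                "intermediate blocking": 1.0 / J,
                "cake filtration":       J ** -2.0,
            }
            scores = {}
            for name, y in transforms.items():
                k, b = np.polyfit(t, y, 1)
                scores[name] = r2(y, k * t + b)
            return scores

        t = np.linspace(0.0, 120.0, 25)                    # minutes, placeholder
        J = 80.0 / np.sqrt(1.0 + 0.03 * t)                 # synthetic flux decline
        print(max(fit_hermia(t, J).items(), key=lambda kv: kv[1]))   # best-fitting law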

  7. Detection block

    International Nuclear Information System (INIS)

    Bezak, A.

    1987-01-01

    A diagram is given of a detection block used for monitoring burnup of nuclear reactor fuel. A shielding block is an important part of the detection block. It stabilizes the fuel assembly in the fixing hole in front of a collimator where a suitable gamma beam is defined for gamma spectrometry determination of fuel burnup. The detector case and a neutron source case are placed on opposite sides of the fixing hole. For neutron measurement for which the water in the tank is used as a moderator, the neutron detector-fuel assembly configuration is selected such that neutrons from spontaneous fission and neutrons induced with the neutron source can both be measured. The patented design of the detection block permits longitudinal travel and rotation of the fuel assembly to any position, and thus more reliable determination of nuclear fuel burnup. (E.S.). 1 fig

  8. Creation of integrated information model of premises (blocks B, G, RDAS and deaerator) state of 'Ukryttia' object to support works

    International Nuclear Information System (INIS)

    Postil, S.D.; Ermolenko, A.I.; Ivanov, V.V.; Kotlyarov, V.T.

    2003-01-01

    The principles for linking applications prepared in Access and AutoCAD are developed, and a technology for transferring data from one application to the other, with display of the delivered information, is implemented. Information models are created for the Reactor Department Auxiliary Systems (RDAS) block premises from axes 25 to 51 and from rows 'U' to 'Yu', for the deaerator stack from axes 34 to 68 and from rows 'B' to 'D', and for the turbine hall from axes 34 to 68 and from rows 'A' to 'B'. The possibility is shown of using the integrated information model to develop and visualize, by means of computer animation, the access routes in the 'Ukryttia' object premises and to integrate raster images of the structure with the vector computer model of the object

  9. Ergodicity of forward times of the renewal process in a block-based inspection model using the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan

    2012-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practice have been reported in many papers and case studies. For a system subject to a few major failure modes, component-based delay time models have been developed under the assumptions of an age-based inspection policy. An age-based inspection assumes that an inspection is scheduled according to the age of the component, and if there is a failure renewal, the next inspection is always scheduled, say, τ time units from the time of the failure renewal. This applies to certain cases, particularly important plant items where the time since the last renewal or inspection is key to scheduling the next inspection service. However, in most cases the inspection service is not scheduled according to the need of a particular component; rather, it is scheduled at a fixed calendar time regardless of whether the component being inspected was just renewed or not. This policy is called a block-based inspection, which has the advantage of easy planning and is particularly useful for plant items which are part of a larger system to be inspected. If a block-based inspection policy is used, the time to failure since the last inspection prior to the failure for a particular item is a random variable. This time is called the forward time in this paper. To optimise the inspection interval for block-based inspections, the usual criterion functions such as expected cost or downtime per unit time depend on the distribution of this forward time. We report in this paper the development of a theoretical proof that a limiting distribution for such a forward time exists if certain conditions are met. We also propose a recursive algorithm for determining such a limiting distribution. A numerical example is presented to demonstrate the existence of the limiting distribution.
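
    The forward time and its limiting behaviour can also be illustrated numerically. The sketch below simulates failure renewals with an assumed Weibull lifetime and inspections fixed on a calendar grid k*tau, and collects the time from the last inspection before each failure to that failure; it is a simulation illustration, not the paper's recursive algorithm.

        # Empirical forward-time distribution under a block-based inspection policy
        # (assumed lifetime distribution and inspection interval).
        import numpy as np

        rng = np.random.default_rng(42)
        tau, horizon = 30.0, 2_000_000.0       # inspection interval and simulated time
        t, forward_times = 0.0, []
        while t < horizon:
            t += rng.weibull(2.0) * 40.0       # next failure renewal (Weibull lifetime)
            last_inspection = np.floor(t / tau) * tau
            forward_times.append(t - last_inspection)

        forward_times = np.array(forward_times)
        print("mean forward time:", forward_times.mean())
        print("P(forward time <= tau/2):", np.mean(forward_times <= tau / 2.0))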

  10. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments assigned for logical modeling of the memory of digital devices. The case in point is logic-dynamical operation named venjunction and venjunctive function as well as sequention and sequentional function. Venjunction and sequention operate within the framework of sequential logic. In a form of the corresponding equations, they organically fit analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  11. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than on channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF...

  12. Multilevel models for evaluating the risk of pedestrian-motor vehicle collisions at intersections and mid-blocks.

    Science.gov (United States)

    Quistberg, D Alex; Howard, Eric J; Ebel, Beth E; Moudon, Anne V; Saelens, Brian E; Hurvitz, Philip M; Curtin, James E; Rivara, Frederick P

    2015-11-01

    Walking is a popular form of physical activity associated with clear health benefits. Promoting safe walking for pedestrians requires evaluating the risk of pedestrian-motor vehicle collisions at specific roadway locations in order to identify where road improvements and other interventions may be needed. The objective of this analysis was to estimate the risk of pedestrian collisions at intersections and mid-blocks in Seattle, WA. The study used 2007-2013 pedestrian-motor vehicle collision data from police reports and detailed characteristics of the microenvironment and macroenvironment at intersection and mid-block locations. The primary outcome was the number of pedestrian-motor vehicle collisions over time at each location (incident rate ratio [IRR] and 95% confidence interval [95% CI]). Multilevel mixed effects Poisson models accounted for correlation within and between locations and census blocks over time. Analysis accounted for pedestrian and vehicle activity (e.g., residential density and road classification). In the final multivariable model, intersections with 4 segments or 5 or more segments had higher pedestrian collision rates compared to mid-blocks. Non-residential roads had significantly higher rates than residential roads, with principal arterials having the highest collision rate. The pedestrian collision rate was higher by 9% per 10 feet of street width. Locations with traffic signals had twice the collision rate of locations without a signal and those with marked crosswalks also had a higher rate. Locations with a marked crosswalk also had higher risk of collision. Locations with a one-way road or those with signs encouraging motorists to cede the right-of-way to pedestrians had fewer pedestrian collisions. Collision rates were higher in locations that encourage greater pedestrian activity (more bus use, more fast food restaurants, higher employment, residential, and population densities). Locations with higher intersection density had a lower
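
    A heavily simplified, single-level version of the rate model can be written as a Poisson regression with an exposure offset, in which exponentiated coefficients are incident rate ratios. The study's multilevel random effects for locations and census blocks over time are not reproduced, and the variables and data below are synthetic placeholders.

        # Single-level Poisson rate model with an exposure offset (synthetic data).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "years_observed": 7.0,
            "is_intersection": rng.integers(0, 2, n),
            "street_width_10ft": rng.uniform(2.0, 8.0, n),
            "has_signal": rng.integers(0, 2, n),
        })
        lam = np.exp(-3.0 + 0.4 * df.is_intersection + 0.09 * df.street_width_10ft
                     + 0.7 * df.has_signal) * df.years_observed
        df["collisions"] = rng.poisson(lam)

        X = sm.add_constant(df[["is_intersection", "street_width_10ft", "has_signal"]])
        fit = sm.GLM(df["collisions"], X, family=sm.families.Poisson(),
                     exposure=df["years_observed"]).fit()
        print(np.exp(fit.params))      # incident rate ratios (IRR)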

  13. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  14. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns

  15. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially.A sequential decision model is interesting because it explicitly considers that the level of education of each

  16. Recovery from distal ulnar motor conduction block injury: serial EMG studies.

    Science.gov (United States)

    Montoya, Liliana; Felice, Kevin J

    2002-07-01

    Acute conduction block injuries often result from nerve compression or trauma. The temporal pattern of clinical, electrophysiologic, and histopathologic changes following these injuries has been extensively studied in experimental animal models but not in humans. Our recent evaluation of a young man with an injury to the deep motor branch of the ulnar nerve following nerve compression from weightlifting exercises provided the opportunity to follow the course and recovery of a severe conduction block injury with sequential nerve conduction studies. The conduction block slowly and completely resolved, as did the clinical deficit, over a 14-week period. The reduction in conduction block occurred at a linear rate of -6.1% per week. Copyright 2002 Wiley Periodicals, Inc.

  17. Comparison of estimation and simulation methods for modeling block 1 of anomaly no.3 in Narigan Uranium mineral deposit

    International Nuclear Information System (INIS)

    Jamali Esfahlan, D.; Madani, H.

    2011-01-01

    Geostatistical methods are applied for modeling mineral deposits at the final stage of detailed exploration. Based on the results of these models, technical and economic feasibility studies of the deposits are conducted. Geostatistical modeling methods usually consist of estimation and simulation techniques. Estimation techniques such as Kriging construct a spatial relation (geological continuity model) between data by providing the best unique guess for the unknown feature. However, when this technique is applied to a grid of drill-holes over a deposit, an obvious discrepancy exists between the real geological features and the Kriging estimation map: because of the limited number of sampled data used for Kriging, the map cannot reproduce the real features, and the spatial continuity of Kriging maps is smoother than that of the real, unknown features. The objective of simulation, on the other hand, is to provide functions or sets of variable values that are compatible with the existing information; the simulated values have a mean and variance similar to the raw data and may even reproduce the measurements. We studied Anomaly No. 3 of the Narigan uranium mineral deposit, located in central Iran, applied the Kriging estimation and sequential Gaussian simulation methods, and, by comparing the results, concluded that the Kriging estimation method is more reliable for long-term planning of a mine, while the simulation methods, because they reconstruct random structures, could also be applied for short-term planning in mine exploitation.
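
    The contrast the authors draw between Kriging (a smooth best estimate, suited to long-term planning) and sequential Gaussian simulation (realizations that reproduce the data variability, useful for short-term planning) can be illustrated with a toy one-dimensional sketch. Everything below (the exponential covariance model, the synthetic drill-hole values, the grid) is an assumption for illustration and is unrelated to the Narigan data.

```python
# Toy 1-D contrast between simple kriging (smooth "best guess") and sequential
# Gaussian simulation (reproduces variability). Pure NumPy; data and model
# parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma2, a = 1.0, 15.0                              # sill and range (assumed)
cov = lambda h: sigma2 * np.exp(-np.abs(h) / a)    # exponential covariance

xd = np.array([4.3, 18.7, 36.2, 59.9, 81.4])       # "drill-hole" locations
zd = np.array([0.8, -0.3, 1.1, -0.9, 0.4])         # residuals around a known mean of 0
grid = np.arange(0.0, 100.0, 1.0)

def simple_krige(xc, zc, x0):
    """Simple-kriging mean and variance at x0 from conditioning data (xc, zc)."""
    C = cov(xc[:, None] - xc[None, :]) + 1e-9 * np.eye(len(xc))  # tiny nugget
    c0 = cov(xc - x0)
    w = np.linalg.solve(C, c0)
    return w @ zc, max(sigma2 - w @ c0, 0.0)

# Kriging: one smooth estimate per grid node.
krig = np.array([simple_krige(xd, zd, x)[0] for x in grid])

# Sequential Gaussian simulation: visit nodes in random order, draw from the
# local distribution, and add each simulated value to the conditioning set.
xc, zc = xd.copy(), zd.copy()
sim = np.empty_like(grid)
for i in rng.permutation(len(grid)):
    m, v = simple_krige(xc, zc, grid[i])
    sim[i] = rng.normal(m, np.sqrt(v))
    xc, zc = np.append(xc, grid[i]), np.append(zc, sim[i])

print("variance of kriged map :", krig.var())   # typically much smaller than the sill
print("variance of simulation :", sim.var())    # close to the sill
```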

  18. Geostatistical modeling of the gas emission zone and its in-place gas content for Pittsburgh-seam mines using sequential Gaussian simulation

    Science.gov (United States)

    Karacan, C.O.; Olea, R.A.; Goodman, G.

    2012-01-01

    Determination of the size of the gas emission zone, the locations of gas sources within, and especially the amount of gas retained in those zones is one of the most important steps for designing a successful methane control strategy and an efficient ventilation system in longwall coal mining. The formation of the gas emission zone and the potential amount of gas-in-place (GIP) that might be available for migration into a mine are factors of local geology and rock properties that usually show spatial variability in continuity and may also show geometric anisotropy. Geostatistical methods are used here for modeling and prediction of gas amounts and for assessing their associated uncertainty in gas emission zones of longwall mines for methane control. This study used core data obtained from 276 vertical exploration boreholes drilled from the surface to the bottom of the Pittsburgh coal seam in a mining district in the Northern Appalachian basin. After identifying important coal and non-coal layers for the gas emission zone, univariate statistical and semivariogram analyses were conducted for data from different formations to define the distribution and continuity of various attributes. Sequential simulations performed stochastic assessment of these attributes, such as gas content, strata thickness, and strata displacement. These analyses were followed by calculations of gas-in-place and their uncertainties in the Pittsburgh seam caved zone and fractured zone of longwall mines in this mining district. Grid blanking was used to isolate the volume over the actual panels from the entire modeled district and to calculate gas amounts that were directly related to the emissions in longwall mines. Results indicated that gas-in-place in the Pittsburgh seam, in the caved zone and in the fractured zone, as well as displacements in major rock units, showed spatial correlations that could be modeled and estimated using geostatistical methods. This study showed that GIP volumes may

  19. Portal vein ligation is as effective as sequential portal vein and hepatic artery ligation in inducing contralateral liver hypertrophy in a rat model

    NARCIS (Netherlands)

    Veteläinen, Reeta; Dinant, Sander; van Vliet, Arlène; van Gulik, Thomas M.

    2006-01-01

    PURPOSE: Dual embolization of the hepatic artery and portal vein (PV) has been proposed to enhance contralateral liver regeneration before resection. The aim of this study was to evaluate the effect of PV ligation compared with simultaneous or sequential dual ligation on regeneration,

  20. Mechanical analysis of congestive heart failure caused by bundle branch block based on an electromechanical canine heart model

    Energy Technology Data Exchange (ETDEWEB)

    Dou Jianhong; Xia Ling; Zhang Yu; Shou Guofa [Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027 (China); Wei Qing; Liu Feng; Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, Brisbane, Queensland 4072 (Australia)], E-mail: xialing@zju.edu.cn

    2009-01-21

    Asynchronous electrical activation, induced by bundle branch block (BBB), can cause reduced ventricular function. However, the effects of BBB on the mechanical function of heart are difficult to assess experimentally. Many heart models have been developed to investigate cardiac properties during BBB but have mainly focused on the electrophysiological properties. To date, the mechanical function of BBB has not been well investigated. Based on a three-dimensional electromechanical canine heart model, the mechanical properties of complete left and right bundle branch block (LBBB and RBBB) were simulated. The anatomical model as well as the fiber orientations of a dog heart was reconstructed from magnetic resonance imaging (MRI) and diffusion tensor MRI (DT-MRI). Using the solutions of reaction-diffusion equations and with a strategy of parallel computation, the asynchronous excitation propagation and intraventricular conduction in BBB was simulated. The mechanics of myocardial tissues were computed with time-, sarcomere length-dependent uniaxial active stress initiated at the time of depolarization. The quantification of mechanical intra- and interventricular asynchrony of BBB was then investigated using the finite-element method with an eight-node isoparametric element. The simulation results show that (1) there exists inter- and intraventricular systolic dyssynchrony during BBB; (2) RBBB may have more mechanical synchrony and better systolic function of the left ventricle (LV) than LBBB; (3) the ventricles always move toward the early-activated ventricle; and (4) the septum experiences higher stress than left and right ventricular free walls in BBB. The simulation results validate clinical and experimental recordings of heart deformation and provide regional quantitative estimates of ventricular wall strain and stress. The present work suggests that an electromechanical heart model, incorporating real geometry and fiber orientations, may be helpful for better

  1. Spontaneous abrupt climate change due to an atmospheric blocking-sea-ice-ocean feedback in an unforced climate model simulation.

    Science.gov (United States)

    Drijfhout, Sybren; Gleeson, Emily; Dijkstra, Henk A; Livina, Valerie

    2013-12-03

    Abrupt climate change is abundant in geological records, but climate models rarely have been able to simulate such events in response to realistic forcing. Here we report on a spontaneous abrupt cooling event, lasting for more than a century, with a temperature anomaly similar to that of the Little Ice Age. The event was simulated in the preindustrial control run of a high-resolution climate model, without imposing external perturbations. Initial cooling started with a period of enhanced atmospheric blocking over the eastern subpolar gyre. In response, a southward progression of the sea-ice margin occurred, and the sea-level pressure anomaly was locked to the sea-ice margin through thermal forcing. The cold-core high steered more cold air to the area, reinforcing the sea-ice concentration anomaly east of Greenland. The sea-ice surplus was carried southward by ocean currents around the tip of Greenland. South of 70 °N, sea ice already started melting and the associated freshwater anomaly was carried to the Labrador Sea, shutting off deep convection. There, surface waters were exposed longer to atmospheric cooling and sea surface temperature dropped, causing an even larger thermally forced high above the Labrador Sea. In consequence, east of Greenland, anomalous winds changed from north to south, terminating the event with similar abruptness to its onset. Our results imply that only climate models that possess sufficient resolution to correctly represent atmospheric blocking, in combination with a sensitive sea-ice model, are able to simulate this kind of abrupt climate change.

  2. Discovering block-structured process models from event logs containing infrequent behaviour

    NARCIS (Netherlands)

    Leemans, S.J.J.; Fahland, D.; Aalst, van der W.M.P.; Lohmann, N.; Song, M.; Wohed, P.

    2014-01-01

    Given an event log describing observed behaviour, process discovery aims to find a process model that ‘best’ describes this behaviour. A large variety of process discovery algorithms has been proposed. However, no existing algorithm returns a sound model in all cases (free of deadlocks and other

  3. Changes in the Intensity and Frequency of Atmospheric Blocking and Associated Heat Waves During Northern Summer Over Eurasia in the CMIP5 Model Simulations

    Science.gov (United States)

    Kim, Kyu-Myong; Lau, K. M.; Wu, H. T.; Kim, Maeng-Ki; Cho, Chunho

    2012-01-01

    The Russian heat wave and wildfires of the summer of 2010 were the most extreme weather event in the history of the country. Studies show that the root cause of the 2010 Russia heat wave/wildfires was an atmospheric blocking event which started to develop at the end of June and peaked around late July and early August. Atmospheric blocking in the summer of 2010 was anomalous in terms of its size, duration, and location, which was shifted to the east of the normal location. This and other similar continental-scale severe summertime heat waves and blocking events in recent years have raised the question of whether such events are occurring more frequently and with higher intensity in a warmer climate induced by greenhouse gases. We studied the spatial and temporal distributions of the occurrence and intensity of atmospheric blocking and associated heat waves for northern summer over Eurasia based on CMIP5 model simulations. To examine the global-warming-induced change of atmospheric blocking and heat waves, experiments for a high emissions scenario (RCP8.5) and a medium mitigation scenario (RCP4.5) are compared to the 20th century simulations (historical). Most models simulate the mean distributions of blocking reasonably well, including major blocking centers over Eurasia, the northern Pacific, and the northern Atlantic. However, the models tend to underestimate the number of blocking events compared to the MERRA and NCEP/DOE reanalyses, especially in western Siberia. Models also reproduced the associated heat waves in terms of the shift in the probability distribution function of near-surface temperature. Seven out of eight models used in this study show that the frequency of atmospheric blocking over Europe will likely decrease in a warmer climate, but slightly increase over western Siberia. This spatial pattern resembles the blocking in the summer of 2010, indicating the possibility of more frequent occurrences of heat waves in western Siberia. In this talk, we will also

  4. Block Tridiagonal Matrices in Electronic Structure Calculations

    DEFF Research Database (Denmark)

    Petersen, Dan Erik

    in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so-called Green's function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...

  5. Inhibition of apoptosis blocks human motor neuron cell death in a stem cell model of spinal muscular atrophy.

    Directory of Open Access Journals (Sweden)

    Dhruv Sareen

    Spinal muscular atrophy (SMA) is a genetic disorder caused by a deletion of the survival motor neuron 1 gene leading to motor neuron loss, muscle atrophy, paralysis, and death. We show here that induced pluripotent stem cell (iPSC) lines generated from two Type I SMA subjects - one produced with lentiviral constructs and the second using a virus-free plasmid-based approach - recapitulate the disease phenotype and generate significantly fewer motor neurons at later developmental time periods in culture compared to two separate control subject iPSC lines. During motor neuron development, both SMA lines showed an increase in Fas ligand-mediated apoptosis and increased caspase-8 and -3 activation. Importantly, this could be mitigated by addition of either a Fas blocking antibody or a caspase-3 inhibitor. Together, these data further validate this human stem cell model of SMA, suggesting that specific inhibitors of apoptotic pathways may be beneficial for patients.

  6. Modeling and Optimization of the Thermal Performance of a Wood-Cement Block in a Low-Energy House Construction

    Directory of Open Access Journals (Sweden)

    Iole Nardi

    2016-08-01

    The reduction of building energy consumption requires appropriate planning and design of the building's envelope. In recent years, innovative materials and construction technologies for new or refurbished buildings have been developed to achieve this objective, and also to reduce greenhouse gas emissions and building maintenance costs. In this work, the thermal conductance of a brick made of wood and cement, used in a low-rise building, was investigated with a heat flow meter (HFM) and with numerical simulation using the Ansys® software package (Canonsburg, PA, USA). Due to their influence on the building's thermal efficiency, it is important to choose an appropriate design for the building blocks. Results obtained by finite element modeling of the construction material and by in-situ analysis conducted on a real building are compared, and a thermal optimization of the shape of the material is suggested.
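
    For the heat-flow-meter part of such a study, the in-situ conductance is commonly estimated with a progressive-averaging method: cumulative mean heat flux divided by cumulative mean surface temperature difference. The sketch below uses synthetic hourly data; the "true" conductance, the temperature signal and the noise level are assumptions, not the paper's measurements.

```python
# Sketch of the averaging method often used to estimate conductance from
# heat-flow-meter (HFM) data. The synthetic measurements are assumptions.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 7)                            # one week of hourly samples
true_conductance = 1.2                               # W/m2K (assumed)
dT = 15.0 + 3.0 * np.sin(2 * np.pi * hours / 24)     # indoor-outdoor temperature difference
q = true_conductance * dT + 0.5 * rng.standard_normal(len(hours))   # measured heat flux

conductance = np.cumsum(q) / np.cumsum(dT)           # progressive-average estimate
print("estimated conductance after 1 week: %.2f W/m2K" % conductance[-1])
```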

  7. Measurement and Modeling of Blocking Contacts for Cadmium Telluride Gamma Ray Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Patrick R. [California Polytechnic State Univ. (CalPoly), San Luis Obispo, CA (United States)

    2010-01-07

    Gamma ray detectors are important in national security applications, medicine, and astronomy. Semiconductor materials with high density and atomic number, such as Cadmium Telluride (CdTe), offer a small device footprint, but their performance is limited by noise at room temperature; however, improved device design can decrease detector noise by reducing leakage current. This thesis characterizes and models two unique Schottky devices: one with an argon ion sputter etch before Schottky contact deposition and one without. Analysis of current versus voltage characteristics shows that thermionic emission alone does not describe these devices. This analysis points to reverse bias generation current or leakage through an inhomogeneous barrier. Modeling the devices in reverse bias with thermionic field emission and a leaky Schottky barrier yields good agreement with measurements. Also numerical modeling with a finite-element physics-based simulator suggests that reverse bias current is a combination of thermionic emission and generation. This thesis proposes further experiments to determine the correct model for reverse bias conduction. Understanding conduction mechanisms in these devices will help develop more reproducible contacts, reduce leakage current, and ultimately improve detector performance.

  8. Explicit Foreground and Background Modeling in The Classification of Text Blocks in Scene Images

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus

    2015-01-01

    Achieving high accuracy in classifying foreground and background is an interesting challenge in the field of scene image analysis because of the wide range of illumination, complex backgrounds, and scale changes. Classifying foreground and background using a bag-of-features model gives a good result.

  9. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
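
    The OS-EM update at the heart of the comparison is compact enough to sketch. The toy problem below uses a random system matrix and a 1-D phantom rather than the paper's SPECT geometry and 3D detector response; those choices, and the subset partition, are assumptions for illustration only (RBI differs mainly in how the subset updates are rescaled).

```python
# Minimal ordered-subsets EM (OS-EM) sketch for a toy emission problem
# y ~ Poisson(A @ x). System matrix, phantom and subsets are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_proj = 32, 96
A = rng.uniform(0.0, 1.0, (n_proj, n_pix))          # toy system matrix
x_true = np.zeros(n_pix)
x_true[10:20] = 5.0                                  # toy "phantom"
y = rng.poisson(A @ x_true)

n_subsets = 4
subsets = [np.arange(s, n_proj, n_subsets) for s in range(n_subsets)]

x = np.ones(n_pix)                                   # positive initial estimate
for _ in range(10):                                  # 10 full iterations
    for S in subsets:                                # one sub-iteration per subset
        As, ys = A[S], y[S]
        ratio = ys / np.maximum(As @ x, 1e-12)       # measured / predicted projections
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(S)), 1e-12)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```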

  10. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...
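
    The paper formulates the reconstruction as a convex optimization over weighted templates; a much simpler stand-in, shown below, fills the missing block with an exponentially weighted average of candidate patches whose known borders match the block's surroundings. The synthetic image, block geometry and weighting bandwidth are assumptions, and the weighting rule is a simplification rather than the authors' method.

```python
# Simplified spatial error-concealment sketch: fill a lost block with a
# weighted average of candidate patches whose known borders match the block's
# surroundings. Exponential weighting and the synthetic image are assumptions.
import numpy as np

rng = np.random.default_rng(0)
img = np.sin(np.arange(64)[:, None] / 5.0) + np.cos(np.arange(64)[None, :] / 7.0)
img += 0.05 * rng.standard_normal(img.shape)

B, M = 8, 2                        # block size and known margin around it
r0, c0 = 28, 28                    # top-left corner of the lost block
lost = img.copy()
lost[r0:r0 + B, c0:c0 + B] = np.nan

tpl = lost[r0 - M:r0 + B + M, c0 - M:c0 + B + M]     # block plus known border
known = ~np.isnan(tpl)

accum, weights_sum = np.zeros((B, B)), 0.0
for i in range(img.shape[0] - B - 2 * M):
    for j in range(img.shape[1] - B - 2 * M):
        cand = lost[i:i + B + 2 * M, j:j + B + 2 * M]
        if np.isnan(cand).any():                     # skip patches touching the hole
            continue
        d = np.mean((cand[known] - tpl[known]) ** 2) # border mismatch
        w = np.exp(-d / 0.05)                        # heavier weight for better matches
        accum += w * cand[M:M + B, M:M + B]
        weights_sum += w

recon = lost.copy()
recon[r0:r0 + B, c0:c0 + B] = accum / weights_sum
print("MSE inside the concealed block: %.4f"
      % np.mean((recon[r0:r0 + B, c0:c0 + B] - img[r0:r0 + B, c0:c0 + B]) ** 2))
```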

  11. Rapid relief of block by mecamylamine of neuronal nicotinic acetylcholine receptors of rat chromaffin cells in vitro: an electrophysiological and modeling study.

    Science.gov (United States)

    Giniatullin, R A; Sokolova, E M; Di Angelantonio, S; Skorinkin, A; Talantova, M V; Nistri, A

    2000-10-01

    The mechanism responsible for the blocking action of mecamylamine on neuronal nicotinic acetylcholine receptors (nAChRs) was studied on rat isolated chromaffin cells recorded under whole-cell patch clamp. Mecamylamine strongly depressed (IC(50) = 0.34 microM) inward currents elicited by short pulses of nicotine, an effect slowly reversible on wash. The mecamylamine block was voltage-dependent and promptly relieved by a protocol combining membrane depolarization with a nicotine pulse. Either depolarization or nicotine pulses were insufficient per se to elicit block relief. Block relief was transient; response depression returned in a use-dependent manner. Exposure to mecamylamine failed to block nAChRs if they were not activated by nicotine or if they were activated at positive membrane potentials. These data suggest that mecamylamine could not interact with receptors either at rest or at depolarized level. Other nicotinic antagonists like dihydro-beta-erythroidine or tubocurarine did not share this action of mecamylamine although proadifen partly mimicked it. Mecamylamine is suggested to penetrate and block open nAChRs that would subsequently close and trap this antagonist. Computer modeling indicated that the mechanism of mecamylamine blocking action could be described by assuming that 1) mecamylamine-blocked receptors possessed a much slower, voltage-dependent isomerization rate, 2) the rate constant for mecamylamine unbinding was large and poorly voltage dependent. Hence, channel reopening plus depolarization allowed mecamylamine escape and block relief. In the presence of mecamylamine, therefore, nAChRs acquire the new property of operating as coincidence detectors for concomitant changes in membrane potential and receptor occupancy.

  12. Dietary folate deficiency blocks prostate cancer progression in the TRAMP model

    OpenAIRE

    Bistulfi, Gaia; Foster, Barbara A; Karasik, Ellen; Gillard, Bryan; Miecznikowski, Jeff; Dhiman, Vineet K; Smiraglia, Dominic J

    2011-01-01

    Dietary folate is essential in all tissues to maintain several metabolite pools and cellular proliferation. Prostate cells, due to specific metabolic characteristics, have increased folate demand to support proliferation and prevent genetic and epigenetic damage. Although several studies found that dietary folate interventions can affect colon cancer biology in rodent models, impact on prostate is unknown. The purpose of this study was to determine if dietary folate manipulation, possibly bei...

  13. Un-Building Blocks: A Model of Reverse Engineering and Applicable Heuristics

    Science.gov (United States)

    2015-12-01

    Table-of-contents fragments: inclusion relationships of root events, events, and subevents; formal specification of the reverse engineering model using Monterey... Text fragments: ... intend to stay technologically competitive at personal as well as societal levels. Third, reverse engineering is important for pedagogical reasons. It is ... increasingly blurred (Anderson, 2012). Third, reverse engineering can be a pedagogical tool (Otto & Wood, 2000; O'Brien, 2010; Halsmer, 2013).

  14. Accuracy of finite-element models for the stress analysis of multiple-holed moderator blocks

    International Nuclear Information System (INIS)

    Smith, P.D.; Sullivan, R.M.; Lewis, A.C.; Yu, H.J.

    1981-01-01

    Two steps have been taken to quantify and improve the accuracy in the analysis. First, the limitations of various approximation techniques have been studied with the aid of smaller benchmark problems containing fewer holes. Second, a new family of computer programs has been developed for handling such large problems. This paper describes the accuracy studies and the benchmark problems. A review is given of some proposed modeling techniques including local mesh refinement, homogenization, a special-purpose finite element, and substructuring. Some limitations of these approaches are discussed. The new finite element programs and the features that contribute to their efficiency are discussed. These include a standard architecture for out-of-core data processing and an equation solver that operates on a peripheral array processor. The central conclusions of the paper are: (1) modeling approximation methods such as local mesh refinement and homogenization tend to be unreliable, and they should be justified by a fine mesh benchmark analysis; and (2) finite element codes are now available that can achieve accurate solutions at a reasonable cost, and there is no longer a need to employ modeling approximations in the two-dimensional analysis of HTGR fuel elements. 10 figures

  15. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  16. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  17. 3D Geological Modeling of CoalBed Methane (CBM) Resources in the Taldykuduk Block Karaganda Coal Basin, Kazakhstan

    Science.gov (United States)

    Sadykov, Raman; Kiponievich Ogay, Evgeniy; Royer, Jean-Jacques; Zhapbasbayev, Uzak; Panfilova, Irina

    2015-04-01

    Coal Bed Methane (CBM) is gas stored in coal layers. It can be extracted from wells after hydraulic fracturing and/or solvent injection, and by secondary recovery techniques such as CO2 injection. The Karaganda Basin is a very favorable candidate region for developing CBM production for the following reasons: (i) huge gas potential; (ii) available technologies for extracting and commercializing the gas produced by CBM methods; (iii) experience in degassing during underground mining operations for safety reasons; (iv) local needs in energy for producing electricity for the industrial and domestic market. The objectives of this work are to model the Taldykuduk block coal layers and their properties, focusing on Coal Bed Methane production. It is motivated by the availability of large coal bed methane resources in the Karaganda coal basin, which includes 4,300 Bm3 equivalent to 2 billion tons of coal (B = billion = 10^9) with a gas content of 15-25 m3/t of coal (for comparison, the San Juan basin (USA) has ...) ... production in a double porosity model considering two domains: the matrix (m) and the fracture (f), for which the initial and boundary conditions are different. The resulting comprehensive 3D models have helped in better understanding the tectonic structures of the region, especially the relationships between the fault systems.

  18. Soliton wave model for simulating the slug formation in vertical-to-horizontal partially blocked pipes

    International Nuclear Information System (INIS)

    Nihan Onder; Alberto Teyssedou; Danila Roubtsov

    2005-01-01

    velocity and the slug predominant frequency were obtained from the void fraction signals. The waves were filmed using a digital video camera and the frame images were used to extract their amplitudes. Even though, for co-current flows, the formation of slugs has been explained in terms of the Kelvin-Helmholtz instability criterion, we did not observe that the slugging phenomena were triggered by this type of instability. Thus, the objective of this paper is to provide a model that explains the formation of slugs in a CCF. The model is based on the Boussinesq nonlinear system of equations, which are discretized using a leap-frog scheme and solved numerically. The results have been used to obtain the slug frequency and propagation velocity. We have calculated the slug frequency from the lag time between the instant a train of solitons is formed in the horizontal leg and the instant that two trains of solitons collide with each other to form a slug. The slug propagation velocity has been estimated by using a control volume approach, the average horizontal velocity given by the model, and the velocity of gravitational waves. The predictions of the model were compared with the slug data; in general, a good agreement between the predictions and the data was found. (authors)

  19. Resistance to fire of walls constituted by hollow blocks: Experiments and thermal modeling

    International Nuclear Information System (INIS)

    Al Nahhas, F.; Ami Saada, R.; Bonnet, G.; Delmotte, P.

    2007-01-01

    The thermo-mechanical behavior of masonry walls is investigated from both experimental and theoretical points of view. Fire tests have been performed in order to evaluate the thermo-mechanical resistance of a masonry wall submitted to a vertical load (13 ton/m) and exposed to temperatures ranging from 20 to 1200 °C. As a result, we measure the temperature evolution inside the wall and evaluate the vertical and lateral displacements of this wall during heating for a period of 6 h. These results are affected significantly by phase-change phenomena, which appeared as a plateau around °C in the temperature-time curves. A theoretical model was then developed to describe the experimental results, taking into account convection, conduction and radiation phenomena inside the wall. In addition, liquid water migration is considered using an enthalpic method.
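
    The enthalpic treatment of the phase-change plateau can be illustrated with a one-dimensional explicit finite-difference sketch in which temperature is recovered from enthalpy through a piecewise map that holds it constant while latent heat is absorbed. All material properties, the wall thickness, the boundary temperatures and the assumed plateau at 100 degC are illustrative values, not the authors' model.

```python
# Minimal 1-D enthalpy-method conduction sketch: the wall heats up but its
# temperature plateaus while latent heat (assumed water evaporation near
# 100 degC) is absorbed. All parameter values are illustrative assumptions.
import numpy as np

k, rho, c = 1.0, 1800.0, 1000.0        # W/m/K, kg/m3, J/kg/K (assumed)
L = 1.5e5                              # latent heat per kg of wall material (assumed)
T_phase = 100.0                        # assumed plateau temperature, degC
nx, thickness = 50, 0.20               # grid nodes, wall thickness in m
dx = thickness / (nx - 1)
dt = 0.4 * rho * c * dx ** 2 / k       # satisfies the explicit stability limit

def temperature(H):
    """Piecewise enthalpy-to-temperature map with a latent plateau at T_phase."""
    T = H / c
    T = np.where(H > c * T_phase, T_phase, T)
    T = np.where(H > c * T_phase + L, T_phase + (H - c * T_phase - L) / c, T)
    return T

H = np.full(nx, c * 20.0)              # wall initially at a uniform 20 degC
T_hot, T_cold = 1000.0, 20.0           # fire-side and ambient-side temperatures
n_steps = int(6 * 3600 / dt)           # simulate a 6 h fire test

for _ in range(n_steps):
    T = temperature(H)
    T[0], T[-1] = T_hot, T_cold        # Dirichlet boundary conditions
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    H[1:-1] += dt * k * lap / rho      # interior enthalpy update

print("mid-wall temperature after 6 h: %.1f degC" % temperature(H)[nx // 2])
```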

  20. Interferon lambda (IFN-λ) efficiently blocks norovirus transmission in a mouse model.

    Science.gov (United States)

    Rocha-Pereira, Joana; Jacobs, Sophie; Noppen, Sam; Verbeken, Eric; Michiels, Thomas; Neyts, Johan

    2018-01-01

    Human noroviruses are highly efficient in person to person transmission thus associated with explosive outbreaks of acute gastroenteritis. Outbreak control is limited to disinfection and isolation measures. Strategies to control the spread of noroviruses should be developed and models to study norovirus transmission will greatly facilitate this. Here, a mouse-to-mouse transmission model, in which mice develop acute murine norovirus (MNV)-induced diarrhea, was used to explore the role of interferon lambda (IFN-λ) in the control of a norovirus infection. Sentinel AG129 mice [deficient in IFN-α/β and IFN-γ receptors] that were co-housed with MNV-infected mice shedding high amounts of virus in their stool, developed a MNV-infection with associated diarrhea. Inoculation of such sentinel mice with an IFN-λ expression plasmid resulted in the production of circulating IFN-λ and upregulation of the expression of IFN-stimulated genes (ISGs) of the gut. Injection of the IFN-λ-expressing plasmid to sentinels prevents MNV-induced disease upon exposure to MNV-infected mice, as well as MNV replication in the small intestine, the associated signs of inflammation and the mounting of a specific IgG-based immune response. This demonstrates that IFN-λ can alone mediate protection against transmission of norovirus. The development of a simple delivery method for IFN-λ could be explored as a strategy to control norovirus outbreaks and protect vulnerable populations such as the elderly and immunocompromised. Copyright © 2017. Published by Elsevier B.V.

  1. Structures of PEP–PEO Block Copolymer Micelles: Effects of Changing Solvent and PEO Length and Comparison to a Thermodynamic Model

    DEFF Research Database (Denmark)

    Jensen, Grethe Vestergaard; Shi, Qing; Deen, G. Roshan

    2012-01-01

    Structures of poly(ethylene propylene)–poly(ethylene oxide) (PEP–PEO) block copolymer micelles were determined from small-angle X-ray scattering and static light scattering and compared to predictions from a thermodynamic model. Both the corona block length and the solvent water–ethanol ratio were changed, leading to a thorough test of this model. With increasing ethanol fraction, the PEP core–solvent interfacial tension decreases, and the solvent quality for PEO changes. The weight-average block masses were 5.0 kDa for PEP and 2.8–49 kDa for PEO. For the lowest PEO molar mass and samples in pure water (except for the highest PEO molar mass), the micelles were cylindrical; for other conditions they were spherical. The structural parameters can be reasonably well described by the thermodynamic model by Zhulina et al. [Macromolecules 2005, 38 (12), 5330–5351]; however, they have a stronger...

  2. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  3. Novel LIMK2 Inhibitor Blocks Panc-1 Tumor Growth in a mouse xenograft model.

    Science.gov (United States)

    Rak, Roni; Haklai, Roni; Elad-Tzfadia, Galit; Wolfson, Haim J; Carmeli, Shmuel; Kloog, Yoel

    2014-01-01

    LIM kinases (LIMKs) are important cell cytoskeleton regulators that play a prominent role in cancer manifestation and neuronal diseases. The LIMK family consists of two homologues, LIMK1 and LIMK2, which differ from one another in expression profile, intercellular localization, and function. The main substrate of LIMK is cofilin, a member of the actin-depolymerizing factor (ADF) protein family. When phosphorylated by LIMK, cofilin is inactive. LIMKs play a contributory role in several neurodevelopmental disorders and in cancer growth and metastasis. We recently reported the development and validation of a novel LIMK inhibitor, referred to here as T56-LIMKi, using a combination of computational methods and classical biochemistry techniques. Here we report that T56-LIMKi inhibits LIMK2 with high specificity, and shows little or no cross-reactivity with LIMK1. We found that T56-LIMKi decreases phosphorylated cofilin (p-cofilin) levels and thus inhibits growth of several cancerous cell lines, including those of pancreatic cancer, glioma and schwannoma. Because the most promising in-vitro effect of T56-LIMKi was observed in the pancreatic cancer cell line Panc-1, we tested the inhibitor on a nude mouse Panc-1 xenograft model. T56-LIMKi reduced tumor size and p-cofilin levels in the Panc-1 tumors, leading us to propose T56-LIMKi as a candidate drug for cancer therapy.

  4. Dietary folate deficiency blocks prostate cancer progression in the TRAMP model.

    Science.gov (United States)

    Bistulfi, Gaia; Foster, Barbara A; Karasik, Ellen; Gillard, Bryan; Miecznikowski, Jeff; Dhiman, Vineet K; Smiraglia, Dominic J

    2011-11-01

    Dietary folate is essential in all tissues to maintain several metabolite pools and cellular proliferation. Prostate cells, due to specific metabolic characteristics, have increased folate demand to support proliferation and prevent genetic and epigenetic damage. Although several studies have found that dietary folate interventions can affect colon cancer biology in rodent models, its impact on prostate is unknown. The purpose of this study was to determine whether dietary folate manipulation, possibly being of primary importance for prostate epithelial cell metabolism, could significantly affect prostate cancer progression. Strikingly, mild dietary folate depletion arrested prostate cancer progression in 25 of 26 transgenic adenoma of the mouse prostate (TRAMP) mice, in which tumorigenesis is prostate-specific and characteristically aggressive. The significant effect on prostate cancer growth was characterized by size, grade, proliferation, and apoptosis analyses. Folate supplementation had a mild, nonsignificant, beneficial effect on grade. In addition, characterization of folate pools (correlated with serum), metabolite pools (polyamines and nucleotides), genetic and epigenetic damage, and expression of key biosynthetic enzymes in prostate tissue revealed interesting correlations with tumor progression. These findings indicate that prostate cancer is highly sensitive to folate manipulation and suggest that antifolates, paired with current therapeutic strategies, might significantly improve treatment of prostate cancer, the most commonly diagnosed cancer in American men.

  5. Virulence Inhibitors from Brazilian Peppertree Block Quorum Sensing and Abate Dermonecrosis in Skin Infection Models

    Science.gov (United States)

    Muhs, Amelia; Lyles, James T.; Parlet, Corey P.; Nelson, Kate; Kavanaugh, Jeffery S.; Horswill, Alexander R.; Quave, Cassandra L.

    2017-01-01

    Widespread antibiotic resistance is on the rise and current therapies are becoming increasingly limited in both scope and efficacy. Methicillin-resistant Staphylococcus aureus (MRSA) represents a major contributor to this trend. Quorum sensing controlled virulence factors include secreted toxins responsible for extensive damage to host tissues and evasion of the immune system response; they are major contributors to morbidity and mortality. Investigation of botanical folk medicines for wounds and infections led us to study Schinus terebinthifolia (Brazilian Peppertree) as a potential source of virulence inhibitors. Here, we report the inhibitory activity of a flavone rich extract “430D-F5” against all S. aureus accessory gene regulator (agr) alleles in the absence of growth inhibition. Evidence for this activity is supported by its agr-quenching activity (IC50 2–32 μg mL−1) in transcriptional reporters, direct protein outputs (α-hemolysin and δ-toxin), and an in vivo skin challenge model. Importantly, 430D-F5 was well tolerated by human keratinocytes in cell culture and mouse skin in vivo; it also demonstrated significant reduction in dermonecrosis following skin challenge with a virulent strain of MRSA. This study provides an explanation for the anti-infective activity of peppertree remedies and yields insight into the potential utility of non-biocide virulence inhibitors in treating skin infections. PMID:28186134

  6. Cyclooxygenase-2 inhibition blocks M2 macrophage differentiation and suppresses metastasis in murine breast cancer model.

    Directory of Open Access Journals (Sweden)

    Yi-Rang Na

    Tumor cells are often associated with abundant macrophages that resemble the alternatively activated M2 subset. Tumor-associated macrophages (TAMs) inhibit anti-tumor immune responses and promote metastasis. Cyclooxygenase-2 (COX-2) inhibition is known to prevent breast cancer metastasis. This study hypothesized that COX-2 inhibition affects TAM characteristics potentially relevant to tumor cell metastasis. We found that the specific COX-2 inhibitor, etodolac, inhibited human M2 macrophage differentiation, as determined by decreased CD14 and CD163 expression and increased TNFα production. Several key metastasis-related mediators, such as vascular endothelial growth factor-A, vascular endothelial growth factor-C, and matrix metalloproteinase-9, were inhibited in the presence of etodolac as compared to untreated M2 macrophages. Murine bone marrow derived M2 macrophages also showed enhanced surface MHCII IA/IE, CD80 and CD86 expression together with enhanced TNFα expression upon etodolac treatment during differentiation. Using a BALB/c breast cancer model, we found that etodolac significantly reduced lung metastasis, possibly due to macrophages expressing increased IA/IE and TNFα, but decreased expression of M2 macrophage-related genes (Ym1, TGFβ). In conclusion, COX-2 inhibition caused loss of the M2 macrophage characteristics of TAMs and may assist prevention of breast cancer metastasis.

  7. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

  8. Self-assembled structures of amphiphilic ionic block copolymers: Theory, self-consistent field modeling and experiment

    NARCIS (Netherlands)

    Borisov, O.V.; Zhulina, E.B.; Leermakers, F.A.M.; Muller, A.H.E.

    2011-01-01

    We present an overview of statistical thermodynamic theories that describe the self-assembly of amphiphilic ionic/hydrophobic diblock copolymers in dilute solution. Block copolymers with both strongly and weakly dissociating (pH-sensitive) ionic blocks are considered. We focus mostly on structural

  9. Exploration of government policy structure which support and block energy transition process in Indonesia using system dynamics model

    Science.gov (United States)

    Destyanto, A. R.; Silalahi, T. D.; Hidayatno, A.

    2017-11-01

    System dynamics modeling is widely used to predict and simulate energy systems in several countries. Among its applications are the evaluation of national energy policy alternatives and energy efficiency analysis. Using system dynamics modeling, this research evaluates the energy transition policy implemented in Indonesia in the past conversion program from kerosene to LPG for household cooking fuel, which is considered a successful energy transition program implemented since 2007. This research is important since Indonesia is considered not yet to have succeeded in executing another energy transition program, the conversion from oil fuel to gas fuel for transportation, which started in 1989. The aim of this research is to explore which policy interventions contributed significantly to supporting or even blocking the conversion program. Findings from the simulation show that the policy interventions of withdrawing the kerosene supply and of the government push to increase the production capacity of the supporting equipment industries (gas stoves, regulators, and LPG cylinders) had the main influence on the success of the conversion program.
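
    A minimal stock-and-flow sketch (Euler integration) conveys the kind of structure such a system dynamics model uses: a stock of kerosene households converting to LPG at a rate limited both by willingness (raised when kerosene supply is withdrawn) and by the capacity of the supporting equipment industries. The structure, parameter values and horizon below are invented for illustration and are not the paper's model.

```python
# Tiny stock-and-flow sketch (Euler integration) of a kerosene-to-LPG
# conversion programme with two policy levers. All numbers are assumptions.
import numpy as np

dt = 0.25                                  # quarter-year time step
kerosene_hh = 40e6                         # households still using kerosene (assumed)
lpg_hh = 0.0

withdraw_kerosene = True                   # policy lever 1: cut kerosene supply
equipment_capacity = 3e6                   # policy lever 2: starter kits per year (assumed)

for t in np.arange(2007.0, 2020.0 + dt, dt):
    # Conversion is limited both by household willingness and by equipment supply.
    willingness = 0.30 if withdraw_kerosene else 0.10      # fraction per year
    conversion_flow = min(willingness * kerosene_hh, equipment_capacity)
    kerosene_hh -= conversion_flow * dt
    lpg_hh += conversion_flow * dt

print("by 2020: %.1f M households on LPG, %.1f M still on kerosene"
      % (lpg_hh / 1e6, kerosene_hh / 1e6))
```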

  10. Blocking and Blending: Different Assembly Models of Cyclodextrin and Sodium Caseinate at the Oil/Water Interface.

    Science.gov (United States)

    Xu, Hua-Neng; Liu, Huan-Huan; Zhang, Lianfu

    2015-08-25

    The stability of cyclodextrin (CD)-based emulsions is attributed to the formation of a solid film of oil-CD complexes at the oil/water interface. However, competitive interactions between CDs and other components at the interface still need to be understood. Here we develop two different routes that allow the incorporation of a model protein (sodium caseinate, SC) into emulsions based on β-CD. In one route, the components adsorb simultaneously from a mixed solution to the oil/water interface (route I); in the other, SC is added to a previously established CD-stabilized interface (route II). The adsorption mechanism of β-CD modified by SC at the oil/water interface is investigated by rheological and optical methods. Strong sensitivity of the rheological behavior to the routes is indicated by both steady-state and small-deformation oscillatory experiments. Possible β-CD/SC interaction models at the interface are proposed. In route I, the protein, due to its higher affinity for the interface, adsorbs strongly at the interface, blocking the adsorption of β-CD and the formation of oil-CD complexes. In route II, the protein penetrates and blends into the preadsorbed layer of oil-CD complexes already formed at the interface. This revelation of interfacial assembly is expected to help better understand CD-based emulsions in natural systems and improve their designs in engineering applications.

  11. Sequential infiltration synthesis for advanced lithography

    Energy Technology Data Exchange (ETDEWEB)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

    2017-10-10

    A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

  12. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  13. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
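
    The idea of inserting intermediate weighting and resampling steps between observation times can be sketched on a one-dimensional Gaussian random walk with a very precise observation: each observation's likelihood is introduced in fractional powers, with resampling and a small jitter move after each fraction so particles are guided toward the data. The model, the tempering schedule and the jitter move are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of SMC with intermediate weighting/resampling steps (tempering a
# highly informative observation) for a 1-D Gaussian random walk.
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_y = 1.0, 0.05          # state noise vs. very precise observations
T, N, K = 20, 500, 5                  # time steps, particles, intermediate steps

x_true = np.cumsum(sigma_x * rng.standard_normal(T))   # hidden path
y = x_true + sigma_y * rng.standard_normal(T)          # observations

def resample(particles, logw):
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = np.zeros(N)
means = []
for t in range(T):
    particles = particles + sigma_x * rng.standard_normal(N)    # propagate
    for _ in range(K):                 # introduce the observation in K fractions
        logw = -(0.5 / K) * ((y[t] - particles) / sigma_y) ** 2
        particles = resample(particles, logw)
        particles += 0.1 * sigma_y * rng.standard_normal(N)     # diversity jitter (assumed move)
    means.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(means) - x_true) ** 2))
print("RMSE of filtering means against the true path: %.3f" % rmse)
```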

  14. Rule-Blocking and Forward-Looking Conditions in the Computational Modelling of Pāṇinian Derivation

    Science.gov (United States)

    Scharf, Peter M.

    Attempting to model Pāṇinian procedure computationally forces one to clarify concepts explicitly and allows one to test various versions and interpretations of his grammar against each other and against bodies of extant Sanskrit texts. To model Pāṇinian procedure requires creating data structures and a framework that allow one to approximate the statement of Pāṇinian rules in an executable language. Scharf (2009: 117-125) provided a few examples of how rules would be formulated in a computational model of Pāṇinian grammar as opposed to in software that generated speech forms without regard to Pāṇinian procedure. Mishra (2009) described the extensive use of attributes to track classification, marking and other features of phonetic strings. Goyal, Kulkarni, and Behera (2009, especially sec. 3.5) implemented a model of the asiddhavat section of rules (6.4.22-129) in which the state of the data passed to rules of the section is maintained unchanged and is utilized by those rules as conditions, yet the rules of the section are applied in parallel, and the result of all applicable rules applying exits the section. The current paper describes Scharf and Hyman's implementation of rule blocking and forward-looking conditions. The former deals with complex groups of rules concerned with domains included within the scope of a general rule. The latter concerns a case where a decision at an early stage in the derivation requires evaluation of conditions that do not obtain until a subsequent stage in the derivation.
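
    The rule-blocking idea, where a more specific rule pre-empts a general rule that would otherwise apply, can be illustrated with a toy rule engine. The rules, conditions and data structures below are invented for illustration; they are neither Scharf and Hyman's implementation nor genuine Pāṇinian sutras.

```python
# Toy illustration of rule blocking: within a group, a more specific rule that
# matches pre-empts ("blocks") the general rule. Rules and forms are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[str], bool]
    action: Callable[[str], str]
    specificity: int          # higher = more specific; wins within its group

def apply_group(form: str, group: list) -> str:
    """Apply the single most specific applicable rule of a blocking group."""
    applicable = [r for r in group if r.condition(form)]
    if not applicable:
        return form
    winner = max(applicable, key=lambda r: r.specificity)
    return winner.action(form)

# A general rule and a specific exception that blocks it (both invented).
general = Rule("general: final a -> o",
               lambda f: f.endswith("a"),
               lambda f: f[:-1] + "o", specificity=1)
specific = Rule("exception: final ka unchanged",
                lambda f: f.endswith("ka"),
                lambda f: f, specificity=2)

group = [general, specific]
for form in ["deva", "nayaka"]:
    print(form, "->", apply_group(form, group))
# "deva" falls only under the general rule; "nayaka" also satisfies the
# exception, which blocks the general rule and leaves the form unchanged.
```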

  15. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  16. Rapamycin targeting mTOR and hedgehog signaling pathways blocks human rhabdomyosarcoma growth in xenograft murine model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylani, Samer Z. [Division of Hematology and Oncology, Department of Pediatrics, University of Alabama at Birmingham, 1600 7th Avenue South, ACC 414, Birmingham, AL 35233 (United States); Xu, Jianmin; Srivastava, Ritesh K. [Department of Dermatology and Skin Diseases Research Center, University of Alabama at Birmingham, 1530 3rd Avenue South, VH 509, Birmingham, AL 35294-0019 (United States); Kopelovich, Levy [Division of Cancer Prevention, National Cancer Institute, Bethesda (United States); Pressey, Joseph G. [Division of Hematology and Oncology, Department of Pediatrics, University of Alabama at Birmingham, 1600 7th Avenue South, ACC 414, Birmingham, AL 35233 (United States); Athar, Mohammad, E-mail: mathar@uab.edu [Department of Dermatology and Skin Diseases Research Center, University of Alabama at Birmingham, 1530 3rd Avenue South, VH 509, Birmingham, AL 35294-0019 (United States)

    2013-06-14

    Graphical abstract: Intervention of poorly differentiated RMS by rapamycin: In poorly differentiated RMS, rapamycin blocks mTOR and Hh signaling pathways concomitantly. This leads to dampening in cell cycle regulation and induction of apoptosis. This study provides a rationale for the therapeutic intervention of poorly differentiated RMS by treating patients with rapamycin alone or in combination with other chemotherapeutic agents. -- Highlights: •Rapamycin abrogates RMS tumor growth by modulating proliferation and apoptosis. •Co-targeting mTOR/Hh pathways underlie the molecular basis of effectiveness. •Reduction in mTOR/Hh pathways diminish EMT leading to reduced invasiveness. -- Abstract: Rhabdomyosarcomas (RMS) represent the most common childhood soft-tissue sarcoma. Over the past few decades outcomes for low and intermediate risk RMS patients have slowly improved while patients with metastatic or relapsed RMS still face a grim prognosis. New chemotherapeutic agents or combinations of chemotherapies have largely failed to improve the outcome. Based on the identification of novel molecular targets, potential therapeutic approaches in RMS may offer a decreased reliance on conventional chemotherapy. Thus, identification of effective therapeutic agents that specifically target relevant pathways may be particularly beneficial for patients with metastatic and refractory RMS. The PI3K/AKT/mTOR pathway has been found to be a potentially attractive target in RMS therapy. In this study, we provide evidence that rapamycin (sirolimus) abrogates growth of RMS development in a RMS xenograft mouse model. As compared to a vehicle-treated control group, more than 95% inhibition in tumor growth was observed in mice receiving parenteral administration of rapamycin. The residual tumors in rapamycin-treated group showed significant reduction in the expression of biomarkers indicative of proliferation and tumor invasiveness. These tumors also showed enhanced apoptosis

  17. Seamount characteristics and mine-site model applied to exploration- and mining-lease-block selection for cobalt-rich ferromanganese crusts

    Science.gov (United States)

    Hein, James R.; Conrad, Tracey A.; Dunham, Rachel E.

    2009-01-01

    Regulations are being developed through the International Seabed Authority (ISBA) for the exploration and mining of cobalt-rich ferromanganese crusts. This paper lays out geologic and geomorphologic criteria that can be used to determine the size and number of exploration and mine-site blocks that will be the focus of much discussion within the ISBA Council deliberations. The surface areas of 155 volcanic edifices in the central equatorial Pacific were measured and used to develop a mine-site model. The mine-site model considers areas above 2,500 m water depth as permissive, and narrows the general area available for exploration and mining to 20% of that permissive area. It is calculated that about eighteen 100 km2 exploration blocks, each composed of five 20 km2 contiguous sub-blocks, would be adequate to identify a 260 km2 20-year mine site; the mine site would be composed of thirteen of the 20 km2 sub-blocks. In this hypothetical example, the 260 km2 mine site would be spread over four volcanic edifices and comprise 3.7% of the permissive area of the four edifices and 0.01% of the total area of those four edifices. The eighteen 100 km2 exploration blocks would be selected from a limited geographic area. That confinement area is defined as having a long dimension of not more than 1,000 km and an area of not more than 300,000 km2.

  18. Sequential use of the STICS crop model and of the MACRO pesticide fate model to simulate pesticides leaching in cropping systems.

    Science.gov (United States)

    Lammoglia, Sabine-Karen; Moeys, Julien; Barriuso, Enrique; Larsbo, Mats; Marín-Benito, Jesús-María; Justes, Eric; Alletto, Lionel; Ubertosi, Marjorie; Nicolardot, Bernard; Munier-Jolain, Nicolas; Mamy, Laure

    2017-03-01

    The current challenge in sustainable agriculture is to introduce new cropping systems that reduce pesticide use and thereby limit ground and surface water contamination. However, it is difficult to carry out in situ experiments to assess the environmental impacts of pesticide use for all possible combinations of climate, crop, and soil; therefore, in silico tools are necessary. The objective of this work was to assess pesticide leaching in cropping systems by coupling a crop model (STICS) with a pesticide fate model (MACRO). STICS-MACRO has the advantage of being able to simulate pesticide fate in complex cropping systems and to consider agricultural practices such as fertilization, mulching, or crop residue management, which cannot be accounted for with MACRO alone. The performance of STICS-MACRO was tested, without calibration, against measurements from two French experimental sites with contrasting soil and climate properties. The prediction of water percolation and pesticide concentrations with STICS-MACRO was satisfactory, but it varied with the pedoclimatic context. The performance of STICS-MACRO was shown to be similar to or better than that of MACRO. The improved simulation of crop growth gave better estimates of crop transpiration and therefore of the water balance. It also gave a better estimate of pesticide interception by the crop, which was found to be crucial for the prediction of pesticide concentrations in water. STICS-MACRO is a promising new tool to improve the assessment of the environmental risks of pesticides used in cropping systems.

  19. A Slicing Tree Representation and QCP-Model-Based Heuristic Algorithm for the Unequal-Area Block Facility Layout Problem

    Directory of Open Access Journals (Sweden)

    Mei-Shiang Chang

    2013-01-01

    The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Because of the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic method. The proposed harmony-search-based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithm, tabu search, and ant system as well as exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For other problems, the proposed approach can find solutions that are very similar to previous best-known solutions.
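
    A minimal sketch of the generic harmony search metaheuristic referred to above, written for a continuous test function in Python with NumPy; the slicing-tree encoding, regional harmony memory, and QCP model of the paper are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=5000, seed=0):
    """Minimal continuous harmony search: keep a memory of candidate solutions
    and improvise new ones from it (the generic metaheuristic only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    memory = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
    scores = np.array([objective(x) for x in memory])
    for _ in range(iterations):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                        # pick from memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                     # pitch adjustment
                    new[j] += bandwidth * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                          # random improvisation
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        f = objective(new)
        worst = np.argmax(scores)
        if f < scores[worst]:                              # replace worst harmony
            memory[worst], scores[worst] = new, f
    best = np.argmin(scores)
    return memory[best], scores[best]

# toy usage: minimise a shifted sphere function in four dimensions
x_best, f_best = harmony_search(lambda x: np.sum((x - 1.5) ** 2),
                                bounds=[(-5, 5)] * 4)
```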

  20. Experimental study of core bypass flow in a prismatic VHTR based on a two-layer block model

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huhu, E-mail: huhuwang@tamu.edu; Hassan, Yassin A., E-mail: y-hassan@tamu.edu; Dominguez-Ontiveros, Elvis, E-mail: elvisdom@tamu.edu

    2016-09-15

    Bypass flow in a prismatic very high temperature gas-cooled nuclear reactor (VHTR) plays an important role in determining the coolant distribution in the core region. Efficient removal of heat from the core relies on the majority of coolant passing through the coolant channels instead of the bypass gaps. Consequently, the bypass flow fraction and its flow characteristics are important in the design process of the prismatic VHTR. The objective of this study is to experimentally investigate the flow behavior, including the turbulence characteristics inside the bypass gaps, using laser Doppler velocimetry (LDV), as well as the bypass fraction and pressure drops in the system. The experimental facility constructed at Texas A&M University is a scaled model consisting of two layers of fuel blocks. The distributions of the mean streamwise velocity, turbulence intensity and turbulence kinetic energy within the bypass gap at two different elevations under different Reynolds numbers were investigated. Uncertainties in the bypass flow fraction estimation were evaluated. The velocity and turbulence study in this work is considered to be unique, and may serve as a benchmark for related numerical calculations.

  1. Molecular modeling of directed self-assembly of block copolymers: Fundamental studies of processing conditions and evolutionary pattern design

    Science.gov (United States)

    Khaira, Gurdaman Singh

    Rapid progress in the semi-conductor industry has pushed for smaller feature sizes on integrated electronic circuits. Current photo-lithographic techniques for nanofabrication have reached their technical limit and are problematic when printing features small enough to meet future industrial requirements. "Bottom-up" techniques, such as the directed self-assembly (DSA) of block copolymers (BCP), are the primary contenders to complement current "top-down" photo-lithography ones. For industrial requirements, the defect density from DSA needs to be less than 1 defect per 10 cm by 10 cm. Knowledge of both material synthesis and the thermodynamics of the self-assembly process is required before optimal operating conditions can be found to produce results adequate for industry. The work presented in this thesis is divided into three chapters, each discussing various aspects of DSA as studied via a molecular model that contains the essential physics of BCP self-assembly. Though there are various types of guiding fields that can be used to direct BCPs over large wafer areas with minimum defects, this study focuses only on chemically patterned substrates. The first chapter addresses optimal pattern design by describing a framework where molecular simulations of various complexities are coupled with an advanced optimization technique to find a pattern that directs a target morphology. It demonstrates the first study in which BCP self-assembly on a patterned substrate is optimized using a three-dimensional description of the block copolymers. For problems pertaining to DSA, the methodology is shown to converge much faster than the traditional random search approach. The second chapter discusses the metrology of BCP thin films using TEM tomography and X-ray scattering techniques, such as CDSAXS and GISAXS. X-ray scattering has the advantage of being able to quickly probe the average structure of BCP morphologies over large wafer areas; however, deducing the BCP morphology

  2. 3D seismic modeling and reverse‐time migration with the parallel Fourier method using non‐blocking collective communications

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam

    2009-01-01

    The major performance bottleneck of the parallel Fourier method on distributed memory systems is the network communication cost. In this study, we investigate the potential of using non‐blocking all‐to‐all communications to solve this problem
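
    A hedged illustration of overlapping a non-blocking all-to-all exchange with local work, assuming mpi4py on top of an MPI-3 implementation; the buffer sizes and the interleaved computation are placeholders, not the paper's parallel Fourier-method code.

```python
# Run with an MPI launcher, e.g.:  mpirun -n 4 python demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

# One slab of data destined for every other rank (shapes are illustrative).
sendbuf = np.full((size, 1024), rank, dtype=np.float64)
recvbuf = np.empty_like(sendbuf)

req = comm.Ialltoall(sendbuf, recvbuf)   # start the exchange without waiting

# ... overlap: do local work on data already in hand while the network is busy ...
local_work = np.fft.fft(np.random.rand(4096))

req.Wait()                               # communication must complete before recvbuf is used
```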

  3. Examining the Self-Assembly of Rod-Coil Block Copolymers via Physics Based Polymer Models and Polarized X-Ray Scattering

    Science.gov (United States)

    Hannon, Adam; Sunday, Daniel; Windover, Donald; Liman, Christopher; Bowen, Alec; Khaira, Gurdaman; de Pablo, Juan; Delongchamp, Dean; Kline, R. Joseph

    Photovoltaics, flexible electronics, and stimuli-responsive materials all require enhanced methodology to examine their nanoscale molecular orientation. The mechanical, electronic, optical, and transport properties of devices made from these materials are all a function of this orientation. The polymer chains in these materials are best modeled as semi-flexible to rigid rods. Characterizing the rigidity and molecular orientation of these polymers non-invasively is currently being pursued by using polarized resonant soft X-ray scattering (P-RSoXS). In this presentation, we show recent work on implementing such a characterization process using a rod-coil block copolymer system in the rigid-rod limit. We first demonstrate how we have used physics-based models such as self-consistent field theory (SCFT) in non-polarized RSoXS work to fit scattering profiles for thin-film coil-coil PS-b-PMMA block copolymer systems. We then show that, by using a wormlike chain partition function in the SCFT formalism to model the rigid-rod block, the methodology can be used there as well to extract the molecular orientation of the rod block from a simulated P-RSoXS experiment. The results from this work show the potential of the technique to extract thermodynamic and morphological sample information.

  4. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of ''statistical inference'' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats concerning sequential designs are considered, especially in relation to utilitarianism

  5. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
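
    A small NumPy sketch of the standard forward/backward recursion for the diagonal blocks of the inverse of a block tridiagonal matrix, included to make the structure exploited above concrete; it does not implement the paper's additional nearly-block-Toeplitz optimization, and the test matrix is an illustrative assumption.

```python
import numpy as np

def diagonal_blocks_of_inverse(A_diag, A_lower, A_upper):
    """Diagonal blocks of inv(A) for block tridiagonal A.
    A_diag[i] is block (i,i); A_lower[i] is block (i+1,i); A_upper[i] is block (i,i+1)."""
    n, b = len(A_diag), A_diag[0].shape[0]
    sigma_L = [np.zeros((b, b)) for _ in range(n)]   # absorbs everything left of block i
    sigma_R = [np.zeros((b, b)) for _ in range(n)]   # absorbs everything right of block i
    for i in range(1, n):
        sigma_L[i] = A_lower[i - 1] @ np.linalg.inv(A_diag[i - 1] - sigma_L[i - 1]) @ A_upper[i - 1]
    for i in range(n - 2, -1, -1):
        sigma_R[i] = A_upper[i] @ np.linalg.inv(A_diag[i + 1] - sigma_R[i + 1]) @ A_lower[i]
    return [np.linalg.inv(A_diag[i] - sigma_L[i] - sigma_R[i]) for i in range(n)]

# quick self-check of the first diagonal block against a dense inverse
rng = np.random.default_rng(1)
n, b = 5, 3
D = [rng.random((b, b)) + 4 * np.eye(b) for _ in range(n)]
L = [rng.random((b, b)) * 0.1 for _ in range(n - 1)]
U = [rng.random((b, b)) * 0.1 for _ in range(n - 1)]
A = np.zeros((n * b, n * b))
for i in range(n):
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D[i]
for i in range(n - 1):
    A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = L[i]
    A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = U[i]
assert np.allclose(diagonal_blocks_of_inverse(D, L, U)[0], np.linalg.inv(A)[:b, :b])
```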

  6. Sequential flavor symmetry breaking

    International Nuclear Information System (INIS)

    Feldmann, Thorsten; Jung, Martin; Mannel, Thomas

    2009-01-01

    The gauge sector of the standard model exhibits a flavor symmetry that allows for independent unitary transformations of the fermion multiplets. In the standard model the flavor symmetry is broken by the Yukawa couplings to the Higgs boson, and the resulting fermion masses and mixing angles show a pronounced hierarchy. In this work we connect the observed hierarchy to a sequence of intermediate effective theories, where the flavor symmetries are broken in a stepwise fashion by vacuum expectation values of suitably constructed spurion fields. We identify the possible scenarios in the quark sector and discuss some implications of this approach.

  7. Sequential flavor symmetry breaking

    Science.gov (United States)

    Feldmann, Thorsten; Jung, Martin; Mannel, Thomas

    2009-08-01

    The gauge sector of the standard model exhibits a flavor symmetry that allows for independent unitary transformations of the fermion multiplets. In the standard model the flavor symmetry is broken by the Yukawa couplings to the Higgs boson, and the resulting fermion masses and mixing angles show a pronounced hierarchy. In this work we connect the observed hierarchy to a sequence of intermediate effective theories, where the flavor symmetries are broken in a stepwise fashion by vacuum expectation values of suitably constructed spurion fields. We identify the possible scenarios in the quark sector and discuss some implications of this approach.

  8. Convergence and resolution recovery of block-iterative EM algorithms modeling 3D detector response in SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.

    1996-01-01

    We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
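
    A toy NumPy sketch of the multiplicative OS-EM update discussed above, with subsets formed by striding over projection bins; the system matrix, subset scheme, and problem sizes are illustrative assumptions, and no 3D detector response is modeled.

```python
import numpy as np

def os_em(A, y, n_subsets=8, n_iters=5, eps=1e-12):
    """Ordered-subsets EM for y ≈ A x with Poisson data.
    One pass over all subsets counts as one iteration."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            A_s, y_s = A[idx], y[idx]
            ratio = y_s / (A_s @ x + eps)                  # measured / predicted
            x *= (A_s.T @ ratio) / (A_s.T @ np.ones(len(idx)) + eps)
    return x

# toy usage with a random system matrix and Poisson-distributed projections
rng = np.random.default_rng(0)
A = rng.random((240, 60))
x_true = rng.random(60)
y = rng.poisson(A @ x_true * 50) / 50.0
x_hat = os_em(A, y)
```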

  9. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension … and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  10. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra.

  11. Estimation of interplate coupling along Nankai trough considering the block motion model based on onland GNSS and seafloor GPS/A observation data using MCMC method

    Science.gov (United States)

    Kimura, H.; Ito, T.; Tadokoro, K.

    2017-12-01

    Introduction: In southwest Japan, the Philippine Sea plate is subducting beneath the overriding plate, such as the Amurian plate, and mega interplate earthquakes have occurred at intervals of about 100 years. No mega interplate earthquake has occurred in southwest Japan in the roughly 70 years since the last events of 1944 and 1946 along the Nankai trough, meaning that strain has been accumulating at the plate interface. Therefore, it is essential to resolve the interplate coupling more precisely for predicting or understanding the mechanism of the next mega interplate earthquake. Recently, seafloor geodetic observation revealed the detailed interplate coupling distribution in the expected source region of the Nankai trough earthquake (e.g., Yokota et al. [2016]). In this study, we estimated interplate coupling in southwest Japan, considering a block motion model and using seafloor geodetic observation data as well as onland GNSS observation data, based on the Markov Chain Monte Carlo (MCMC) method. Method: Observed crustal deformation is assumed to be the sum of rigid block motion and elastic deformation due to coupling at block boundaries. We modeled this relationship as a non-linear inverse problem in which the unknown parameters are the Euler pole of each block and the coupling at each subfault, and solved for them simultaneously based on the MCMC method. The input data used in this study are 863 onland GNSS observations and 24 seafloor GPS/A observations. We constructed several block division models based on maps of active fault traces and selected the best model based on Akaike's Information Criterion (AIC); it consists of 12 blocks. Result: We find that the interplate coupling along the Nankai trough has a heterogeneous spatial distribution, strong at depths of 0 to 20 km off the Tokai region and 0 to 30 km off the Shikoku region. Moreover, we find that the observed crustal deformation off the Tokai region is well explained by elastic deformation due to the subducting Izu Micro
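
    A generic random-walk Metropolis sampler in NumPy, shown only to illustrate the MCMC machinery mentioned in the Method paragraph; the block-motion/coupling parameterisation, priors, and data of the study are not represented, and the toy target is an illustrative assumption.

```python
import numpy as np

def metropolis(log_posterior, x0, step, n_samples=50000, seed=0):
    """Random-walk Metropolis sampler over an unnormalised log posterior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)      # symmetric proposal
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:         # accept/reject
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

# toy usage: posterior of a mean given Gaussian data with a flat prior
data = np.random.default_rng(1).normal(3.0, 1.0, size=100)
chain = metropolis(lambda m: -0.5 * np.sum((data - m) ** 2), np.zeros(1), step=0.2)
print(chain[10000:].mean())   # close to the sample mean, about 3.0
```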

  12. Analysis of Block OMP using Block RIP

    OpenAIRE

    Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin

    2011-01-01

    Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of OMP algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than standard restricted isometry property (RIP), can be used for a very straightforw...
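
    A short NumPy sketch of the Block OMP selection-and-refit loop described above: the block of columns with the largest residual correlation norm is added, then a least-squares refit is done over all selected blocks. Dictionary sizes and the toy block-sparse signal are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def block_omp(Phi, y, block_size, n_blocks_to_pick):
    """Greedy Block OMP for y ≈ Phi x with block-sparse x."""
    n_atoms = Phi.shape[1]
    blocks = [np.arange(b, b + block_size) for b in range(0, n_atoms, block_size)]
    selected, residual = [], y.copy()
    x = np.zeros(n_atoms)
    for _ in range(n_blocks_to_pick):
        corr = Phi.T @ residual
        scores = [np.linalg.norm(corr[b]) for b in blocks]   # l2 norm per block
        best = int(np.argmax(scores))
        if best not in selected:
            selected.append(best)
        cols = np.concatenate([blocks[b] for b in selected])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        x = np.zeros(n_atoms)
        x[cols] = coef
        residual = y - Phi @ x
    return x

# toy block-sparse recovery: two active blocks of size 4 out of 20
rng = np.random.default_rng(3)
Phi = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[8:12], x_true[40:44] = 2.0, -1.0
x_hat = block_omp(Phi, Phi @ x_true, block_size=4, n_blocks_to_pick=2)
```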

  13. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which requires no additional features within the modeling process, i.e., it can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
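
    For concreteness, a minimal stochastic Ensemble Kalman Filter analysis step in NumPy; the state vector, observation operator, and error level are placeholders rather than the HBV/streamflow configuration used in the study.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_states); obs: (n_obs,); H: (n_obs, n_states)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    HX = ensemble @ H.T
    Y = HX - HX.mean(axis=0)                        # predicted-observation anomalies
    R = np.eye(len(obs)) * obs_err_std**2
    P_xy = X.T @ Y / (n_members - 1)
    P_yy = Y.T @ Y / (n_members - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    obs_pert = obs + rng.normal(0.0, obs_err_std, size=(n_members, len(obs)))
    return ensemble + (obs_pert - HX) @ K.T

# toy usage: 50 members, 3 state variables, 1 observed variable
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 3))
H = np.array([[1.0, 0.0, 0.0]])
updated = enkf_update(ens, obs=np.array([0.7]), H=H, obs_err_std=0.1, rng=rng)
```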

  14. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the sequential product A∘B = A^{1/2}BA^{1/2} was proposed as a model for sequential quantum measurements. A thorough investigation of the properties of the sequential product was carried out in [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 of that reference is closed. In addition, some properties of the generalized infimum A ⊓ B are studied
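
    A small numerical check of the sequential product A∘B = A^{1/2}BA^{1/2}, assuming NumPy and SciPy; it only verifies that the product of two randomly generated effects is again an effect, and does not reproduce any result of the note.

```python
import numpy as np
from scipy.linalg import sqrtm

def sequential_product(A, B):
    """A ∘ B = A^{1/2} B A^{1/2} for effects (0 <= A, B <= I)."""
    rootA = sqrtm(A)
    return rootA @ B @ rootA

def is_effect(E, tol=1e-10):
    E = (E + E.conj().T) / 2                 # symmetrise against round-off
    eig = np.linalg.eigvalsh(E)
    return eig.min() >= -tol and eig.max() <= 1 + tol

rng = np.random.default_rng(0)
def random_effect(n=3):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    P = M @ M.conj().T
    return P / (np.linalg.eigvalsh(P).max() + 0.1)   # scale so that 0 <= P <= I

A, B = random_effect(), random_effect()
print(is_effect(sequential_product(A, B)))           # True: A∘B is again an effect
```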

  15. Assessment of left ventricular mechanical dyssynchrony in left bundle branch block canine model: Comparison between cine and tagged MRI.

    Science.gov (United States)

    Saporito, Salvatore; van Assen, Hans C; Houthuizen, Patrick; Aben, Jean-Paul M M; Strik, Marc; van Middendorp, Lars B; Prinzen, Frits W; Mischi, Massimo

    2016-10-01

    To compare cine and tagged magnetic resonance imaging (MRI) for left ventricular dyssynchrony assessment in left bundle branch block (LBBB), using the time-to-peak contraction timing, and a novel approach based on cross-correlation. We evaluated a canine model dataset (n = 10) before (pre-LBBB) and after induction of isolated LBBB (post-LBBB). Multislice short-axis tagged and cine MRI images were acquired using a 1.5 T scanner. We computed contraction time maps by cross-correlation, based on the timing of radial wall motion and of circumferential strain. Finally, we estimated dyssynchrony as the standard deviation of the contraction time over the different regions of the myocardium. Induction of LBBB resulted in a significant increase in dyssynchrony (cine: 13.0 ± 3.9 msec for pre-LBBB, and 26.4 ± 5.0 msec for post-LBBB, P = 0.005; tagged: 17.1 ± 5.0 msec for pre-LBBB, and 27.9 ± 9.8 msec for post-LBBB, P = 0.007). Dyssynchrony assessed by cine and tagged MRI was in agreement (r = 0.73, P = 0.0003); differences were on the order of the 20-msec time difference between successive frames (bias: -2.9 msec; limit of agreement: 10.1 msec). Contraction time maps were derived; agreement was found in the contraction patterns derived from cine and tagged MRI (mean difference in contraction time per segment: 3.6 ± 13.7 msec). This study shows that the proposed method is able to quantify dyssynchrony after induced LBBB in an animal model. Cine-assessed dyssynchrony agreed with tagged-derived dyssynchrony, in terms of magnitude and spatial direction. J. MAGN. RESON. IMAGING 2016;44:956-963. © 2016 International Society for Magnetic Resonance in Medicine.
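
    A minimal sketch of estimating a contraction-time delay as the lag of maximum cross-correlation between two regional wall-motion curves, assuming NumPy; the 20-ms frame interval is taken from the abstract, while the curves themselves are synthetic.

```python
import numpy as np

def contraction_delay(ref, sig, dt):
    """Delay of `sig` relative to `ref`, estimated as the lag that maximises
    their cross-correlation; dt is the frame interval (about 20 ms here)."""
    a = ref - ref.mean()
    b = sig - sig.mean()
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)
    return lag * dt

# toy usage: a smooth "wall motion" curve and a copy delayed by 3 frames
t = np.arange(50)
ref = np.exp(-0.5 * ((t - 20) / 6.0) ** 2)
sig = np.roll(ref, 3)
print(contraction_delay(ref, sig, dt=20.0))   # about 60 ms
```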

  16. Effects of channel blocking on information transmission and energy efficiency in squid giant axons.

    Science.gov (United States)

    Liu, Yujiang; Yue, Yuan; Yu, Yuguo; Liu, Liwei; Yu, Lianchun

    2018-04-01

    Action potentials are the information carriers of neural systems. The generation of action potentials involves the cooperative opening and closing of sodium and potassium channels. This process is metabolically expensive because the ions flowing through open channels need to be restored to maintain concentration gradients of these ions. Toxins like tetraethylammonium can block working ion channels, thus affecting the function and energy cost of neurons. In this paper, by computer simulation of the Hodgkin-Huxley neuron model, we studied the effects of channel blocking with toxins on the information transmission and energy efficiency in squid giant axons. We found that gradually blocking sodium channels will sequentially maximize the information transmission and energy efficiency of the axons, whereas moderate blocking of potassium channels will have little impact on the information transmission and will decrease the energy efficiency. Heavy blocking of potassium channels will cause self-sustained oscillation of membrane potentials. Simultaneously blocking sodium and potassium channels with the same ratio increases both information transmission and energy efficiency. Our results are in line with previous studies suggesting that information processing capacity and energy efficiency can be maximized by regulating the number of active ion channels, and this indicates a viable avenue for future experimentation.
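
    A compact Hodgkin-Huxley sketch in which channel blocking is represented simply as a scaling of the maximal Na+ and K+ conductances, assuming standard textbook HH parameters and Euler integration; it is not the authors' simulation code and computes only a spike count.

```python
import numpy as np

def hh_spike_count(block_na=0.0, block_k=0.0, i_stim=10.0, t_max=200.0, dt=0.01):
    """Hodgkin-Huxley point neuron; block_na/block_k are the blocked channel fractions."""
    gNa, gK, gL = 120.0 * (1 - block_na), 36.0 * (1 - block_k), 0.3   # mS/cm^2
    ENa, EK, EL, C = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (i_stim - I_ion) / C
        if V > 0 and not above:          # count upward threshold crossings as spikes
            spikes, above = spikes + 1, True
        elif V < -20:
            above = False
    return spikes

print(hh_spike_count(block_na=0.0), hh_spike_count(block_na=0.3))
```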

  17. Device-independent two-party cryptography secure against sequential attacks

    Science.gov (United States)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-05-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser-Horne-Shimony-Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation.

  18. Device-independent two-party cryptography secure against sequential attacks

    International Nuclear Information System (INIS)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-01-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser–Horne–Shimony–Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation. (paper)

  19. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  20. Moisture and temperature in a proppant-enveloped silt block of a recharge dam reservoir: Laboratory experiment and 1-D mathematical modelling

    Directory of Open Access Journals (Sweden)

    Anvar Kacimov

    2018-01-01

    A mosaic 3-D cascade of parallelepiped-shaped silt blocks, which sandwich sand-filled cracks, has been discovered in the field and tested in lab experiments. Controlled wetting-drying of these blocks, collected from a dam reservoir, mimics field ponding-desiccation conditions of the topsoil layer subject to caustic solar radiation, high temperature and wind, typical in the Batinah region of Oman. In 1-D analytical modelling of a transient Richards' equation for vertical evaporation, the method of small perturbations is applied, assuming that the relative permeability is Averyanov's 3.5-power function of the moisture content and the capillary pressure is a given (measured) function. A linearized advective dispersion equation is solved with respect to the second term in the series expansion of the moisture content as a function of spatial coordinates and time. For a single block of finite thickness we solve a boundary value problem with a no-flow condition at the bottom and a constant moisture content at the surface. Preliminary comparisons with theta- and TDR-probes measuring the moisture content and temperature at several in-block points are made. Results corroborate that a 3-D heterogeneity of soil physical properties, in particular, horizontal and vertical capillary barriers emerging on the interfaces between silt and sand, generates eco-niches with stored soil water compartments favourable for lush vegetation in desert conditions. Desiccation significantly increases the temperature in the blocks and re-wetting of the blocks reduces the daily average and peak temperatures, the latter by almost 15°C. This is important for planning irrigation in smartly designed soil substrates and for the sustainability of wild plants in the region, where the topsoil peak temperature in the study area exceeds 70°C in summer but smartly structured soils maintain lush vegetation. The layer of dry top-blocks acts as a thermal insulator for the subjacent layers of wet blocks that

  1. A Relational Account of Call-by-Value Sequentiality

    DEFF Research Database (Denmark)

    Riecke, Jon Gary; Sandholm, Anders Bo

    2002-01-01

    We construct a model for FPC, a purely functional, sequential, call-by-value language. The model is built from partial continuous functions, in the style of Plotkin, further constrained to be uniform with respect to a class of logical relations. We prove that the model is fully abstract....

  2. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing, and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner (XCP®) is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment core relative to the XCP® ring and its reorientation during the sequential radiographic process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 records. Two radiographs of the same block of teeth were taken sequentially for each subject, with a minimum interval of 30 minutes between them, before placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements along the X- and Y-axes were taken for both radiographs and then compared. The intraclass correlation index (ICI) showed that the proposed method yields statistically consistent measurements (mm) along the X- and Y-axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the typical values do not differ between the two measurements (mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author)

  3. A tribo-mechanical analysis of PVA-based building-blocks for implementation in a 2-layered skin model

    NARCIS (Netherlands)

    Morales Hurtado, Marina; de Vries, Erik G.; Zeng, Xiangqiong; van der Heide, Emile

    2016-01-01

    Poly(vinyl) alcohol hydrogel (PVA) is a well-known polymer widely used in the medical field due to its biocompatibility properties and easy manufacturing. In this work, the tribo-mechanical properties of PVA-based blocks are studied to evaluate their suitability as a part of a structure simulating

  4. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
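
    A minimal NumPy sketch of the RSA ingredient (ii) described above: one random attachment attempt per step, accepted only if the new disk does not overlap previously adsorbed ones. The coupling of RSA steps to physical time through bulk diffusion, which is the point of the paper, is not reproduced, and all sizes are illustrative.

```python
import numpy as np

def rsa_coverage(radius=0.02, n_attempts=5000, seed=0):
    """Random sequential adsorption of equal disks on the unit square."""
    rng = np.random.default_rng(seed)
    centres = []
    for _ in range(n_attempts):
        p = rng.random(2)
        # accept only if the candidate does not overlap any adsorbed disk
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centres):
            centres.append(p)
    coverage = len(centres) * np.pi * radius**2      # area fraction covered
    return np.array(centres), coverage

centres, coverage = rsa_coverage()
print(len(centres), round(coverage, 3))
```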

  5. An upper limit for slow-earthquake zones: self-oscillatory behavior through the Hopf bifurcation mechanism from a spring-block model under lubricated surfaces

    Science.gov (United States)

    Castellanos-Rodríguez, Valentina; Campos-Cantón, Eric; Barboza-Gudiño, Rafael; Femat, Ricardo

    2017-08-01

    The complex oscillatory behavior of a spring-block model is analyzed via the Hopf bifurcation mechanism. The mathematical spring-block model includes Dieterich-Ruina's friction law and Stribeck's effect. The existence of self-sustained oscillations in the transition zone - where slow earthquakes are generated within the frictionally unstable region - is determined. An upper limit for this region is proposed as a function of seismic parameters and frictional coefficients related to the presence of fluids in the system. The importance of the characteristic length scale L, the implications of fluids, and the effects of external perturbations on the complex dynamic oscillatory behavior, as well as on the stationary solution, are taken into consideration.

  6. Hydration effects on the electronic properties of eumelanin building blocks

    International Nuclear Information System (INIS)

    Assis Oliveira, Leonardo Bruno; Fonseca, Tertius L.; Costa Cabral, Benedito J.; Coutinho, Kaline; Canuto, Sylvio

    2016-01-01

    Theoretical results for the electronic properties of eumelanin building blocks in the gas phase and water are presented. The building blocks presently investigated include the monomeric species DHI (5,6-dihydroxyindole) or hydroquinone (HQ), DHICA (5,6-dihydroxyindole-2-carboxylic acid), indolequinone (IQ), quinone methide (MQ), two covalently bonded dimers [HM ≡ HQ + MQ and IM ≡ IQ + MQ], and two tetramers [HMIM ≡ HQ + IM, IMIM ≡ IM + IM]. The electronic properties in water were determined by carrying out sequential Monte Carlo/time dependent density functional theory calculations. The results illustrate the role played by hydrogen bonding and electrostatic interactions in the electronic properties of eumelanin building blocks in a polar environment. In water, the dipole moments of monomeric species are significantly increased ([54–79]%) relative to their gas phase values. Recently, it has been proposed that the observed enhancement of the higher-energy absorption intensity in eumelanin can be explained by excitonic coupling among eumelanin protomolecules [C.-T. Chen et al., Nat. Commun. 5, 3859 (2014)]. Here, we are providing evidence that for DHICA, IQ, and HMIM, the electronic absorption toward the higher-energy end of the spectrum ([180–220] nm) is enhanced by long-range Coulombic interactions with the water environment. It was verified that by superposing the absorption spectra of different eumelanin building blocks corresponding to the monomers, dimers, and tetramers in liquid water, the behaviour of the experimental spectrum, which is characterised by a nearly monotonic decay from the ultraviolet to the infrared, is qualitatively reproduced. This result is in keeping with a “chemical disorder model,” where the broadband absorption of eumelanin pigments is determined by the superposition of the spectra associated with the monomeric and oligomeric building blocks.

  7. Hydration effects on the electronic properties of eumelanin building blocks

    Energy Technology Data Exchange (ETDEWEB)

    Assis Oliveira, Leonardo Bruno [Instituto de Física da Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Departamento de Física - CEPAE, Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Escola de Ciências Exatas e da Computação, Pontifícia Universidade Católica de Goiás, 74605-010 Goiânia, GO (Brazil); Fonseca, Tertius L. [Instituto de Física da Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Costa Cabral, Benedito J., E-mail: ben@cii.fc.ul.pt [Grupo de Física Matemática da Universidade de Lisboa and Departamento de Química e Bioquímica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa (Portugal); Coutinho, Kaline; Canuto, Sylvio [Instituto de Física da Universidade de São Paulo, CP 66318, 05314-970 São Paulo, SP (Brazil)

    2016-08-28

    Theoretical results for the electronic properties of eumelanin building blocks in the gas phase and water are presented. The building blocks presently investigated include the monomeric species DHI (5,6-dihydroxyindole) or hydroquinone (HQ), DHICA (5,6-dihydroxyindole-2-carboxylic acid), indolequinone (IQ), quinone methide (MQ), two covalently bonded dimers [HM ≡ HQ + MQ and IM ≡ IQ + MQ], and two tetramers [HMIM ≡ HQ + IM, IMIM ≡ IM + IM]. The electronic properties in water were determined by carrying out sequential Monte Carlo/time dependent density functional theory calculations. The results illustrate the role played by hydrogen bonding and electrostatic interactions in the electronic properties of eumelanin building blocks in a polar environment. In water, the dipole moments of monomeric species are significantly increased ([54–79]%) relative to their gas phase values. Recently, it has been proposed that the observed enhancement of the higher-energy absorption intensity in eumelanin can be explained by excitonic coupling among eumelanin protomolecules [C.-T. Chen et al., Nat. Commun. 5, 3859 (2014)]. Here, we are providing evidence that for DHICA, IQ, and HMIM, the electronic absorption toward the higher-energy end of the spectrum ([180–220] nm) is enhanced by long-range Coulombic interactions with the water environment. It was verified that by superposing the absorption spectra of different eumelanin building blocks corresponding to the monomers, dimers, and tetramers in liquid water, the behaviour of the experimental spectrum, which is characterised by a nearly monotonic decay from the ultraviolet to the infrared, is qualitatively reproduced. This result is in keeping with a “chemical disorder model,” where the broadband absorption of eumelanin pigments is determined by the superposition of the spectra associated with the monomeric and oligomeric building blocks.

  8. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
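
    A toy sketch of a sequential Bayesian detector that updates the posterior probability of "source present" after each datum (here, a Poisson count per time bin) and declares as soon as a threshold is crossed; the rates, threshold, and likelihood model are illustrative assumptions, not the processor described in the paper.

```python
import numpy as np

def sequential_bayes_detector(counts, rate_bkg, rate_src, prior_src=0.5, declare_at=0.999):
    """Sequentially update P(source | data) and stop once it exceeds declare_at."""
    log_odds = np.log(prior_src / (1 - prior_src))
    lam1, lam0 = rate_src + rate_bkg, rate_bkg       # source+background vs background only
    posterior = prior_src
    for i, k in enumerate(counts, start=1):
        # log-likelihood ratio of one Poisson observation
        log_odds += k * np.log(lam1 / lam0) - (lam1 - lam0)
        posterior = 1 / (1 + np.exp(-log_odds))
        if posterior >= declare_at:
            return i, posterior                      # detection declared after i data
    return None, posterior                           # no decision within the data stream

rng = np.random.default_rng(2)
counts = rng.poisson(2.0 + 0.8, size=200)            # the data do contain a weak source
print(sequential_bayes_detector(counts, rate_bkg=2.0, rate_src=0.8))
```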

  9. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-05-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies that are capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, there are unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performances in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which are significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets, C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli, for cross-species prediction. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed
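
    A minimal sketch of an Average Blocks (AB) style descriptor computed from a PSSM, assuming NumPy; the number of blocks is an illustrative choice, and the paper's exact AB definition, PCA step, and RVM classifier are not reproduced.

```python
import numpy as np

def average_blocks(pssm, n_blocks=20):
    """Split the L x 20 PSSM of a protein into n_blocks consecutive row blocks
    and average each, giving a fixed-length n_blocks x 20 feature vector
    regardless of sequence length L."""
    pssm = np.asarray(pssm, dtype=float)
    edges = np.linspace(0, len(pssm), n_blocks + 1).astype(int)
    feats = [pssm[edges[i]:edges[i + 1]].mean(axis=0) for i in range(n_blocks)]
    return np.concatenate(feats)

# toy usage: a protein of length 137 with a random PSSM -> 400-dimensional feature
rng = np.random.default_rng(0)
features = average_blocks(rng.normal(size=(137, 20)))   # shape (400,)
```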

  10. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling being the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may trigger quarter-hourly price volatility to decrease by a factor close to four.

  11. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns of programming, such as loops, branches, or recursion, are the pillars of almost every codebase and are well known to all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the programming language used, and in a generic way.
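
    A trivial Python illustration of the point being made: the same per-item computation expressed first as the classic sequential loop idiom and then as a parallel "map" pattern; the talk itself is language-agnostic, so this is only one possible rendering.

```python
from concurrent.futures import ProcessPoolExecutor

def transform(x):                    # any pure, independent per-item computation
    return x * x

if __name__ == "__main__":
    items = list(range(1_000))

    # classic sequential idiom: a loop
    seq = [transform(x) for x in items]

    # the same computation expressed as a parallel "map" pattern
    with ProcessPoolExecutor() as pool:
        par = list(pool.map(transform, items))

    assert seq == par
```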

  12. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  13. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hardware and software developed for industrial control purposes, the majority is devoted to sequential, or binary-valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model to generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  14. Challenges in transformation of the "traditional block rotation" medical student clinical education into a longitudinal integrated clerkship model.

    Science.gov (United States)

    Heddle, William; Roberton, Gayle; Mahoney, Sarah; Walters, Lucie; Strasser, Sarah; Worley, Paul

    2014-01-01

    Longitudinal integrated clerkships (LIC) in the first major clinical year in medical student training have been demonstrated to be at least equivalent to and in some areas superior to the "traditional block rotation" (TBR). Flinders University School of Medicine is starting a pilot changing the traditional teaching at the major Academic Medical Centre from TBR to LIC (50% of students in other locations in the medical school already have a partial or full LIC programme). This paper summarises the expected challenges presented at the "Rendez-Vous" Conference in October 2012: (a) creating urgency, (b) training to be a clinician rather than imparting knowledge, (c) resistance to change. We discuss the unexpected challenges that have evolved since then: (a) difficulty finalising the precise schedule, (b) underestimating time requirements, (c) managing the change process inclusively. Transformation of a "block rotation" to "LIC" medical student education in a tertiary academic teaching hospital has many challenges, many of which can be anticipated, but some are unexpected.

  15. Establishing an Appropriate Level of Detail (LoD) for a Building Information Model (BIM) - West Block, Parliament Hill, Ottawa, Canada

    Science.gov (United States)

    Fai, S.; Rafeiro, J.

    2014-05-01

    In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.

  16. Model Amphiphilic Block Copolymers with Tailored Molecular Weight and Composition in PDMS-Based Films to Limit Soft Biofouling

    Energy Technology Data Exchange (ETDEWEB)

    Wenning, Brandon M. [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Pisa 56124, Italy; Martinelli, Elisa [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Pisa 56124, Italy; Mieszkin, Sophie [School of Biosciences, The University of Birmingham, Edgbaston, Birmingham B15 5TT, U.K.; Finlay, John A. [School of Biosciences, The University of Birmingham, Edgbaston, Birmingham B15 5TT, U.K.; Fischer, Daniel [National Institute of Standards and Technology, Gaithersburg, Maryland 20899, United States; Callow, James A. [School of Biosciences, The University of Birmingham, Edgbaston, Birmingham B15 5TT, U.K.; Callow, Maureen E. [School of Biosciences, The University of Birmingham, Edgbaston, Birmingham B15 5TT, U.K.; Leonardi, Amanda K.; Ober, Christopher K.; Galli, Giancarlo [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Pisa 56124, Italy

    2017-05-02

    A set of controlled surface composition films was produced utilizing amphiphilic block copolymers dispersed in a cross-linked poly(dimethylsiloxane) network. These block copolymers contained oligo(ethylene glycol) (PEGMA) and fluoroalkyl (AF6) side chains in selected ratios and molecular weights to control surface chemistry including antifouling and fouling-release performance. Such properties were assessed by carrying out assays using two algae, the green macroalga Ulva linza (favors attachment to polar surfaces) and the unicellular diatom Navicula incerta (favors attachment to nonpolar surfaces). All films performed well against U. linza and exhibited high removal of attached sporelings (young plants) under an applied shear stress, with the lower molecular weight block copolymers being the best performing in the set. The composition ratios from 50:50 to 60:40 of the AF6/PEGMA side groups were shown to be more effective, with several films exhibiting spontaneous removal of the sporelings. The cells of N. incerta were also removed from several coating compositions. All films were characterized by surface techniques including captive bubble contact angle, atomic force microscopy, and near edge X-ray absorption fine structure spectroscopy to correlate surface chemistry and morphology with biological performance.

  17. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  18. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
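
    A textbook sketch of Wald's SPRT for choosing between two hypothesized means of a Gaussian simulation output, assuming NumPy; the error rates, hypothesized means, and data stream are illustrative and not taken from the paper.

```python
import numpy as np

def sprt_gaussian_mean(xs, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT: consume observations sequentially and stop at a boundary."""
    upper = np.log((1 - beta) / alpha)       # accept H1 when llr >= upper
    lower = np.log(beta / (1 - alpha))       # accept H0 when llr <= lower
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood ratio of one Gaussian observation, H1 vs H0
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", n

rng = np.random.default_rng(0)
outputs = rng.normal(10.4, 1.0, size=500)    # simulated outputs with true mean 10.4
print(sprt_gaussian_mean(outputs, mu0=10.0, mu1=10.5, sigma=1.0))
```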

  19. Ultrasound guided supraclavicular block.

    LENUS (Irish Health Repository)

    Hanumanthaiah, Deepak

    2013-09-01

    Ultrasound guided regional anaesthesia is becoming increasingly popular. The supraclavicular block has been transformed by ultrasound guidance into a potentially safe superficial block. We reviewed the techniques of performing supraclavicular block with special focus on ultrasound guidance.

  20. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

    Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate…

  1. Development of computer-aided software engineering tool for sequential control of JT-60U

    International Nuclear Information System (INIS)

    Shimono, M.; Akasaka, H.; Kurihara, K.; Kimura, T.

    1995-01-01

    Discharge sequential control (DSC) is an essential control function for the intermittent, pulsed discharge operation of a tokamak device, so that the many subsystems work with each other in the correct order and/or synchronously. In developing the DSC program, block diagrams of the logical operations for sequential control are drawn first at the design stage. The logical operators and I/O's involved in the block diagrams are then compiled and converted to a particular form. Since the block diagrams for sequential control amount to about 50 sheets in the case of the JT-60 upgrade tokamak (JT-60U) high power discharge, and since these steps have so far been performed manually, a great effort has been required for program development. In order to remove this inefficiency from the development process, a computer-aided software engineering (CASE) tool has been developed on a UNIX workstation. This paper reports how the authors designed it for the development of the sequential control programs. The tool is composed of the following three tools: (1) automatic drawing tool, (2) editing tool, and (3) trace tool. This CASE tool, an object-oriented programming tool having graphical formalism, can powerfully accelerate the development cycle for the sequential control functions commonly associated with pulse discharge in a tokamak fusion device
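
    The record describes converting block diagrams of logical operators into an executable form for discharge sequential control. The sketch below is purely illustrative (it is not the JT-60U CASE tool, and the signal names are invented): it shows one simple way a diagram of AND/OR/NOT operators wired to I/O signals could be represented and evaluated in dependency order.

    ```python
    # Hypothetical sketch only (not the JT-60U CASE tool): a sequential-control
    # block diagram represented as named logical operators wired to I/O signals,
    # evaluated in dependency order the way a generated control program might be.

    def evaluate(diagram, inputs):
        """diagram: {name: (op, [operand names])}; inputs: {signal name: bool}."""
        values = dict(inputs)

        def resolve(name):
            if name in values:
                return values[name]
            op, operands = diagram[name]
            vals = [resolve(operand) for operand in operands]
            if op == "AND":
                values[name] = all(vals)
            elif op == "OR":
                values[name] = any(vals)
            elif op == "NOT":
                values[name] = not vals[0]
            else:
                raise ValueError(f"unknown operator {op}")
            return values[name]

        return {name: resolve(name) for name in diagram}

    # Signal names below are invented for illustration.
    diagram = {
        "interlock_clear": ("NOT", ["interlock"]),
        "start_next_step": ("AND", ["previous_step_done", "interlock_clear"]),
    }
    print(evaluate(diagram, {"interlock": False, "previous_step_done": True}))
    # {'interlock_clear': True, 'start_next_step': True}
    ```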

  2. Making Career Decisions--A Sequential Elimination Approach.

    Science.gov (United States)

    Gati, Itamar

    1986-01-01

    Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…
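
    Tversky-style elimination by aspects lends itself to a short illustration. The sketch below is a hypothetical, simplified sequential-elimination pass over invented occupational alternatives and aspect thresholds; it is not the instrument described in the article.

    ```python
    # Illustrative sketch of sequential elimination (in the spirit of
    # elimination-by-aspects): alternatives failing the requirement on the most
    # important aspect are dropped first, then the next aspect is considered,
    # until few enough alternatives remain. Data and aspect names are invented.

    def sequential_elimination(alternatives, aspects):
        """alternatives: {name: {aspect: value}}; aspects: [(aspect, min_value)]
        ordered from most to least important."""
        remaining = dict(alternatives)
        for aspect, threshold in aspects:
            survivors = {n: a for n, a in remaining.items()
                         if a.get(aspect, 0) >= threshold}
            if survivors:                 # never eliminate every alternative
                remaining = survivors
            if len(remaining) == 1:
                break
        return list(remaining)

    occupations = {
        "engineer":  {"income": 8, "autonomy": 6, "social": 4},
        "teacher":   {"income": 5, "autonomy": 5, "social": 9},
        "counselor": {"income": 4, "autonomy": 7, "social": 9},
    }
    print(sequential_elimination(occupations, [("social", 7), ("autonomy", 6)]))
    # -> ['counselor']
    ```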

  3. Sequential Computerized Mastery Tests--Three Simulation Studies

    Science.gov (United States)

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
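
    The 3-parameter logistic (3PL) model referenced here has a standard closed form, and a sequential mastery decision is often framed as a sequential probability ratio test between a "non-master" and a "master" ability level. The sketch below combines the two under those assumptions; the item parameters, ability levels and error rates are illustrative, not taken from the study.

    ```python
    import math

    # Minimal sketch (not the paper's exact procedure): 3PL item response
    # probability plus a simple SPRT-style sequential mastery decision between
    # a "non-master" ability theta0 and a "master" ability theta1.

    def p3pl(theta, a, b, c):
        """Probability of a correct response under the 3PL model."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    def sequential_mastery(responses, items, theta0=-0.5, theta1=0.5,
                           alpha=0.05, beta=0.05):
        upper = math.log((1 - beta) / alpha)    # decide "master"
        lower = math.log(beta / (1 - alpha))    # decide "non-master"
        llr = 0.0
        for u, (a, b, c) in zip(responses, items):
            p1, p0 = p3pl(theta1, a, b, c), p3pl(theta0, a, b, c)
            llr += math.log(p1 / p0) if u == 1 else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "master"
            if llr <= lower:
                return "non-master"
        return "undecided"

    items = [(1.2, 0.0, 0.2), (0.8, -0.3, 0.25), (1.5, 0.4, 0.2), (1.0, 0.1, 0.2)] * 5
    print(sequential_mastery([1, 1, 0, 1] * 5, items))
    ```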

  4. Assessing the 2D Models of Geo-technological Variables in a Block of a Cuban Laterite Ore Body. Part IV Local Polynomial Method

    Directory of Open Access Journals (Sweden)

    Arístides Alejandro Legrá-Lobaina

    2016-10-01

    Full Text Available The local polynomial method is based on the assumption that the value of a variable U at any coordinate P can be estimated through local polynomials fitted to nearby data. This investigation analyzes the possibility of modeling, in two dimensions, the thickness and the nickel, iron and cobalt concentrations in a block of Cuban laterite ores using this method. It was also analyzed whether the results of modeling these variables depend on the estimation method that is used.
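
    As a rough illustration of the idea (not the article's exact estimator), the sketch below fits a local linear polynomial with Gaussian distance weights to synthetic 2D drill-hole data and returns the fitted value at a target coordinate; the bandwidth and data are invented.

    ```python
    import numpy as np

    # Minimal sketch of a local polynomial (here local-linear) estimator in 2D:
    # the value at a target point is obtained from a weighted least-squares fit
    # of a plane to nearby samples. Bandwidth and data are illustrative only.

    def local_linear_2d(xy, z, target, bandwidth=50.0):
        d = np.linalg.norm(xy - target, axis=1)
        w = np.exp(-(d / bandwidth) ** 2)                      # Gaussian weights
        X = np.column_stack([np.ones(len(xy)), xy - target])   # 1, dx, dy
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
        return beta[0]                                         # value at target

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 500, size=(200, 2))                    # sample coordinates (m)
    z = 2.0 + 0.01 * xy[:, 0] + rng.normal(0, 0.1, 200)        # e.g. a grade variable
    print(local_linear_2d(xy, z, np.array([250.0, 250.0])))
    ```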

  5. Cardiac tissue geometry as a determinant of unidirectional conduction block: assessment of microscopic excitation spread by optical mapping in patterned cell cultures and in a computer model.

    Science.gov (United States)

    Fast, V G; Kléber, A G

    1995-05-01

    Unidirectional conduction block (UCB) and reentry may occur as a consequence of an abrupt tissue expansion and a related change in the electrical load. The aim of this study was to evaluate critical dimensions of the tissue necessary for establishing UCB in heart cell culture. Neonatal rat heart cell cultures with cell strands of variable width emerging into a large cell area were grown using a technique of patterned cell growth. Action potential upstrokes were measured using a voltage-sensitive dye (RH-237) and a linear array of 10 photodiodes with a 15-micron resolution. A mathematical model was used to relate action potential wave shapes to underlying ionic currents. UCB (block of a single impulse in the anterograde direction - from a strand to a large area - and conduction in the retrograde direction) occurred in narrow cell strands with a width of 15(SD 4) microns (1-2 cells in width, n = 7) and there was no conduction block in strands with a width of 31(8) microns (n = 9, P < …) … multiple rising phases. Mathematical modelling showed that the two rising phases were caused by electrotonic current flow, whereas local ionic current did not coincide with the rising portions of the upstrokes. (1) High resolution optical mapping shows multiphasic action potential upstrokes at the region of abrupt expansion. At the site of the maximum decrement in conduction, these peaks were largely determined by the electrotonus and not by the local ionic current. (2) Unidirectional conduction block occurred in strands with a width of 15(4) microns (1-2 cells).

  6. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed from lattices of phonemes, which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr
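
    An Elman network keeps a copy of the previous hidden layer as "context units", which is what lets it accumulate sequential context from a phoneme lattice. The sketch below is a minimal, untrained forward pass with invented dimensions; it is not the dissertation's decoder.

    ```python
    import numpy as np

    # Sketch of an Elman-style recurrent layer: at each step the hidden state is
    # computed from the current input and the previous hidden state (the "context
    # units"). Weights are random here; a real lexical decoder would train them.

    rng = np.random.default_rng(1)
    n_in, n_hidden, n_out = 12, 20, 5            # e.g. phoneme scores -> word class
    W_in = rng.normal(0, 0.3, (n_hidden, n_in))
    W_ctx = rng.normal(0, 0.3, (n_hidden, n_hidden))
    W_out = rng.normal(0, 0.3, (n_out, n_hidden))

    def forward(sequence):
        h = np.zeros(n_hidden)                   # initial context
        outputs = []
        for x in sequence:
            h = np.tanh(W_in @ x + W_ctx @ h)    # context units feed back
            outputs.append(W_out @ h)
        return np.array(outputs)

    phoneme_lattice = rng.random((7, n_in))      # 7 time steps of phoneme scores
    print(forward(phoneme_lattice).shape)        # (7, 5)
    ```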

  7. Dexamethasone as Adjuvant to Bupivacaine Prolongs the Duration of Thermal Antinociception and Prevents Bupivacaine-Induced Rebound Hyperalgesia via Regional Mechanism in a Mouse Sciatic Nerve Block Model

    Science.gov (United States)

    An, Ke; Elkassabany, Nabil M.; Liu, Jiabin

    2015-01-01

    Background Dexamethasone has been studied as an effective adjuvant to prolong the analgesia duration of local anesthetics in peripheral nerve block. However, the route of action for dexamethasone and its potential neurotoxicity are still unclear. Methods A mouse sciatic nerve block model was used. The sciatic nerve was injected with 60 µl of combinations of various medications, including dexamethasone and/or bupivacaine. Neurobehavioral changes were observed for 2 days prior to injection, and then continuously for up to 7 days after injection. In addition, the sciatic nerves were harvested at either 2 days or 7 days after injection. Toluidine blue staining and immunohistochemistry tests were performed to study the short-term and long-term histopathological changes of the sciatic nerves. There were six study groups: normal saline control, bupivacaine (10 mg/kg) only, dexamethasone (0.5 mg/kg) only, bupivacaine (10 mg/kg) combined with low-dose (0.14 mg/kg) dexamethasone, bupivacaine (10 mg/kg) combined with high-dose (0.5 mg/kg) dexamethasone, and bupivacaine (10 mg/kg) combined with intramuscular dexamethasone (0.5 mg/kg). Results High-dose perineural dexamethasone, but not systemic dexamethasone, combined with bupivacaine prolonged the duration of both sensory and motor block of the mouse sciatic nerve. There was no significant difference in the onset time of the sciatic nerve block. There was “rebound hyperalgesia” to thermal stimulus after the resolution of the plain bupivacaine sciatic nerve block. Interestingly, both low- and high-dose perineural dexamethasone prevented bupivacaine-induced hyperalgesia. There was an early phase of axon degeneration and Schwann cell response, as represented by S-100 expression as well as the percentage of demyelinated axons and nuclei, in the plain bupivacaine group compared with the bupivacaine plus dexamethasone groups on post-injection day 2, which resolved on post-injection day 7. Furthermore, we demonstrated that perineural dexamethasone

  8. Dynamic swelling of tunable full-color block copolymer photonic gels via counterion exchange.

    Science.gov (United States)

    Lim, Ho Sun; Lee, Jae-Hwang; Walish, Joseph J; Thomas, Edwin L

    2012-10-23

    One-dimensionally periodic block copolymer photonic lamellar gels with full-color tunability as a result of a direct exchange of counteranions were fabricated via a two-step procedure comprising the self-assembly of a hydrophobic block-hydrophilic polyelectrolyte block copolymer, polystyrene-b-poly(2-vinyl pyridine) (PS-b-P2VP), followed by sequential quaternization of the P2VP layers in 1-bromoethane solution. Depending on the hydration characteristics of each counteranion, the selective swelling of the block copolymer lamellar structures leads to large tunability of the photonic stop band from blue to red wavelengths. More extensive quaternization of the P2VP block allows the photonic lamellar gels to swell more and red shift to longer wavelength. Here, we investigate the dynamic swelling behavior in the photonic gel films through time-resolved in situ measurement of UV-vis transmission. We model the swelling behavior using the transfer matrix method based on the experimentally observed reflectivity data with substitution of appropriate counterions. These tunable structural color materials may be attractive for numerous applications such as high-contrast displays without using a backlight, color filters, and optical mirrors for flexible lasing.
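
    The transfer matrix method mentioned for modeling the swelling-dependent reflectivity has a compact normal-incidence form. The sketch below computes the reflectance of an idealized lamellar stack; the refractive indices, layer thicknesses and layer count are illustrative assumptions, not the fitted values of the study.

    ```python
    import numpy as np

    # Minimal transfer-matrix sketch for normal-incidence reflectivity of a 1D
    # lamellar stack (e.g. alternating swollen polyelectrolyte / glassy layers).
    # All layer parameters are illustrative.

    def reflectance(wavelength, layers, n_in=1.33, n_sub=1.5):
        """layers: list of (refractive index, thickness in nm)."""
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / wavelength
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
        den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
        return abs(num / den) ** 2

    stack = [(1.38, 110.0), (1.59, 60.0)] * 30      # swollen / PS-like bilayers
    for wl in (450, 550, 650):
        print(wl, round(reflectance(wl, stack), 3))  # peak near the stop band
    ```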

  9. An evaluation of the active fracture concept with modeling unsaturated flow and transport in a fractured meter-sized block of rock

    International Nuclear Information System (INIS)

    Seol, Yongkoo; Kneafsey, Timothy J.; Ito, Kazumasa

    2003-01-01

    Numerical simulation is an effective and economical tool for optimally designing laboratory experiments and deriving practical experimental conditions. We executed a detailed numerical simulation study to examine the active fracture concept (AFC, Liu et al., 1998) using a cubic meter-sized block model. The numerical simulations for this study were performed by applying various experimental conditions, including different bottom flow boundaries, varying injection rates, and different fracture-matrix interaction (by increasing absolute matrix permeability at the fracture matrix boundary) for a larger fracture interaction under transient or balanced-state flow regimes. Two conceptual block models were developed based on different numerical approaches: a two-dimensional discrete-fracture-network model (DFNM) and a one-dimensional dual continuum model (DCM). The DFNM was used as a surrogate for a natural block to produce synthetic breakthrough curves of water and tracer concentration under transient or balanced-state conditions. The DCM is the approach typically used for the Yucca Mountain Project because of its computational efficiency. The AFC was incorporated into the DCM to capture heterogeneous flow patterns that occur in unsaturated fractured rocks. The simulation results from the DCM were compared with the results from the DFNM to determine whether the DCM could predict the water flow and tracer transport observed in the DFNM at the scale of the experiment. It was found that implementing the AFC in the DCM improved the prediction of unsaturated flow and that the flow and transport experiments with low injection rates in the DFNM were compared better with the AFC implemented DCM at the meter scale. However, the estimated AFC parameter varied from 0.38 to 1.0 with different flow conditions, suggesting that the AFC parameter was not a sufficient to fully capture the complexity of the flow processes in a one meter sized discrete fracture network

  10. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    regularities. In combination with a wide range of auditory MMN studies, the present study highlights the critical role of sensory systems in automatically encoding sequential regularities when modeling the world.

  11. Integrated 3D geology modeling constrained by facies and horizontal well data for Block M of the Orinoco heavy oil belt

    Energy Technology Data Exchange (ETDEWEB)

    Longxin, M.; Baojun, X.; Shancheng, Z.; Guoqing, H. [CNPC America Ltd., Caracas (Venezuela)

    2008-10-15

    Horizontal well drilling with cold production was used to develop most of the heavy oil fields in Venezuela's Orinoco heavy oil belt. This study interpreted the horizontal well logs of Block M of the Orinoco heavy oil belt in an effort to improve production from this highly porous and permeable reservoir. The reservoir is comprised primarily of non-consolidated sandstones. A porosity calculation formula for horizontal wells without porosity logs was established based on the study of horizontal well logging data of Block M in the Orinoco heavy oil belt. A high quality 3-D simulation tool was used to separate the block into several different sections. A set of methods was presented to identify whether the well track was approaching an adjacent formation, to estimate the distance between the well track and the adjacent formation, and to correct the deep resistivity of the horizontal section affected by the adjacent formation. A set of interpretation techniques was established based on the combination of well logging data, seismic data and oilfield development performance data. It was concluded that the development of the precise 3D geological model helped to establish a solid foundation for guiding well position design and the drilling of horizontal wells. It also contributed to the reservoir numerical simulation and the effective development of the oil field. 6 refs., 2 tabs., 14 figs.

  12. Mesomorphic structure of poly(styrene)-block-poly(4-vinylpyridine) with oligo(ethylene oxide)sulfonic acid side chains as a model for molecularly reinforced polymer electrolyte

    NARCIS (Netherlands)

    Kosonen, H; Valkama, S; Hartikainen, J; Eerikainen, H; Torkkeli, M; Jokela, K; Serimaa, R; Sundholm, F; ten Brinke, G; Ikkala, O; Eerikäinen, Hannele

    2002-01-01

    We report self-organized polymer electrolytes based on poly(styrene)-block-poly(4-vinylpyridine) (PS-block-P4VP). Liquidlike ethylene oxide (EO) oligomers with sulfonic acid end groups are bonded to the P4VP block, leading to comb-shaped supramolecules with the PS-block-P4VP backbone. Lithium

  13. Homogeneous bilateral block shifts

    Indian Academy of Sciences (India)

    Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.

  14. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing, which comprises hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm thickness sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processed materials

  15. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
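
    For readers unfamiliar with sequential measurement processing, the sketch below shows a generic Kalman update in which vector observations are folded in one at a time, so each observation sees the state already corrected by the previous one. It is a plain linear-measurement illustration: the multiplicative attitude parameterization, the reset operation and the deferred covariance update that distinguish the SMEKF are not reproduced, and all matrices below are assumed values.

    ```python
    import numpy as np

    # Generic sketch of a sequential Kalman measurement update: each vector
    # observation is processed in turn, so the correction from one observation
    # is already reflected when the next one is applied. Not the paper's SMEKF.

    def sequential_update(x, P, observations):
        """observations: list of (z, H, R) with z a measured vector."""
        for z, H, R in observations:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    x = np.zeros(3)                          # e.g. a small error-state vector
    P = np.eye(3) * 0.1
    H = np.eye(3)                            # illustrative observation matrices
    R = np.eye(3) * 0.01
    obs = [(np.array([0.020, -0.010, 0.030]), H, R),
           (np.array([0.025, -0.012, 0.028]), H, R)]
    print(sequential_update(x, P, obs)[0])
    ```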

  16. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Directory of Open Access Journals (Sweden)

    Fangjun Qin

    2018-05-01

    Full Text Available In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  17. The EC BIOCLIM Project (2000-2003), 5. Euratom Framework Programme - Modelling sequential biosphere systems under climate change for radioactive waste disposal

    International Nuclear Information System (INIS)

    Calvez, Marianne

    2002-01-01

    Marianne Calvez (ANDRA, France) presented the new EC BIOCLIM project that started in 2001. Its main objective is to provide a scientific basis and practical methodology for assessing the possible long-term impacts on the safety of radioactive waste repositories in deep formations due to climate-driven changes. She explained that the BIOCLIM objective is not to predict the future but to illustrate how the knowledge could be used. The BIOCLIM project will use the outcomes of the Biomass project. Where Biomass considered discrete biospheres, the BIOCLIM project will consider the evolution of climate, with a focus on the European climate for three regions in the United Kingdom, France and Spain. The consortium of BIOCLIM participants consists of various experts in climate modelling and various experts and organisations in performance assessment. The intent is to build an integrated dynamic climate model that represents all the important mechanisms for long-term climate evolution. The modelling will primarily address the next 200,000 years. The final outcome will be an enhancement of the state-of-the-art treatment of biosphere system change over long periods of time, through the use of a number of innovative climate modelling approaches and the application of the climate model outputs in performance assessments

  18. Effect of blocking TNF on IL-6 levels and metastasis in a B16-BL6 melanoma/mouse model.

    Science.gov (United States)

    Cubillos, S; Scallon, B; Feldmann, M; Taylor, P

    1997-01-01

    We studied the relationship between tumour necrosis factor (TNF) and interleukin 6 (IL-6) levels, and the metastatic process in C57BL/6 mice after intravenous inoculation of B16-BL6 melanoma cells. Bioactive TNF was not detectable in the sera of inoculated mice, but these animals did show higher TNF levels following intraperitoneal challenge with lipopolysaccharide (LPS) compared to control animals. Serum IL-6 levels were increased in inoculated animals. Injection of a hybrid molecule (p55-sf2) composed of the human p55 TNF receptor extracellular domain coupled to a human constant region backbone, decreased serum TNF (after LPS challenge) and IL-6 levels in inoculated animals. Lung metastases at 7-14 days were reduced, compared to human IgG-injected control animals, but this effect was lost at day 21 postinoculation. The results suggest that the reduction in the number of metastases may be related to the effect of blocking TNF activity.

  19. Molecular modeling of the elastomeric properties of repeating units and building blocks of resilin, a disordered elastic protein.

    Science.gov (United States)

    Khandaker, Md Shahriar K; Dudek, Daniel M; Beers, Eric P; Dillard, David A; Bevan, David R

    2016-08-01

    The mechanisms responsible for the properties of disordered elastomeric proteins are not well known. To better understand the relationship between elastomeric behavior and amino acid sequence, we investigated resilin, a disordered rubber-like protein, found in specialized regions of the cuticle of insects. Resilin of Drosophila melanogaster contains Gly-rich repetitive motifs comprised of the amino acids, PSSSYGAPGGGNGGR, which confer elastic properties to resilin. The repetitive motifs of insect resilin can be divided into smaller partially conserved building blocks: PSS, SYGAP, GGGN and GGR. Using molecular dynamics (MD) simulations, we studied the relative roles of SYGAP, and its less common variants SYSAP and TYGAP, on the elastomeric properties of resilin. Results showed that SYGAP adopts a bent structure that is one-half to one-third the end-to-end length of the other motifs having an equal number of amino acids but containing SYSAP or TYGAP substituted for SYGAP. The bent structure of SYGAP forms due to conformational freedom of glycine, and hydrogen bonding within the motif apparently plays a role in maintaining this conformation. These structural features of SYGAP result in higher extensibility compared to other motifs, which may contribute to elastic properties at the macroscopic level. Overall, the results are consistent with a role for the SYGAP building block in the elastomeric properties of these disordered proteins. What we learned from simulating the repetitive motifs of resilin may be applicable to the biology and mechanics of other elastomeric biomaterials, and may provide us the deeper understanding of their unique properties. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_nμ and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 to 10^-16 sec for medium and heavy nuclei and about 10^-20 sec for the intermediate resonances. (auth.)

  1. Silk-collagen-like block copolymers with charged blocks : self-assembly into nanosized ribbons and macroscopic gels

    NARCIS (Netherlands)

    Martens, A.A.

    2008-01-01

    The research described in this thesis concerns the design, biotechnological production, and physiochemical study of large water-soluble (monodisperse) protein triblock-copolymers with sequential blocks, some of which are positively or negatively charged and self-assemble in response to a change in

  2. Modelling and sequential simulation of multi-tubular metallic membrane and techno-economics of a hydrogen production process employing thin-layer membrane reactor

    KAUST Repository

    Shafiee, Alireza; Arab, Mobin; Lai, Zhiping; Liu, Zongwen; Abbas, Ali

    2016-01-01

    reforming hydrogen production plant. A techno-economic analysis is then conducted using the validated model for a plant producing 300 TPD of hydrogen. The plant utilises a thin (2.5 μm) defect-free and selective layer (Pd75Ag25 alloy) membrane reactor

  3. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
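
    The pseudo-greedy sequential strategy can be illustrated in miniature: extend a chain one fragment at a time from one terminus, sampling a few candidate fragments per step and keeping the best-scoring partial model. The sketch below does this with an invented fragment library and a stand-in scoring function; it is a toy illustration of the search idea, not SAINT2.

    ```python
    import random

    # Toy sketch of a pseudo-greedy *sequential* build: the chain grows from one
    # terminus, and at each extension a few candidate fragments are sampled and
    # the best-scoring partial model is kept. Scoring here is a stand-in energy.

    random.seed(0)
    FRAGMENT_LIBRARY = [[random.uniform(-180, 180) for _ in range(3)]
                        for _ in range(50)]

    def toy_score(torsions):
        # Stand-in for a real energy: prefer torsions near +/-60 degrees.
        return sum(min(abs(t - 60), abs(t + 60)) for t in torsions)

    def sequential_build(n_fragments=10, candidates_per_step=8):
        model = []
        for _ in range(n_fragments):
            options = random.sample(FRAGMENT_LIBRARY, candidates_per_step)
            model = min((model + frag for frag in options), key=toy_score)
        return model

    decoy = sequential_build()
    print(len(decoy), round(toy_score(decoy), 1))
    ```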

  4. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P 870 + Q - (P 870 + is the oxidized, primary electron donor, a bacteriochlorophyll special pair and Q - is the reduced, primary quinone acceptor) is formed via sequential electron transport through the intermediary radical pair P 870 + I - (I - is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution ( 1 H or 2 H) on P 870 , I and Q affects the ESP of the EPR spectrum of P 870 + Q - , observed at two different microwave frequencies, in Fe 2+ -depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P 870 + Q - radical pair interactions are the dominant source of ESP production in 2 H bacterial reaction centers

  5. Poly(ferrocenylsilane)-block-Polylactide Block Copolymers

    NARCIS (Netherlands)

    Roerdink, M.; van Zanten, Thomas S.; Hempenius, Mark A.; Zhong, Zhiyuan; Feijen, Jan; Vancso, Gyula J.

    2007-01-01

    A PFS/PLA block copolymer was studied to probe the effect of strong surface interactions on pattern formation in PFS block copolymer thin films. Successful synthesis of PFS-b-PLA was demonstrated. Thin films of these polymers show phase separation to form PFS microdomains in a PLA matrix, and

  6. Long-term dynamic and pseudo-state modeling of complete partial nitrification process at high nitrogen loading rates in a sequential batch reactor (SBR).

    Science.gov (United States)

    Soliman, Moomen; Eldyasti, Ahmed

    2017-06-01

    Recently, partial nitrification has been adopted widely either for the nitrite shunt process or as an intermediate nitrite generation step for the Anammox process. However, partial nitrification has been hindered by the complexity of maintaining stable nitrite accumulation at high nitrogen loading rates (NLR), which affects the feasibility of the process for high nitrogen content wastewater. Thus, the operational data of a lab-scale SBR performing complete partial nitrification as the first step of the nitrite shunt process at NLRs of 0.3-1.2 kg/(m3 d) have been used to calibrate and validate a process model developed using BioWin® in order to describe the long-term dynamic behavior of the SBR. Moreover, an identifiability analysis step has been introduced to the calibration protocol to eliminate the need for respirometric analysis in SBR models. The calibrated model was able to accurately predict the daily effluent ammonia, nitrate, nitrite and alkalinity concentrations and pH during all the different operational conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  8. Sequential crystallization and morphology of triple crystalline biodegradable PEO-b-PCL-b-PLLA triblock terpolymers

    KAUST Repository

    Palacios, Jordana

    2016-01-05

    The sequential crystallization of poly(ethylene oxide)-b-poly(e-caprolactone)-b-poly(L-lactide) (PEO-b-PCL-b-PLLA) triblock terpolymers, in which the three blocks are able to crystallize separately and sequentially from the melt, is presented. Two terpolymers with identical PEO and PCL block lengths and two different PLLA block lengths were prepared, thus the effect of increasing PLLA content on the crystallization behavior and morphology was evaluated. Wide angle X-Ray scattering (WAXS) experiments performed on cooling from the melt confirmed the triple crystalline nature of these terpolymers and revealed that they crystallize in sequence: the PLLA block crystallizes first, then the PCL block, and finally the PEO block. Differential scanning calorimetry (DSC) analysis further demonstrated that the three blocks can crystallize from the melt when a low cooling rate is employed. The crystallization process takes place from a homogenous melt as indicated by small angle X-Ray scattering (SAXS) experiments. The crystallization and melting enthalpies and temperatures of both PEO and PCL blocks decrease as PLLA content in the terpolymer increases. Polarized light optical microscopy (PLOM) demonstrated that the PLLA block templates the morphology of the terpolymer, as it forms spherulites upon cooling from the melt. The subsequent crystallization of PCL and PEO blocks occurs inside the interlamellar regions of the previously formed PLLA block spherulites. In this way, unique triple crystalline mixed spherulitic superstructures have been observed for the first time. As the PLLA content in the terpolymer is reduced the superstructural morphology changes from spherulites to a more axialitic-like structure.

  9. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  10. Multiplicity distributions and multiplicity correlations in sequential, off-equilibrium fragmentation process

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A new kinetic fragmentation model, the Fragmentation-Inactivation-Binary (FIB) model, is described, in which a dissipative process randomly stops the sequential, conservative and off-equilibrium fragmentation process. (K.A.)

  11. Sequential simulation approach to modeling of multi-seam coal deposits with an application to the assessment of a Louisiana lignite

    Science.gov (United States)

    Olea, Ricardo A.; Luppens, James A.

    2012-01-01

    There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits taking into account several geological factors, such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.

  12. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that include kinetics feedback on individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  13. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS), with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function, in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery…

  14. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner … a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.
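
    A block method of the sequential class can be sketched compactly: partition the rows, apply a SIRT-style (simultaneous) update within each block, and visit the blocks one after another so every block works from the latest iterate. The example below is a generic illustration with an arbitrary partition and relaxation parameter, not the tuned implementations benchmarked in the paper.

    ```python
    import numpy as np

    # Generic sketch of a block method from the *sequential* class: rows are
    # partitioned into blocks, a Cimmino/SIRT-type update is computed within
    # each block, and the blocks are visited one after another so each block
    # sees the latest iterate. Partition, relaxation and sweeps are illustrative.

    def block_sequential_sweeps(A, b, blocks, n_sweeps=100, relax=0.8):
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for rows in blocks:
                Ab, bb = A[rows], b[rows]
                residual = bb - Ab @ x
                row_norms = np.sum(Ab ** 2, axis=1)       # ||a_i||^2 per row
                x = x + relax * Ab.T @ (residual / row_norms) / len(rows)
        return x

    rng = np.random.default_rng(0)
    A = rng.random((40, 10))
    b = A @ rng.random(10)                                # consistent system
    blocks = [np.arange(0, 20), np.arange(20, 40)]
    x_est = block_sequential_sweeps(A, b, blocks)
    print(np.linalg.norm(A @ x_est - b))                  # residual shrinks
    ```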

  15. Differential effectiveness of tianeptine, clonidine and amitriptyline in blocking traumatic memory expression, anxiety and hypertension in an animal model of PTSD.

    Science.gov (United States)

    Zoladz, Phillip R; Fleshner, Monika; Diamond, David M

    2013-07-01

    Individuals exposed to life-threatening trauma are at risk for developing post-traumatic stress disorder (PTSD), a debilitating condition that involves persistent anxiety, intrusive memories and several physiological disturbances. Current pharmacotherapies for PTSD manage only a subset of these symptoms and typically have adverse side effects which limit their overall effectiveness. We evaluated the effectiveness of three different pharmacological agents to ameliorate a broad range of PTSD-like symptoms in our established predator-based animal model of PTSD. Adult male Sprague-Dawley rats were given 1-h cat exposures on two occasions that were separated by 10 days, in conjunction with chronic social instability. Beginning 24 h after the first cat exposure, rats received daily injections of amitriptyline, clonidine, tianeptine or vehicle. Three weeks after the second cat exposure, all rats underwent a battery of behavioral and physiological tests. The vehicle-treated, psychosocially stressed rats demonstrated a robust fear memory for the two cat exposures, as well as increased anxiety expressed on the elevated plus maze, an exaggerated startle response, elevated heart rate and blood pressure, reduced growth rate and increased adrenal gland weight, relative to the vehicle-treated, non-stressed (control) rats. Neither amitriptyline nor clonidine was effective at blocking the entire cluster of stress-induced sequelae, and each agent produced adverse side effects in control subjects. Only the antidepressant tianeptine completely blocked the effects of psychosocial stress on all of the physiological and behavioral measures that were examined. These findings illustrate the differential effectiveness of these three treatments to block components of PTSD-like symptoms in rats, and in particular, reveal the profile of tianeptine as the most effective of all three agents. Published by Elsevier Inc.

  16. 'Sequential' Boron Neutron Capture Therapy (BNCT): A Novel Approach to BNCT for the Treatment of Oral Cancer in the Hamster Cheek Pouch Model

    International Nuclear Information System (INIS)

    Molinari, Ana J.; Pozzi, Emiliano C.C.; Hughes, Andrea Monti; Heber, Elisa M.; Garabalino, Marcela A.; Thorp, Silvia I.; Miller, Marcelo; Itoiz, Maria E.; Aromando, Romina F.; Nigg, David W.; Quintana, Jorge; Santa Cruz, Gustavo A.; Trivillin, Veronica A.; Schwint, Amanda E.

    2011-01-01

    In the present study we evaluated the therapeutic effect and/or potential radiotoxicity of the novel 'Tandem' Boron Neutron Capture Therapy (T-BNCT) for the treatment of oral cancer in the hamster cheek pouch model at RA-3 Nuclear Reactor. Two groups of animals were treated with 'Tandem BNCT', i.e. BNCT mediated by boronophenylalanine (BPA) followed by BNCT mediated by sodium decahydrodecaborate (GB-10) either 24 h (T-24h-BNCT) or 48 h (T-48h-BNCT) later. A total tumor dose-matched single application of BNCT mediated by BPA and GB-10 administered jointly ((BPA + GB-10)-BNCT) was administered to an additional group of animals. At 28 days post-treatment, T-24h-BNCT and T-48h-BNCT induced, respectively, overall tumor control (OTC) of 95% and 91%, with no statistically significant differences between protocols. Tumor response for the single application of (BPA + GB-10)-BNCT was 75%, significantly lower than for T-BNCT. The T-BNCT protocols and (BPA + GB-10)-BNCT induced reversible mucositis in dose-limiting precancerous tissue around treated tumors, reaching Grade 3/4 mucositis in 47% and 60% of the animals respectively. No normal tissue radiotoxicity was associated to tumor control for any of the protocols. 'Tandem' BNCT enhances tumor control in oral cancer and reduces or, at worst, does not increase, mucositis in dose-limiting precancerous tissue.

  17. Comparative Study of Compensatory Liver Regeneration in a Rat Model: Portal Vein Ligation Only versus Sequential Ligation of the Portal Vein and Hepatic Artery

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Soo Young [Dept. of Pathology, Dongnam Institute of Radiological and Medical Sciences, Busan (Korea, Republic of); Jeon, Gyeong Sik [Dept. of Radiology, CHA Bundang Medical Center, College of Medicine, CHA University, Seongnam (Korea, Republic of); Lee, Byung Mo [Dept. of Surgery, Seoul Paik Hospital, Inje University College of Medicine, Seoul (Korea, Republic of)

    2013-04-15

    To compare the volume change and the regenerative capacity between portal vein ligation (embolization) (PVL) and heterochronous PVL with hepatic artery ligation (HAL) in a rodent model. The animals were separated into three groups: group I, ligation of the left lateral and median portal vein branches; group II, completion of PVL, followed by ligation of the same branches of the hepatic artery after 48 h; control group, laparotomy without ligation was performed. Five rats from each group were sacrificed on 1, 3, 5, and 7 days after the operation. Volume change measurement, liver function tests and immunohistochemical analysis were performed. The volume of the nonligated lobe between groups I and II was not significantly different by day 5 and day 7. Mean alanine aminotransferase and total bilirubin levels were significantly higher in group II, while the albumin level was higher in group I. Both c-kit- and MIB-5-positive cells used in the activity detection of regeneration were more prevalent in group I on day 1, 3, and 5, with statistical significance. There was no operation related mortality. PVL alone is safe and effective in compensatory liver regeneration. Performing both PVL and HAL does not confer any additional benefits.

  18. Block That Pain!

    Science.gov (United States)

    … contrast, most pain relievers used for surgical procedures block activity in all types of neurons. This can …

  19. Bundle Branch Block

    Science.gov (United States)

    … known cause. Causes can include: Left bundle branch block: heart attacks (myocardial infarction); thickened, stiffened or weakened … (myocarditis); high blood pressure (hypertension). Right bundle branch block: a heart abnormality that's present at birth (congenital) …

  20. Selective nickel-catalyzed conversion of model and lignin-derived phenolic compounds to cyclohexanone-based polymer building blocks.

    Science.gov (United States)

    Schutyser, Wouter; Van den Bosch, Sander; Dijkmans, Jan; Turner, Stuart; Meledina, Maria; Van Tendeloo, Gustaaf; Debecker, Damien P; Sels, Bert F

    2015-05-22

    Valorization of lignin is essential for the economics of future lignocellulosic biorefineries. Lignin is converted into novel polymer building blocks through four steps: catalytic hydroprocessing of softwood to form 4-alkylguaiacols, their conversion into 4-alkylcyclohexanols, followed by dehydrogenation to form cyclohexanones, and Baeyer-Villiger oxidation to give caprolactones. The formation of alkylated cyclohexanols is one of the most difficult steps in the series. A liquid-phase process in the presence of nickel on CeO2 or ZrO2 catalysts is demonstrated herein to give the highest cyclohexanol yields. The catalytic reaction with 4-alkylguaiacols follows two parallel pathways with comparable rates: 1) ring hydrogenation with the formation of the corresponding alkylated 2-methoxycyclohexanol, and 2) demethoxylation to form 4-alkylphenol. Although subsequent phenol to cyclohexanol conversion is fast, the rate is limited for the removal of the methoxy group from 2-methoxycyclohexanol. Overall, this last reaction is the rate-limiting step and requires a sufficient temperature (>250 °C) to overcome the energy barrier. The effects of substrate reactivity (with respect to the type of alkyl chain) and of the catalyst properties (nickel loading and nickel particle size) on the reaction rates are reported in detail for the Ni/CeO2 catalyst. The best Ni/CeO2 catalyst reaches 4-alkylcyclohexanol yields over 80 %, is even able to convert real softwood-derived guaiacol mixtures and can be reused in subsequent experiments. A proof of principle of the projected cascade conversion of lignocellulose feedstock entirely into caprolactone is demonstrated by using Cu/ZrO2 for the dehydrogenation step to produce the resultant cyclohexanones (≈80 %) and tin-containing beta zeolite to form 4-alkyl-ε-caprolactones in high yields, according to a Baeyer-Villiger-type oxidation with H2O2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features of information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
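
    The "sequential structure" summary used for prediction can be illustrated by the simplest version of the idea: count transitions between decoded cognitive states and normalize them into a transition-probability matrix that can then be compared with the model's transition structure. The state labels and scan sequence below are invented.

    ```python
    import numpy as np

    # Sketch: build a transition-probability matrix over decoded cognitive
    # states; such a matrix can then be correlated with the transition
    # structure predicted by a cognitive model. Labels/data are illustrative.

    STATES = ["perceive", "plan", "act", "monitor"]

    def transition_matrix(sequence):
        index = {s: i for i, s in enumerate(STATES)}
        counts = np.zeros((len(STATES), len(STATES)))
        for a, b in zip(sequence, sequence[1:]):
            counts[index[a], index[b]] += 1
        return counts / counts.sum(axis=1, keepdims=True).clip(min=1)

    decoded_scans = ["perceive", "plan", "act", "monitor", "perceive", "plan",
                     "act", "act", "monitor", "plan", "act", "monitor"]
    print(np.round(transition_matrix(decoded_scans), 2))
    ```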

  2. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses are an essential feature of many neural circuits in the sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, which describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
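
    The generalized Lotka-Volterra firing-rate model referred to here is easy to simulate. The sketch below integrates da_i/dt = a_i(sigma_i - sum_j rho_ij a_j) with a small noise term and an asymmetric inhibition matrix, so the dominant unit switches in a reproducible order (winnerless competition); the parameter values are illustrative only, not those used in the paper.

    ```python
    import numpy as np

    # Sketch of a generalized Lotka-Volterra firing-rate network with asymmetric
    # inhibition: da_i/dt = a_i * (sigma_i - (rho @ a)_i), integrated with Euler
    # steps plus a small noise floor. Parameters are invented.

    rng = np.random.default_rng(2)
    N, dt, steps = 3, 0.01, 20000
    sigma = np.ones(N)
    rho = np.array([[1.0, 1.6, 0.5],
                    [0.5, 1.0, 1.6],
                    [1.6, 0.5, 1.0]])        # asymmetric inhibition

    a = np.full(N, 0.3)
    dominant = []
    for _ in range(steps):
        da = a * (sigma - rho @ a)
        a = np.clip(a + dt * da + 1e-4 * rng.standard_normal(N), 1e-9, None)
        if a.max() > 0.6:                    # one unit clearly dominant
            dominant.append(int(np.argmax(a)))

    # Collapse consecutive repeats to see the switching sequence among units.
    order = [d for i, d in enumerate(dominant) if i == 0 or d != dominant[i - 1]]
    print(order[:12])
    ```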

  3. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Buesch, D.C.; Nelson, J.E.; Dickerson, R.P.; Drake, R.M. II; San Juan, C.A.; Spengler, R.W.; Geslin, J.K.; Moyer, T.C.

    1996-01-01

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3 degree to 20 degree and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified-bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure-contour map from which isochore values of the subjacent model unit are subtracted to produce the structure contour map on the base of the unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are presented by surfaces (structure contour maps), and all surfaces are stored in the model. Surfaces can be converted to form volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) preliminary geometry of units for design of Exploratory Studies Facility and potential repository, and (3) performance assessment evaluations

  4. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Full Text Available Abstract Background There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power were calculated. Results A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P + A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P + A + B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P + A + B1 + B2 = 0.90). The explanatory power - a tool for assessing the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma

  5. Application of multi-block methods in cement production

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar

    2008-01-01

    Compressive strength at 1 day of Portland cement as a function of the microstructure of cement was statistically modelled by application of a multi-block regression method. The observation X-matrix was partitioned into four blocks, the first block representing the mineralogy, the second the particle size distribution, and the two last blocks the superficial microstructure analysed by differential thermo gravimetric analysis. The multi-block method is used to identify the role of each part. The score vectors of each block can be analysed separately or together with score vectors of other blocks. Stepwise regression is used to find the minimum number of variables of each block. The multi-block method proved useful in determining the modelling strength of each data block and finding the minimum number of variables within each data block.
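
    A rough sketch of the multi-block idea: partition the observation matrix into blocks, fit a latent-variable regression per block, and compare block score vectors and fit. The block names, sizes, and synthetic response below are placeholders, not the cement data set, and partial least squares is used here only as one possible block model.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n = 40
        # Hypothetical blocks of the observation matrix X (columns grouped by data source).
        blocks = {
            "mineralogy": rng.normal(size=(n, 6)),
            "particle_size": rng.normal(size=(n, 8)),
            "dtg_1": rng.normal(size=(n, 5)),
            "dtg_2": rng.normal(size=(n, 5)),
        }
        # Synthetic response, e.g. 1-day compressive strength (illustration only).
        y = blocks["mineralogy"] @ rng.normal(size=6) + 0.5 * blocks["particle_size"][:, 0]

        scores = {}
        for name, Xb in blocks.items():
            pls = PLSRegression(n_components=2).fit(Xb, y)
            scores[name] = pls.transform(Xb)      # block score vectors
            r2 = pls.score(Xb, y)                 # rough measure of the block's modelling strength
            print(f"{name}: R^2 = {r2:.2f}")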

  6. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  7. Inhibition of Pyk2 blocks lung inflammation and injury in a mouse model of acute lung injury

    Directory of Open Access Journals (Sweden)

    Duan Yingli

    2012-01-01

    Full Text Available Abstract Background Proline-rich tyrosine kinase 2 (Pyk2) is essential in neutrophil degranulation and chemotaxis in vitro. However, its effect on the process of lung inflammation and edema formation during LPS-induced acute lung injury (ALI) remains unknown. The goal of the present study was to determine the effect of inhibiting Pyk2 on LPS-induced acute lung inflammation and injury in vivo. Methods C57BL6 mice were given either 10 mg/kg LPS or saline intratracheally. Inhibition of Pyk2 was effected by intraperitoneal administration of TAT-Pyk2-CT 1 h before challenge. Cell counts, lung histology and protein concentration in bronchoalveolar lavage (BAL) were analyzed at 18 h after LPS treatment. KC and MIP-2 concentrations in BAL were measured by a mouse cytokine multiplex kit. The static lung compliance was determined by pressure-volume curve using a computer-controlled small animal ventilator. The extravasated Evans blue concentration in lung homogenate was determined spectrophotometrically. Results Intratracheal instillation of LPS induced significant neutrophil infiltration into the lung interstitium and alveolar space, which was attenuated by pre-treatment with TAT-Pyk2-CT. TAT-Pyk2-CT pretreatment also attenuated (1) myeloperoxidase content in lung tissues, (2) vascular leakage as measured by Evans blue dye extravasation in the lungs and the increase in protein concentration in bronchoalveolar lavage, and (3) the decrease in lung compliance. In each paradigm, treatment with the control protein TAT-GFP had no blocking effect. By contrast, production of the neutrophil chemokines MIP-2 and keratinocyte-derived chemokine in the bronchoalveolar lavage was not reduced by TAT-Pyk2-CT. Western blot analysis confirmed that tyrosine phosphorylation of Pyk2 in LPS-challenged lungs was reduced to control levels by TAT-Pyk2-CT pretreatment. Conclusions These results suggest that Pyk2 plays an important role in the development of acute lung injury in mice and

  8. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  9. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.

  10. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview of the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras, which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
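
    For readers new to this literature, the standard sequential product of two Hilbert-space effects A and B (operators with 0 ≤ A ≤ I and 0 ≤ B ≤ I) can be written as below; the sequential effect algebras mentioned in the abstract axiomatize the properties of this operation. (Standard definition, not quoted from the article.)

        \[
          A \circ B \;=\; A^{1/2}\, B\, A^{1/2}
        \]

    The operation is in general non-commutative, and A ∘ B is again an effect, i.e. 0 ≤ A ∘ B ≤ I.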

  11. Coastal protection using topological interlocking blocks

    Science.gov (United States)

    Pasternak, Elena; Dyskin, Arcady; Pattiaratchi, Charitha; Pelinovsky, Efim

    2013-04-01

    Coastal protection systems mainly rely on the self-weight of armour blocks to ensure their stability. We propose a system of interlocking armour blocks, which form plate-shaped assemblies. The shape and the position of the blocks are chosen in such a way as to impose kinematic constraints that prevent the blocks from being removed from the assembly. The topological interlocking shapes include simple convex blocks such as platonic solids, the most practical being tetrahedra, cubes and octahedra. Another class of topological interlocking blocks is the so-called osteomorphic blocks, which form plate-like assemblies tolerant to random block removal (almost 25% of blocks need to be removed for the assembly to lose integrity). Both classes require peripheral constraint, which can be provided either by the weight of the blocks or by post-tensioned internal cables. The interlocking assemblies provide increased stability because lifting one block involves lifting (and bending) the whole assembly. We model the effect of interlocking by introducing an equivalent additional self-weight of the armour blocks. This additional self-weight is proportional to the critical pressure needed to cause bending of the interlocking assembly when it loses stability. Using a beam approximation we find an equivalent stability coefficient for interlocking. It is found to be greater than the stability coefficient of a structure with similar blocks without interlocking. In the case when the peripheral constraint is provided by the weight of the blocks and for a slope angle of 45°, the effective stability coefficient for a structure of 100 blocks is 33% higher than the one for a similar structure without interlocking. Further increase in the stability coefficient can be reached by a specially constructed peripheral constraint system, for instance by using post-tension cables.

  12. Generalized Block Failure

    DEFF Research Database (Denmark)

    Jönsson, Jeppe

    2015-01-01

    Block tearing is considered in several codes as a pure block tension or a pure block shear failure mechanism. However in many situations the load acts eccentrically and involves the transfer of a substantial moment in combination with the shear force and perhaps a normal force. A literature study shows that no readily available tests with a well-defined substantial eccentricity have been performed. This paper presents theoretical and experimental work leading towards generalized block failure capacity methods. Simple combination of normal force, shear force and moment stress distributions along yield lines around the block leads to simple interaction formulas similar to other interaction formulas in the codes.

  13. Lyme Carditis: An Interesting Trip to Third-Degree Heart Block and Back

    Directory of Open Access Journals (Sweden)

    Maxwell Eyram Afari

    2016-01-01

    Full Text Available Carditis is an uncommon presentation of the early disseminated phase of Lyme disease. We describe the case of a young female who presented with erythema migrans and was found to have first-degree heart block which progressed to complete heart block within hours. After receiving ceftriaxone, there was complete resolution of the heart block in sequential fashion. Our case illustrates the importance of early recognition and anticipation of progressive cardiac conduction abnormalities in patients presenting with Lyme disease.

  14. Mathematical Modeling of Electrolyte Filtration through the Porous Cathode Blocks during Aluminum Electrolysis with Regard Interblock Seams

    Directory of Open Access Journals (Sweden)

    Orlov Anton S.

    2015-01-01

    Full Text Available This article investigates electrolyte filtration in the bottom of the aluminum electrolyzer cathode device using mathematical modeling. Penetration of molten electrolyte into the heat insulation part of the lining is one of the main reasons for premature electrolyzer shutdown, because it leads to bottom destruction and excessive heat loss. The problem is treated as two-phase filtration of incompressible immiscible liquids in an inhomogeneous non-deformable porous body. Verification of the model on the problem of water filtration in a porous medium has confirmed its adequacy. With the help of the developed mathematical model, the dynamics of electrolyte impregnation of the cathode device lining was determined for a given thermal balance of the bath. The research identified the speed of melt penetration into the bottom of the bath during service of the electrolyzer.

  15. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
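
    For reference, the telescoping identity underlying the multilevel estimator can be written as follows, with g_l denoting the quantity of interest computed at discretization level h_l (standard MLMC notation, not quoted from the article):

        \[
          \mathbb{E}\left[g_L\right] \;=\; \mathbb{E}\left[g_0\right] \;+\; \sum_{l=1}^{L} \mathbb{E}\left[g_l - g_{l-1}\right]
        \]

    Each term on the right is estimated separately; because the differences g_l - g_{l-1} have small variance at fine levels, most samples can be placed on the cheap coarse levels, which is the source of the computational saving. The SMC variant replaces i.i.d. samples of each term by properly weighted particles.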

  16. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  17. Sequential Scintigraphy in Renal Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

    1969-05-15

    Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 µCi of ¹³¹I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

  18. Sequential provisional implant prosthodontics therapy.

    Science.gov (United States)

    Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

    2012-01-01

    The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

  19. Social Influences in Sequential Decision Making.

    Directory of Open Access Journals (Sweden)

    Markus Schöbel

    Full Text Available People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  20. Social Influences in Sequential Decision Making

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448

  1. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure for three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  2. Social Influences in Sequential Decision Making.

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  3. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution on a single machine.
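
    A toy sketch of the two-stage idea: re-arrange the visiting path so that nodes whose search neighbourhoods do not overlap form a batch, and simulate each batch in parallel. The 1-D grid, neighbourhood radius, and greedy grouping rule below are illustrative assumptions, not the SISIM/SGSIM implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        n_nodes, radius = 20, 2            # hypothetical 1-D grid and neighbourhood radius
        path = rng.permutation(n_nodes)    # random simulation path over grid nodes

        def rearrange_into_batches(path, radius):
            """Greedily group consecutive path nodes into batches of non-conflicting
            nodes (nodes farther apart than the neighbourhood radius); the batch
            boundaries preserve the original visiting order."""
            batches, current = [], []
            for node in path:
                if any(abs(int(node) - int(u)) <= radius for u in current):
                    batches.append(current)
                    current = []
                current.append(int(node))
            if current:
                batches.append(current)
            return batches

        for i, batch in enumerate(rearrange_into_batches(path, radius)):
            # nodes within a batch could be simulated concurrently, one thread per node
            print(f"batch {i}: nodes {batch}")

    Because nodes within a batch lie outside each other's neighbourhoods, simulating them concurrently uses exactly the same conditioning data as the sequential order, which is why identical realizations can be reproduced.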

  4. Ultrasound-guided direct delivery of 3-bromopyruvate blocks tumor progression in an orthotopic mouse model of human pancreatic cancer.

    Science.gov (United States)

    Ota, Shinichi; Geschwind, Jean-Francois H; Buijs, Manon; Wijlemans, Joost W; Kwak, Byung Kook; Ganapathy-Kanniappan, Shanmugasundaram

    2013-06-01

    Studies in animal models of cancer have demonstrated that targeting tumor metabolism can be an effective anticancer strategy. Previously, we showed that inhibition of glucose metabolism by the pyruvate analog, 3-bromopyruvate (3-BrPA), induces anticancer effects both in vitro and in vivo. We have also documented that intratumoral delivery of 3-BrPA affects tumor growth in a subcutaneous tumor model of human liver cancer. However, the efficacy of such an approach in a clinically relevant orthotopic tumor model has not been reported. Here, we investigated the feasibility of ultrasound (US) image-guided delivery of 3-BrPA in an orthotopic mouse model of human pancreatic cancer and evaluated its therapeutic efficacy. In vitro, treatment of Panc-1 cells with 3-BrPA resulted in a dose-dependent decrease in cell viability. The loss of viability correlated with a dose-dependent decrease in the intracellular ATP level and lactate production, confirming that disruption of energy metabolism underlies these 3-BrPA-mediated effects. In vivo, US-guided delivery of 3-BrPA was feasible and effective as demonstrated by a marked decrease in tumor size on imaging. Further, the antitumor effect was confirmed by (1) a decrease in the proliferative potential by Ki-67 immunohistochemical staining and (2) the induction of apoptosis by terminal deoxynucleotidyl transferase-mediated deoxyuridine 5'-triphosphate nick end labeling staining. We therefore demonstrate the technical feasibility of US-guided intratumoral injection of 3-BrPA in a mouse model of human pancreatic cancer as well as its therapeutic efficacy. Our data suggest that this new therapeutic approach consisting of a direct intratumoral injection of antiglycolytic agents may represent an exciting opportunity to treat patients with pancreatic cancer.

  5. Losing a dime with a satisfied mind: positive affect predicts less search in sequential decision making.

    Science.gov (United States)

    von Helversen, Bettina; Mata, Rui

    2012-12-01

    We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved

  6. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
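
    A minimal sketch of permuted-block randomization with randomly selected block sizes for two arms; the arm labels, candidate block sizes, and seed below are arbitrary choices for illustration.

        import random

        def blocked_randomization(n_participants, block_sizes=(4, 6, 8),
                                  arms=("treatment", "control"), seed=42):
            """Assign participants to two arms using permuted blocks whose sizes are
            chosen at random, keeping allocation balanced while hard to predict."""
            rng = random.Random(seed)
            assignments = []
            while len(assignments) < n_participants:
                size = rng.choice(block_sizes)            # block size must be even for a 1:1 ratio
                block = [arms[0]] * (size // 2) + [arms[1]] * (size // 2)
                rng.shuffle(block)                        # permute assignments within the block
                assignments.extend(block)
            return assignments[:n_participants]

        allocation = blocked_randomization(25)
        print(allocation)
        print({arm: allocation.count(arm) for arm in ("treatment", "control")})

    Drawing the block size at random keeps the two arms nearly balanced at any interim point while making it harder for an unblinded investigator to anticipate the next assignment.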

  7. 31 CFR 595.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...

  8. A Block-Based Linear MMSE Noise Reduction with a High Temporal Resolution Modeling of the Speech Excitation

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, S. V.

    2005-01-01

    A comprehensive linear minimum mean squared error (LMMSE) approach for parametric speech enhancement is developed. The proposed algorithms aim at joint LMMSE estimation of signal power spectra and phase spectra, as well as exploitation of correlation between spectral components. The major cause of this interfrequency correlation is shown to be the prominent temporal power localization in the excitation of voiced speech. LMMSE estimators in the time domain and frequency domain are first formulated. To obtain the joint estimator, we model the spectral signal covariance matrix as a full covariance matrix instead of a diagonal covariance matrix as is the case in the Wiener filter derived under the quasi-stationarity assumption. To accomplish this, we decompose the signal covariance matrix into a synthesis filter matrix and an excitation matrix. The synthesis filter matrix is built from estimates of the all-pole model...
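
    As background for the covariance modelling described above, the generic LMMSE (Wiener-type) estimator of a zero-mean signal x observed in additive, uncorrelated zero-mean noise n, with y = x + n, takes the form below; allowing the signal covariance R_x to be a full matrix rather than diagonal is what lets the estimator exploit inter-frequency correlation. (Textbook expression, not quoted from the article.)

        \[
          \hat{\mathbf{x}} \;=\; \mathbf{R}_x \left( \mathbf{R}_x + \mathbf{R}_n \right)^{-1} \mathbf{y}
        \]

    Here R_x and R_n denote the signal and noise covariance matrices, respectively.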

  9. Exploring the Spatial-Temporal Disparities of Urban Land Use Economic Efficiency in China and Its Influencing Factors under Environmental Constraints Based on a Sequential Slacks-Based Model

    Directory of Open Access Journals (Sweden)

    Hualin Xie

    2015-07-01

    Full Text Available Using a sequential slack-based measure (SSBM) model, this paper analyzes the spatiotemporal disparities of urban land use economic efficiency (ULUEE) under environmental constraints, and its influencing factors, in 270 cities across China from 2003–2012. The main results are as follows: (1) The average ULUEE for Chinese cities is only 0.411, and out of the 270 cities, only six cities are always efficient in urban land use in the study period. Most cities have a lot of room to improve the economic output of secondary and tertiary industries, as well as environmental protection work; (2) The eastern region of China enjoys the highest ULUEE, followed by the western and central regions. Super-scale cities show the best performance of all four city scales, followed by large-scale, small-scale and medium-scale cities. Cities with relatively developed economies and less pollutant discharge always have better ULUEE; (3) The results of the slack variables analysis show that most cities have problems such as labor surplus, over-development, excessive pollutant discharge, economic output shortage and unreasonable use of funds, of which the unreasonable use of funds is the most serious one; (4) The regression results for the influencing factors show that improvements in per capita GDP and land use intensity help to raise ULUEE. The urbanization rate and the proportion of foreign enterprises’ output in the total output of the secondary and tertiary industries only have the same effect in some regions and city scales. The land management policy and land leasing policy have a negative impact on ULUEE in all three regions and four city scales; (5) Some targeted policy goals are proposed, including the reduction of surplus labor and paying more attention to environmental protection. Most importantly, effective implementation of land management policies from the central government, and stopping blind leasing of land to make up the local government’s financial deficit, would be very

  10. Advances in sequential data assimilation and numerical weather forecasting: An Ensemble Transform Kalman-Bucy Filter, a study on clustering in deterministic ensemble square root filters, and a test of a new time stepping scheme in an atmospheric model

    Science.gov (United States)

    Amezcua, Javier

    This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy (EnKBF) filter is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not add further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble for different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion

  11. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
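
    A toy illustration of ranking candidate measurements by an information metric: for a scalar Gaussian prior, each candidate observation operator is scored by the relative entropy (KL divergence) between prior and posterior after a Kalman variance update. The prior, observation sensitivities, and noise levels below are made-up numbers, not the unsaturated-flow setup of the paper.

        import numpy as np

        def kl_gaussian(mu0, var0, mu1, var1):
            """Relative entropy D( N(mu1, var1) || N(mu0, var0) ) between two 1-D Gaussians."""
            return 0.5 * (var1 / var0 + (mu1 - mu0) ** 2 / var0 - 1.0 + np.log(var0 / var1))

        # Hypothetical scalar parameter with Gaussian prior.
        mu_prior, var_prior = 0.0, 4.0
        # Candidate measurements, each with sensitivity h and noise variance r (y = h*theta + noise).
        candidates = {"loc_A": (0.2, 1.0), "loc_B": (1.0, 1.0), "loc_C": (1.0, 0.1)}

        scores = {}
        for name, (h, r) in candidates.items():
            # Kalman update of the variance; the posterior mean depends on the actual
            # datum (not yet available), so candidates are scored by variance reduction alone.
            gain = var_prior * h / (h * h * var_prior + r)
            var_post = (1.0 - gain * h) * var_prior
            scores[name] = kl_gaussian(mu_prior, var_prior, mu_prior, var_post)

        best = max(scores, key=scores.get)
        print(scores, "-> most informative candidate:", best)

    In the sequential setting, the selected measurement would be assimilated by the EnKF and the scoring repeated with the updated ensemble before choosing the next observation.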

  12. Blocking antibodies induced by immunization with a hypoallergenic parvalbumin mutant reduce allergic symptoms in a mouse model of fish allergy.

    Science.gov (United States)

    Freidl, Raphaela; Gstoettner, Antonia; Baranyi, Ulrike; Swoboda, Ines; Stolz, Frank; Focke-Tejkl, Margarete; Wekerle, Thomas; van Ree, Ronald; Valenta, Rudolf; Linhart, Birgit

    2017-06-01

    Fish is a frequent elicitor of severe IgE-mediated allergic reactions. Besides avoidance, there is currently no allergen-specific therapy available. Hypoallergenic variants of the major fish allergen, parvalbumin, for specific immunotherapy based on mutation of the 2 calcium-binding sites have been developed. This study sought to establish a mouse model of fish allergy resembling human disease and to investigate whether mouse and rabbit IgG antibodies induced by immunization with a hypoallergenic mutant of the major carp allergen protect against allergic symptoms in sensitized mice. C3H/HeJ mice were sensitized with recombinant wildtype Cyp c 1 or carp extract by intragastric gavage. Antibody, cellular immune responses, and epitope specificity in sensitized mice were investigated by ELISA, rat basophil leukemia assay, T-cell proliferation experiments using recombinant wildtype Cyp c 1, and overlapping peptides spanning the Cyp c 1 sequence. Anti-hypoallergenic Cyp c 1 mutant mouse and rabbit sera were tested for their ability to inhibit IgE recognition of Cyp c 1, Cyp c 1-specific basophil degranulation, and Cyp c 1-induced allergic symptoms in the mouse model. A mouse model of fish allergy mimicking human disease as closely as possible regarding IgE epitope recognition and symptoms was established. Administration of antisera generated in mice and rabbits by immunization with a hypoallergenic Cyp c 1 mutant inhibited IgE binding to Cyp c 1, Cyp c 1-induced basophil degranulation, and allergic symptoms caused by allergen challenge in sensitized mice. Antibodies induced by immunization with a hypoallergenic Cyp c 1 mutant protect against allergic reactions in a murine model of fish allergy. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Blocking antibodies induced by immunization with a hypoallergenic parvalbumin mutant reduce allergic symptoms in a mouse model of fish allergy

    OpenAIRE

    Freidl, Raphaela; Gstoettner, Antonia; Baranyi, Ulrike; Swoboda, Ines; Stolz, Frank; Focke-Tejkl, Margarete; Wekerle, Thomas; van Ree, Ronald; Valenta, Rudolf; Linhart, Birgit

    2016-01-01

    Background Fish is a frequent elicitor of severe IgE-mediated allergic reactions. Beside avoidance, there is currently no allergen-specific therapy available. Hypoallergenic variants of the major fish allergen, parvalbumin, for specific immunotherapy based on mutation of the 2 calcium-binding sites have been developed. Objectives This study sought to establish a mouse model of fish allergy resembling human disease and to investigate whether mouse and rabbit IgG antibodies induced by immunizat...

  14. Tranilast prevents renal interstitial fibrosis by blocking mast cell infiltration in a rat model of diabetic kidney disease.

    Science.gov (United States)

    Yin, Dan-Dan; Luo, Jun-Hui; Zhao, Zhu-Ye; Liao, Ying-Jun; Li, Ying

    2018-05-01

    Renal interstitial fibrosis is a final pathway that is observed in various types of kidney diseases, including diabetic kidney disease (DKD). The present study investigated the effect of tranilast on renal interstitial fibrosis and the association between its role and mast cell infiltration in a rat model of DKD. A total of 30 healthy 6‑week‑old male Sprague‑Dawley rats were randomly divided into the following four groups: Normal control group; DKD model group; low‑dose tranilast group (200 mg/kg/day); and high‑dose tranilast group (400 mg/kg/day). The morphological alterations of tubulointerstitial fibrosis were evaluated by Masson's trichrome staining, while mast cell infiltration into the renal tubular interstitium was measured by toluidine blue staining and complement C3a receptor 1 (C3aR) immunohistochemical staining (IHC). The expression of fibronectin (FN), collagen I (Col‑I), stem cell factor (SCF) and proto‑oncogene c‑kit (c‑kit) was detected by IHC, western blotting and reverse transcription‑quantitative‑polymerase chain reaction. The results demonstrated that tubulointerstitial fibrosis and mast cell infiltration were observed in DKD model rats, and this was improved dose‑dependently in the tranilast treatment groups. The expression of FN, Col‑I, SCF and c‑kit mRNA and protein was upregulated in the tubulointerstitium of DKD model rats compared with the normal control rats, and tranilast inhibited the upregulated expression of these markers. Furthermore, the degree of SCF and c‑kit expression demonstrated a significant positive correlation with C3aR‑positive mast cells and the markers of renal interstitial fibrosis. The results of the present study indicate that mast cell infiltration may promote renal interstitial fibrosis via the SCF/c‑kit signaling pathway. Tranilast may prevent renal interstitial fibrosis through inhibition of mast cell infiltration mediated through the SCF/c-kit signaling pathway.

  15. Nuclear geyser model of the origin of life: Driving force to promote the synthesis of building blocks of life

    Directory of Open Access Journals (Sweden)

    Toshikazu Ebisuzaki

    2017-03-01

    Full Text Available We propose the nuclear geyser model to elucidate an optimal site to bear the first life. Our model overcomes the difficulties that previously proposed models have encountered. A nuclear geyser is a geyser driven by a natural nuclear reactor, which was likely common on the Hadean Earth because of a much higher abundance of 235U as nuclear fuel. The nuclear geyser supplies the following: (1) high-density ionizing radiation to promote chemical chain reactions, so that even tar can be used as intermediate material to restart chemical reactions; (2) a system to maintain the circulation of material and energy, which includes cyclic environmental conditions (warm/cool, dry/wet, etc.) to enable the production of complex organic compounds; (3) a temperature lower than 100 °C, so as not to break down macromolecular organic compounds; (4) a locally reductive environment, depending on the rock types exposed along the geyser wall; and (5) a container to confine and accumulate volatile chemicals. These five factors are the necessary conditions that the birth place of life must satisfy. Only the nuclear geyser can meet all five, in contrast to previously proposed birth sites such as the tidal flat, submarine hydrothermal vent, and outer space. The nuclear reactor and associated geyser, which maintain the circulations of material and energy with the surrounding environment, are regarded as the nuclear geyser system that enables numerous kinds of chemical reactions to synthesize complex organic compounds, and where the most primitive metabolism could be generated.

  16. Polymer flooding effect of seepage characteristics of the second tertiary combined model of L oilfield block B

    Directory of Open Access Journals (Sweden)

    Huan ZHAO

    2015-06-01

    Full Text Available The second tertiary combined model is applied to develop the second- and third-type reservoirs, which have a larger number of oil layers and stronger anisotropy than the regular main reservoir developed with polymer injection, and whose seepage characteristics under polymer injection after water drive show a remarkable difference. In addition, this development appears to have a larger effect on the development and production of remaining oil. By simulating the second tertiary combined model with reservoir numerical simulation under different polymer molecular weights, polymer concentrations and polymer injection rates during the polymer injection period, conclusions are drawn on how the pressure and water saturation of the original and added perforated intervals are influenced. The results show that the polymer molecular weight influences the water saturation of the added perforated interval; the polymer concentration has a significant impact on reservoir pressure; and the polymer injection rate has a great influence on the separation rate of the original and added perforated intervals. This research provides firm scientific evidence for the theory of the second tertiary combined model to develop these reservoirs and enhance the oil injection-production rate.

  17. The relationship of the Yucca Mountain repository block to the regional ground-water system: A geochemical model

    International Nuclear Information System (INIS)

    Matuska, N.A.; Hess, J.W.

    1989-08-01

    Yucca Mountain, in southern Nevada, is being studied by the Department of Energy and the State of Nevada as the site of a high-level nuclear waste repository. Geochemical and isotopic modeling were used in this study to define the relationship of the volcanic tuff aquifers and aquitards to the underlying regional carbonate ground-water system. The chemical evolution of a ground water as it passes through a hypothetical tuffaceous aquifer was developed using the computer models PHREEQE, WATEQDR and BALANCE. The tuffaceous system was divided into five parts, with specific mineralogies, reaction steps and temperatures. The initial solution was an analysis of a soil water from Rainier Mesa. The ending solution in each part became the initial solution in the next part. Minerals consisted of zeolites, smectites, authigenic feldspars and quartz polymorphs from described diagenetic mineral zones. Reaction steps were ion exchange with zeolites. The solution from the final zone, Part V, was chosen as most representative, in terms of pH, element molalities and mineral solubilities, of tuffaceous water. This hypothetical volcanic water from Part V was mixed with water from the regional carbonate aquifer, and the results compared to analyses of Yucca Mountain wells. Mixing and modeling attempts were conducted on wells in which studies indicated upward flow

  18. Sequential activation of CD8+ T cells in the draining lymph nodes in response to pulmonary virus infection.

    Science.gov (United States)

    Yoon, Heesik; Legge, Kevin L; Sung, Sun-sang J; Braciale, Thomas J

    2007-07-01

    We have used a TCR-transgenic CD8+ T cell adoptive transfer model to examine the tempo of T cell activation and proliferation in the draining lymph nodes (DLN) in response to respiratory virus infection. The T cell response in the DLN differed for mice infected with different type A influenza strains with the onset of T cell activation/proliferation to the A/JAPAN virus infection preceding the A/PR8 response by 12-24 h. This difference in T cell activation/proliferation correlated with the tempo of accelerated respiratory DC (RDC) migration from the infected lungs to the DLN in response to influenza virus infection, with the migrant RDC responding to the A/JAPAN infection exhibiting a more rapid accumulation in the lymph nodes (i.e., peak migration for A/JAPAN at 18 h, A/PR8 at 24-36 h). Furthermore, in vivo administration of blocking anti-CD62L Ab at various time points before/after infection revealed that the virus-specific CD8+ T cells entered the DLN and activated in a sequential "conveyor belt"-like fashion. These results indicate that the tempo of CD8+ T cell activation/proliferation after viral infection is dependent on the tempo of RDC migration to the DLN and that T cell activation occurs in an ordered sequential fashion.

  19. A 3D heat conduction model for block-type high temperature reactors and its implementation into the code DYN3D

    International Nuclear Information System (INIS)

    Baier, Silvio; Kliem, Soeren; Rohde, Ulrich

    2011-01-01

    The gas-cooled high temperature reactor is a concept to produce energy at high temperatures with a high level of inherent safety. It attracts special interest due to, e.g., its high thermal efficiency and the possibility of hydrogen production. In addition to the PBMR (Pebble Bed Modular Reactor), the (V)HTR (very high temperature reactor) concept has been established. The basic design of a prismatic HTR consists of the following elements. The fuel is coated with four layers of isotropic materials. These so-called TRISO particles are dispersed into compacts which are placed in a graphite block matrix. The graphite matrix additionally contains holes for the coolant gas. A one-dimensional model is sufficient to describe (the radial) heat transfer in LWRs. But temperature gradients in a prismatic HTR can occur in the axial as well as the radial direction, since regions with different heat source release and with different coolant temperature heat-up are coupled through the graphite matrix elements. Furthermore, heat transfer into reflector elements is possible. DYN3D is a code system for coupled neutronics and thermal-hydraulics core calculations developed at the Helmholtzzentrum Dresden-Rossendorf. Concerning neutronics, DYN3D consists of a two-group and multi-group diffusion approach based on nodal expansion methods. Furthermore, a 1D thermal-hydraulics model for parallel coolant flow channels is included. The DYN3D code was extensively verified and validated via numerous numerical and experimental benchmark problems. That includes the NEA CRP benchmarks for PWR and BWR, the Three Mile Island 1 main steam line break and the Peach Bottom Turbine Trip benchmarks, as well as measurements carried out in an original-size VVER-1000 mock-up. An overview of the verification and validation activities can be found. Presently a DYN3D-HTR version is under development. It involves a 3D heat conduction model to deal with higher-(than one)-dimensional effects of heat transfer and heat conduction in

  20. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)