WorldWideScience

Sample records for sequential blocking model

  1. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Vol. 10, No. 1 (2013), pp. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords: Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  2. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping]

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  3. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  4. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015
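
    As a concrete illustration of the model-selection idea summarized above (not the authors' implementation), the following Python sketch maintains a posterior over two hypothetical causal integration rules, a noisy-OR rule and an additive linear-sum rule, and updates it trial by trial using a grid approximation over causal strengths. The toy trial sequence, the grid, and the uniform priors are illustrative assumptions.

    # Minimal sketch: Bayesian selection between two causal integration rules
    # (noisy-OR vs. additive linear-sum), updated trial by trial from
    # sequentially presented cue-outcome data. Toy data and the grid
    # approximation are illustrative assumptions, not the paper's code.
    import itertools
    import numpy as np

    def noisy_or(w, cues):
        """P(outcome = 1) under a noisy-OR combination of the present cues."""
        return 1.0 - np.prod(1.0 - w * cues)

    def linear_sum(w, cues):
        """P(outcome = 1) under an additive combination, clipped to [0, 1]."""
        return np.clip(np.dot(w, cues), 0.0, 1.0)

    def sequential_model_posterior(trials, rules, grid):
        """Posterior over integration rules after each trial (uniform priors)."""
        log_lik = {name: np.zeros(len(grid)) for name in rules}  # per weight vector
        history = []
        for cues, outcome in trials:
            for name, rule in rules.items():
                p1 = np.array([rule(w, cues) for w in grid])
                p = p1 if outcome == 1 else 1.0 - p1
                log_lik[name] += np.log(np.clip(p, 1e-12, None))
            # Marginal likelihood of a rule = average likelihood over its weight grid.
            logml = {n: np.logaddexp.reduce(v) - np.log(len(grid))
                     for n, v in log_lik.items()}
            z = np.logaddexp.reduce(list(logml.values()))
            history.append({n: np.exp(v - z) for n, v in logml.items()})
        return history

    # Two binary cues; causal strengths on an 11-point grid in [0, 1].
    grid = [np.array(w) for w in itertools.product(np.linspace(0, 1, 11), repeat=2)]
    rules = {"noisy-OR": noisy_or, "linear": linear_sum}
    # Toy sequence: each cue alone produces the outcome about half the time,
    # while the compound produces it more often but sub-additively.
    trials = ([(np.array([1, 0]), 1)] * 3 + [(np.array([1, 0]), 0)] * 3 +
              [(np.array([0, 1]), 1)] * 3 + [(np.array([0, 1]), 0)] * 3 +
              [(np.array([1, 1]), 1)] * 5 + [(np.array([1, 1]), 0)] * 1)
    for i, post in enumerate(sequential_model_posterior(trials, rules, grid), 1):
        print(f"trial {i:2d}  P(noisy-OR) = {post['noisy-OR']:.2f}  "
              f"P(linear) = {post['linear']:.2f}")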

  5. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
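
    As an illustration of the latency-based prediction at the heart of the SCM as described above, the short Python sketch below estimates the predicted preference for option A over option B as the probability that a latency drawn from A's single-option (training) trials is shorter than one drawn from B's. The gamma-distributed latencies are made-up numbers, not the starling data.

    # Sketch of the Sequential Choice Model's prediction step: preference for A
    # over B in a choice test is estimated as the probability that a latency
    # resampled from A's single-option trials beats one resampled from B's.
    # The latency samples below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    latencies_A = rng.gamma(shape=2.0, scale=0.8, size=200)   # the faster option
    latencies_B = rng.gamma(shape=2.0, scale=1.4, size=200)   # the slower option

    def predicted_preference(lat_x, lat_y, n_samples=100_000, rng=rng):
        """P(choose x over y) = P(latency_x < latency_y); ties split evenly."""
        x = rng.choice(lat_x, size=n_samples, replace=True)
        y = rng.choice(lat_y, size=n_samples, replace=True)
        return np.mean(x < y) + 0.5 * np.mean(x == y)

    print(f"Predicted P(A over B) = {predicted_preference(latencies_A, latencies_B):.2f}")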

  6. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
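
    The following toy Python sketch illustrates the kind of probabilistic multivariate analysis described above: uncertain transition probabilities of a small two-state, two-action MDP are drawn from assumed Beta distributions, the MDP is re-solved by value iteration for each draw, and the fraction of draws in which the base-case optimal policy remains optimal is reported as a rough policy acceptability. The MDP, priors, and rewards are invented for illustration and are not the authors' case study.

    # Toy probabilistic sensitivity analysis for a sequential decision model:
    # sample uncertain MDP parameters, re-solve the MDP for each draw, and
    # report how often the base-case optimal policy stays optimal.
    # The 2-state/2-action MDP and the Beta priors are illustrative assumptions;
    # only the transition probabilities are treated as uncertain here.
    import numpy as np

    N_STATES, GAMMA = 2, 0.95
    rng = np.random.default_rng(1)

    def value_iteration(P, R, gamma=GAMMA, tol=1e-8):
        """P[a, s, s'] transition tensor, R[s, a] rewards; returns an optimal policy."""
        V = np.zeros(N_STATES)
        while True:
            Q = R + gamma * np.einsum("ask,k->sa", P, V)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return Q.argmax(axis=1)
            V = V_new

    R = np.array([[1.0, 0.7],    # state 0 ("well"): action 1 pays a treatment cost
                  [0.0, -0.3]])  # state 1 ("sick")

    def transition_tensor(p_stay_a0, p_stay_a1):
        return np.array([[[p_stay_a0, 1 - p_stay_a0], [0.1, 0.9]],
                         [[p_stay_a1, 1 - p_stay_a1], [0.2, 0.8]]])

    base_policy = value_iteration(transition_tensor(0.80, 0.90), R)  # prior means

    n_draws, matches = 2000, 0
    for _ in range(n_draws):
        P = transition_tensor(rng.beta(80, 20), rng.beta(90, 10))  # means 0.80, 0.90
        matches += np.array_equal(value_iteration(P, R), base_policy)
    print(f"Policy acceptability: {matches / n_draws:.1%} of draws keep the base-case policy")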

  7. Evaluation of transversus abdominis plane block for renal transplant recipients - A meta-analysis and trial sequential analysis of published studies.

    Science.gov (United States)

    Singh, Preet Mohinder; Borle, Anuradha; Makkar, Jeetinder Kaur; Trisha, Aanjan; Sinha, Aashish

    2018-01-01

    Patients undergoing renal transplant (RT) have altered drug/opioid pharmacokinetics. The transversus abdominis plane (TAP) block in renal transplant recipients has recently been evaluated for its analgesic and opioid-sparing potential in many trials. Studies comparing the TAP block to conventional analgesic regimens for RT were searched. Comparisons were made for total opioids consumed (as morphine equivalents) during the first postoperative 24 h (primary objective), the intraoperative period, and the immediate postoperative period. Pain scores and postoperative nausea and vomiting (PONV) were also evaluated. Trial sequential analysis (TSA) was used to quantify the strength of the analysis. Ten trials, with 258 and 237 patients in the control and TAP-block groups, respectively, were included. The TAP block decreased 24-h opioid consumption (reported in nine trials) by 14.61 ± 4.34 mg (a 42.7% reduction; random-effects model, P < …); opioid consumption also decreased by 2.06 ± 0.63 mg (a 27.8% reduction; random-effects model, P < …). The TAP block thus reduces opioid consumption in RT recipients. Persistent and better pain control is achieved when the TAP block is used. Benefits of the TAP block extend beyond analgesia alone, as it also decreases the 24-h incidence of postoperative nausea and vomiting. The technique of the block needs standardization for RT recipients.

  8. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  9. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  10. Block of GABA(A) receptor ion channel by penicillin: electrophysiological and modeling insights toward the mechanism.

    Science.gov (United States)

    Rossokhin, Alexey V; Sharonova, Irina N; Bukanova, Julia V; Kolbaev, Sergey N; Skrebitsky, Vladimir G

    2014-11-01

    GABA(A) receptors (GABA(A)R) mainly mediate fast inhibitory neurotransmission in the central nervous system. Different classes of modulators target GABA(A)R properties. Penicillin G (PNG) belongs to the class of noncompetitive antagonists blocking the open GABA(A)R and is a prototype of β-lactam antibiotics. In this study, we combined electrophysiological and modeling approaches to investigate the peculiarities of PNG blockade of GABA-activated currents recorded from isolated rat Purkinje cells and to predict the PNG binding site. Whole-cell patch-clamp recording and fast application system was used in the electrophysiological experiments. PNG block developed after channel activation and increased with membrane depolarization suggesting that the ligand binds within the open channel pore. PNG blocked stationary component of GABA-activated currents in a concentration-dependent manner with IC50 value of 1.12 mM at -70 mV. The termination of GABA and PNG co-application was followed by a transient tail current. Protection of the tail current from bicuculline block and dependence of its kinetic parameters on agonist affinity suggest that PNG acts as a sequential open channel blocker that prevents agonist dissociation while the channel remains blocked. We built the GABA(A)R models based on nAChR and GLIC structures and performed an unbiased systematic search of the PNG binding site. Monte-Carlo energy minimization was used to find the lowest energy binding modes. We have shown that PNG binds close to the intracellular vestibule. In both models the maximum contribution to the energy of ligand-receptor interactions revealed residues located on the level of 2', 6' and 9' rings formed by a bundle of M2 transmembrane segments, indicating that these residues most likely participate in PNG binding. The predicted structural models support the described mechanism of PNG block. Copyright © 2014 Elsevier Inc. All rights reserved.
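
    To make the sequential open-channel block mechanism described above concrete, the Python sketch below integrates a minimal three-state kinetic scheme (closed, open, open-blocked) in which the blocker traps the agonist-bound open state; turning off agonist and blocker after a period of co-application reproduces a transient tail current, because blocked channels must pass back through the open state before closing. All rate constants and the application protocol are arbitrary illustrative values, not parameters fitted in the study.

    # Minimal three-state kinetic sketch of sequential open-channel block:
    #   C (closed, agonist-bound) <-> O (open) <-> B (open, blocked)
    # A blocked channel must reopen before the agonist can dissociate, which
    # produces a transient "tail current" after washout of agonist + blocker.
    # All rates and the protocol are illustrative, not fitted values.
    import numpy as np

    def simulate(t_end=1.0, t_wash=0.5, dt=1e-5):
        beta, alpha = 500.0, 100.0      # C -> O and O -> C rates (1/s)
        k_on, k_off = 2000.0, 50.0      # O -> B and B -> O rates (1/s)
        state = np.array([1.0, 0.0, 0.0])   # all channels start closed
        ts, open_frac = [], []
        for step in range(int(t_end / dt)):
            t = step * dt
            drug_on = t < t_wash        # agonist + blocker co-applied, then washed out
            b = beta if drug_on else 0.0        # no reopening from C after washout
            kon = k_on if drug_on else 0.0      # no further block after washout
            C, O, B = state
            dC = alpha * O - b * C
            dO = b * C - alpha * O + k_off * B - kon * O
            dB = kon * O - k_off * B
            state = state + dt * np.array([dC, dO, dB])
            ts.append(t)
            open_frac.append(state[1])
        return np.array(ts), np.array(open_frac)

    t, p_open = simulate()
    wash = t >= 0.5
    print(f"open fraction at end of co-application: {p_open[~wash][-1]:.3f}")
    print(f"peak open fraction during washout (tail current): {p_open[wash].max():.3f}")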

  11. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low...... as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since...... the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  12. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...

  13. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  14. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. Sequential radioimmunotherapy with 177Lu- and 211At-labeled monoclonal antibody BR96 in a syngeneic rat colon carcinoma model

    DEFF Research Database (Denmark)

    Eriksson, Sophie E; Elgström, Erika; Bäck, Tom

    2014-01-01

    for small, established tumors. A combination of such radionuclides may be successful in regimens of radioimmunotherapy. In this study, rats were treated by sequential administration of first a 177Lu-labeled antibody, followed by a 211At-labeled antibody 25 days later. METHODS: Rats bearing solid colon...... carcinoma tumors were treated with 400 MBq/kg body weight 177Lu-BR96. After 25 days, three groups of animals were given either 5 or 10 MBq/kg body weight of 211At-BR96 simultaneously with or without a blocking agent reducing halogen uptake in normal tissues. Control animals were not given any 211At-BR96....... The rats suffered from reversible myelotoxicity after treatment. CONCLUSIONS: Sequential administration of 177Lu-BR96 and 211At-BR96 resulted in tolerable toxicity providing halogen blocking but did not enhance the therapeutic effect....

  16. Recovery from distal ulnar motor conduction block injury: serial EMG studies.

    Science.gov (United States)

    Montoya, Liliana; Felice, Kevin J

    2002-07-01

    Acute conduction block injuries often result from nerve compression or trauma. The temporal pattern of clinical, electrophysiologic, and histopathologic changes following these injuries has been extensively studied in experimental animal models but not in humans. Our recent evaluation of a young man with an injury to the deep motor branch of the ulnar nerve following nerve compression from weightlifting exercises provided the opportunity to follow the course and recovery of a severe conduction block injury with sequential nerve conduction studies. The conduction block slowly and completely resolved, as did the clinical deficit, over a 14-week period. The reduction in conduction block occurred at a linear rate of -6.1% per week. Copyright 2002 Wiley Periodicals, Inc.

  17. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  18. Modelling of multi-block data

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Svinning, K.

    2006-01-01

    Here is presented a unified approach to modelling multi-block regression data. The starting point is a partition of the data X into L data blocks, X = (X-1, X-2,...X-L), and the data Y into M data-blocks, Y = (Y-1, Y-2,...,Y-M). The methods of linear regression, X -> Y, are extended to the case...... of a linear relationship between each X-i and Y-j. X-i -> Y-j. A modelling strategy is used to decide if the residual X-i should take part in the modelling of one or more Y(j)s. At each step the procedure of finding score vectors is based on well-defined optimisation procedures. The principle of optimisation...... is based on that the score vectors should give the sizes of the resulting Y(j)s loading vectors as large as possible. The partition of X and Y are independent of each other. The choice of Y-j can be X-j, Y-i = X-i, thus including the possibility of modelling X -> X-i,i=1,...,L. It is shown how...

  19. Development of computer-aided software engineering tool for sequential control of JT-60U

    International Nuclear Information System (INIS)

    Shimono, M.; Akasaka, H.; Kurihara, K.; Kimura, T.

    1995-01-01

    Discharge sequential control (DSC) is an essential control function for the intermittent and pulse discharge operation of a tokamak device, so that many subsystems may work with each other in correct order and/or synchronously. In the development of the DSC program, block diagrams of logical operation for sequential control are illustrated in its design at first. Then, the logical operators and I/O's which are involved in the block diagrams are compiled and converted to a certain particular form. Since the block diagrams of the sequential control amount to about 50 sheets in the case of the JT-60 upgrade tokamak (JT-60U) high power discharge, and the above steps of the development have been performed manually so far, a great effort has been required for the program development. In order to remove inefficiency in such development processes, a computer-aided software engineering (CASE) tool has been developed on a UNIX workstation. This paper reports how the authors designed it for the development of the sequential control programs. The tool is composed of the following three tools: (1) Automatic drawing tool, (2) Editing tool, and (3) Trace tool. This CASE tool, an object-oriented programming tool having graphical formalism, can powerfully accelerate the cycle for the development of the sequential control function commonly associated with pulse discharge in a tokamak fusion device.

  20. Use of sequential diagnostic pain blocks in a patient of posttraumatic complex regional pain syndrome-not otherwise specified complicated by myofascial trigger points and thoracolumbar pain syndrome

    Directory of Open Access Journals (Sweden)

    Kailash Kothari

    2017-01-01

    Full Text Available We are presenting a case of posttraumatic lower-limb complex regional pain syndrome - not otherwise specified (CRPS-NOS). As it was not treated in the acute phase, the pain became chronic and was complicated by myofascial and thoracolumbar pain syndrome. This case posed a diagnostic challenge. We used sequential diagnostic pain blocks to identify the pain generators and successfully treat the patient: diagnostic blocks were applied step by step to identify and treat the pain generators (T12, L1, and L2 facet joints), a lumbar sympathetic block was used for the CRPS-NOS, and trigger point injection with dry needling was used for the myofascial pain syndrome. This case highlights the fact that additional pain generators unrelated to the original pain may complicate the presentation. Identifying these pain generators requires out-of-the-box thinking and a high index of suspicion.

  1. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance...... is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  2. Models of sequential decision making in consumer lending

    OpenAIRE

    Kanshukan Rajaratnam; Peter A. Beling; George A. Overstreet

    2016-01-01

    Abstract In this paper, we introduce models of sequential decision making in consumer lending. From the definition of adverse selection in static lending models, we show that homogenous borrowers take-up offers at different instances of time when faced with a sequence of loan offers. We postulate that bounded rationality and diverse decision heuristics used by consumers drive the decisions they make about credit offers. Under that postulate, we show how observation of early decisions in a seq...

  3. Sequential Prediction of Literacy Achievement for Specific Learning Disabilities Contrasting in Impaired Levels of Language in Grades 4 to 9.

    Science.gov (United States)

    Sanders, Elizabeth A; Berninger, Virginia W; Abbott, Robert D

    Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in Grades 4 through 9 ( N = 103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n = 25), word reading and spelling (dyslexia, n = 60), or oral and written language (oral and written language learning disabilities, n = 18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A five-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word-form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of variance. Orthographic word-form coding uniquely predicted nearly every measure, whereas attention switching uniquely predicted only reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and oral and written language learning disabilities were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed.
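
    The following small Python simulation illustrates the five-block sequential regression strategy summarized above: predictor blocks are entered in a fixed order and the increment in R-squared contributed by each block is recorded. The block names, dimensionalities, and data are simulated placeholders, not the study's measures.

    # Sequential ("hierarchical") regression sketch: enter predictor blocks in a
    # fixed order and record the increment in R^2 each block adds, mirroring the
    # five-block strategy described above. Data and block names are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 103
    blocks = {
        "translation":      rng.normal(size=(n, 2)),   # Block 1
        "word_form_coding": rng.normal(size=(n, 2)),   # Block 2
        "loops":            rng.normal(size=(n, 2)),   # Block 3
        "attention":        rng.normal(size=(n, 2)),   # Block 4
        "sld_group":        rng.integers(0, 3, size=(n, 1)).astype(float),  # Block 5
    }
    # Simulated outcome loading mostly on the first two blocks.
    y = (blocks["translation"] @ [0.6, 0.4]
         + blocks["word_form_coding"] @ [0.8, 0.5]
         + rng.normal(scale=1.0, size=n))

    def r_squared(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    X, prev_r2 = np.empty((n, 0)), 0.0
    for name, Xb in blocks.items():
        X = np.column_stack([X, Xb])
        r2 = r_squared(X, y)
        print(f"+ block {name:16s}  R^2 = {r2:.3f}  (increment = {r2 - prev_r2:.3f})")
        prev_r2 = r2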

  4. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman...... corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links...

  5. Sequential crystallization and morphology of triple crystalline biodegradable PEO-b-PCL-b-PLLA triblock terpolymers

    KAUST Repository

    Palacios, Jordana; Mugica, Agurtzane; Zubitur, Manuela; Iturrospe, Amaia; Arbe, A.; Liu, Guoming; Wang, Dujin; Zhao, Junpeng; Hadjichristidis, Nikolaos; Muller, Alejandro

    2016-01-01

    The sequential crystallization of poly(ethylene oxide)-b-poly(e-caprolactone)-b-poly(L-lactide) (PEO-b-PCL-b-PLLA) triblock terpolymers, in which the three blocks are able to crystallize separately and sequentially from the melt, is presented. Two

  6. RADIOMETRIC BLOCK ADJUSTMENT AND DIGITAL RADIOMETRIC MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    A. Pros

    2013-05-01

    Full Text Available In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In the paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmospheric radiative transfer parameters, and ground reflectances and BRDFs at the radiometric tie points, are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated showing no radiometric differences at the seam lines.

  7. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations reasonably well, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  8. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  9. Spring-block Model for Barkhausen Noise

    International Nuclear Information System (INIS)

    Kovacs, K.; Brechet, Y.; Neda, Z.

    2005-01-01

    A simple mechanical spring-block model is used for studying Barkhausen noise (BN). The model incorporates the generally accepted physics of domain wall movement and pinning. Computer simulations on this model reproduces the main features of the hysteresis loop and Barkhausen jumps. The statistics of the obtained Barkhausen jumps follows several scaling laws, in qualitative agreement with experimental results. The model consists of a one-dimensional frictional spring-block system. The blocks model the Bloch-walls that separate inversely oriented magnetic domains, and springs correspond to the magnetized regions. Three types of realistic forces are modelled with this system: 1. the force resulting from the magnetic energy of the neighboring domains in external magnetic field (modelled by forces having alternating orientations and acting directly on the blocks); 2. the force resulting from the magnetic self-energy of each domain (modelled by the elastic forces of the springs); 3. the pinning forces acting on the domain walls (modelled by position dependent static friction acting on blocks). The dynamics of the system is governed by searching for equilibrium: one particular domain wall can jump to the next pinning center if the resultant of forces 1. and 2. is greater then the pinning force. The external magnetic field is successively increased (or decreased) and the system is relaxed to mechanical equilibrium. During the simulations we are monitoring the variation of the magnetization focusing on the shape of the hysteresis loop, power spectrum, jump size (avalanche size) distribution, signal duration distribution, signal area distribution. The simulated shape of the hysteresis loops fulfills all the requirements for real magnetization phenomena. The power spectrum indicates different behavior in the low (1/f noise) and high (white noise) frequency region. All the relevant distribution functions show scaling behavior over several decades of magnitude with a naturally
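
    A compact numerical Python sketch in the spirit of the spring-block model described above follows: blocks (domain walls) coupled by springs are driven by an external field that acts with alternating sign on neighboring walls and are pinned by random static friction, and the jumps in a simple magnetization proxy during each relaxation are recorded as Barkhausen avalanches. The relaxation scheme and all parameter values are simplified illustrative choices, not the authors' implementation.

    # One-dimensional spring-block sketch of Barkhausen noise: walls coupled by
    # springs (domain self-energy), driven by an external field of alternating
    # sign (neighboring domains are oppositely magnetized), pinned by random
    # static friction. Parameters and the relaxation rule are illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    N, k, f_max = 200, 1.0, 2.0
    x = np.arange(N, dtype=float)                 # wall positions, natural spacing 1
    sign = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)
    f_pin = rng.uniform(0.0, f_max, size=N)       # static friction threshold per wall

    def net_force(x, H):
        left = np.r_[x[0] - 1.0, x[:-1]]          # fixed virtual neighbors at the ends
        right = np.r_[x[1:], x[-1] + 1.0]
        return k * (left + right - 2.0 * x) + sign * H

    def relax(x, H):
        """Move unstable walls until all are pinned; return the avalanche size."""
        moved = 0.0
        while True:
            F = net_force(x, H)
            unstable = np.abs(F) > f_pin
            if not unstable.any():
                return moved
            dx = F[unstable] / (2.0 * k)          # jump to the local zero-force point
            x[unstable] += dx
            moved += np.sum(sign[unstable] * dx)  # change in the magnetization proxy
            f_pin[unstable] = rng.uniform(0.0, f_max, size=unstable.sum())

    avalanches = []
    for H in np.linspace(0.0, 3.0, 600):          # slowly ramp the external field
        size = relax(x, H)
        if size > 1e-9:
            avalanches.append(size)
    print(f"{len(avalanches)} avalanches; largest = {max(avalanches):.2f}, "
          f"mean = {np.mean(avalanches):.3f}")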

  10. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems.

    Science.gov (United States)

    Harari, Colin M; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T; Lubner, Meghan G; Hinshaw, J Louis; Ziemlewicz, Timothy; Brace, Christopher L

    2016-01-01

    To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed-effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. On average, ablations created by using the simultaneous power delivery technique were larger than those created with the sequential technique (P < …). Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas (P < …). Simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015.

  11. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that thus generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
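
    The contrast drawn above between perfectly synchronous (parallel) and interleaved (sequential) updates can be made concrete with the short Python sketch below, which applies the same totalistic rule to a one-dimensional ring under both update schemes; the particular rule, lattice size, and seed are arbitrary illustrative choices.

    # Compare parallel (perfectly synchronous) and sequential (fixed sweep order)
    # updates of the same totalistic rule on a ring. The rule and lattice size
    # are arbitrary; the point is that the two update schemes diverge.
    import numpy as np

    def totalistic_rule(total):
        """Next state of a cell from the sum of its own and its two neighbors' states."""
        return 1 if total in (1, 2) else 0        # an arbitrary non-linear totalistic rule

    def step_parallel(state):
        left, right = np.roll(state, 1), np.roll(state, -1)
        return np.array([totalistic_rule(t) for t in left + state + right])

    def step_sequential(state):
        s = state.copy()
        n = len(s)
        for i in range(n):                        # fixed left-to-right sweep: each cell
            total = s[(i - 1) % n] + s[i] + s[(i + 1) % n]   # already sees its updated
            s[i] = totalistic_rule(total)         # left neighbor (interleaving semantics)
        return s

    rng = np.random.default_rng(3)
    init = rng.integers(0, 2, size=31)
    par, seq = init.copy(), init.copy()
    for t in range(8):
        par, seq = step_parallel(par), step_sequential(seq)
    print("parallel  :", "".join(map(str, par)))
    print("sequential:", "".join(map(str, seq)))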

  12. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains, embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, from which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  13. Sequential SPECT/CT imaging starting with stress SPECT in patients with left bundle branch block suspected for coronary artery disease

    Energy Technology Data Exchange (ETDEWEB)

    Engbers, Elsemiek M.; Mouden, Mohamed [Isala, Department of Cardiology, Zwolle (Netherlands); Isala, Department of Nuclear Medicine, Zwolle (Netherlands); Timmer, Jorik R.; Ottervanger, Jan Paul [Isala, Department of Cardiology, Zwolle (Netherlands); Knollema, Siert; Jager, Pieter L. [Isala, Department of Nuclear Medicine, Zwolle (Netherlands)

    2017-01-15

    To investigate the impact of left bundle branch block (LBBB) on sequential single photon emission computed tomography (SPECT)/ CT imaging starting with stress-first SPECT. Consecutive symptomatic low- to intermediate-risk patients without a history of coronary artery disease (CAD) referred for SPECT/CT were included from an observational registry. If stress SPECT was abnormal, additional rest SPECT and, if feasible, coronary CT angiography (CCTA) were acquired. Of the 5,018 patients, 218 (4.3 %) demonstrated LBBB. Patients with LBBB were slightly older than patients without LBBB (65±12 vs. 61±11 years, p<0.001). Stress SPECT was more frequently abnormal in patients with LBBB (82 % vs. 46 %, p<0.001). After reviewing stress and rest images, SPECT was normal in 43 % of the patients with LBBB, compared to 77 % of the patients without LBBB (p<0.001). Sixty-four of the 124 patients with LBBB and abnormal stress-rest SPECT underwent CCTA (52 %), which could exclude obstructive CAD in 46 of the patients (72 %). Sequential SPECT/CT imaging starting with stress SPECT is not the optimal imaging protocol in patients with LBBB, as the majority of these patients have potentially false-positive stress SPECT. First-line testing using CCTA may be more appropriate in low- to intermediate-risk patients with LBBB. (orig.)

  14. Silk-collagen-like block copolymers with charged blocks : self-assembly into nanosized ribbons and macroscopic gels

    NARCIS (Netherlands)

    Martens, A.A.

    2008-01-01

    The research described in this thesis concerns the design, biotechnological production, and physiochemical study of large water-soluble (monodisperse) protein triblock-copolymers with sequential blocks, some of which are positively or negatively charged and self-assemble in response to a change in

  15. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

    Full Text Available This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs, and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 individual usable questionnaires were returned and thus qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducted structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met goodness-of-fit criteria, thus the interpretation of the sequential models fits with the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual

  16. Employees’ Perceptions of Corporate Social Responsibility and Job Performance: A Sequential Mediation Model

    Directory of Open Access Journals (Sweden)

    Inyong Shin

    2016-05-01

    Full Text Available In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance, sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.
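
    As a numerical illustration of a sequential (serial) two-mediator model of the kind described above, the Python sketch below estimates the indirect effect of perceived CSR on job performance through organizational identification and then job satisfaction as the product of three OLS path coefficients, with a percentile bootstrap interval. The variable names, effect sizes, and data are simulated placeholders, not the hotel-employee survey.

    # Sequential (serial) mediation sketch: CSR -> identification -> satisfaction
    # -> performance. The indirect effect is the product of three path
    # coefficients estimated by OLS on simulated data; all numbers are invented.
    import numpy as np

    rng = np.random.default_rng(2024)
    n = 250
    csr = rng.normal(size=n)
    identification = 0.5 * csr + rng.normal(scale=0.8, size=n)
    satisfaction = 0.6 * identification + 0.1 * csr + rng.normal(scale=0.8, size=n)
    performance = 0.4 * satisfaction + 0.05 * csr + rng.normal(scale=0.8, size=n)

    def ols_slopes(y, *xs):
        X = np.column_stack([np.ones(len(y)), *xs])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]                            # drop the intercept

    def serial_indirect(csr, ident, sat, perf):
        a = ols_slopes(ident, csr)[0]              # CSR -> identification
        b1 = ols_slopes(sat, ident, csr)[0]        # identification -> satisfaction
        b2 = ols_slopes(perf, sat, csr, ident)[0]  # satisfaction -> performance
        return a * b1 * b2

    point = serial_indirect(csr, identification, satisfaction, performance)
    boot = []                                      # percentile bootstrap for the CI
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(serial_indirect(csr[idx], identification[idx],
                                    satisfaction[idx], performance[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"sequential indirect effect = {point:.3f} (95% bootstrap CI {lo:.3f} to {hi:.3f})")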

  17. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconv......Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely...... on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner...... a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable....
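
    A toy Python sketch of the two block classes mentioned above follows: a block-sequential iteration that applies each row block's correction immediately (ART/Kaczmarz-like) and a block-simultaneous iteration that averages the block corrections (SIRT/Cimmino-like). The dense random system, partition, and relaxation parameter are toy choices, not a tomographic reconstruction problem.

    # Contrast block-sequential (ART-like) and block-simultaneous (SIRT-like)
    # iterations for a consistent system A x = b. The system, row-block
    # partition, and relaxation parameter are toy choices.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, n_blocks, lam = 60, 30, 6, 1.0
    A = rng.normal(size=(m, n))
    x_true = rng.normal(size=n)
    b = A @ x_true
    row_blocks = np.array_split(np.arange(m), n_blocks)

    def block_update(A_blk, b_blk, x, lam):
        """Damped least-squares (projection-like) correction from one row block."""
        r = b_blk - A_blk @ x
        return lam * A_blk.T @ np.linalg.solve(A_blk @ A_blk.T, r)

    def sweep_sequential(x):
        for idx in row_blocks:                     # use each block's update immediately
            x = x + block_update(A[idx], b[idx], x, lam)
        return x

    def sweep_simultaneous(x):
        corr = sum(block_update(A[idx], b[idx], x, lam) for idx in row_blocks)
        return x + corr / n_blocks                 # average the block corrections

    x_seq, x_sim = np.zeros(n), np.zeros(n)
    for it in range(1, 21):
        x_seq, x_sim = sweep_sequential(x_seq), sweep_simultaneous(x_sim)
        if it % 5 == 0:
            print(f"iter {it:2d}  ||x_seq - x*|| = {np.linalg.norm(x_seq - x_true):.2e}"
                  f"   ||x_sim - x*|| = {np.linalg.norm(x_sim - x_true):.2e}")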

  18. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic...... safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety...

  19. Internet of Things building blocks and business models

    CERN Document Server

    Hussain, Fatima

    2017-01-01

    This book describes the building blocks and introductory business models for the Internet of Things (IoT). The author provides an overview of the entire IoT architecture and its constituent layers, followed by a detailed description of each block. Various interconnecting technologies and sensors are discussed in the context of IoT networks. In addition, the concepts of Big Data and Fog Computing are presented and characterized with respect to the data generated by versatile IoT applications. A smart parking system and context-aware services are presented as a hybrid model of cloud and Fog. Afterwards, various IoT applications and their respective business models are discussed. Finally, the author summarizes the IoT building blocks, identifies research issues in each, and suggests potential research projects worthy of pursuing.

  20. Mathematical Model for the Sequential Action of Radiation and Heat on Yeast Cells

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Lee, Yun Jong; Kim, Su Hyoun; Nili, Mohammad; Zhurakovskaya, Galina P.; Petin, Vladislav G.

    2009-01-01

    It is well known that the synergistic interaction of hyperthermia with ionizing radiation and other agents is widely used in hyperthermic oncology. Interaction between two agents may be considered synergistic or antagonistic when the effect produced is greater or smaller than the sum of the two single responses. It has long been considered that the mechanism of synergistic interaction of hyperthermia and ionizing radiation may be brought about by an inhibition of the repair from sublethal and potentially lethal damage at the cellular level. The inhibition of the recovery process after combined treatments cannot be considered a reason for the synergy, but rather would be the expected and predicted consequence of the production of irreversible damage. On this basis, a simple mathematical model of the synergistic interaction of two agents acting simultaneously has been proposed. However, the model has not been applied to predict the degree of interaction of heat and ionizing radiation after their sequential action. Extension of the model to the sequential treatment of heat and ionizing radiation is therefore of interest for theoretical and practical reasons. Thus, the purpose of the present work is to suggest the simplest mathematical model that can account for the results obtained and for the currently available experimental information on the sequential action of radiation and heat.

  1. Sequential crystallization and morphology of triple crystalline biodegradable PEO-b-PCL-b-PLLA triblock terpolymers

    KAUST Repository

    Palacios, Jordana

    2016-01-05

    The sequential crystallization of poly(ethylene oxide)-b-poly(e-caprolactone)-b-poly(L-lactide) (PEO-b-PCL-b-PLLA) triblock terpolymers, in which the three blocks are able to crystallize separately and sequentially from the melt, is presented. Two terpolymers with identical PEO and PCL block lengths and two different PLLA block lengths were prepared, thus the effect of increasing PLLA content on the crystallization behavior and morphology was evaluated. Wide angle X-Ray scattering (WAXS) experiments performed on cooling from the melt confirmed the triple crystalline nature of these terpolymers and revealed that they crystallize in sequence: the PLLA block crystallizes first, then the PCL block, and finally the PEO block. Differential scanning calorimetry (DSC) analysis further demonstrated that the three blocks can crystallize from the melt when a low cooling rate is employed. The crystallization process takes place from a homogenous melt as indicated by small angle X-Ray scattering (SAXS) experiments. The crystallization and melting enthalpies and temperatures of both PEO and PCL blocks decrease as PLLA content in the terpolymer increases. Polarized light optical microscopy (PLOM) demonstrated that the PLLA block templates the morphology of the terpolymer, as it forms spherulites upon cooling from the melt. The subsequent crystallization of PCL and PEO blocks occurs inside the interlamellar regions of the previously formed PLLA block spherulites. In this way, unique triple crystalline mixed spherulitic superstructures have been observed for the first time. As the PLLA content in the terpolymer is reduced the superstructural morphology changes from spherulites to a more axialitic-like structure.

  2. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  3. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study a robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
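
    The Python sketch below integrates a small generalized Lotka-Volterra network of the type described above: an asymmetric inhibition matrix creates a chain of saddle equilibria, and the trajectory passes through their neighborhoods in order, so the dominant item switches sequentially. The coupling values, noise level, and Euler step are illustrative choices, not the paper's parameters.

    # Sequential switching in a generalized Lotka-Volterra network (winnerless
    # competition): each item only weakly inhibits its successor, so the
    # successor eventually takes over and the activity visits the items in
    # order. All parameters and the noise term are illustrative.
    import numpy as np

    N = 5
    rho = np.full((N, N), 1.5)           # strong mutual inhibition ...
    np.fill_diagonal(rho, 1.0)
    for i in range(N):
        rho[(i + 1) % N, i] = 0.5        # ... but item i inhibits item i+1 only weakly

    sigma = np.ones(N)                   # intrinsic growth rates
    rng = np.random.default_rng(0)
    x = np.full(N, 1e-3)
    x[0] = 1.0                           # start near the first saddle
    dt, n_steps = 0.01, 20000
    winners = []
    for _ in range(n_steps):
        dx = x * (sigma - rho @ x)
        x = np.clip(x + dt * dx + 1e-5 * rng.standard_normal(N), 1e-9, None)
        winners.append(int(np.argmax(x)))

    # Order in which items dominate, with consecutive repeats collapsed.
    sequence = [winners[0]]
    for w in winners[1:]:
        if w != sequence[-1]:
            sequence.append(w)
    print("dominance sequence:", sequence)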

  4. The importance of examining movements within the US health care system: sequential logit modeling

    Directory of Open Access Journals (Sweden)

    Lee Chioun

    2010-09-01

    Full Text Available Background: Utilization of specialty care may not be a discrete, isolated behavior but rather a behavior of sequential movements within the health care system. Although patients may often visit their primary care physician and receive a referral before utilizing specialty care, prior studies have underestimated the importance of accounting for these sequential movements. Methods: The sample included 6,772 adults aged 18 years and older who participated in the 2001 Survey on Disparities in Quality of Care, sponsored by the Commonwealth Fund. A sequential logit model was used to account for movement in all stages of utilization: use of any health services (i.e., first stage), having a perceived need for specialty care (i.e., second stage), and utilization of specialty care (i.e., third stage). In the sequential logit model, all stages are nested within the previous stage. Results: Gender, race/ethnicity, education and poor health had significant explanatory effects with regard to use of any health services and having a perceived need for specialty care; however, racial/ethnic, gender, and educational disparities were not present in utilization of specialty care. After controlling for use of any health services and having a perceived need for specialty care, inability to pay for specialty care via income (AOR = 1.334, CI = 1.10 to 1.62) or health insurance (unstable insurance: AOR = 0.26, CI = 0.14 to 0.48; no insurance: AOR = 0.12, CI = 0.07 to 0.20) were significant barriers to utilization of specialty care. Conclusions: Use of a sequential logit model to examine utilization of specialty care resulted in a detailed representation of utilization behaviors and patient characteristics that impact these behaviors at all stages within the health care system. After controlling for sequential movements within the health care system, the biggest barrier to utilizing specialty care is the inability to pay, while racial, gender, and educational disparities
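
    The Python sketch below illustrates the sequential logit structure described above on simulated data: three binary stages are estimated one after another, each by a small Newton-method logistic regression fitted only to the subsample that passed the previous stage. The covariates, coefficients, and pass rates are placeholders, not the survey data.

    # Sequential logit sketch: three nested stages (any health service use ->
    # perceived need for specialty care -> specialty care use), each estimated
    # on the subsample that passed the previous stage. All numbers are simulated.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 6772
    X = np.column_stack([np.ones(n),
                         rng.integers(0, 2, n).astype(float),  # e.g., an indicator covariate
                         rng.normal(size=n)])                  # e.g., standardized income

    def simulate_stage(beta, passed):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        return (rng.uniform(size=n) < p) & passed      # only those still "in" can pass

    def fit_logit(Xs, y, n_iter=25):
        """Logistic regression by Newton-Raphson (IRLS)."""
        beta = np.zeros(Xs.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-Xs @ beta))
            W = p * (1.0 - p)
            beta = beta + np.linalg.solve(Xs.T @ (Xs * W[:, None]), Xs.T @ (y - p))
        return beta

    stages = [("any health service use",            np.array([1.0, 0.3, 0.2])),
              ("perceived need for specialty care", np.array([0.2, 0.2, 0.1])),
              ("specialty care use",                np.array([-0.5, 0.0, 0.8]))]
    passed = np.ones(n, dtype=bool)
    for name, beta_true in stages:
        y = simulate_stage(beta_true, passed)
        est = fit_logit(X[passed], y[passed])          # condition on reaching this stage
        print(f"{name:35s} n = {passed.sum():5d}  beta_hat = {np.round(est, 2)}")
        passed = y                                     # only those who passed continue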

  5. Block Tridiagonal Matrices in Electronic Structure Calculations

    DEFF Research Database (Denmark)

    Petersen, Dan Erik

    in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so-called Green’s function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...

  6. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference

  7. Solubilization of Phenol Derivatives in Polymer Micelles Formed by Cationic Block Copolymer

    Directory of Open Access Journals (Sweden)

    Irma Fuentes

    2017-01-01

    Full Text Available The aggregation of cationic block copolymers formed by polystyrene (PS) and poly(ethyl-4-vinylpyridine) (PS-b-PE4VP) was studied in aqueous solution. Diblock copolymers of PS and poly(4-vinylpyridine) were synthesized by sequential anionic polymerization using BuLi as initiator. Subsequently, the 4-vinylpyridine units were quaternized with ethyl bromide to obtain cationic PS-b-PE4VP block copolymers with different quaternization degrees. The self-aggregation of the cationic block copolymers was studied by fluorescence probing, whereas the morphology and size of the polymer micelles were determined by transmission electron microscopy. Results indicate that spherical micelles with sizes below 100 nm were formed, whereas their micropolarity decreases with increasing quaternization degree. The partition of phenols between the micellar and aqueous phases was studied by using the pseudo-phase model, and the results show that the partition coefficients increase with increasing length of the side alkyl chain and are larger for star micelles. These results are discussed in terms of a three-region model.

  8. Coarse-grained modeling of hybrid block copolymer system

    Science.gov (United States)

    Su, Yongrui

    This thesis comprises three major projects of my research. In the first project, I proposed a nanoparticle model and combined it with the Theoretically Informed Coarse Grained (TICG) model for pure polymer systems and the grand canonical slip-springs model developed in our group to build a new model for entangled nanocomposites. With Molecular Dynamics (MD) simulation, I studied the mechanical properties of the nanocomposites, such as the influence of nanoparticle size and volume fraction on entanglements, the diffusion of polymers and nanoparticles, and the viscosity. We found that the addition of small nanoparticles reduces the viscosity of the nanocomposites, which is contrary to what Einstein predicted a century ago. However, when the particle size increases to micrometers, the Einstein prediction is recovered. From our simulations, we believe that small nanoparticles decrease the entanglements of nanocomposites more effectively than larger particles. The free-volume effect introduced by small nanoparticles also helps decrease the viscosity of the whole system. In the second project, I combined the Ohta-Kawasaki (OK) model [3] and the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) to optimize the self-assembly of block copolymer blends in the hole-shrink process. The aim is to predict the optimal composition and the optimal surface energy to direct the self-assembly of block copolymer blends in the confined hole. After optimization in the OK model, we calibrated the optimal results with the more reliable TICG model and obtained the same morphology. By comparing different optimization processes, we found that homopolymers comprised of the same monomers as either block of the block copolymer can form a perfect perforated hole and might perform better than the pure block copolymer, while homopolymers comprised of third-party monomers

  9. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  10. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.

  11. Development and sensitivity analysis of a fully-kinetic model of sequential reductive dechlorination in subsurface

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen

    2010-01-01

    Chlorinated hydrocarbons originating from point sources are amongst the most prevalent contaminants of ground water and often represent a serious threat to groundwater-based drinking water resources. Natural attenuation of contaminant plumes can play a major role in contaminated site management, and natural degradation of chlorinated solvents frequently occurs in the subsurface through sequential reductive dechlorination. However, the occurrence and the performance of natural sequential reductive dechlorination strongly depend on environmental factors such as redox conditions, presence of fermenting organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvent degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data...

  12. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  13. Non-Markovianity in the collision model with environmental block

    Science.gov (United States)

    Jin, Jiasen; Yu, Chang-shui

    2018-05-01

    We present an extended collision model to simulate the dynamics of an open quantum system. In our model, the unit to represent the environment is, instead of a single particle, a block which consists of a number of environment particles. The introduced blocks enable us to study the effects of different strategies of system–environment interactions and states of the blocks on the non-Markovianities. We demonstrate our idea in the Gaussian channels of an all-optical system and derive a necessary and sufficient condition of non-Markovianity for such channels. Moreover, we show the equivalence of our criterion to the non-Markovian quantum jump in the simulation of the pure damping process of a single-mode field. We also show that the non-Markovianity of the channel working in the strategy that the system collides with environmental particles in each block in a certain order will be affected by the size of the block and the embedded entanglement and the effects of heating and squeezing the vacuum environmental state will quantitatively enhance the non-Markovianity.

  14. Device-independent two-party cryptography secure against sequential attacks

    International Nuclear Information System (INIS)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-01-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser–Horne–Shimony–Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation. (paper)

  15. Device-independent two-party cryptography secure against sequential attacks

    Science.gov (United States)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-05-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser-Horne-Shimony-Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation.

  16. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
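    For readers unfamiliar with the model being estimated, the sketch below samples a graph from a stochastic block model and evaluates the Bernoulli log-likelihood of a community assignment. It shows only the generative model, not the multi-stage maximum-likelihood or message-passing algorithm proposed in this record.

```python
# Minimal stochastic block model sketch: sample a graph from block
# parameters and score a community assignment. This is the underlying
# model, not the paper's scalable inference algorithm.
import numpy as np

def sample_sbm(block_sizes, P, rng=None):
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    probs = P[labels[:, None], labels[None, :]]          # edge probabilities
    upper = np.triu(rng.random((n, n)) < probs, k=1)     # sample upper triangle
    A = (upper | upper.T).astype(int)                    # symmetric, no self-loops
    return A, labels

def sbm_log_likelihood(A, labels, P, eps=1e-12):
    probs = np.clip(P[labels[:, None], labels[None, :]], eps, 1 - eps)
    ll = A * np.log(probs) + (1 - A) * np.log(1 - probs)
    iu = np.triu_indices_from(A, k=1)                    # count each dyad once
    return ll[iu].sum()

# toy example: two communities, dense within, sparse between
P = np.array([[0.3, 0.02], [0.02, 0.3]])
A, labels = sample_sbm([50, 50], P, rng=0)
print(sbm_log_likelihood(A, labels, P))
```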

  17. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  18. Block models and personalized PageRank.

    Science.gov (United States)

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-03

    Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
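    A minimal sketch of the "landing probability" view discussed in this record: personalized PageRank computed by power iteration with restarts at the seed set. The teleport parameter alpha below is a generic default, not the block-model-dependent optimal value derived in the paper.

```python
# Personalized PageRank by power iteration, restarting at a seed set.
import numpy as np

def personalized_pagerank(A, seeds, alpha=0.15, tol=1e-10, max_iter=1000):
    n = A.shape[0]
    deg = A.sum(axis=1).astype(float)
    deg[deg == 0] = 1.0                    # avoid division by zero
    W = A / deg[:, None]                   # row-stochastic random walk
    s = np.zeros(n)
    s[list(seeds)] = 1.0 / len(seeds)      # restart (seed) distribution
    p = s.copy()
    for _ in range(max_iter):
        p_new = alpha * s + (1 - alpha) * (W.T @ p)
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# tiny example: 4-node path graph, seeded at node 0
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(personalized_pagerank(A, seeds=[0]))
```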

  19. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  20. Quantifying private benefits of control from a structural model of block trades

    NARCIS (Netherlands)

    Albuquerque, R.; Schroth, E.

    2009-01-01

    We study the determinants of private benefits of control in negotiated block transactions. We estimate the block pricing model in Burkart, Gromb, and Panunzi (2000) explicitly accounting for both block premia and block discounts in the data. The evidence suggests that the occurrence of a block

  1. Spatial distribution of block falls using volumetric GIS-decision-tree models

    Science.gov (United States)

    Abdallah, C.

    2010-10-01

    Block falls are a significant aspect of surficial instability, contributing to land and socio-economic losses through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block fall measurements. Once pruned, it explained 50% of the variability in block falls' volumes by selecting just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block falls' maps with different classes, ranging from nil (no block falls) to more than 4000 m³. These maps match fairly well, with a coincidence value of 45%; however, both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple, and may also be applied to other areas (i.e. the choice of the un-pruned or pruned model is related to the availability of terrain parameters in a given area).
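    The un-pruned versus pruned regression-tree comparison described above can be sketched as follows, using scikit-learn's cost-complexity pruning as a stand-in for the study's pruning procedure. The feature names, synthetic response, and ccp_alpha value are placeholders, not the eleven Lebanese terrain parameters or the authors' software.

```python
# Sketch: un-pruned vs. cost-complexity-pruned regression trees for
# block-fall volume on synthetic "terrain" predictors.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "slope_gradient": rng.uniform(0, 60, n),      # degrees (synthetic)
    "elevation": rng.uniform(0, 2000, n),         # metres (synthetic)
    "dist_to_fault": rng.uniform(0, 5000, n),
    "dist_to_road": rng.uniform(0, 3000, n),
})
# synthetic block-fall volume response with noise
y = 50 * X["slope_gradient"] + 0.5 * X["elevation"] + rng.normal(0, 500, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
unpruned = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=1e4).fit(X_tr, y_tr)

print("un-pruned R^2:", unpruned.score(X_te, y_te))
print("pruned R^2:   ", pruned.score(X_te, y_te))
print("pruned tree leaves:", pruned.get_n_leaves())
```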

  2. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    Science.gov (United States)

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment, if data are collected on the treated neonate.
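    A heavily simplified sketch of the sequential particle-updating idea follows: a one-compartment bolus model with hypothetical priors stands in for the paper's full neonatal caffeine pharmacokinetic model, and only the reweight-and-resample step applied as each new concentration measurement arrives is shown. All numerical values are illustrative assumptions.

```python
# Sequential particle update of pharmacokinetic parameters (sketch only).
import numpy as np

rng = np.random.default_rng(0)
n_particles = 5000
# hypothetical log-normal priors over clearance CL and volume V
CL = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n_particles)
V = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n_particles)

def predicted_conc(dose, t, CL, V):
    # concentration after a bolus dose in a one-compartment model
    return (dose / V) * np.exp(-(CL / V) * t)

def update(dose, t, observed, sigma=1.0):
    """Reweight and resample the particles after one new measurement."""
    global CL, V
    pred = predicted_conc(dose, t, CL, V)
    w = np.exp(-0.5 * ((observed - pred) / sigma) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
    CL, V = CL[idx], V[idx]

# measurements arrive sequentially: (dose, time, observed concentration)
for dose, t, obs in [(10.0, 24.0, 8.0), (10.0, 48.0, 9.5)]:   # illustrative values
    update(dose, t, obs)
print(CL.mean(), V.mean())   # posterior means after two sequential updates
```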

  3. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing

    2016-02-23

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially medical images of brain scans, it is highly significant to precisely identify and predict the potential progression of AD based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective at simulating and predicting the disease progression than existing methods.

  4. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing; Wang, Su; Zhu, Jia; Zhang, Xiangliang

    2016-01-01

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially medical images of brain scans, it is highly significant to precisely identify and predict the potential progression of AD based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective at simulating and predicting the disease progression than existing methods.

  5. Lyme Carditis: An Interesting Trip to Third-Degree Heart Block and Back

    Directory of Open Access Journals (Sweden)

    Maxwell Eyram Afari

    2016-01-01

    Full Text Available Carditis is an uncommon presentation of the early disseminated phase of Lyme disease. We present the case of a young female who presented with erythema migrans and was found to have first-degree heart block which progressed to complete heart block within hours. After receiving ceftriaxone, there was complete resolution of the heart block in sequential fashion. Our case illustrates the importance of early recognition and anticipation of progressive cardiac conduction abnormalities in patients presenting with Lyme disease.

  6. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    OpenAIRE

    Ciptono, Wakhid Slamet

    2006-01-01

    This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may ...

  7. Dynamic swelling of tunable full-color block copolymer photonic gels via counterion exchange.

    Science.gov (United States)

    Lim, Ho Sun; Lee, Jae-Hwang; Walish, Joseph J; Thomas, Edwin L

    2012-10-23

    One-dimensionally periodic block copolymer photonic lamellar gels with full-color tunability as a result of a direct exchange of counteranions were fabricated via a two-step procedure comprising the self-assembly of a hydrophobic block-hydrophilic polyelectrolyte block copolymer, polystyrene-b-poly(2-vinyl pyridine) (PS-b-P2VP), followed by sequential quaternization of the P2VP layers in 1-bromoethane solution. Depending on the hydration characteristics of each counteranion, the selective swelling of the block copolymer lamellar structures leads to large tunability of the photonic stop band from blue to red wavelengths. More extensive quaternization of the P2VP block allows the photonic lamellar gels to swell more and red shift to longer wavelength. Here, we investigate the dynamic swelling behavior in the photonic gel films through time-resolved in situ measurement of UV-vis transmission. We model the swelling behavior using the transfer matrix method based on the experimentally observed reflectivity data with substitution of appropriate counterions. These tunable structural color materials may be attractive for numerous applications such as high-contrast displays without using a backlight, color filters, and optical mirrors for flexible lasing.

  8. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
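    A compact sketch of the OS-ELM recursion with sigmoid additive hidden nodes is given below. It assumes the initial batch contains at least as many samples as hidden nodes, and it omits regularization and the RBF-node variant described in the record.

```python
# Online sequential extreme learning machine (OS-ELM) sketch:
# random input weights, an initial batch solve, then recursive
# least-squares updates for each arriving chunk of data.
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden, rng=None):
        rng = np.random.default_rng(rng)
        self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))  # random input weights
        self.b = rng.uniform(-1, 1, n_hidden)               # random biases

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid nodes

    def init_fit(self, X0, T0):
        H0 = self._hidden(X0)                # needs len(X0) >= n_hidden
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ T0

    def partial_fit(self, X, T):
        # recursive least-squares update for one arriving chunk
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```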

  9. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
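    As a minimal illustration of the hidden Markov machinery behind a popularity-based HMM, the sketch below scores a sequence of discrete popularity observations (e.g., binned chart ranks) with the scaled forward algorithm. The two latent regimes and all matrices are toy values, not parameters learned from App store data, and the clustering-based initialization of the paper is not shown.

```python
# Scaled forward algorithm for a discrete HMM over popularity observations.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs) under start probs pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_like = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()          # rescale to avoid numerical underflow
        log_like += np.log(c)
        alpha /= c
    return log_like

# two toy latent regimes ("rising", "fading"), three observed rank bins
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.3, 0.6]])
print(forward_log_likelihood([0, 0, 1, 2, 2], pi, A, B))
```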

  10. Slip-spring model of entangled rod-coil block copolymers

    Science.gov (United States)

    Wang, Muzhou; Likhtman, Alexei E.; Olsen, Bradley D.

    2015-03-01

    Understanding the dynamics of rod-coil block copolymers is important for optimal design of functional nanostructured materials for organic electronics and biomaterials. Recently, we proposed a reptation theory of entangled rod-coil block copolymers, predicting the relaxation mechanisms of activated reptation and arm retraction that slow rod-coil dynamics relative to coil and rod homopolymers, respectively. In this work, we introduce a coarse-grained slip-spring model of rod-coil block copolymers to further explore these mechanisms. First, parameters of the coarse-grained model are tuned to match previous molecular dynamics simulation results for coils, rods, and block copolymers. For activated reptation, rod-coil copolymers are shown to disfavor configurations where the rod occupies curved portions of the entanglement tube of randomly varying curvature created by the coil ends. The effect of these barriers on diffusion is quantitatively captured by considering one-dimensional motion along an entanglement tube with a rough free energy potential. Finally, we analyze the crossover between the two mechanisms. The resulting dynamics from both mechanisms acting in combination is faster than from each one individually.

  11. Modelling of composite concrete block pavement systems applying a cohesive zone model

    DEFF Research Database (Denmark)

    Skar, Asmus; Poulsen, Peter Noe

    This paper presents a numerical analysis of the fracture behaviour of the cement bound base material in composite concrete block pavement systems, using a cohesive zone model. The functionality of the proposed model is tested on experimental and numerical investigations of beam bending tests....... The pavement is modelled as a simple slab on grade structure and parameters influencing the response, such as analysis technique, geometry and material parameters are studied. Moreover, the analysis is extended to a real scale example, modelling the pavement as a three-layered structure. It is found...... block pavements. It is envisaged that the methodology implemented in this study can be extended and thereby contribute to the ongoing development of rational failure criteria that can replace the empirical formulas currently used in pavement engineering....

  12. A 2d Block Model For Landslide Simulation: An Application To The 1963 Vajont Case

    Science.gov (United States)

    Tinti, S.; Zaniboni, F.; Manucci, A.; Bortolucci, E.

    A 2D block model to study the motion of a sliding mass is presented. The slide is partitioned into a matrix of blocks whose bases are quadrilaterals. The blocks move on a specified sliding surface and follow a trajectory that is computed by the model. The forces acting on the blocks are gravity, basal friction, buoyancy in case of underwater motion, and interaction with neighbouring blocks. At any time step, the position of the blocks on the sliding surface is determined in curvilinear (local) co-ordinates by computing the position of the vertices of the quadrilaterals and the position of the block centre of mass. Mathematically, the topology of the system is invariant during the motion, which means that the number of blocks is constant and that each block has always the same neighbours. Physically, this means that blocks are allowed to change form, but not to penetrate into each other, not to coalesce, not to split. The change of form is compensated by the change of height, under the computational assumption that the block volume is constant during motion: consequently lateral expansion or contraction yield respectively height reduction or increment of the blocks. This model is superior to the analogous 1D model where the mass is partitioned into a chain of interacting blocks. 1D models require the a priori specification of the sliding path, that is of the trajectory of the blocks, which the 2D block model supplies as one of its outputs. In continuation of previous studies on the catastrophic slide of Vajont that occurred in 1963 in northern Italy and caused more than 2000 victims, the 2D block model has been applied to the Vajont case. The results are compared to the outcome of the 1D model, and more importantly to the observational data concerning the deposit position and morphology. The agreement between simulation and data is found to be quite good.

  13. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    Science.gov (United States)

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.

  14. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

    Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate...

  15. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner called XCP® is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment core relative to the XCP® ring and its reorientation during the sequential radiography process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 records. Two radiographs of the same block of teeth were taken sequentially for each subject, with a minimum interval of 30 minutes between them, before placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements along the X and Y axes were performed for both radiographs in order to compare them. The intraclass correlation index (ICI) showed that the proposed method is statistically related to the measurements (mm) obtained along the X and Y axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the usual difference between the two measurements is negligible (Mode 0.000 and S = 0083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es]

  16. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as sequential context effect as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between previous perceptual behavior and diagnostic decisions and current decisions. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
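    The fractal-dimension feature mentioned in this record can be sketched as a Minkowski-Bouligand box count of a 2-D gaze scanpath over several box sizes, with the dimension estimated from the log-log slope. Normalized fixation coordinates in the unit square are assumed here; this is a generic estimator, not the authors' exact pipeline.

```python
# Minkowski-Bouligand (box-counting) dimension of a 2-D scanpath.
import numpy as np

def box_counting_dimension(points, sizes=(1/2, 1/4, 1/8, 1/16, 1/32)):
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # map each point to the index of the box of side s that contains it
        boxes = np.floor(points / s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    # slope of log(count) vs. log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# example: a dense random scanpath should approach dimension ~2
rng = np.random.default_rng(1)
print(box_counting_dimension(rng.random((2000, 2))))
```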

  17. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case, are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter ... that show that the method produces accurate, robust and rapid segmentations in a cross-sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.

  18. Effects of channel blocking on information transmission and energy efficiency in squid giant axons.

    Science.gov (United States)

    Liu, Yujiang; Yue, Yuan; Yu, Yuguo; Liu, Liwei; Yu, Lianchun

    2018-04-01

    Action potentials are the information carriers of neural systems. The generation of action potentials involves the cooperative opening and closing of sodium and potassium channels. This process is metabolically expensive because the ions flowing through open channels need to be restored to maintain concentration gradients of these ions. Toxins like tetraethylammonium can block working ion channels, thus affecting the function and energy cost of neurons. In this paper, by computer simulation of the Hodgkin-Huxley neuron model, we studied the effects of channel blocking with toxins on the information transmission and energy efficiency in squid giant axons. We found that gradually blocking sodium channels will sequentially maximize the information transmission and energy efficiency of the axons, whereas moderate blocking of potassium channels will have little impact on the information transmission and will decrease the energy efficiency. Heavy blocking of potassium channels will cause self-sustained oscillation of membrane potentials. Simultaneously blocking sodium and potassium channels with the same ratio increases both information transmission and energy efficiency. Our results are in line with previous studies suggesting that information processing capacity and energy efficiency can be maximized by regulating the number of active ion channels, and this indicates a viable avenue for future experimentation.
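    A sketch of how channel blocking can be represented in the Hodgkin-Huxley model follows: the fractions of unblocked sodium and potassium channels simply scale the maximal conductances. Classic squid-axon parameters are used; the information-rate and energy measures analyzed in the record are not reproduced here.

```python
# Hodgkin-Huxley neuron with conductances scaled by unblocked-channel fractions.
import numpy as np

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, block_na=1.0, block_k=1.0, T=50.0, dt=0.01):
    # block_na, block_k: fraction of channels left unblocked (1.0 = no block)
    gNa, gK, gL = 120.0 * block_na, 36.0 * block_k, 0.3   # mS/cm^2
    ENa, EK, EL, Cm = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK) + gL * (V - EL))
        V += dt * (I_ext - I_ion) / Cm
        trace.append(V)
    return np.array(trace)

# e.g., blocking 30% of sodium channels:
v = simulate(block_na=0.7)
```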

  19. A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems

    Science.gov (United States)

    Gambolati, Giuseppe; Teatini, Pietro

    1996-01-01

    A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
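    A generic sketch of the block Gauss-Seidel iteration with overrelaxation (block SOR) described in this record is given below, for a linear system partitioned into blocks. The partitioning and the small test matrix are arbitrary illustrations, not the quasi three-dimensional finite element system itself.

```python
# Block Gauss-Seidel with optional overrelaxation (block SOR).
import numpy as np

def block_sor(A, b, block_starts, omega=1.0, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    bounds = list(block_starts) + [n]
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(block_starts)):
            lo, hi = bounds[i], bounds[i + 1]
            # residual for this block using the latest values elsewhere
            r = b[lo:hi] - A[lo:hi, :] @ x + A[lo:hi, lo:hi] @ x[lo:hi]
            x_block = np.linalg.solve(A[lo:hi, lo:hi], r)
            x[lo:hi] = (1 - omega) * x[lo:hi] + omega * x_block  # relaxation
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# example: two equal blocks of a small diagonally dominant system
A = np.diag([4.0] * 6) + np.eye(6, k=1) + np.eye(6, k=-1)
b = np.ones(6)
print(block_sor(A, b, block_starts=[0, 3], omega=1.3))
```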

  20. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  1. Sequential infiltration synthesis for advanced lithography

    Energy Technology Data Exchange (ETDEWEB)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

    2017-10-10

    A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

  2. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    Science.gov (United States)

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-vs-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult for breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε ... analysis is effective in the evaluation of maize-breeding trials.

  3. [Mathematical modeling of synergistic interaction of sequential thermoradiation action on mammalian cells].

    Science.gov (United States)

    Belkina, S V; Semkina, M A; Kritskiĭ, R O; Petin, V G

    2010-01-01

    Data obtained by other authors for mammalian cells treated by sequential action of ionizing radiation and hyperthermia were used to estimate the dependence of synergistic enhancement ratio on the ratio of damages induced by these agents. Experimental results were described and interpreted by means of the mathematical model of synergism in accordance with which the synergism is expected to result from the additional lethal damage arising from the interaction of sublesions induced by both agents.

  4. Predictability of blocking

    International Nuclear Information System (INIS)

    Tosi, E.; Ruti, P.; Tibaldi, S.; D'Andrea, F.

    1994-01-01

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability of some climate models to simulate blocking has also been tested

  5. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  6. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  7. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    Science.gov (United States)

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on the block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedures. The accuracy and effectiveness of the schemes are assessed in the examples; the computation results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
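    As a worked illustration of the kind of operation a fractional operator block must evaluate, the sketch below uses the textbook Grunwald-Letnikov discretization, which coincides with the Caputo derivative under zero initial conditions. It is a generic approximation, not the Simulink block proposed in the paper.

```python
# Grunwald-Letnikov approximation of a fractional-order derivative.
import numpy as np

def gl_fractional_derivative(y, alpha, dt):
    """Approximate D^alpha y at each sample with Grunwald-Letnikov weights."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                        # recursive binomial weights
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)
    out = np.empty(n)
    for i in range(n):
        # weighted sum over the history y[i], y[i-1], ..., y[0]
        out[i] = np.dot(w[: i + 1], y[i::-1]) / dt**alpha
    return out

# half-order derivative of f(t) = t; the exact result is 2*sqrt(t/pi)
t = np.linspace(0.01, 1, 100)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
print(approx[-1], 2 * np.sqrt(t[-1] / np.pi))
```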

  8. The constructive backlash dissipate effect model for concrete blocks

    International Nuclear Information System (INIS)

    Tepes-Onea Florin

    2004-01-01

    From a physical point of view, damping represents the process by which the seismic excitation energy of the soil is taken up through internal absorption, friction between existing layers, and cracks in rocky foundations. Generally, in the dynamic analysis of heavy dams a viscous damping proportional to the deformation rate is considered. The damping can be evaluated on an experimental basis or from measurements under environmental conditions. The latter yield higher values of the damping elements. This could be explained by the influence of local factors that cannot be modelled, such as the treatment of the backlash, the characteristics of the foundation ground, and the concrete technology. This represents an atypical dissipative phenomenon. A major influence is exerted by the excitation level, whether a real seism or an experimental excitation. The present work aims to establish the influence of the dissipative effect of the backlash on concrete blocks. Finite element modelling of the backlash makes this possible, studying different situations such as the friction effect, the cohesion effect, and seismic action in varying directions with the same accelerogram of 0.4 g. The studied blocks have the same dimensions, the relative displacement being obtained by modifying the foundation stiffness under the two block parts. (author)

  9. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum) their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522

  11. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  12. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Directory of Open Access Journals (Sweden)

    Shiyi Zhou

    2018-04-01

    Full Text Available After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.
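To make the analysis concrete, here is a hedged sketch of a serial (sequential) two-mediator model X → M1 → M2 → Y estimated with ordinary least squares and a percentile bootstrap for the indirect effect, in the spirit of the analysis described above. The simulated data, effect sizes and variable names are assumptions for illustration only, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins: X = work-family conflict, M1 = negative affect,
# M2 = perceived stress, Y = mental health. All coefficients are invented.
n = 351
X = rng.normal(size=n)
M1 = 0.5 * X + rng.normal(size=n)
M2 = 0.4 * M1 + 0.2 * X + rng.normal(size=n)
Y = -0.5 * M2 - 0.1 * X + rng.normal(size=n)

def ols_coef(y, *predictors):
    """Coefficients (excluding the intercept) of y regressed on the predictors."""
    Z = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

def indirect_effect(X, M1, M2, Y):
    a = ols_coef(M1, X)[0]          # X -> M1
    d = ols_coef(M2, M1, X)[0]      # M1 -> M2, controlling for X
    b = ols_coef(Y, M2, M1, X)[0]   # M2 -> Y, controlling for M1 and X
    return a * d * b                # sequential (serial) indirect effect

# Percentile bootstrap for the 95% confidence interval of the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(X[idx], M1[idx], M2[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M1, M2, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, the sequential mediation is taken to be significant, mirroring the confidence-interval criterion used in the abstract.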

  13. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    Objective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  14. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  15. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  16. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical ...

  17. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical ...

  18. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    Science.gov (United States)

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price, with one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  19. Sequential super-stereotypy of an instinctive fixed action pattern in hyper-dopaminergic mutant mice: a model of obsessive compulsive disorder and Tourette's

    Directory of Open Access Journals (Sweden)

    Houchard Kimberly R

    2005-02-01

    Full Text Available Abstract Background Excessive sequential stereotypy of behavioral patterns (sequential super-stereotypy) in Tourette's syndrome and obsessive compulsive disorder (OCD) is thought to involve dysfunction in nigrostriatal dopamine systems. In sequential super-stereotypy, patients become trapped in overly rigid sequential patterns of action, language, or thought. Some instinctive behavioral patterns of animals, such as the syntactic grooming chain pattern of rodents, have sufficiently complex and stereotyped serial structure to detect potential production of overly-rigid sequential patterns. A syntactic grooming chain is a fixed action pattern that serially links up to 25 grooming movements into 4 predictable phases that follow 1 syntactic rule. New mutant mouse models allow gene-based manipulation of brain function relevant to sequential patterns, but no current animal model of spontaneous OCD-like behaviors has so far been reported to exhibit sequential super-stereotypy in the sense of a whole complex serial pattern that becomes stronger and excessively rigid. Here we used a hyper-dopaminergic mutant mouse to examine whether an OCD-like behavioral sequence in animals shows sequential super-stereotypy. Knockdown mutation of the dopamine transporter gene (DAT) causes extracellular dopamine levels in the neostriatum of these adult mutant mice to rise to 170% of wild-type control levels. Results We found that the serial pattern of this instinctive behavioral sequence becomes strengthened as an entire entity in hyper-dopaminergic mutants, and more resistant to interruption. Hyper-dopaminergic mutant mice have stronger and more rigid syntactic grooming chain patterns than wild-type control mice. Mutants showed sequential super-stereotypy in the sense of having more stereotyped and predictable syntactic grooming sequences, and were also more likely to resist disruption of the pattern en route, by returning after a disruption to complete the pattern from the

  20. Extended recency effect extended: blocking, presentation mode, and retention interval.

    Science.gov (United States)

    Glidden, L M; Pawelski, C; Mar, H; Zigman, W

    1979-07-01

    The effect of blocking of stimulus items on the free recall of EMR adolescents was examined. In Experiment 1 a multitrial free-recall list of 15 pictures was presented either simultaneously in groups of 3, or sequentially, one at a time. Consistent ordering was used in both conditions, so that on each trial, each item in each set of 3 pictures was presented contiguously with the other 2 items from that set. In addition, recall came immediately or after a filled or unfilled delay of 24.5 seconds. Results showed that simultaneous presentation led to higher recall, subjective organization, and clustering than did sequential presentation, but analysis of serial-position curves showed a much reduced extended recency effect in comparison with previous studies. Experiment 2 was designed to determine whether the cause of the reduced extended recency was the use of pictures rather than words as stimuli. Stimuli were presented either as pictures, as pictures with auditory labels, or as words with auditory labels, with both simultaneous and consistent ordering for all conditions. Results indicated a strong extended recency effect for all groups, eliminating presentation mode as a causal factor in the data of Experiment 1. We concluded that blocking leads to increased organization and recall over a variety of presentation modes, rates, and block sizes.

  1. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model that was developed and its use in probabilistic risk assessments (PRAs), depending on the available data, is discussed. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows that the model accommodates the dependent failure data remarkably well. The model, which provides a unique method for the quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs

  2. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Full Text Available Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
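The two binding operators compared above are easy to prototype. The sketch below stores a paired associate with circular convolution (the HRR binding operator, computed via FFT) and encodes order with a random permutation, then probes each trace. The vocabulary, dimensionality and similarity measure are illustrative assumptions, not the materials of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024                                   # vector dimensionality

def cconv(a, b):
    """Circular convolution (HRR binding), computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation, the approximate inverse used to probe an HRR trace."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Random environment vectors for a tiny vocabulary.
items = {w: rng.normal(0, 1 / np.sqrt(d), d) for w in ["dog", "bark", "cat", "meow"]}

# --- HRR: store two paired associates in a single memory trace ---
trace_hrr = cconv(items["dog"], items["bark"]) + cconv(items["cat"], items["meow"])
probe = ccorr(items["dog"], trace_hrr)     # should resemble "bark"
best = max(items, key=lambda w: np.dot(probe, items[w]) /
           (np.linalg.norm(probe) * np.linalg.norm(items[w])))
print("HRR retrieval for 'dog' ->", best)

# --- Random permutation: encode order by permuting the successor vector ---
perm = rng.permutation(d)
inv = np.argsort(perm)                     # inverse permutation
trace_rp = items["dog"] + items["bark"][perm]   # "bark" follows "dog"
probe_rp = trace_rp[inv]                        # un-permute to probe the successor
best_rp = max(items, key=lambda w: np.dot(probe_rp, items[w]))
print("Permutation retrieval of successor ->", best_rp)
```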

  3. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  4. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    2006-01-01

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for representing control flow in process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa.

  5. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A ∘ B = A^(1/2) B A^(1/2) was proposed as a model for sequential quantum measurements. A nice investigation of properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied.

  6. Modeling the building blocks of biodiversity.

    Directory of Open Access Journals (Sweden)

    Lucas N Joppa

    Full Text Available BACKGROUND: Networks of single interaction types, such as plant-pollinator mutualisms, are biodiversity's "building blocks". Yet, the structure of mutualistic and antagonistic networks differs, leaving no unified modeling framework across biodiversity's component pieces. METHODS/PRINCIPAL FINDINGS: We use a one-dimensional "niche model" to predict antagonistic and mutualistic species interactions, finding that accuracy decreases with the size of the network. We show that properties of the modeled network structure closely approximate empirical properties even where individual interactions are poorly predicted. Further, some aspects of the structure of the niche space were consistently different between network classes. CONCLUSIONS/SIGNIFICANCE: These novel results reveal fundamental differences between the ability to predict ecologically important features of the overall structure of a network and the ability to predict pair-wise species interactions.

  7. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  8. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns
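For intuition, the following is a minimal sketch of winnerless competition using a generalized Lotka-Volterra network with an asymmetric inhibition matrix, which is a standard way to produce sequential switching among "winners". It is a generic illustration, not the paper's two-layer sequential spatial memory model; all parameter values are assumptions.

```python
import numpy as np

n = 3
sigma = np.ones(n)                       # growth rates
rho = np.array([[1.0, 1.5, 0.5],         # asymmetric competition -> heteroclinic switching
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])

x = np.array([0.6, 0.3, 0.1])            # initial activities
dt, steps = 0.01, 6000
winners = []
for _ in range(steps):
    x = x + dt * x * (sigma - rho @ x)   # explicit Euler step of dx_i/dt = x_i (sigma_i - sum_j rho_ij x_j)
    x = np.clip(x, 1e-9, None)           # keep activities strictly positive
    winners.append(int(np.argmax(x)))

# Collapse the time series of winners into a sequence of distinct winners
# to show the sequential switching characteristic of WLC dynamics.
seq = [winners[0]]
for w in winners[1:]:
    if w != seq[-1]:
        seq.append(w)
print("winner sequence:", seq[:12])
```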

  9. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    One may note the lack of progress in the earthquake prediction literature. It is short-term prediction (on a diurnal time scale, with the location also predicted) that has practical meaning. The failure is due to the absence of adequate notions about the geological medium, particularly its block structure, and especially within faults. Geological and geophysical monitoring supports the view of the geological medium as an open, dissipative block system with limiting energy saturation. Variations of the volumetric stress state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, an interaction that is more pronounced in the faults. In the background state, small blocks of the fault medium enable the sliding of large blocks along the faults, but under considerable variations of the ascending gas streams, bound chains of small blocks can form, so that a bound state of large blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of nonlinear coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and various external actions that imitate physical processes in a real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU recurrence (return to the initial state); probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to nonlinearity and external perturbations was posed, the thresholds and the dependence of the chain lifetime were studied, and large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong-dissipation case, when oscillatory motion is suppressed, specific effects are discovered. For noise action and constantly arising
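As a point of reference for the chain model mentioned above, here is a hedged sketch of a plain (homogeneous, undamped) FPU-beta chain integrated with velocity Verlet. The paper's model adds inhomogeneity, dissipation and external actions; the parameters below are illustrative assumptions.

```python
import numpy as np

def fpu_beta_chain(n=32, beta=1.0, dt=0.01, steps=20000):
    """Integrate a homogeneous FPU-beta chain with fixed ends using
    velocity Verlet. Equation of motion for mass i:
    u_i'' = (u_{i+1} - 2 u_i + u_{i-1}) + beta [(u_{i+1}-u_i)^3 - (u_i-u_{i-1})^3]."""
    u = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))  # excite the lowest mode
    v = np.zeros(n)

    def force(u):
        up = np.concatenate(([0.0], u, [0.0]))  # fixed boundaries
        d = np.diff(up)                          # bond extensions
        f_bond = d + beta * d**3                 # linear + cubic bond force
        return f_bond[1:] - f_bond[:-1]          # net force on each interior mass

    a = force(u)
    for _ in range(steps):
        u = u + v * dt + 0.5 * a * dt**2
        a_new = force(u)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return u, v

if __name__ == "__main__":
    u, v = fpu_beta_chain()
    print("final displacement of mass 0:", u[0])
```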

  10. Behavioral Modeling of WSN MAC Layer Security Attacks: A Sequential UML Approach

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    is the vulnerability to security attacks/threats. The performance and behavior of a WSN are vastly affected by such attacks. In order to be able to better address the vulnerabilities of WSNs in terms of security, it is important to understand the behavior of the attacks. This paper addresses the behavioral modeling of medium access control (MAC) security attacks in WSNs. The MAC layer is responsible for energy consumption, delay and channel utilization of the network, and attacks on this layer can introduce significant degradation of the individual sensor nodes due to energy drain and in performance due to delays. The behavioral modeling of attacks will be beneficial for designing efficient and secure MAC layer protocols. The security attacks are modeled using a sequential diagram approach of Unified Modeling Language (UML). Further, a new attack definition, specific to hybrid MAC mechanisms, is proposed.

  11. DETERMINATION OF RESOLUTION LIMITS OF ELECTRICAL TOMOGRAPHY ON THE BLOCK MODEL IN A HOMOGENOUS ENVIRONMENT BY MEANS OF ELECTRICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Franjo Šumanovac

    2007-12-01

    Full Text Available The block model in a homogenous environment can generally serve to represent a number of geological models: changes of facies, changes in rock compactness/fragmentation, underground cavities, bauxite deposits, etc. Therefore, the potential of the electrical tomography method to detect a block of increased resistivity in a homogenous environment of low resistivity was tested. The ability to resolve a block depends on the depth at which the block is located, the ratio between the resistivity of the block and that of the environment in which it is located, and the survey geometry applied, i.e. the electrode array. Analyses were therefore carried out for the electrode arrays most frequently used in these investigations: the Wenner, Wenner-Schlumberger, dipole-dipole and pole-pole arrays. For each array, the maximum depth at which a block can be detected was analyzed as a function of the ratio between the block resistivity and that of the parent rock environment. The results are shown in two-dimensional graphs, where the ratio between the block resistivity and the environment resistivity is shown on the X-axis and the resolution depth on the Y-axis, after which the curves defining the resolution limits were drawn. These graphs have practical use, since they enable fast, simple determination of the applicability of the method to a specific geological model.

  12. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  13. Validation studies on indexed sequential modeling for the Colorado River Basin

    International Nuclear Information System (INIS)

    Labadie, J.W.; Fontane, D.G.; Salas, J.D.; Ouarda, T.

    1991-01-01

    This paper reports on a method called indexed sequential modeling (ISM) that has been developed by the Western Area Power Administration to estimate reliable levels of project dependable power capacity (PDC) and applied to several federal hydro systems in the Western U.S. The validity of ISM in relation to more commonly accepted stochastic modeling approaches is analyzed by applying it to the Colorado River Basin using the Colorado River Simulation System (CRSS) developed by the U.S. Bureau of Reclamation. Performance of ISM is compared with results from input of stochastically generated data using the LAST Applied Stochastic Techniques Package. Results indicate that output generated from ISM synthetically generated sequences displays an acceptable correspondence with results obtained from final convergent stochastically generated hydrology for the Colorado River Basin

  14. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models, and decision functions are discussed along with the first results of our research.
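To illustrate the sequential decision idea in the abstract (declaring a detection as soon as the data justify it), here is a hedged toy sketch: a Poisson count stream is processed datum by datum, the posterior odds of "source present" versus "background only" are updated after each count, and a detection is declared once the posterior crosses a threshold. The rates, prior and threshold are invented for illustration and are not the paper's models or values.

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(0)

# Hypothetical rates in counts per time bin (assumptions, not from the paper).
bkg_rate, src_rate = 5.0, 3.0
prior_source = 0.5          # prior probability that a source is present
threshold = 0.999           # declare detection once the posterior exceeds this

def poisson_logpmf(k, lam):
    return k * log(lam) - lam - lgamma(k + 1)

# Simulate a stream of counts from a weak source on top of background.
counts = rng.poisson(bkg_rate + src_rate, size=200)

log_odds = np.log(prior_source / (1 - prior_source))
for t, k in enumerate(counts, start=1):
    # Sequential Bayesian update of the log odds with each new datum.
    log_odds += poisson_logpmf(k, bkg_rate + src_rate) - poisson_logpmf(k, bkg_rate)
    posterior = 1.0 / (1.0 + np.exp(-log_odds))
    if posterior > threshold:
        print(f"detection declared after {t} time bins (posterior = {posterior:.4f})")
        break
else:
    print("no detection within the observation window")
```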

  15. A sequential threshold cure model for genetic analysis of time-to-event data

    DEFF Research Database (Denmark)

    Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.

    2011-01-01

    In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study, survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection ...

  16. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, entitled the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (considering parameter uncertainty only, and considering parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.
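For readers unfamiliar with SMC samplers for static parameters, the following is a hedged, generic sketch of the reweight-resample-move pattern with a tempering schedule and a Metropolis move step, applied to a simple bimodal target. It is not the PEM-SMC algorithm (no genetic or differential-evolution moves), and the target and all settings are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative bimodal target (a stand-in for a test problem, not the paper's).
def log_target(x):
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def log_prior(x):                               # broad Gaussian prior
    return -0.5 * (x / 10.0) ** 2

n_particles, n_steps, step_size = 2000, 30, 1.0
betas = np.linspace(0.0, 1.0, n_steps + 1)      # tempering schedule

x = rng.normal(0.0, 10.0, n_particles)          # sample particles from the prior
logw = np.zeros(n_particles)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Reweight: incremental weight for moving from temperature b_prev to b.
    logw += (b - b_prev) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max()); w /= w.sum()

    # Resample (multinomial) and reset the weights.
    x = x[rng.choice(n_particles, n_particles, p=w)]
    logw[:] = 0.0

    # Move: one random-walk Metropolis update per particle at temperature b.
    prop = x + step_size * rng.normal(size=n_particles)
    log_acc = (b * log_target(prop) + (1 - b) * log_prior(prop)) \
            - (b * log_target(x) + (1 - b) * log_prior(x))
    accept = np.log(rng.uniform(size=n_particles)) < log_acc
    x = np.where(accept, prop, x)

print("posterior mean of |x| (should be near 3):", np.abs(x).mean())
```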

  17. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Guiseppe, Cavaliere; Rahbæk, Anders; Taylor, A.M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense ... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice.

  18. Robust Sequential Circuits Design Technique for Low Voltage and High Noise Scenarios

    Directory of Open Access Journals (Sweden)

    Garcia-Leyva Lancelot

    2016-01-01

    In this paper we introduce an innovative input and output data redundancy principle for sequential block circuits, the blocks responsible for keeping the state of the system, and we show its efficiency compared with other robust design techniques. The methodology is quite different from Von Neumann approaches, because elements are not replicated N times; instead, the circuit checks the coherence of the redundant input data and does not allow data propagation in case of discrepancy. This mechanism does not require voting devices.

  19. Physical and theoretical modeling of rock slopes against block-flexure toppling failure

    Directory of Open Access Journals (Sweden)

    Mehdi Amini

    2015-12-01

    Full Text Available Block-flexure is the most common mode of toppling failure in natural and excavated rock slopes. In such failure, some rock blocks break due to tensile stresses and some overturn under their own weights and then all of them topple together. In this paper, first, a brief review of previous studies on toppling failures is presented. Then, the physical and mechanical properties of experimental modeling materials are summarized. Next, the physical modeling results of rock slopes with the potential of block-flexural toppling failures are explained and a new analytical solution is proposed for the stability analysis of such slopes. The results of this method are compared with the outcomes of the experiments. The comparative studies show that the proposed analytical approach is appropriate for the stability analysis of rock slopes against block-flexure toppling failure. Finally, a real case study is used for the practical verification of the suggested method.

  20. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    Science.gov (United States)

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the possibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary artery and grafts were obtained. The hemodynamic parameters were also shown, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows optimistic hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.

  1. A Partial Proportional Odds Model for Pedestrian Crashes at Mid-Blocks in Melbourne Metropolitan Area

    Directory of Open Access Journals (Sweden)

    Toran Pour Alireza

    2016-01-01

    Full Text Available Pedestrian crashes account for 11% of all reported traffic crashes in the Melbourne metropolitan area between 2004 and 2013. There are very limited studies on pedestrian accidents at mid-blocks. Mid-block crashes account for about 46% of the total pedestrian crashes in the Melbourne metropolitan area. Meanwhile, about 50% of all pedestrian fatalities occur at mid-blocks. In this research, a Partial Proportional Odds (PPO) model is applied to examine vehicle-pedestrian crash severity at mid-blocks in the Melbourne metropolitan area. The PPO model is a logistic regression model that allows the covariates that meet the proportional odds assumption to affect different crash severity levels with the same magnitude, whereas the covariates that do not meet the proportional odds assumption can have different effects on different severity levels. In this research, vehicle-pedestrian crashes at mid-blocks are analysed for the first time. In addition, some factors such as the distance of crashes to public transport stops, average road slope and some social characteristics are considered for the first time in developing the model. Results of the PPO model show that speed limit, light condition, pedestrian age and gender, and vehicle type are the most significant factors that influence vehicle-pedestrian crash severity at mid-blocks.
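For reference, a generalized ordered logit form of the partial proportional odds model described above can be sketched as follows; the notation is generic and is an assumption of this illustration, not the paper's exact specification.

```latex
% Partial proportional odds (generalized ordered logit), illustrative notation
P(Y_i > j) \;=\;
  \frac{\exp\!\big(\alpha_j + \mathbf{x}_i^{\top}\boldsymbol{\beta} + \mathbf{z}_i^{\top}\boldsymbol{\gamma}_j\big)}
       {1 + \exp\!\big(\alpha_j + \mathbf{x}_i^{\top}\boldsymbol{\beta} + \mathbf{z}_i^{\top}\boldsymbol{\gamma}_j\big)},
\qquad j = 1,\dots,J-1
```

Here Y_i is the ordered severity level of crash i, x_i collects the covariates that satisfy the proportional odds assumption (a single β across severity thresholds), and z_i collects those that violate it (threshold-specific γ_j).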

  2. Blocking temperature distribution in implanted Co-Ni nanoparticles obtained by magneto-optical measurements

    Energy Technology Data Exchange (ETDEWEB)

    D' Orazio, F.; Lucari, F. E-mail: franco.lucari@aquila.infn.it; Melchiorri, M.; Julian Fernandez, C. de; Mattei, G.; Mazzoldi, P.; Sangregorio, C.; Gatteschi, D.; Fiorani, D

    2003-05-01

    Three samples of Co-Ni alloy nanoparticles with different compositions were prepared by sequential ion implantation in silica slides. Transmission electron microscopy (TEM) showed the presence of spherical nanoparticles dispersed in the matrix. Magneto-optical Kerr effect analysis identified two magnetic components attributed to superparamagnetic particles in unblocked and blocked states, respectively. Magnetic field loops were measured as a function of temperature. Blocking temperature distributions were obtained; and their comparison with the size distributions derived from TEM provided the average magnetic anisotropy of the particles.

  3. Blocking temperature distribution in implanted Co-Ni nanoparticles obtained by magneto-optical measurements

    International Nuclear Information System (INIS)

    D'Orazio, F.; Lucari, F.; Melchiorri, M.; Julian Fernandez, C. de; Mattei, G.; Mazzoldi, P.; Sangregorio, C.; Gatteschi, D.; Fiorani, D.

    2003-01-01

    Three samples of Co-Ni alloy nanoparticles with different compositions were prepared by sequential ion implantation in silica slides. Transmission electron microscopy (TEM) showed the presence of spherical nanoparticles dispersed in the matrix. Magneto-optical Kerr effect analysis identified two magnetic components attributed to superparamagnetic particles in unblocked and blocked states, respectively. Magnetic field loops were measured as a function of temperature. Blocking temperature distributions were obtained; and their comparison with the size distributions derived from TEM provided the average magnetic anisotropy of the particles

  4. A casemix model for estimating the impact of hospital access block on the emergency department.

    Science.gov (United States)

    Stuart, Peter

    2004-06-01

    To determine the ED activity and costs resulting from access block. A casemix model (AWOOS) was developed to measure activity due to access block. Using data from four hospitals between 1998 and 2002, ED activity was measured using the urgency and disposition group (UDG) casemix model and the AWOOS model with the purpose of determining the change in ED activity due to access block. Whilst the mean length of stay in ED (admitted patients) increased by 93% between 1998 and 2002, mean UDG activity increased by 0.63% compared to a mean increase in AWOOS activity of 24.5%. The 23.9% difference between UDG and AWOOS activity represents the (unmeasured) increase in ED activity and costs for the period 1998-2002 resulting from access block. The UDG system significantly underestimates the activity in EDs experiencing marked access block.

  5. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Full Text Available Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict the optimal medium renewal time and composition.

  6. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  7. Application of blocking diagnosis methods to general circulation models. Part I: a novel detection scheme

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)

    2010-12-15

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to devising an objective scheme that is easily applicable

  8. A variational EM method for pole-zero modeling of speech with mixed block sparse and Gaussian excitation

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The modeling of speech can be used for speech synthesis and speech recognition. We present a speech analysis method based on pole-zero modeling of speech with mixed block sparse and Gaussian excitation. By using a pole-zero model, instead of the all-pole model, a better spectral fitting can be expected. Moreover, motivated by the block sparse glottal flow excitation during voiced speech and the white noise excitation for unvoiced speech, we model the excitation sequence as a combination of block sparse signals and white noise. A variational EM (VEM) method is proposed for estimating ... in reconstructing the block sparse excitation.

  9. Valuation model of exploratory blocks; Modelo de valoracao de blocos exploratorios

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Thiago Neves de; Sartori, Vanderlei [Agencia Nacional do Petroleo, Gas Natural e Biocombustiveis (ANP), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Last year marked ten years since the promulgation of the Brazilian Petroleum Act. This act regulated the sector of exploration and production of oil and natural gas in Brazil, enabling these activities to be granted to private or state companies, preceded by a bidding round. Since 1998, ANP has been conducting these bidding rounds, judging the offers according to the following criteria: Minimum Exploration Program, Local Content and Signature Bonus. The objective of this article is to present a valuation model for the blocks on offer, i.e. a model for estimating the monetary value of a block. (author)

  10. Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages

    DEFF Research Database (Denmark)

    Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe

    Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa. We also present two case studies of applying our strategies.

  11. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    Science.gov (United States)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  12. Application of Combined Cake Filtration-Complete Blocking Model to Ultrafiltration of Skim Milk

    Directory of Open Access Journals (Sweden)

    Mansoor Kazemimoghadam

    2017-10-01

    Full Text Available Membrane ultrafiltration (UF) is widely used in dairy industries for processes such as milk concentration and dehydration. The limiting factor of UF systems is fouling, which is defined as the precipitation of solutes in the form of a cake layer on the surface of the membrane. In this study, the combined cake filtration-complete blocking model was compared to the cake filtration mechanism for flux data obtained during ultrafiltration of skim milk at constant flow rate. The resistance data were also modeled using the cake filtration model and the standard blocking model. The effect of different trans-membrane pressures and temperatures on flux decline was then investigated. Based on the results obtained here, the combined complete blocking-cake formation model was in excellent agreement with the experimental data. The cake filtration model also provided good data fits and can be applied to solutions whose solutes tend to accumulate on the surface of the membrane in the form of a cake layer. With increasing pressure, the differences between the model and the experimental data increased.
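For background, constant-pressure fouling mechanisms of the kind combined in this work are commonly summarized by Hermia's characteristic blocking-law equation; the compact form below is a standard textbook sketch, not an equation taken from the paper.

```latex
% Hermia's characteristic blocking law (constant-pressure filtration)
\frac{d^{2}t}{dV^{2}} \;=\; k\left(\frac{dt}{dV}\right)^{\,n},
\qquad
n=\begin{cases}
2   & \text{complete blocking}\\
1.5 & \text{standard blocking}\\
1   & \text{intermediate blocking}\\
0   & \text{cake filtration}
\end{cases}
```

Here t is filtration time, V the cumulative permeate volume and k a fouling coefficient; a combined complete blocking-cake filtration model superimposes the flux-decline contributions of the n = 2 and n = 0 mechanisms.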

  13. Sequential Logic Model Deciphers Dynamic Transcriptional Control of Gene Expressions

    Science.gov (United States)

    Yeo, Zhen Xuan; Wong, Sum Thai; Arjunan, Satya Nanda Vel; Piras, Vincent; Tomita, Masaru; Selvarajoo, Kumar; Giuliani, Alessandro; Tsuchiya, Masa

    2007-01-01

    Background Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factor activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. Methodology Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. Principal Findings SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during the transition from specification to differentiation in the wild-type endo16 expression profile, SLM reveals that three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest a detailed mechanism of R switch activity in which an indirect dependency occurs between UI activity and the R switch during the specification-to-differentiation stage. Conclusions/Significance The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet providing rich biological

  14. Sequential logic model deciphers dynamic transcriptional control of gene expressions.

    Directory of Open Access Journals (Sweden)

    Zhen Xuan Yeo

    Full Text Available BACKGROUND: Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factor activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. METHODOLOGY: Based on the concept of a finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs and the dependency and cooperativity (such as synergy effects) among the binding sites with respect to when, how much and how fast the gene of interest is expressed. PRINCIPAL FINDINGS: SLM is verified by a set of well-studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx, is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that, during the transition from specification to differentiation in the wild-type endo16 expression profile, SLM reveals that three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and that additional activities of binding sites are required. Further analyses suggest a detailed mechanism of R switch activity in which an indirect dependency arises between UI activity and the R switch during the specification-to-differentiation stage. CONCLUSIONS/SIGNIFICANCE: The sequential logic formalism allows for a simplification of regulation network dynamics, going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet

  15. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
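
    A minimal version of the RSA ingredient of the model is sketched below for dimers on a 1D lattice; the mapping of one attempt per time step to physical time through an assumed arrival flux follows the spirit of the coupling described above, but the flux value is arbitrary.

```python
# Sketch: random sequential adsorption (RSA) of dimers on a 1D lattice.
# One adsorption attempt is made per RSA time step; the conversion of steps
# to physical time (via an assumed arrival flux) is purely illustrative.
import random

random.seed(1)
L, k = 1000, 2                    # lattice sites, adsorbate length (dimer)
occupied = [False] * L
n_attempts, adsorbed = 20000, 0
flux = 1.0e3                      # assumed attempted arrivals per second

for _ in range(n_attempts):
    i = random.randrange(L - k + 1)
    if not any(occupied[i:i + k]):            # geometric constraint
        for j in range(i, i + k):
            occupied[j] = True
        adsorbed += 1

coverage = sum(occupied) / L
print(f"coverage = {coverage:.3f} after {n_attempts} attempts "
      f"(~{n_attempts / flux:.1f} s of physical time at the assumed flux)")
```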

  16. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    Full Text Available A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  17. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments for the logical modeling of memory in digital devices. The operations in question are the logic-dynamical operation named venjunction, together with the venjunctive function, as well as the sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they fit organically into the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  18. Making Career Decisions--A Sequential Elimination Approach.

    Science.gov (United States)

    Gati, Itamar

    1986-01-01

    Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…
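
    A bare-bones version of sequential elimination is sketched below; the occupations, aspects and thresholds are entirely hypothetical and only illustrate the elimination-by-aspects style of choice described above.

```python
# Sketch: sequential elimination of occupational alternatives by aspects.
# Aspects are examined in order of importance and alternatives failing the
# current aspect are eliminated. All names and thresholds are hypothetical.
occupations = {
    "engineer":  {"income": 8, "autonomy": 6, "variety": 3},
    "teacher":   {"income": 5, "autonomy": 7, "variety": 2},
    "pilot":     {"income": 9, "autonomy": 4, "variety": 9},
    "architect": {"income": 7, "autonomy": 8, "variety": 4},
}
aspects = [("income", 6), ("autonomy", 5), ("variety", 5)]   # (aspect, minimum)

remaining = set(occupations)
for aspect, minimum in aspects:
    survivors = {o for o in remaining if occupations[o][aspect] >= minimum}
    if not survivors:            # never eliminate every alternative at once
        break
    remaining = survivors
    print(f"after '{aspect}' >= {minimum}: {sorted(remaining)}")
print("final consideration set:", sorted(remaining))
```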

  19. Sequential Computerized Mastery Tests--Three Simulation Studies

    Science.gov (United States)

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
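
    For readers unfamiliar with the 3-parameter logistic model, the sketch below simulates item responses under it; the count-based early-stopping rule is only a placeholder and is not the sequential mastery rule evaluated in the study.

```python
# Sketch: simulating responses under the 3-parameter logistic (3PL) IRT model
# for a sequential mastery test. The simple stopping rule is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b, c):
    """3PL: P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

theta = 0.3                                   # examinee ability (assumed)
n_items = 30
a = rng.uniform(0.8, 2.0, n_items)            # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)             # difficulty parameters
c = rng.uniform(0.10, 0.25, n_items)          # pseudo-guessing parameters

correct = 0
for i in range(n_items):
    correct += rng.random() < p_correct(theta, a[i], b[i], c[i])
    if i >= 9 and correct / (i + 1) >= 0.8:   # toy sequential mastery decision
        print(f"mastery decided after {i + 1} items ({correct} correct)")
        break
else:
    print(f"no early decision; {correct}/{n_items} items correct")
```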

  20. Modelling of the Vajont rockslide displacements by delayed plasticity of interacting sliding blocks

    Science.gov (United States)

    Castellanza, Riccardo; Hedge, Amarnath; Crosta, Giovanni; di Prisco, Claudio; Frigerio, Gabriele

    2015-04-01

    In order to model complex sliding masses subject to continuous slow movements related to water table fluctuations, it is convenient to: i) model the time-dependent mechanical behaviour of the materials by means of a visco-plastic constitutive law; ii) assume the water table fluctuation as the main input inducing displacement acceleration; iii) account for the 3D constraints while maintaining a level of simplicity that allows implementation in an EWS (Early Warning System) for risk management. In this work, a 1D pseudo-dynamic visco-plastic model (Secondi et al. 2011), based on Perzyna's delayed plasticity theory, is applied. The sliding mass is considered as a rigid block subject to its self-weight, inertial forces and seepage forces varying with time. All non-linearities are lumped in a thin layer positioned between the rigid block and the stable bedrock. The mechanical response of this interface is assumed to be visco-plastic. The viscous nucleus is assumed to be of the exponential type, so that irreversible strains develop for both positive and negative values of the yield function; the sliding mass is discretized into blocks to cope with complex rockslide geometries; and the friction angle is assumed to decrease with strain rate according to a strain-rate law of the Dieterich-Ruina type. To validate the improvements introduced in this paper, the displacements of the Vajont rockslide from 1960 up to the failure, which occurred on October 9th, 1963, are simulated. It will be shown that, in its modified version, the model satisfactorily fits the Vajont pre-collapse displacements triggered by the fluctuation of the Vajont lake level and the associated groundwater level. The model is able to follow the critical acceleration of the motion with a minimal change in friction properties. The discretization into interacting sliding blocks confirms its suitability for modelling the complex 3D rockslide behaviour. We are currently implementing a multi-block model capable of including
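
    To make the overstress idea concrete, a one-block toy version is sketched below: a rigid block on an inclined visco-plastic interface, driven by a rising water head through a Perzyna-type law. Every number and the uplift rule are invented; no attempt is made to reproduce the multi-block Vajont calibration or the rate-dependent friction angle.

```python
# Sketch: a single rigid block on an inclined visco-plastic interface, driven
# by a prescribed water-level history via a Perzyna-type overstress law.
# All parameters are invented and only illustrate the structure of such a model.
import math

g, rho_w = 9.81, 1000.0
W = 3.6e9                  # block weight per metre width, N/m (assumed)
base = 1000.0              # basal length per metre width, m (assumed)
alpha = math.radians(20)   # dip of the sliding surface (assumed)
phi = math.radians(23)     # interface friction angle (assumed)
gamma = 1.0e-15            # fluidity of the viscous nucleus, m/(N s) (assumed)

def uplift(head):
    """Toy uplift force from a water head acting on half of the base."""
    return rho_w * g * head * base * 0.5

disp, dt = 0.0, 86400.0    # displacement (m), time step = 1 day
for day in range(801):
    head = 40.0 + 0.1 * day                 # slowly rising water level (toy)
    driving = W * math.sin(alpha)
    resisting = (W * math.cos(alpha) - uplift(head)) * math.tan(phi)
    overstress = driving - resisting        # yield function, force units
    v = gamma * max(overstress, 0.0)        # Perzyna-type overstress law, m/s
    disp += v * dt
    if day % 200 == 0:
        print(f"day {day:3d}  head={head:5.1f} m  "
              f"v={v * 1e3 * 86400:6.2f} mm/day  disp={disp:6.3f} m")
```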

  1. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
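
    For context, the generative side of the stochastic block model is sketched below for two planted groups; the EM/BP inference analysed in the paper is not shown, and the connection probabilities are arbitrary illustrative values.

```python
# Sketch: sampling a graph from a two-group stochastic block model (SBM).
# Inference (EM with belief propagation) is not shown; p_in and p_out are
# arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(42)
n = 200
groups = rng.integers(0, 2, n)                 # planted group labels
p_in, p_out = 0.10, 0.02                       # connection probabilities

A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        p = p_in if groups[i] == groups[j] else p_out
        if rng.random() < p:
            A[i, j] = A[j, i] = 1

within = A[np.equal.outer(groups, groups)].sum() // 2
print(f"edges: {A.sum() // 2}, within-group edges: {int(within)}")
```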

  2. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    Science.gov (United States)

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive ''learning rate'' that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.

  3. Hydration effects on the electronic properties of eumelanin building blocks

    International Nuclear Information System (INIS)

    Assis Oliveira, Leonardo Bruno; Fonseca, Tertius L.; Costa Cabral, Benedito J.; Coutinho, Kaline; Canuto, Sylvio

    2016-01-01

    Theoretical results for the electronic properties of eumelanin building blocks in the gas phase and water are presented. The building blocks presently investigated include the monomeric species DHI (5,6-dihydroxyindole) or hydroquinone (HQ), DHICA (5,6-dihydroxyindole-2-carboxylic acid), indolequinone (IQ), quinone methide (MQ), two covalently bonded dimers [HM ≡ HQ + MQ and IM ≡ IQ + MQ], and two tetramers [HMIM ≡ HQ + IM, IMIM ≡ IM + IM]. The electronic properties in water were determined by carrying out sequential Monte Carlo/time dependent density functional theory calculations. The results illustrate the role played by hydrogen bonding and electrostatic interactions in the electronic properties of eumelanin building blocks in a polar environment. In water, the dipole moments of monomeric species are significantly increased ([54–79]%) relative to their gas phase values. Recently, it has been proposed that the observed enhancement of the higher-energy absorption intensity in eumelanin can be explained by excitonic coupling among eumelanin protomolecules [C.-T. Chen et al., Nat. Commun. 5, 3859 (2014)]. Here, we are providing evidence that for DHICA, IQ, and HMIM, the electronic absorption toward the higher-energy end of the spectrum ([180–220] nm) is enhanced by long-range Coulombic interactions with the water environment. It was verified that by superposing the absorption spectra of different eumelanin building blocks corresponding to the monomers, dimers, and tetramers in liquid water, the behaviour of the experimental spectrum, which is characterised by a nearly monotonic decay from the ultraviolet to the infrared, is qualitatively reproduced. This result is in keeping with a “chemical disorder model,” where the broadband absorption of eumelanin pigments is determined by the superposition of the spectra associated with the monomeric and oligomeric building blocks.

  4. Hydration effects on the electronic properties of eumelanin building blocks

    Energy Technology Data Exchange (ETDEWEB)

    Assis Oliveira, Leonardo Bruno [Instituto de Física da Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Departamento de Física - CEPAE, Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Escola de Ciências Exatas e da Computação, Pontifícia Universidade Católica de Goiás, 74605-010 Goiânia, GO (Brazil); Fonseca, Tertius L. [Instituto de Física da Universidade Federal de Goiás, 74690-900 Goiânia, GO (Brazil); Costa Cabral, Benedito J., E-mail: ben@cii.fc.ul.pt [Grupo de Física Matemática da Universidade de Lisboa and Departamento de Química e Bioquímica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa (Portugal); Coutinho, Kaline; Canuto, Sylvio [Instituto de Física da Universidade de São Paulo, CP 66318, 05314-970 São Paulo, SP (Brazil)

    2016-08-28

    Theoretical results for the electronic properties of eumelanin building blocks in the gas phase and water are presented. The building blocks presently investigated include the monomeric species DHI (5,6-dihydroxyindole) or hydroquinone (HQ), DHICA (5,6-dihydroxyindole-2-carboxylic acid), indolequinone (IQ), quinone methide (MQ), two covalently bonded dimers [HM ≡ HQ + MQ and IM ≡ IQ + MQ], and two tetramers [HMIM ≡ HQ + IM, IMIM ≡ IM + IM]. The electronic properties in water were determined by carrying out sequential Monte Carlo/time dependent density functional theory calculations. The results illustrate the role played by hydrogen bonding and electrostatic interactions in the electronic properties of eumelanin building blocks in a polar environment. In water, the dipole moments of monomeric species are significantly increased ([54–79]%) relative to their gas phase values. Recently, it has been proposed that the observed enhancement of the higher-energy absorption intensity in eumelanin can be explained by excitonic coupling among eumelanin protomolecules [C.-T. Chen et al., Nat. Commun. 5, 3859 (2014)]. Here, we are providing evidence that for DHICA, IQ, and HMIM, the electronic absorption toward the higher-energy end of the spectrum ([180–220] nm) is enhanced by long-range Coulombic interactions with the water environment. It was verified that by superposing the absorption spectra of different eumelanin building blocks corresponding to the monomers, dimers, and tetramers in liquid water, the behaviour of the experimental spectrum, which is characterised by a nearly monotonic decay from the ultraviolet to the infrared, is qualitatively reproduced. This result is in keeping with a “chemical disorder model,” where the broadband absorption of eumelanin pigments is determined by the superposition of the spectra associated with the monomeric and oligomeric building blocks.

  5. Influence of blocking on Northern European and Western Russian heatwaves in large climate model ensembles

    Science.gov (United States)

    Schaller, N.; Sillmann, J.; Anstey, J.; Fischer, E. M.; Grams, C. M.; Russo, S.

    2018-05-01

    Better preparedness for summer heatwaves could mitigate their adverse effects on society. This can potentially be attained through an increased understanding of the relationship between heatwaves and one of their main dynamical drivers, atmospheric blocking. In the 1979–2015 period, we find that there is a significant correlation between summer heatwave magnitudes and the number of days influenced by atmospheric blocking in Northern Europe and Western Russia. Using three large global climate model ensembles, we find similar correlations, indicating that these three models are able to represent the relationship between extreme temperature and atmospheric blocking, despite having biases in their simulation of individual climate variables such as temperature or geopotential height. Our results emphasize the need to use large ensembles of different global climate models as single realizations do not always capture this relationship. The three large ensembles further suggest that the relationship between summer heatwaves and atmospheric blocking will not change in the future. This could be used to statistically model heatwaves with atmospheric blocking as a covariate and aid decision-makers in planning disaster risk reduction and adaptation to climate change.

  6. ITER blanket module shield block design and analysis

    International Nuclear Information System (INIS)

    Mitin, D.; Khomyakov, S.; Razmerov, A.; Strebkov, Yu.

    2008-01-01

    This paper presents the alternative design of the shield block cooling path for a typical ITER blanket module with a predominantly sequential flow circuit. A number of serious disadvantages have been observed for the reference design, where the parallel flow circuit is used, which is inherent in the majority of blanket modules. The paper discusses these disadvantages and demonstrates the benefit of the alternative design based on the detailed design and the technological, hydraulic, thermal, structural and strength analyses, conducted for module no. 17

  7. The statistical decay of very hot nuclei: from sequential decay to multifragmentation

    International Nuclear Information System (INIS)

    Carlson, B.V.; Donangelo, R.; Universidad de la Republica, Montevideo; Souza, S.R.; Universidade Federal do Rio Grande do Sul; Lynch, W.G.; Steiner, A.W.; Tsang, M.B.

    2010-01-01

    Full text. At low excitation energies, the compound nucleus typically decays through the sequential emission of light particles. As the energy increases, the emission probability of heavier fragments increases until, at sufficiently high energies, several heavy complex fragments are emitted during the decay. The extent to which this fragment emission is simultaneous or sequential has been a subject of theoretical and experimental study for almost 30 years. The Statistical Multifragmentation Model, an equilibrium model of simultaneous fragment emission, uses the configurations of a statistical ensemble to determine the distribution of primary fragments of a compound nucleus. The primary fragments are then assumed to decay by sequential compound emission or Fermi breakup. As the first step toward a more unified model of these processes, we demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical version of the statistical multifragmentation model. We then establish a link between this unified Fermi breakup / statistical multifragmentation model and the well-known process of compound nucleus emission, which permits simultaneous and sequential emission to be considered on the same footing. Within this unified framework, we analyze the increasing importance of simultaneous, multifragment decay with increasing excitation energy and decreasing lifetime of the compound nucleus. (author)

  8. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Full Text Available Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
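
    The kind of sequence scoring that underlies such an HMM-based analysis can be sketched with a small discrete forward algorithm; the two hidden states stand in loosely for stereotyped strategies, and all matrices are invented rather than fitted values from the study.

```python
# Sketch: scoring a sequence of binary choices under a small discrete HMM.
# The two hidden states stand in for stereotyped strategies ("explore" vs
# "exploit"); all matrices are invented, not the fitted model of the study.
import numpy as np

pi = np.array([0.5, 0.5])                 # initial state distribution
A = np.array([[0.9, 0.1],                 # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.5, 0.5],                 # emission: P(choice | state)
              [0.9, 0.1]])

def log_likelihood(obs):
    """Forward algorithm with per-step normalisation (scaling)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

choices = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]  # observed left/right decisions
print(f"log-likelihood = {log_likelihood(choices):.3f}")
```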

  9. Research on manufacturing service behavior modeling based on block chain theory

    Science.gov (United States)

    Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu

    2018-04-01

    According to the attribute characteristics of the machining process, manufacturing service behavior is divided into service, basic, process and resource attributes, and an attribute information model of manufacturing services is established. The manufacturing service behavior information is partitioned into public and private domains. Additionally, block chain technology is introduced and an information model of manufacturing services based on block chain principles is established, which addresses the problem of sharing and protecting processing-behavior information and ensures that data cannot be tampered with. Based on the key-pairing verification relationship, a selective publishing mechanism for manufacturing information is established, achieving traceability of product data and guaranteeing processing quality.
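
    A minimal hash-chained record log with a public/private field split is sketched below to make the tamper-evidence idea concrete; it is not the key-pairing or selective-publishing scheme of the paper, and all record contents are hypothetical.

```python
# Sketch: a minimal hash-chained log of manufacturing-service records with a
# public/private field split. This only illustrates tamper evidence; it is not
# the selective-publishing or key-pairing scheme of the paper.
import hashlib, json

def make_block(prev_hash, public, private):
    body = {
        "prev": prev_hash,
        "public": public,                                  # shared openly
        "private_digest": hashlib.sha256(                  # only a digest is shared
            json.dumps(private, sort_keys=True).encode()).hexdigest(),
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_block("0" * 64,
                    {"step": "milling", "machine": "M-102"},
                    {"spindle_rpm": 4500, "operator": "A-17"})]
chain.append(make_block(chain[-1]["hash"],
                        {"step": "inspection", "result": "pass"},
                        {"gauge_readings": [0.012, 0.009]}))

# verification: recompute each hash and check the back-links
for i, blk in enumerate(chain):
    body = {k: v for k, v in blk.items() if k != "hash"}
    ok = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() == blk["hash"]
    linked = i == 0 or blk["prev"] == chain[i - 1]["hash"]
    print(f"block {i}: hash ok = {ok}, linked = {linked}")
```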

  10. An enhanced model for minimizing fuel consumption under block-queuing in a drive-through service system

    Energy Technology Data Exchange (ETDEWEB)

    Reilly, C.H.; Berglin, J. [University of Central Florida, Orlando, FL (United States). Dept. of Industrial Engineering and Management Systems

    2004-05-01

    We present a new model for determining the optimal block-size under block-queuing in a simple, single-channel queue at a drive-through service facility. With block-queuing, a queue is partitioned into an active section and a passive section, where drivers are asked to turn off their engines until the active section clears. Our model prescribes a block-size, i.e., a maximum number of vehicles in the active section, which minimizes the expected amount of fuel consumed in the queue. It can assess the effects of the traffic intensity, the service-time variance, and the proportion of compliant drivers in the passive section on the optimal block-size and on fuel consumption in the queue. (author)
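
    A crude Monte Carlo version of the setting is sketched below: a single-channel queue in which only the vehicle at the window and the active section idle with engines on. The arrival rate, service rate, idling fuel rate and full driver compliance are assumed, and the restart-fuel penalty that creates the trade-off in the authors' model is omitted, so this only illustrates the bookkeeping, not the optimisation result.

```python
# Sketch: time-stepped Monte Carlo of idling fuel under block-queuing in a
# single-channel queue. Rates and the idling fuel rate are assumed; the
# restart-fuel penalty of the paper's model is deliberately left out.
import random

random.seed(7)
lam, mu = 1.0, 1.2         # arrivals and services per minute (assumed)
idle_rate = 0.02           # litres of fuel per idling minute (assumed)
dt, horizon = 0.05, 10000  # time step and horizon, minutes

def simulate(block_size):
    n, fuel = 0, 0.0                      # vehicles in system, fuel idled
    for _ in range(int(horizon / dt)):
        if random.random() < lam * dt:    # arrival
            n += 1
        if n > 0 and random.random() < mu * dt:  # service completion
            n -= 1
        engines_on = min(n, block_size + 1)      # window car + active section
        fuel += engines_on * idle_rate * dt
    return fuel

for b in (1, 2, 4, 8):
    print(f"block size {b}: ~{simulate(b):7.1f} L idled over {horizon} min")
```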

  11. A Relational Account of Call-by-Value Sequentiality

    DEFF Research Database (Denmark)

    Riecke, Jon Gary; Sandholm, Anders Bo

    2002-01-01

    We construct a model for FPC, a purely functional, sequential, call-by-value language. The model is built from partial continuous functions, in the style of Plotkin, further constrained to be uniform with respect to a class of logical relations. We prove that the model is fully abstract....

  12. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    Science.gov (United States)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Applications of data assimilation techniques have been widely used to improve upon the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters" provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
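
    The dual updating idea can be illustrated on a toy linear state-space model with a basic SIR particle filter in which each particle carries both a state and a parameter; the kernel jitter echoes the kernel-smoothing step, but the model and all constants are assumptions, not the storage function model of the paper.

```python
# Sketch: dual state-parameter estimation with a basic SIR particle filter on
# a toy linear state-space model (not the paper's storage function model).
# The parameter particles are jittered with a small kernel each step.
import numpy as np

rng = np.random.default_rng(3)
a_true, q, r, T, N = 0.85, 0.5, 0.3, 200, 1000

# simulate synthetic observations
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, q)
    ys.append(x + rng.normal(0, r))

xs = rng.normal(0, 1, N)              # state particles
As = rng.uniform(0.0, 1.0, N)         # parameter particles
for y in ys:
    As = As + rng.normal(0, 0.01, N)             # kernel jitter on parameter
    xs = As * xs + rng.normal(0, q, N)           # propagate states
    w = np.exp(-0.5 * ((y - xs) / r) ** 2)       # likelihood weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)             # resample (SIR)
    xs, As = xs[idx], As[idx]

print(f"estimated a = {As.mean():.3f} (true value {a_true})")
```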

  13. Ecohydrologic process modeling of mountain block groundwater recharge.

    Science.gov (United States)

    Magruder, Ian A; Woessner, William W; Running, Steve W

    2009-01-01

    Regional mountain block recharge (MBR) is a key component of alluvial basin aquifer systems typical of the western United States. Yet neither water scientists nor resource managers have a commonly available and reasonably invoked quantitative method to constrain MBR rates. Recent advances in landscape-scale ecohydrologic process modeling offer the possibility that meteorological data and land surface physical and vegetative conditions can be used to generate estimates of MBR. A water balance was generated for a temperate 24,600-ha mountain watershed, elevation 1565 to 3207 m, using the ecosystem process model Biome-BGC (BioGeochemical Cycles) (Running and Hunt 1993). Input data included remotely sensed landscape information and climate data generated with the Mountain Climate Simulator (MT-CLIM) (Running et al. 1987). Estimated mean annual MBR flux into the crystalline bedrock terrain is 99,000 m³/d, or approximately 19% of annual precipitation for the 2003 water year. Controls on MBR predictions include evapotranspiration (radiation limited in wet years and moisture limited in dry years), soil properties, vegetative ecotones (significant at lower elevations), and snowmelt (dominant recharge process). The ecohydrologic model is also used to investigate how climatic and vegetative controls influence recharge dynamics within three elevation zones. The ecohydrologic model proves useful for investigating controls on recharge to mountain blocks as a function of climate and vegetation. Future efforts will need to investigate the uncertainty in the modeled water balance by incorporating an advanced understanding of mountain recharge processes, an ability to simulate those processes at varying scales, and independent approaches to calibrating MBR estimates. Copyright © 2009 The Author(s). Journal compilation © 2009 National Ground Water Association.

  14. Slip-Size Distribution and Self-Organized Criticality in Block-Spring Models with Quenched Randomness

    Science.gov (United States)

    Sakaguchi, Hidetsugu; Kadowaki, Shuntaro

    2017-07-01

    We study slowly pulling block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If external forces are slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws. The exponent is close to that in the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated in cases of Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.
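
    A compact way to reproduce the flavour of such slip statistics is an OFC-style cellular-automaton reduction of a slowly driven block-spring chain with quenched random thresholds, sketched below; the specific friction laws studied in the paper are not reproduced.

```python
# Sketch: OFC-style cellular-automaton reduction of a slowly driven 1D
# block-spring chain with quenched random slip thresholds. Slip (avalanche)
# sizes are recorded; the paper's friction laws are not reproduced.
import numpy as np

rng = np.random.default_rng(5)
N, alpha, n_events = 256, 0.2, 20000
f = rng.uniform(0, 1, N)                  # stress on each block
f_th = rng.uniform(0.9, 1.1, N)           # quenched random thresholds
sizes = []

for _ in range(n_events):
    f += (f_th - f).min()                 # drive slowly to the next failure
    unstable = list(np.where(f >= f_th)[0])
    size = 0
    while unstable:
        i = unstable.pop()
        if f[i] < f_th[i]:
            continue
        size += 1
        transfer = alpha * f[i]
        f[i] = 0.0                        # the block slips and relaxes
        for j in (i - 1, i + 1):          # load is passed to neighbours
            if 0 <= j < N:
                f[j] += transfer
                if f[j] >= f_th[j]:
                    unstable.append(j)
    sizes.append(size)

print("max slip size:", max(sizes))
for s in range(1, 9):
    print(f"P(size={s}) ~= {sizes.count(s) / len(sizes):.3f}")
```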

  15. Percutaneous sciatic nerve block with tramadol induces analgesia and motor blockade in two animal pain models

    International Nuclear Information System (INIS)

    Sousa, A.M.; Ashmawi, H.A.; Costa, L.S.; Posso, I.P.; Slullitel, A.

    2011-01-01

    Local anesthetic efficacy of tramadol has been reported following intradermal application. Our aim was to investigate the effect of perineural tramadol as the sole analgesic in two pain models. Male Wistar rats (280-380 g; N = 5/group) were used in these experiments. A neurostimulation-guided sciatic nerve block was performed and 2% lidocaine or tramadol (1.25 and 5 mg) was perineurally injected in two different animal pain models. In the flinching behavior test, the number of flinches was evaluated and in the plantar incision model, mechanical and heat thresholds were measured. Motor effects of lidocaine and tramadol were quantified and a motor block score elaborated. Tramadol, 1.25 mg, completely blocked the first and reduced the second phase of the flinching behavior test. In the plantar incision model, tramadol (1.25 mg) increased both paw withdrawal latency in response to radiant heat (8.3 ± 1.1, 12.7 ± 1.8, 8.4 ± 0.8, and 11.1 ± 3.3 s) and mechanical threshold in response to von Frey filaments (459 ± 82.8, 447.5 ± 91.7, 320.1 ± 120, 126.43 ± 92.8 mN) at 5, 15, 30, and 60 min, respectively. Sham block or contralateral sciatic nerve block did not differ from perineural saline injection throughout the study in either model. The effect of tramadol was not antagonized by intraperitoneal naloxone. High dose tramadol (5 mg) blocked motor function as well as 2% lidocaine. In conclusion, tramadol blocks nociception and motor function in vivo similar to local anesthetics

  16. Percutaneous sciatic nerve block with tramadol induces analgesia and motor blockade in two animal pain models

    Directory of Open Access Journals (Sweden)

    A.M. Sousa

    2012-02-01

    Full Text Available Local anesthetic efficacy of tramadol has been reported following intradermal application. Our aim was to investigate the effect of perineural tramadol as the sole analgesic in two pain models. Male Wistar rats (280-380 g; N = 5/group) were used in these experiments. A neurostimulation-guided sciatic nerve block was performed and 2% lidocaine or tramadol (1.25 and 5 mg) was perineurally injected in two different animal pain models. In the flinching behavior test, the number of flinches was evaluated and in the plantar incision model, mechanical and heat thresholds were measured. Motor effects of lidocaine and tramadol were quantified and a motor block score elaborated. Tramadol, 1.25 mg, completely blocked the first and reduced the second phase of the flinching behavior test. In the plantar incision model, tramadol (1.25 mg) increased both paw withdrawal latency in response to radiant heat (8.3 ± 1.1, 12.7 ± 1.8, 8.4 ± 0.8, and 11.1 ± 3.3 s) and mechanical threshold in response to von Frey filaments (459 ± 82.8, 447.5 ± 91.7, 320.1 ± 120, 126.43 ± 92.8 mN) at 5, 15, 30, and 60 min, respectively. Sham block or contralateral sciatic nerve block did not differ from perineural saline injection throughout the study in either model. The effect of tramadol was not antagonized by intraperitoneal naloxone. High dose tramadol (5 mg) blocked motor function as well as 2% lidocaine. In conclusion, tramadol blocks nociception and motor function in vivo similar to local anesthetics.

  17. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
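
    The single-agent primitive underlying this framework, Wald's sequential probability ratio test, is sketched below for Gaussian observations; the multi-agent coordination, measurement and disagreement costs of the paper are not modelled.

```python
# Sketch: a single-agent sequential probability ratio test (SPRT), the basic
# primitive behind sequential hypothesis testing; the multi-agent coordination
# and cost structure of the paper are not modelled here.
import math, random

random.seed(11)
mu0, mu1, sigma = 0.0, 0.5, 1.0          # H0 and H1 means, known noise level
alpha, beta = 0.05, 0.05                 # target error probabilities
A = math.log((1 - beta) / alpha)         # upper threshold (accept H1)
B = math.log(beta / (1 - alpha))         # lower threshold (accept H0)

def sprt(true_mu):
    llr, n = 0.0, 0
    while B < llr < A:
        x = random.gauss(true_mu, sigma)
        n += 1
        # log-likelihood ratio increment for a Gaussian observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
    return ("accept H1" if llr >= A else "accept H0"), n

for mu in (mu0, mu1):
    decision, n = sprt(mu)
    print(f"true mean {mu}: {decision} after {n} samples")
```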

  18. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC² method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.

  19. Block Pickard Models for Two-Dimensional Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2009-01-01

    In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines...... for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and a minimum distance 3 constraint by defining super symbols on blocks of binary symbols....

  20. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
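
    The sketch below integrates a small generalized Lotka-Volterra network with asymmetric (rock-paper-scissors-like) inhibition, which produces the winnerless, sequential switching alluded to above; the connection matrix is invented and is not the paper's analytical construction.

```python
# Sketch: sequential switching in a small generalized Lotka-Volterra network
# dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j) with asymmetric inhibition.
# The connection matrix is invented; it merely produces heteroclinic-like
# winnerless switching, not the paper's analytical construction.
import numpy as np

sigma = np.ones(3)
rho = np.array([[1.0, 0.5, 2.0],      # asymmetric (cyclic) inhibition
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])
x = np.array([0.9, 0.05, 0.05])
dt, steps = 0.01, 12000

dominant = []
for k in range(steps):
    x = x + dt * x * (sigma - rho @ x)     # explicit Euler step
    x = np.maximum(x, 1e-6)                # small floor keeps rates positive
    if k % 1000 == 0:
        dominant.append(int(np.argmax(x)))
print("dominant unit sampled over time:", dominant)
```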

  1. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  2. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones. There were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  3. Tyre tread-block friction: modelling, simulation and experimental validation

    Science.gov (United States)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres are used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motor cycles, cars, busses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road and their performance characteristics are of eminent importance for safety and comfort. Much research has been addressed to optimise tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown to a science in its own. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which allows to take into account effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls, little is known about the friction between tread-block elements and road. This is particularly obvious in the case when snow, ice, water or a third-body layer are present in the tyre-road contact. In the present paper, we give a survey on the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  4. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  5. Levobupivacaine vs racemic bupivacaine in spinal anesthesia for sequential bilateral total knee arthroplasty: a retrospective cohort study.

    Science.gov (United States)

    Chen, Chee Kean; Lau, Francis C S; Lee, Woo Guan; Phui, Vui Eng

    2016-09-01

    To compare the anesthetic potency and safety of spinal anesthesia with higher dosages of levobupivacaine and bupivacaine in patients undergoing sequential bilateral total knee arthroplasty (TKA). Retrospective cohort study. Operation theater with postoperative inpatient follow-up. The medical records of 315 patients who underwent sequential bilateral TKA were reviewed. Patients who received intrathecal levobupivacaine 0.5% were compared with patients who received hyperbaric bupivacaine 0.5% with fentanyl 25 μg for spinal anesthesia. The primary outcome was the use of rescue analgesia (systemic opioids, conversion to general anesthesia) during surgery for both groups. Secondary outcomes included adverse effects of local anesthetics (hypotension and bradycardia) during surgery and morbidity related to spinal anesthesia (postoperative nausea, vomiting, and bleeding) during hospital stay. One hundred fifty patients who received intrathecal levobupivacaine 0.5% (group L) were compared with 90 patients given hyperbaric bupivacaine 0.5% with fentanyl 25 μg (group B). The mean volume of levobupivacaine administered was 5.8 mL (range, 5.0-6.0 mL), and that of bupivacaine was 3.8 mL (range, 3.5-4.0 mL). Both groups achieved similar maximal sensory level of block (T6). The time to maximal height of sensory block was significantly shorter in group B than group L, 18.2 ± 4.5 vs 23.9 ± 3.8 minutes (P < .001). The time to motor block of Bromage 3 was also shorter in group B (8.7 ± 4.1 minutes) than group L (16.0 ± 4.5 minutes) (P < .001). Patients in group B required more anesthetic supplement than group L (P < .001). Hypotension and postoperative bleeding were significantly less common in group L than group B. Levobupivacaine at a higher dosage provided longer duration of spinal anesthesia with a better safety profile in sequential bilateral TKA. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture to treat sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study concerns the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is completed from lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are performed on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take into account the sequentiality at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr

  7. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    Full Text Available We propose an optimal electric energy management of a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make the optimal 24-hour energy management of the multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with a physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by the ancillary internal trading between the microgrids in the cooperative community, which reduces the extra cost from unnecessary external trading by adjusting the electric energy production amounts of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  8. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
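
    The core idea of processing a vector measurement one component at a time can be checked on a linear toy problem, where the sequential scalar updates reproduce the batch Kalman update exactly; the quaternion (multiplicative) attitude mechanics and the deferred covariance update of the SMEKF are not reproduced here.

```python
# Sketch: sequential scalar processing of a vector measurement in a Kalman
# update. For a linear model this matches the batch update; the multiplicative
# quaternion attitude mechanics of the SMEKF are not reproduced here.
import numpy as np

x = np.array([1.0, -0.5])                      # prior state estimate
P = np.diag([0.5, 0.8])                        # prior covariance
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                     # measurement matrix (3 obs)
R = np.diag([0.1, 0.2, 0.1])                   # independent measurement noise
z = np.array([0.9, 0.7, -0.8])                 # measured vector

# batch update
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_batch = x + K @ (z - H @ x)

# sequential update: one scalar observation at a time
x_seq, P_seq = x.copy(), P.copy()
for i in range(len(z)):
    h, r = H[i], R[i, i]
    s = h @ P_seq @ h + r                      # innovation variance (scalar)
    k = P_seq @ h / s                          # gain vector
    x_seq = x_seq + k * (z[i] - h @ x_seq)
    P_seq = P_seq - np.outer(k, h) @ P_seq

print("batch     :", np.round(x_batch, 6))
print("sequential:", np.round(x_seq, 6))       # identical for a linear model
```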

  9. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Directory of Open Access Journals (Sweden)

    Fangjun Qin

    2018-05-01

    Full Text Available In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  10. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

  11. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. A controlled human malaria infection model enabling evaluation of transmission-blocking interventions

    NARCIS (Netherlands)

    Collins, K.A.; Wang, C.Y.; Adams, M.; Mitchell, H.; Rampton, M.; Elliott, S.; Reuling, I.J.; Bousema, T.; Sauerwein, R.; Chalon, S.; Mohrle, J.J.; McCarthy, J.S.

    2018-01-01

    BACKGROUND: Drugs and vaccines that can interrupt the transmission of Plasmodium falciparum will be important for malaria control and elimination. However, models for early clinical evaluation of candidate transmission-blocking interventions are currently unavailable. Here, we describe a new model

  13. Mechanistic Understanding of Tungsten Oxide In-Plane Nanostructure Growth via Sequential Infiltration Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Jin; Suh, Hyo Seon; Zhou, Chun; Mane, Anil U.; Lee, Byeongdu; Kim, Soojeong; Emery, Jonathan D.; Elam, Jeffrey W.; Nealey, Paul F.; Fenter, Paul; Fister, Timothy T.

    2018-02-21

    Tungsten oxide (WO3-x) nanostructures with hexagonal in-plane arrangements were fabricated by sequential infiltration synthesis (SIS), using the selective interaction of gas phase precursors with functional groups in one domain of a block copolymer (BCP) self-assembled template. Such structures are highly desirable for various practical applications and as model systems for fundamental studies. The nanostructures were characterized by cross-sectional scanning electron microscopy, grazing-incidence small/wide-angle X-ray scattering (GISAXS/GIWAXS), and X-ray absorption near edge structure (XANES) measurements at each stage during the SIS process and subsequent thermal treatments, to provide a comprehensive picture of their evolution in morphology, crystallography and electronic structure. In particular, we discuss the critical role of SIS Al2O3 seeds toward modifying the chemical affinity and free volume in a polymer for subsequent infiltration of gas phase precursors. The insights into SIS growth obtained from this study are valuable to the design and fabrication of a wide range of targeted nanostructures.

  14. Block spins and chirality in Heisenberg model on Kagome and triangular lattices

    International Nuclear Information System (INIS)

    Subrahmanyam, V.

    1994-01-01

    The spin-1/2 Heisenberg model (HM) is investigated using a block-spin renormalization approach on Kagome and triangular lattices. In both cases, after coarse graining the triangles on original lattice and truncation of the Hilbert space to the triangular ground state subspace, HM reduces to an effective model on a triangular lattice in terms of the triangular-block degrees of freedom viz. the spin and the chirality quantum numbers. The chirality part of the effective Hamiltonian captures the essential difference between the two lattices. It is seen that simple eigenstates can be constructed for the effective model whose energies serve as upper bounds on the exact ground state energy of HM, and chiral ordered variational states have high energies compared to the other variational states. (author). 12 refs, 2 figs

  15. Relative resilience to noise of standard and sequential approaches to measurement-based quantum computation

    Science.gov (United States)

    Gallagher, C. B.; Ferraro, A.

    2018-05-01

    A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.

  16. Transient photocurrent in molecular junctions: singlet switching on and triplet blocking.

    Science.gov (United States)

    Petrov, E G; Leonov, V O; Snitsarev, V

    2013-05-14

    The kinetic approach adapted to describe charge transmission in molecular junctions is used to analyze the photocurrent of a photochromic molecule under conditions of moderate light intensity. In the framework of the HOMO-LUMO model for the single-electron molecular states, analytic expressions describing the temporal behavior of the transient and steady-state sequential (hopping) as well as direct (tunnel) current components are derived. The conditions at which the current components achieve their maximal values are indicated. It is shown that if the rates of charge transmission in the unbiased molecular diode are much lower than the intramolecular singlet-singlet excitation/de-excitation rate, and if the threefold degenerate triplet excited state of the molecule behaves like a trap blocking charge transmission, a large peak-like transient switch-on photocurrent can arise.

  17. Methamphetamine-alcohol interactions in murine models of sequential and simultaneous oral drug-taking.

    Science.gov (United States)

    Fultz, Elissa K; Martin, Douglas L; Hudson, Courtney N; Kippin, Tod E; Szumlinski, Karen K

    2017-08-01

    A high degree of co-morbidity exists between methamphetamine (MA) addiction and alcohol use disorders, and both sequential and simultaneous MA-alcohol mixing increases the risk for co-abuse. As little preclinical work has focused on the biobehavioral interactions between MA and alcohol within the context of drug-taking behavior, we employed simple murine models of voluntary oral drug consumption to examine how prior histories of either MA- or alcohol-taking influence the intake of the other drug. In one study, mice with a 10-day history of binge alcohol-drinking [5, 10, 20 and 40% (v/v); 2 h/day] were trained to self-administer oral MA in an operant-conditioning paradigm (10-40 mg/L). In a second study, mice with a 10-day history of limited-access oral MA-drinking (5, 10, 20 and 40 mg/L; 2 h/day) were presented with alcohol (5-40% v/v; 2 h/day) and then a choice between solutions of 20% alcohol, 10 mg/L MA or their mix. Under operant-conditioning procedures, alcohol-drinking mice exhibited less MA reinforcement overall than water controls. However, when drug availability was not behaviorally contingent, alcohol-drinking mice consumed more MA and exhibited a greater preference for the 10 mg/L MA solution than drug-naïve and combination drug-experienced mice. Conversely, a prior MA-drinking history increased alcohol intake across a range of alcohol concentrations. These exploratory studies indicate the feasibility of employing procedurally simple murine models of sequential and simultaneous oral MA-alcohol mixing of relevance to advancing our biobehavioral understanding of MA-alcohol co-abuse. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Sequential Path Model for Grain Yield in Soybean

    Directory of Open Access Journals (Sweden)

    Mohammad SEDGHI

    2010-09-01

    Full Text Available This study was performed to determine some physiological traits that affect soybean's grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was done in order to evaluate associations among grain yield and related traits by ordering the various variables in first, second and third order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44 and 47% of the variation in grain yield based on the 2004, 2005 and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length and intercepted radiation at the middle layer of the canopy were placed in the third-order path. From the results, it was concluded that the number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.

  19. Sequential rhodium/palladium catalysis: enantioselective formation of dihydroquinolinones in the presence of achiral and chiral ligands.

    Science.gov (United States)

    Zhang, Lei; Qureshi, Zafar; Sonaglia, Lorenzo; Lautens, Mark

    2014-12-08

    Compatible combinations of achiral and chiral ligands can be used in rhodium/palladium catalysis to achieve highly enantioselective domino reactions. The difference in rates of catalysis and minimal effects of ligand interference confer control in the domino sequence. The "all-in-one" 1,4-conjugate arylation and C-N cross-coupling through sequential Rh/Pd catalysis provides access to enantioenriched dihydroquinolinone building blocks. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
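
    The two-stage idea can be sketched as follows (an illustrative Python reconstruction, not the SISIM/SGSIM code from the paper; the grid, the search radius and the conflict rule are assumptions): nodes are visited in path order and grouped into consecutive batches whose kriging neighbourhoods do not overlap, so that each batch can be simulated in parallel while still reproducing the sequential result.

```python
import numpy as np

def conflict_free_batches(path, coords, radius):
    """Split a simulation path into consecutive batches of mutually non-conflicting nodes.

    Nodes inside one batch have search neighbourhoods (of the given radius) that do
    not overlap, so they can be simulated concurrently without seeing each other's
    freshly simulated values.
    """
    batches, current = [], []
    for node in path:
        if current:
            d = np.linalg.norm(coords[current] - coords[node], axis=1)
            conflict = bool(np.any(d < 2.0 * radius))
        else:
            conflict = False
        if conflict:                       # neighbourhood overlap: close the batch
            batches.append(current)
            current = []
        current.append(node)
    batches.append(current)
    return batches

rng = np.random.default_rng(0)
coords = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
path = list(rng.permutation(len(coords)))          # stage 1: the (re-)arranged random path
batches = conflict_free_batches(path, coords, radius=2.0)
print(len(batches), "sequential batches; largest batch:", max(len(b) for b in batches))
# stage 2 (not shown): within each batch, kriging and drawing of the simulated
# values can run in parallel threads, batch after batch along the path.
```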

  1. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
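
    A minimal sketch of the assessment loop described above, assuming a placeholder dataset and decision-tree models rather than the paper's data: each sequentially fitted model is scored with its ROC curve and Youden index (J = sensitivity + specificity - 1), and the moving differences between the sequential fits are collected for later density estimation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# placeholder data standing in for the Pima / Bupa datasets used in the paper
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

youden = []
for depth in (2, 3, 4, 5, 6):                      # sequentially fitted models
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    score = clf.predict_proba(X_te)[:, 1]
    fpr, tpr, _ = roc_curve(y_te, score)
    youden.append(np.max(tpr - fpr))               # Youden index of this fit

moving_diff = np.diff(youden)                      # differences between sequential fits
print("Youden indices:     ", np.round(youden, 3))
print("moving differences: ", np.round(moving_diff, 3))
```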

  2. Exposure assessment of mobile phone base station radiation in an outdoor environment using sequential surrogate modeling.

    Science.gov (United States)

    Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom

    2013-05-01

    Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km² for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
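
    The sequential-design loop can be sketched as follows (a hedged illustration in which a Gaussian-process surrogate stands in for the paper's surrogate model; the candidate locations and the synthetic field are assumptions): fit the surrogate to the measurements taken so far, then place the next exposimeter measurement where the prediction is most uncertain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
candidates = rng.uniform(0, 200, size=(400, 2))          # candidate street locations (m)
def true_field(p):                                       # synthetic exposure field (V/m)
    return 0.5 + 0.3 * np.sin(p[:, 0] / 30) * np.cos(p[:, 1] / 40)

measured_idx = list(rng.choice(len(candidates), 5, replace=False))   # initial design
for _ in range(45):                                      # grow the design to ~50 locations
    X = candidates[measured_idx]
    y = true_field(X)                                    # exposimeter readings (placeholder)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0), alpha=1e-3).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    std[measured_idx] = -np.inf                          # do not revisit measured points
    measured_idx.append(int(np.argmax(std)))             # most informative next location

print("final design size:", len(measured_idx))
```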

  3. Excitation block in a nerve fibre model owing to potassium-dependent changes in myelin resistance.

    Science.gov (United States)

    Brazhe, A R; Maksimov, G V; Mosekilde, E; Sosnovtseva, O V

    2011-02-06

    The myelinated nerve fibre is formed by an axon and Schwann cells or oligodendrocytes that sheath the axon by winding around it in tight myelin layers. Repetitive stimulation of a fibre is known to result in accumulation of extracellular potassium ions, especially between the axon and the myelin. Uptake of potassium leads to Schwann cell swelling and myelin restructuring that impacts the electrical properties of the myelin. In order to further understand the dynamic interaction that takes place between the myelin and the axon, we have modelled submyelin potassium accumulation and related changes in myelin resistance during prolonged high-frequency stimulation. We predict that potassium-mediated decrease in myelin resistance leads to a functional excitation block with various patterns of altered spike trains. The patterns are found to depend on stimulation frequency and amplitude and to range from no block (less than 100 Hz) to a complete block (greater than 500 Hz). The transitional patterns include intermittent periodic block with interleaved spiking and non-spiking intervals of different relative duration as well as an unstable regime with chaotic switching between the spiking and non-spiking states. Intermittent conduction blocks are accompanied by oscillations of extracellular potassium. The mechanism of conductance block based on myelin restructuring complements the already known and modelled block via hyperpolarization mediated by the axonal sodium pump and potassium depolarization.

  4. Mind-to-mind heteroclinic coordination: Model of sequential episodic memory initiation

    Science.gov (United States)

    Afraimovich, V. S.; Zaks, M. A.; Rabinovich, M. I.

    2018-05-01

    Retrieval of episodic memory is a dynamical process in the large scale brain networks. In social groups, the neural patterns, associated with specific events directly experienced by single members, are encoded, recalled, and shared by all participants. Here, we construct and study the dynamical model for the formation and maintaining of episodic memory in small ensembles of interacting minds. We prove that the unconventional dynamical attractor of this process—the nonsmooth heteroclinic torus—is structurally stable within the Lotka-Volterra-like sets of equations. Dynamics on this torus combines the absence of chaos with asymptotic instability of every separate trajectory; its adequate quantitative characteristics are length-related Lyapunov exponents. Variation of the coupling strength between the participants results in different types of sequential switching between metastable states; we interpret them as stages in formation and modification of the episodic memory.

  5. 3D seismic modeling and reverse‐time migration with the parallel Fourier method using non‐blocking collective communications

    KAUST Repository

    Chu, Chunlei

    2009-01-01

    The major performance bottleneck of the parallel Fourier method on distributed memory systems is the network communication cost. In this study, we investigate the potential of using non‐blocking all‐to‐all communications to solve this problem by overlapping computation and communication. We present the runtime comparison of a 3D seismic modeling problem with the Fourier method using non‐blocking and blocking calls, respectively, on a Linux cluster. The data demonstrate that a performance improvement of up to 40% can be achieved by simply changing blocking all‐to‐all communication calls to non‐blocking ones to introduce the overlapping capability. A 3D reverse‐time migration result is also presented as an extension to the modeling work based on non‐blocking collective communications.
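
    The overlap idea can be illustrated with a small mpi4py sketch (assuming mpi4py's buffer-based Ialltoall; this is not the paper's Fourier-method code): the all-to-all exchange is started, useful local work is done while the transfer is in flight, and the request is completed only when the received data are needed.

```python
# Run with e.g.: mpirun -n 4 python overlap_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

block = 1024                                   # words exchanged with each rank
sendbuf = np.random.rand(size * block)
recvbuf = np.empty_like(sendbuf)

req = comm.Ialltoall(sendbuf, recvbuf)         # start the exchange (non-blocking)
local = np.fft.fft(np.random.rand(block))      # useful local work overlaps the transfer
req.Wait()                                     # complete the exchange before using recvbuf

if comm.Get_rank() == 0:
    print("overlap demo finished; received", recvbuf.size, "values per rank")
```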

  6. On thermal vibration effects in diffusion model calculations of blocking dips

    International Nuclear Information System (INIS)

    Fuschini, E.; Ugozzoni, A.

    1983-01-01

    In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermal vibrations of lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking into account effects of multiple scattering under blocking conditions

  7. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of the particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued here is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, warnings about sequential designs are considered, especially in relation to utilitarianism

  8. Mathematical modeling and simulation of nanopore blocking by precipitation

    KAUST Repository

    Wolfram, M-T

    2010-10-29

    High surface charges of polymer pore walls and applied electric fields can lead to the formation and subsequent dissolution of precipitates in nanopores. These precipitates block the pore, leading to current fluctuations. We present an extended Poisson-Nernst-Planck system which includes chemical reactions of precipitation and dissolution. We discuss the mathematical modeling and present 2D numerical simulations. © 2010 IOP Publishing Ltd.

  9. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    Full Text Available The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
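
    A minimal sketch of the sequential firing-rate dynamics used in such models, assuming illustrative parameter values (the Bayesian read-out stage of the paper is not shown): a generalized Lotka-Volterra system with asymmetric competition produces winnerless-competition-style switching between rate units.

```python
import numpy as np

N = 6                                        # number of rate units (e.g., PN groups)
rng = np.random.default_rng(2)
rho = np.ones((N, N)) + rng.uniform(0.2, 1.5, (N, N))   # asymmetric competition matrix
np.fill_diagonal(rho, 1.0)
growth = np.ones(N)

x = np.full(N, 0.1) + 0.01 * rng.random(N)   # initial firing rates
dt, steps = 0.01, 20000
trace = np.empty((steps, N))
for t in range(steps):
    x += dt * x * (growth - rho @ x)         # generalized Lotka-Volterra update (Euler)
    x = np.clip(x, 1e-9, None)               # keep rates non-negative
    trace[t] = x

print("unit most active at the end of the run:", int(np.argmax(trace[-1])))
```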

  10. A "catalyst switch" Strategy for the sequential metal-free polymerization of epoxides and cyclic Esters/Carbonate

    KAUST Repository

    Zhao, Junpeng

    2014-06-24

    A "catalyst switch" strategy was used to synthesize well-defined polyether-polyester/polycarbonate block copolymers. Epoxides (ethylene oxide and/or 1,2-butylene oxide) were first polymerized from a monoalcohol in the presence of a strong phosphazene base promoter (t-BuP4). Then an excess of diphenyl phosphate (DPP) was introduced, followed by the addition and polymerization of a cyclic ester (ε-caprolactone or δ-valerolactone) or a cyclic carbonate (trimethylene carbonate), where DPP acted as both the neutralizer of phosphazenium alkoxide (polyether chain end) and the activator of cyclic ester/carbonate. This work has provided a one-pot sequential polymerization method for the metal-free synthesis of block copolymers from monomers which are suited for different types of organic catalysts. © 2014 American Chemical Society.

  11. A "catalyst switch" Strategy for the sequential metal-free polymerization of epoxides and cyclic Esters/Carbonate

    KAUST Repository

    Zhao, Junpeng; Pahovnik, David; Gnanou, Yves; Hadjichristidis, Nikolaos

    2014-01-01

    A "catalyst switch" strategy was used to synthesize well-defined polyether-polyester/polycarbonate block copolymers. Epoxides (ethylene oxide and/or 1,2-butylene oxide) were first polymerized from a monoalcohol in the presence of a strong phosphazene base promoter (t-BuP4). Then an excess of diphenyl phosphate (DPP) was introduced, followed by the addition and polymerization of a cyclic ester (ε-caprolactone or δ-valerolactone) or a cyclic carbonate (trimethylene carbonate), where DPP acted as both the neutralizer of phosphazenium alkoxide (polyether chain end) and the activator of cyclic ester/carbonate. This work has provided a one-pot sequential polymerization method for the metal-free synthesis of block copolymers from monomers which are suited for different types of organic catalysts. © 2014 American Chemical Society.

  12. Analogue modelling of microcontinent formation: a case study from the Danakil Block, southern Red Sea

    Science.gov (United States)

    Molnar, Nicolas; Cruden, Alexander; Betts, Peter

    2017-04-01

    The kinematic evolution of the Danakil Block is well constrained but the processes responsible for the formation of an isolated continental segment around 13 Ma ago with an independent pole of rotation are still a matter of debate. We performed three-dimensional analogue experiments of rotational continental extension containing a pre-existing linear weakness zone in the lithospheric mantle to investigate the formation of the Red Sea, including the Danakil Block. We imposed a rotational extensional boundary condition that simulates the progressive anticlockwise rotation of the Arabian Plate with respect to the Nubia Plate over the last 13-15 Ma and we simulated the presence of a narrow thermal anomaly related to the northward channelling of the Afar plume by varying the viscosity of the model lithospheric mantle. The results from experiments containing a linear zone of weakness oriented at low angles with respect to the rift axis show that early stages of deformation are characterised by the development of two rift sub-parallel compartments that delimit an intra-rift block in the vicinity of the weak lithosphere boundary zone, which are analogous to the two rift branches that confine the Danakil Block in the southern Red Sea. The imposed rotational boundary condition creates a displacement gradient along the intra-rift block and prevents the nucleation of the early rift compartments to the north of the block, enhancing the formation of an independently rotating intra-rift segment. Comparison with geodetic data supports our modelling results, which are also in agreement with the "crank-arm" model of Sichler (1980. La biellette Danakile: un modèle pour l'évolution géodynamique de l'Afar. Bull. la Société Géologique Fr. 22, 925-933). Additional analogue models of (i) orthogonal extension with an identical lithospheric mantle weakness and (ii) rotational extension with a homogeneous lithosphere (i.e., no lithospheric mantle weakness) show no evidence of developing

  13. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  14. A sequential EMT-MET mechanism drives the differentiation of human embryonic stem cells towards hepatocytes.

    Science.gov (United States)

    Li, Qiuhong; Hutchins, Andrew P; Chen, Yong; Li, Shengbiao; Shan, Yongli; Liao, Baojian; Zheng, Dejin; Shi, Xi; Li, Yinxiong; Chan, Wai-Yee; Pan, Guangjin; Wei, Shicheng; Shu, Xiaodong; Pei, Duanqing

    2017-05-03

    Reprogramming has been shown to involve EMT-MET; however, its role in cell differentiation is unclear. We report here that in vitro differentiation of hESCs to hepatic lineage undergoes a sequential EMT-MET with an obligatory intermediate mesenchymal phase. Gene expression analysis reveals that Activin A-induced formation of definitive endoderm (DE) accompanies a synchronous EMT mediated by autocrine TGFβ signalling followed by a MET process. Pharmacological inhibition of TGFβ signalling blocks the EMT as well as DE formation. We then identify SNAI1 as the key EMT transcriptional factor required for the specification of DE. Genetic ablation of SNAI1 in hESCs does not affect the maintenance of pluripotency or neural differentiation, but completely disrupts the formation of DE. These results reveal a critical mesenchymal phase during the acquisition of DE, highlighting a role for sequential EMT-METs in both differentiation and reprogramming.

  15. Natural convection in a horizontal channel provided with heat generating blocks: Discussion of the isothermal blocks validity

    International Nuclear Information System (INIS)

    Mouhtadi, D.; Amahmid, A.; Hasnaoui, M.; Bennacer, R.

    2012-01-01

    Highlights: ► We examine the validity of isothermal model for blocks with internal heat generation. ► Criteria based on comparison of total and local quantities are adopted. ► Thermal conductivity and Biot number required for the validity of the isothermal model are dependent on the Rayleigh number. ► The validity conditions are also affected by the multiplicity of solutions. - Abstract: This work presents a numerical study of air natural convection in a horizontal channel provided with heating blocks periodically distributed on its lower adiabatic surface. The blocks are subjected to a uniform heat generation and the channel upper surface is maintained at a cold constant temperature. The main objective of this study is to examine the validity of the model with isothermal blocks for the system under consideration. The calculations are performed using two different models. In the first (denoted Model 1 or M1) the calculations are performed by imposing a uniform volumetric heat generation inside the blocks. In the second model (denoted Model 2 or M2), the blocks are maintained isothermal at the average block surface temperature deduced from Model 1. The controlling parameters of the present problem are the thermal conductivity ratio of the solid block and the fluid (0.1 ⩽ k* = k_s/k_a ⩽ 200) and the Rayleigh number (10^4 ⩽ Ra ⩽ 10^7). The validity of the isothermal model is examined for various Ra by using criteria based on local and mean heat transfer characteristics. It is found that some solutions of the isothermal model do not reproduce correctly the results of the first model even for very large conductivity ratios. The Biot number below which Model 2 is valid depends strongly on the Rayleigh number and the type of solution.

  16. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  17. Multiplicity distributions and multiplicity correlations in sequential, off-equilibrium fragmentation process

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A new kinetic fragmentation model, the Fragmentation-Inactivation-Binary (FIB) model, is described, in which a dissipative process randomly stops the sequential, conservative and off-equilibrium fragmentation process. (K.A.)

  18. Sulfonated amphiphilic block copolymers : synthesis, self-assembly in water, and application as stabilizer in emulsion polymerization

    Science.gov (United States)

    Jiguang Zhang; Matthew R. Dubay; Carl J. Houtman; Steven J. Severtson

    2009-01-01

    Described is the synthesis of diblock copolymers generated via sequential atom transfer radical polymerization (ATRP) of poly(n-butyl acrylate) (PnBA) followed by chain augmentation with either sulfonated poly(2-hydroxyethyl methacrylate) (PHEMA) or poly(2-hydroxyethyl acrylate) (PHEA) blocks. ATRP of PHEMA or PHEA from PnBA macroinitiator was conducted in acetone/...

  19. Clustering network layers with the strata multilayer stochastic block model.

    Science.gov (United States)

    Stanley, Natalie; Shai, Saray; Taylor, Dane; Mucha, Peter J

    2016-01-01

    Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the "strata multilayer stochastic block model" (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called "strata", which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project.
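
    The model definition can be made concrete with a small generative sketch (an assumption-laden illustration, not the paper's inference code): each stratum carries its own community assignment and block probability matrix, and every layer drawn from that stratum is sampled from the same SBM.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_layers, n_strata = 60, 6, 2

strata = []
for _ in range(n_strata):
    z = rng.integers(0, 2, size=n_nodes)                  # node-to-community assignment
    p_in, p_out = rng.uniform(0.25, 0.40), rng.uniform(0.02, 0.08)
    P = np.array([[p_in, p_out], [p_out, p_in]])          # SBM block probabilities
    strata.append((z, P))

layer_to_stratum = rng.integers(0, n_strata, size=n_layers)
layers = []
for s in layer_to_stratum:
    z, P = strata[s]
    probs = P[z[:, None], z[None, :]]                     # edge probability for every node pair
    A = (rng.random((n_nodes, n_nodes)) < probs).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                           # undirected, no self-loops
    layers.append(A)

print("layer-to-stratum assignment:", layer_to_stratum)
```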

  20. Crustal block motion model and interplate coupling along Ecuador-Colombia trench based on GNSS observation network

    Science.gov (United States)

    Ito, T.; Mora-Páez, H.; Peláez-Gaviria, J. R.; Kimura, H.; Sagiya, T.

    2017-12-01

    Introduction: The Ecuador-Colombia trench is located at the boundary between the South American, Nazca and Caribbean plates. The region is very complex, involving subduction of the Caribbean and Nazca plates and the collision between Panama and the northern Andes. Previous large earthquakes, such as those of 1906 (M8.8) and 1979 (M8.2), occurred along the subduction boundary of the Nazca plate, and earthquakes have also occurred inland. It is therefore important to evaluate earthquake potential in order to prepare for the damage a large earthquake could cause in the near future. GNSS observation: In the last decade, a GNSS observation network called GEORED, operated by the Servicio Geológico Colombiano for research on crustal deformation, was established in Colombia. GEORED consisted of 60 continuous GNSS sites as of 2017 (Mora et al., 2017), with a sampling interval of 30 seconds at most sites. The GNSS data were processed by PPP using the GIPSY-OASIS II software, so that GEORED can provide a detailed crustal deformation map of the whole of Colombia. In addition, we use data from 100 GNSS sites in the Ecuador-Peru region (Nocquet et al., 2014). Method: We developed a crustal block motion model based on the crustal deformation derived from the GNSS observations. The model considers block motion, with a pole location and angular velocity for each block, and interplate coupling on each block boundary, including the subduction interface between the South American and Nazca plates. Our estimation of the crustal block motions and interplate coupling coefficients is based on an MCMC method, so each estimated parameter is obtained as a probability density function (PDF). Result: We tested 11 crustal block models based on geological data such as active fault traces at the surface. The optimal number of crustal blocks, selected from the geological and geodetic data using AIC, is 11. Using this optimal block motion model, we also estimate

  1. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  2. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
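
    The RSA loop itself is simple; the sketch below uses axis-aligned cubes in a unit box (a simplification, since the paper also samples cube orientations) to illustrate the acceptance test and the resulting estimate of the packing fraction.

```python
import numpy as np

rng = np.random.default_rng(4)
a = 0.08                                   # cube edge length (unit box)
placed = []                                # centres of accepted cubes

for _ in range(200_000):                   # RSA attempts
    c = rng.uniform(a / 2, 1 - a / 2, size=3)       # candidate centre inside the box
    ok = True
    if placed:
        d = np.abs(np.asarray(placed) - c)
        # axis-aligned cubes overlap iff centres are closer than `a` along every axis
        ok = bool(np.all(np.any(d >= a, axis=1)))
    if ok:
        placed.append(c)                   # accept; rejected candidates are simply discarded

print(f"{len(placed)} cubes placed, packing fraction ≈ {len(placed) * a**3:.3f}")
```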

  3. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It seems to be useful to include «human capital» as a factor of a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models distinguish the labor force only by level of education, while some of the relevant factors remain unaccounted for. Among them are health status and the level of cultural development, which influence productivity as well as gross product reproduction. Inclusion of a human capital block into the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.
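
    For orientation, the static Leontief core on which dynamic input-output models are built can be written as x = Ax + d, i.e. x = (I - A)^{-1} d; the sketch below uses invented numbers and omits the human-capital and investment blocks of the extended DIOM.

```python
import numpy as np

A = np.array([[0.10, 0.30],       # technical coefficients (intermediate use per unit of output)
              [0.20, 0.15]])
d = np.array([100.0, 50.0])       # final demand by industry

x = np.linalg.solve(np.eye(2) - A, d)   # gross outputs consistent with final demand d
print("gross outputs:", np.round(x, 2))
```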

  4. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  5. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by the Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  6. Kalman-filter model for determining block and trickle SNM losses

    International Nuclear Information System (INIS)

    Barlow, R.E.; Durst, M.J.; Smiriga, N.G.

    1982-07-01

    This paper describes an integrated decision procedure for deciding whether a diversion of SNM has occurred. Two possible types of diversion are considered: a block loss during a single time period and a cumulative trickle loss over several time periods. The methodology used is based on a compound Kalman filter model. Numerical examples illustrate our approach
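
    A hedged sketch of the underlying idea (invented numbers, not the report's compound model): a scalar Kalman filter tracks the inventory from noisy material-balance readings, and its standardized innovations respond with a single large jump to a block loss and with a persistent drift to a trickle loss.

```python
import numpy as np

rng = np.random.default_rng(5)
T, meas_sd, proc_sd = 40, 1.0, 0.1
true = np.zeros(T)
true[20:] -= 3.0                        # a block loss of 3 units at period 20
z = true + rng.normal(0, meas_sd, T)    # noisy material-balance readings

x, P = 0.0, 10.0                        # state estimate and its variance (diffuse prior)
innovations = []
for t in range(T):
    P += proc_sd**2                     # predict (random-walk inventory model)
    S = P + meas_sd**2                  # innovation variance
    nu = z[t] - x                       # innovation
    K = P / S                           # Kalman gain
    x += K * nu                         # update state estimate
    P *= (1 - K)                        # update variance
    innovations.append(nu / np.sqrt(S)) # standardized innovation (the diversion statistic)

print("largest standardized innovation at period", int(np.argmax(np.abs(innovations))))
```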

  7. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-11-24

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  8. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  9. Direct methods and residue type specific isotope labeling in NMR structure determination and model-driven sequential assignment

    International Nuclear Information System (INIS)

    Schedlbauer, Andreas; Auer, Renate; Ledolter, Karin; Tollinger, Martin; Kloiber, Karin; Lichtenecker, Roman; Ruedisser, Simon; Hommel, Ulrich; Schmid, Walther; Konrat, Robert; Kontaxis, Georg

    2008-01-01

    Direct methods in NMR based structure determination start from an unassigned ensemble of unconnected gaseous hydrogen atoms. Under favorable conditions they can produce low resolution structures of proteins. Usually a prohibitively large number of NOEs is required to solve a protein structure ab-initio, but even with a much smaller set of distance restraints low resolution models can be obtained which resemble a protein fold. One problem is that at such low resolution and in the absence of a force field it is impossible to distinguish the correct protein fold from its mirror image. In a hybrid approach these ambiguous models have the potential to aid in the process of sequential backbone chemical shift assignment when 13Cβ and 13C' shifts are not available for sensitivity reasons. Regardless of the overall fold they enhance the information content of the NOE spectra. These, combined with residue type specific labeling and minimal triple-resonance data using 13Cα connectivity, can provide almost complete sequential assignment. Strategies for residue type specific labeling with customized isotope labeling patterns are of great advantage in this context. Furthermore, this approach is to some extent error-tolerant with respect to data incompleteness, limited precision of the peak picking, and structural errors caused by misassignment of NOEs.

  10. Sequential studies of cell inhibition of host fibroblasts in 51 patients given HLA-identical marrow grafts

    International Nuclear Information System (INIS)

    Tsoi, M.-S.; Storb, R.; Weiden, P.; Santos, E.; Kopecky, K.J.; Thomas, E.D.

    1982-01-01

    Thirty-four patients with leukemia, two with lymphoma and 15 with aplastic anemia, were studied sequentially between 33 and 666 days after treatment with high-dose cyclophosphamide and/or total body irradiation and marrow transplantation from HLA-identical siblings. Peripheral blood mononuclear cells from patients and normals were tested for cell inhibition (CI) of cultured skin fibroblasts from patients and donors or unrelated individuals using the microcytotoxicity assay. In addition, blocking of CI by factors in patient serum was studied. Twenty patients were tested three or more times during the first year, 15 patients were studied twice and 16 patients once. Results showed that during the first 2 mo postgrafting, mononuclear cells from 45% of the patients had neither CI nor blocking activities, 50% had CI and serum blocking, and 5% had CI without blocking. As time after transplantation elapsed, the percentage of patients without CI gradually increased, whereas the percentage of patients with CI with or without blocking decreased. At the end of 1 yr, 89% of the patients showed neither CI nor blocking compared with 11% who showed CI and blocking. This trend was significant (p < 0.005). The results were in agreement with our previous conclusion that serum-blocking factors were not important in the maintenance of the stable chimeric state. Early after grafting, there was a suggestive correlation (p < 0.08) between the in vitro finding of CI of host fibroblasts by chimeric cells and the in vivo finding of acute graft-vs-host disease. However, there was no evidence that presence or absence of serum-blocking factors early after grafting was correlated with presence or absence of graft-vs-host disease.

  11. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
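
    The joint state-parameter idea can be sketched with an ensemble Kalman filter on an augmented state (a toy linear-reservoir stand-in for the groundwater model; all numbers are assumptions): the parameter is appended to the state vector, so the analysis step updates state and parameter at once.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, T = 100, 50
obs_sd, k_true = 0.05, 0.3
h_true = 1.0

# ensemble of augmented states [head h, recession parameter k]
ens = np.column_stack([rng.normal(1.0, 0.2, n_ens),
                       rng.uniform(0.05, 0.6, n_ens)])

for t in range(T):
    # synthetic truth and its noisy sensor observation
    h_true = h_true * np.exp(-k_true) + 0.1
    y = h_true + rng.normal(0.0, obs_sd)
    # forecast: each member is propagated with its own parameter value
    ens[:, 0] = ens[:, 0] * np.exp(-ens[:, 1]) + 0.1
    # analysis: stochastic EnKF update of the augmented state using the head observation
    H = np.array([1.0, 0.0])                     # we observe the head only
    C = np.cov(ens, rowvar=False)                # 2x2 ensemble covariance
    K = C @ H / (H @ C @ H + obs_sd**2)          # Kalman gain, shape (2,)
    y_pert = y + rng.normal(0.0, obs_sd, n_ens)  # perturbed observations
    ens += np.outer(y_pert - ens[:, 0], K)

print("estimated k ≈", round(float(ens[:, 1].mean()), 3), "| truth:", k_true)
```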

  12. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
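
    The classical Wald SPRT for choosing between two hypothesized values of a normal mean can be sketched as follows (synthetic outputs, and the standard threshold approximations A = (1 - beta)/alpha and B = beta/(1 - alpha)); it is given only to make the procedure concrete, not as the paper's specific test.

```python
import numpy as np

def sprt_normal_mean(stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mean = mu0 vs H1: mean = mu1 with known sigma."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        # log-likelihood ratio increment for N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", n

rng = np.random.default_rng(7)
outputs = iter(rng.normal(1.0, 2.0, 10_000))      # simulation outputs with true mean 1.0
print(sprt_normal_mean(outputs, mu0=0.0, mu1=1.0, sigma=2.0))
```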

  13. Fluorosilicone multi-block copolymers tethering quaternary ammonium salt groups for antimicrobial purpose

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Fang; Qin, Xiaoshuai; Li, Yancai; Ren, Lixia; Zhao, Yunhui, E-mail: zhaoyunhui@tju.edu.cn; Yuan, Xiaoyan

    2015-08-30

    Highlights: • QAS-containing fluorosilicone multi-block copolymers were synthesized. • The block length of PHFBMA in the copolymers was tailored via RAFT polymerization. • Surface roughness of the copolymers decreased with the increased PHFBMA content. • A certain length of PHFBMA block enhanced C−N+ percentage on the surface. - Abstract: Symmetrically structured fluorosilicone multi-block copolymers containing poly(2-(dimethylamino)ethyl methacrylate) (PDMAEMA) and poly(hexafluorobutyl methacrylate) (PHFBMA) were sequentially synthesized via reversible addition–fragmentation chain transfer polymerization, using a polydimethylsiloxane (PDMS) chain transfer agent with dithiocarbonate groups at both ends. Then, the CBABC-type block copolymers were quaternized with n-octyliodide to tether quaternary ammonium salt (QAS) groups in the PDMAEMA blocks for the antimicrobial use. The obtained fluorosilicone copolymers showed clear variations in the C-N+ composition and surface morphology on their films depending on the content of the PHFBMA blocks, which were characterized by X-ray photoelectron spectroscopy and atomic force microscopy, respectively. The results indicated that the symmetrical CBABC structure favored PDMS and QAS tethered blocks migrating to the film surface. With the mass percentage of the PHFBMA increased from 0 to 32.5%, the surface roughness of the copolymer film decreased gradually with a tendency to form a smooth surface. Owing to the surface properties, fluorosilicone multi-block copolymers containing a certain amount of PHFBMA with higher C-N+ content and relatively smooth morphology demonstrated obvious antimicrobial activity against Gram-positive bacteria, Bacillus subtilis and Gram-negative bacteria, Escherichia coli. The functionalized multi-block copolymers based on fluorosilicone and QAS groups would have potential applications in antimicrobial coatings.

  14. Fluorosilicone multi-block copolymers tethering quaternary ammonium salt groups for antimicrobial purpose

    International Nuclear Information System (INIS)

    Zhou, Fang; Qin, Xiaoshuai; Li, Yancai; Ren, Lixia; Zhao, Yunhui; Yuan, Xiaoyan

    2015-01-01

    Highlights: • QAS-containing fluorosilicone multi-block copolymers were synthesized. • The block length of PHFBMA in the copolymers was tailored via RAFT polymerization. • Surface roughness of the copolymers decreased with the increased PHFBMA content. • A certain length of PHFBMA block enhanced C−N+ percentage on the surface. - Abstract: Symmetrically structured fluorosilicone multi-block copolymers containing poly(2-(dimethylamino)ethyl methacrylate) (PDMAEMA) and poly(hexafluorobutyl methacrylate) (PHFBMA) were sequentially synthesized via reversible addition–fragmentation chain transfer polymerization, using a polydimethylsiloxane (PDMS) chain transfer agent with dithiocarbonate groups at both ends. Then, the CBABC-type block copolymers were quaternized with n-octyliodide to tether quaternary ammonium salt (QAS) groups in the PDMAEMA blocks for the antimicrobial use. The obtained fluorosilicone copolymers showed clear variations in the C-N+ composition and surface morphology on their films depending on the content of the PHFBMA blocks, which were characterized by X-ray photoelectron spectroscopy and atomic force microscopy, respectively. The results indicated that the symmetrical CBABC structure favored PDMS and QAS tethered blocks migrating to the film surface. With the mass percentage of the PHFBMA increased from 0 to 32.5%, the surface roughness of the copolymer film decreased gradually with a tendency to form a smooth surface. Owing to the surface properties, fluorosilicone multi-block copolymers containing a certain amount of PHFBMA with higher C-N+ content and relatively smooth morphology demonstrated obvious antimicrobial activity against Gram-positive bacteria, Bacillus subtilis and Gram-negative bacteria, Escherichia coli. The functionalized multi-block copolymers based on fluorosilicone and QAS groups would have potential applications in antimicrobial coatings.

  15. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers.

    Science.gov (United States)

    Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui

    2018-01-01

    Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers ( N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Types of model showed no significant effect in children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then the 10-member block building. These findings suggest that boys performed better than girls in block building activity. Block building performance increased significantly from 1st to 2nd year of preschool, but not from second to third. The preschoolers created more representational constructions when presented with a model made of wooden rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or working in a large group. It is suggested that future study should examine other modalities rather than the visual one, diversify the samples and adopt a longitudinal investigation.

  16. Lithium-Assisted Copolymerization of CO 2 /Cyclohexene Oxide: A Novel and Straightforward Route to Polycarbonates and Related Block Copolymers

    KAUST Repository

    Zhang, Dongyue; Zhang, Hefeng; Hadjichristidis, Nikolaos; Gnanou, Yves; Feng, Xiaoshuang

    2016-01-01

    of this initiating system also resides in the easy access to PSt-b-PCHC (PSt: polystyrene) and PI-b-PCHC (PI: polyisoprene) block copolymers which can be derived by mere one-pot sequential addition of styrene or dienes first and then of CO2 and CHO under the same

  17. Sequential lineup presentation promotes less-biased criterion setting but does not improve discriminability.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil

    2012-06-01

    When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
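
    The quantities behind this kind of analysis are the standard equal-variance signal-detection measures, discriminability d' and criterion c, computed from hit and false-alarm rates (the textbook model, not the compound model fitted in the paper; the rates below are invented for illustration).

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F), c = -0.5 * (z(H) + z(F))."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# illustrative rates: the sequential lineup shifts the criterion (larger c)
# while leaving discriminability (d') essentially unchanged
print("simultaneous:", dprime_and_criterion(0.70, 0.30))
print("sequential:  ", dprime_and_criterion(0.51, 0.15))
```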

  18. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  19. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
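
    The recursion such algorithms build on can be sketched as follows (a generic block-tridiagonal version for the diagonal blocks of the inverse; the nearly-block-Toeplitz shortcut that reuses repeated blocks is not shown): left- and right-propagating sums are accumulated once, after which each requested block costs only one small inversion.

```python
import numpy as np

def inverse_diagonal_blocks(D, U, L):
    """D[i]: diagonal blocks; U[i] = A[i, i+1]; L[i] = A[i+1, i]. Returns the G[i, i] blocks."""
    n = len(D)
    sl = [np.zeros_like(D[0]) for _ in range(n)]   # left-connected sums
    sr = [np.zeros_like(D[0]) for _ in range(n)]   # right-connected sums
    for i in range(1, n):
        sl[i] = L[i - 1] @ np.linalg.inv(D[i - 1] - sl[i - 1]) @ U[i - 1]
    for i in range(n - 2, -1, -1):
        sr[i] = U[i] @ np.linalg.inv(D[i + 1] - sr[i + 1]) @ L[i]
    return [np.linalg.inv(D[i] - sl[i] - sr[i]) for i in range(n)]

# quick check against a dense inverse
rng = np.random.default_rng(8)
b, n = 2, 4
D = [rng.random((b, b)) + 5 * np.eye(b) for _ in range(n)]
U = [rng.random((b, b)) for _ in range(n - 1)]
L = [rng.random((b, b)) for _ in range(n - 1)]
A = np.zeros((n * b, n * b))
for i in range(n):
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D[i]
    if i < n - 1:
        A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = U[i]
        A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = L[i]
G = inverse_diagonal_blocks(D, U, L)
print(np.allclose(G[1], np.linalg.inv(A)[b:2*b, b:2*b]))   # expect True
```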

  20. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  1. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    regularities. In combination with a wide range of auditory MMN studies, the present study highlights the critical role of sensory systems in automatically encoding sequential regularities when modeling the world.

  2. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.

  3. Evaluation of Effective thermal conductivity models on the prismatic fuel block of a Very High Temperature Reactor by CFD analysis

    International Nuclear Information System (INIS)

    Shin, Dong-Ho; Cho, Hyoung-Kyu; Tak, Nam-Il; Park, Goon-Cherl

    2014-01-01

    Effective thermal conductivity models that can be used to analyze the heat transfer phenomena of a prismatic fuel block were evaluated by CFD analysis. In a VHTR accident condition in which forced convection is lost, heat flows in the radial direction through the hexagonal fuel blocks, which contain a large number of coolant holes and fuel compacts. Because of the complex geometry of the fuel block and the contribution of radiation heat transfer, a detailed heat-transfer computation for the fuel block requires excessive computational resources and is therefore not suitable for a lumped-parameter code. System codes such as GAMMA+ instead adopt an effective thermal conductivity model. Despite the complexity of the heat transfer modes, accurate analysis of heat transfer in the fuel block is necessary because it is directly relevant to the integrity of the nuclear fuel embedded in the block. To analyze these complex heat transfer modes accurately with limited computing resources, credible effective thermal conductivity (ETC) models, in which the effects of all heat transfer modes are lumped, are needed. In this study, various ETC models were introduced and evaluated with CFD calculations. The Maxwell-based model was found to be the most pertinent among the ETC models considered. (author)
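    As an illustration of the Maxwell-type model mentioned above, a classical Maxwell (Maxwell-Eucken) relation for a continuous solid of conductivity k_s containing a dispersed phase of conductivity k_d at volume fraction phi is sketched below; the ETC model actually evaluated in the record may include additional terms (e.g., radiation across coolant holes), so this is only a hedged reference form.

```latex
% Maxwell-Eucken effective thermal conductivity (continuous phase k_s,
% dispersed phase k_d, dispersed-phase volume fraction \phi):
\[
  k_\mathrm{eff} \;=\; k_s\,
    \frac{2k_s + k_d + 2\phi\,(k_d - k_s)}
         {2k_s + k_d - \phi\,(k_d - k_s)}
\]
```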

  4. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  5. Dispersion and alignment of nanorods in cylindrical block copolymer thin films.

    Science.gov (United States)

    Rasin, Boris; Chao, Huikuan; Jiang, Guoqian; Wang, Dongliang; Riggleman, Robert A; Composto, Russell J

    2016-02-21

    Although significant progress has been made in controlling the dispersion of spherical nanoparticles in block copolymer thin films, our ability to disperse and control the assembly of anisotropic nanoparticles into well-defined structures is lacking in comparison. Here we use a combination of experiments and field theoretic simulations to examine the assembly of gold nanorods (AuNRs) in a block copolymer. Experimentally, poly(2-vinylpyridine)-grafted AuNRs (P2VP-AuNRs) are incorporated into poly(styrene)-b-poly(2-vinylpyridine) (PS-b-P2VP) thin films with a vertical cylinder morphology. At sufficiently low concentrations, the AuNRs disperse in the block copolymer thin film. For these dispersed AuNR systems, atomic force microscopy combined with sequential ultraviolet ozone etching indicates that the P2VP-AuNRs segregate to the base of the P2VP cylinders. Furthermore, top-down transmission electron microscopy imaging shows that the P2VP-AuNRs mainly lie parallel to the substrate. Our field theoretic simulations indicate that the NRs are strongly attracted to the cylinder base where they can relieve the local stretching of the minority block of the copolymer. These simulations also indicate conditions that will drive AuNRs to adopt a vertical orientation, namely by increasing nanorod length and/or reducing the wetting of the short block towards the substrate.

  6. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our view, this is mainly attributable to the required computation of first order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
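    As a concrete reference for the sequential technique discussed above, the following is a minimal sketch of a stochastic (perturbed-observation) ensemble Kalman filter analysis step; the state vector, observation operator, and error statistics are placeholders rather than the HBV setup of the study.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF update with perturbed observations.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    Xp = X - Xm                                      # ensemble perturbations
    Pf = Xp @ Xp.T / (n_ens - 1)                     # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                       # analysis ensemble

# Toy example: 3-variable state, one observed component.
rng = np.random.default_rng(42)
X = rng.normal(1.0, 0.5, size=(3, 50))               # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1 ** 2]])
Xa = enkf_analysis(X, np.array([1.4]), H, R, rng)
print(X.mean(axis=1), "->", Xa.mean(axis=1))
```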

  7. Three-dimensional Reconstruction of Block Shape Irregularity and its Effects on Block Impacts Using an Energy-Based Approach

    Science.gov (United States)

    Zhang, Yulong; Liu, Zaobao; Shi, Chong; Shao, Jianfu

    2018-04-01

    This study is devoted to three-dimensional modeling of small falling rocks for block impact analysis from an energy perspective, using the particle flow method. The restitution coefficient of a rockfall collision is derived from the energy consumption mechanism to describe rockfall impact properties. Three-dimensional reconstruction of the falling block is conducted with the help of spherical harmonic functions, which have satisfactory mathematical properties such as orthogonality and rotation invariance. Numerical modeling of the block impact on the bedrock is carried out with both the sphere-simplified model and the 3D reconstructed model. Comparison of the results suggests that the 3D reconstructed model is advantageous in capturing the combined effects of rockfall velocity and rotation during the collision process. The modeling is verified against results obtained from other experiments. In addition, the effects of rockfall morphology, surface characteristics, velocity, volume, collision damping and relative angle are investigated. A three-dimensional reconstruction module for falling blocks is to be developed and incorporated into rockfall simulation tools in order to extend the modeling results from the block scale to the slope scale.

  8. Strategic Path Planning by Sequential Parametric Bayesian Decisions

    Directory of Open Access Journals (Sweden)

    Baro Hyun

    2013-11-01

    The objective of this research is to generate a path for a mobile agent that carries sensors used for classification, where the path is to optimize strategic objectives that account for misclassification and the consequences of misclassification, and where the weights assigned to these consequences are chosen by a strategist. We propose a model that accounts for the interaction between the agent kinematics (i.e., the ability to move), informatics (i.e., the ability to process data to information), classification (i.e., the ability to classify objects based on the information), and strategy (i.e., the mission objective). Within this model, we pose and solve a sequential decision problem that accounts for strategist preferences and the solution to the problem yields a sequence of kinematic decisions of a moving agent. The solution of the sequential decision problem yields the following flying tactics: “approach only objects whose suspected identity matters to the strategy”. These tactics are numerically illustrated in several scenarios.

  9. Hidden Markov model approach for identifying the modular framework of the protein backbone.

    Science.gov (United States)

    Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S

    1999-12-01

    The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that this model implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBBs code was performed by extracting SBB series repeated in recoding proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from the protein sequences.
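    To make the decoding idea concrete, the sketch below runs the Viterbi algorithm for a small discrete-emission HMM, assigning each observed segment to its most probable hidden building block. The three states, four observation classes, and all probabilities are hypothetical placeholders; the record's model uses 12 SBBs with continuous inter-alpha-carbon distance descriptors.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most probable hidden-state path for a discrete-emission HMM.
    obs    : sequence of observation symbol indices
    log_pi : (S,) log initial-state probabilities
    log_A  : (S, S) log transition probabilities, A[i, j] = P(j | i)
    log_B  : (S, O) log emission probabilities
    """
    S, T = len(log_pi), len(obs)
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (prev state, next state)
        psi[t] = scores.argmax(axis=0)               # best predecessor per state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                   # backtrack
        path[t] = psi[t + 1][path[t + 1]]
    return path

# Toy HMM with 3 hypothetical structural building blocks and 4 discretized
# inter-C-alpha distance classes.
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.10, 0.60, 0.20, 0.10],
              [0.05, 0.05, 0.30, 0.60]])
obs = [0, 0, 1, 1, 3, 3, 3, 2, 0]
print(viterbi(obs, np.log(pi), np.log(A), np.log(B)))
```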

  10. Reconstruction of a Phreatic Explosion from Block Dispersion Modeling at King's Bowl, Idaho

    Science.gov (United States)

    Kobs-Nawotniak, S. E.; Sears, D. W. G.; Hughes, S. S.; Borg, C.; Sears, H.; Skok, J. R.; Elphic, R. C.; Lim, D. S. S.; Heldmann, J. L.; Haberle, C. W.; Guy, H.; Kobayashi, L.; Garry, B.; Neish, C.; Kim, K. J.

    2014-12-01

    King's Bowl (KB), located in Idaho's eastern Snake River Plain, was formed by a phreatic blast through a mostly-congealed lava lake. Blocks up to ~2m diameter were ejected from the vent to form a ballistic ejecta blanket extending radially more than 100m. The blocks on the western side of the KB fissure are extraordinarily well exposed, as the fine fraction was blown eastward by ambient winds during the explosion. We present preliminary modeling results using the western ballistic blocks of KB to calculate the energy of the eruption, and the water volume necessary to create the blast. This work is presented in conjunction with two other 2014 AGU conference abstracts submitted by NASA SSERVI funded FINESSE (Field Investigations to Enable Solar System Science and Exploration) team members: Hughes et al., which introduces the geology of KB and Sears et al., which discusses field observation and data trends. Results of this research are extensible to steam-driven pits on other solar system bodies, including those observed on Mars, Phobos, Deimos, and the asteroids. Over 600 blocks ranging from .2 to 2m in diameter were mapped using differential GPS and measured for 3 axial lengths and vesicularity. Mass calculations were corrected using a scaling factor determined from measurements of 100 blocks at KB, coupled with targeted density measurements. The dispersed block trajectories were modeled using a fourth order Runge-Kutta solution of the equations of motion to calculate suites of possible ejection speeds and angles. The resulting characteristic vent velocities were used to calculate the kinetic energy necessary to evacuate the crater at KB; energy required for fragmentation is neglected at this time. Total mass in the kinetic energy calculations was calculated by two separate methods: 1) current volume expression of the KB crater and 2) an additive solution of the ejecta field as determined from radial transect surveys. From the kinetic energy we calculated the
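    As an illustration of the trajectory calculation mentioned above, the sketch below integrates a single block's ballistic flight with quadratic air drag using a classical fourth-order Runge-Kutta step, returning the landing distance for a trial ejection speed and angle. The density, drag coefficient, and launch parameters are placeholders, and fragmentation energy is ignored, as in the record.

```python
import numpy as np

g = 9.81          # m/s^2
rho_air = 1.0     # kg/m^3, rough value at site elevation (placeholder)
Cd = 1.0          # drag coefficient for an irregular block (placeholder)

def block_range(v0, angle_deg, diameter, density=2800.0, dt=1e-3):
    """Horizontal distance travelled by a spherical block launched from the vent,
    integrated with classical 4th-order Runge-Kutta and quadratic drag."""
    r = diameter / 2.0
    m = density * 4.0 / 3.0 * np.pi * r ** 3
    area = np.pi * r ** 2
    k = 0.5 * rho_air * Cd * area / m      # drag deceleration per (speed * velocity)

    def deriv(s):                          # s = [x, z, vx, vz]
        vx, vz = s[2], s[3]
        speed = np.hypot(vx, vz)
        return np.array([vx, vz, -k * speed * vx, -g - k * speed * vz])

    th = np.radians(angle_deg)
    s = np.array([0.0, 0.0, v0 * np.cos(th), v0 * np.sin(th)])
    while s[1] >= 0.0:                     # integrate until the block lands
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0]

# Example: screen ejection speeds that land a 1 m block roughly 100 m from the vent.
for v0 in (30.0, 40.0, 50.0):
    print(v0, round(block_range(v0, 45.0, 1.0), 1))
```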

  11. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers

    Directory of Open Access Journals (Sweden)

    Mi Tian

    2018-01-01

    Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children’s block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal investigation.

  12. Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.

    Science.gov (United States)

    Weinreb, Gabriel; Lentz, Barry R

    2007-06-01

    We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state ((A) membrane contact) through two sequential intermediate states (I(1) and I(2)) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I(1)), but content mixing signal was also observed in this state for most systems. The second intermediate state (I(2)) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but
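    A minimal numerical sketch of the sequential scheme A -> I1 -> I2 -> FP is given below, treated as irreversible first-order kinetics with placeholder rate constants; the actual model fitted in the record includes reversible steps and maps lipid mixing, content mixing, and leakage signals onto the state occupancies.

```python
import numpy as np

def sequential_fusion(k1, k2, k3, t_max=200.0, dt=0.01):
    """Integrate d[A, I1, I2, FP]/dt for the irreversible sequential scheme
    A --k1--> I1 --k2--> I2 --k3--> FP, starting with all vesicle pairs in A."""
    A, I1, I2, FP = 1.0, 0.0, 0.0, 0.0
    traj = []
    for step in range(int(t_max / dt)):
        dA = -k1 * A
        dI1 = k1 * A - k2 * I1
        dI2 = k2 * I1 - k3 * I2
        dFP = k3 * I2
        A, I1, I2, FP = A + dt * dA, I1 + dt * dI1, I2 + dt * dI2, FP + dt * dFP
        traj.append((step * dt, A, I1, I2, FP))
    return np.array(traj)

traj = sequential_fusion(k1=0.10, k2=0.05, k3=0.02)   # placeholder rates, 1/s
t, A, I1, I2, FP = traj[-1]
print(f"t = {t:.0f} s: A = {A:.3f}, I1 = {I1:.3f}, I2 = {I2:.3f}, FP = {FP:.3f}")
```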

  13. Modeling of block copolymer dry etching for directed self-assembly lithography

    Science.gov (United States)

    Belete, Zelalem; Baer, Eberhard; Erdmann, Andreas

    2018-03-01

    Directed self-assembly (DSA) of block copolymers (BCP) is a promising alternative technology to overcome the limits of patterning for the semiconductor industry. DSA exploits the self-assembling property of BCPs for nano-scale manufacturing and to repair defects in patterns created during photolithography. After self-assembly of BCPs, to transfer the created pattern to the underlying substrate, selective etching of PMMA (poly (methyl methacrylate)) to PS (polystyrene) is required. However, the etch process to transfer the self-assemble "fingerprint" DSA patterns to the underlying layer is still a challenge. Using combined experimental and modelling studies increases understanding of plasma interaction with BCP materials during the etch process and supports the development of selective process that form well-defined patterns. In this paper, a simple model based on a generic surface model has been developed and an investigation to understand the etch behavior of PS-b-PMMA for Ar, and Ar/O2 plasma chemistries has been conducted. The implemented model is calibrated for etch rates and etch profiles with literature data to extract parameters and conduct simulations. In order to understand the effect of the plasma on the block copolymers, first the etch model was calibrated for polystyrene (PS) and poly (methyl methacrylate) (PMMA) homopolymers. After calibration of the model with the homopolymers etch rate, a full Monte-Carlo simulation was conducted and simulation results are compared with the critical-dimension (CD) and selectivity of etch profile measurement. In addition, etch simulations for lamellae pattern have been demonstrated, using the implemented model.

  14. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  15. Morphing the feature-based multi-blocks of normative/healthy vertebral geometries to scoliosis vertebral geometries: development of personalized finite element models.

    Science.gov (United States)

    Hadagali, Prasannaah; Peters, James R; Balasubramanian, Sriram

    2018-03-12

    Personalized Finite Element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral mesh. It is tedious to manually construct multi-blocks for large number of geometries on an individual basis to develop personalized FE models. Mesh-morphing method mitigates the aforementioned tediousness in meshing personalized geometries every time, but leads to element warping and loss of geometrical data. Such issues increase in magnitude when normative spine FE model is morphed to scoliosis-affected spinal geometry. The only way to bypass the issue of hex-mesh distortion or loss of geometry as a result of morphing is to rely on manually constructing the multi-blocks for scoliosis-affected spine geometry of each individual, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Time taken was 3 months to construct the multi-blocks for normative spine and less than a day for scoliosis. Efforts taken to construct multi-blocks on personalized scoliosis spinal geometries are significantly reduced by morphing existing multi-blocks.

  16. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Background: There are several well-established scores for the assessment of the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information that is available at a particular time. Study design: In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), prehospital phase (A), early (B1), and late (B2) trauma room phase. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results: A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC (area under the curve) for model P = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC for model A = 0.76; model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC for model B1 = 0.78; model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC for model B2 = 0.84; final model P+A+B1+B2 = 0.90). The explanatory power - a tool for the assessment of the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma

  17. Sequential activation of CD8+ T cells in the draining lymph nodes in response to pulmonary virus infection.

    Science.gov (United States)

    Yoon, Heesik; Legge, Kevin L; Sung, Sun-sang J; Braciale, Thomas J

    2007-07-01

    We have used a TCR-transgenic CD8+ T cell adoptive transfer model to examine the tempo of T cell activation and proliferation in the draining lymph nodes (DLN) in response to respiratory virus infection. The T cell response in the DLN differed for mice infected with different type A influenza strains with the onset of T cell activation/proliferation to the A/JAPAN virus infection preceding the A/PR8 response by 12-24 h. This difference in T cell activation/proliferation correlated with the tempo of accelerated respiratory DC (RDC) migration from the infected lungs to the DLN in response to influenza virus infection, with the migrant RDC responding to the A/JAPAN infection exhibiting a more rapid accumulation in the lymph nodes (i.e., peak migration for A/JAPAN at 18 h, A/PR8 at 24-36 h). Furthermore, in vivo administration of blocking anti-CD62L Ab at various time points before/after infection revealed that the virus-specific CD8+ T cells entered the DLN and activated in a sequential "conveyor belt"-like fashion. These results indicate that the tempo of CD8+ T cell activation/proliferation after viral infection is dependent on the tempo of RDC migration to the DLN and that T cell activation occurs in an ordered sequential fashion.

  18. Modelling a multi-crystal detector block for PET

    International Nuclear Information System (INIS)

    Carroll, L.R.; Nutt, R.; Casey, M.

    1985-01-01

    A simple mathematical model describes the performance of a modular detector ''block'' which is a key component in an advanced, high-resolution PET Scanner. Each block contains 32 small bismuth germanate (BGO) crystals coupled to four photomultiplier tubes (PMTs) through a coded light pipe. AT each PMT cathode the charge released for 511 keV coincidence events may be characterized as Poisson random variables in which the variance grows as the mean of the observed current. Given the light from BGO, one must; arrange the best coding - the distribution of light to the four PMTs, specify an optimum decoding scheme for choosing the correct crystal location from a noisy ensemble of PMT currents, and estimate the average probability of error. The statistical fluctuation or ''noise'' becomes decoupled from the ''signal'' and can be regarded as independent, additive components with zero mean and unit variance. Moreover, the envelope of the transformed noise distribution approximates very closely a normal (Gaussian) distribution with variance = 1. Specifying the coding and decoding strategy becomes a problem of signalling through a channel corrupted by additive, white, Gaussian noise; a classic problem long since solved within the context of Communication Engineering using geometry: i.e. distance, volume, angle, inner product, etc., in a linear space of higher dimension

  19. Application of multi-block methods in cement production

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar

    2008-01-01

    Compressive strength at 1 day of Portland cement as a function of the microstructure of cement was statistically modelled by application of multi-block regression method. The observation X-matrix was partitioned into four blocks, the first block representing the mineralogy, the second particle size distribution and the two last blocks the superficial microstructure analysed by differential thermo gravimetric analysis. The multi-block method is used to identify the role of each part. The score vectors of each block can be analysed separately or together with score vectors of other blocks. Stepwise regression is used to find minimum number of variables of each block. The multi-block method proved useful in determining the modelling strength of each data block and finding minimum number of variables within each data block.

  20. Redundancy and blocking in the spatial domain: A connectionist model

    Directory of Open Access Journals (Sweden)

    I. P. L. Mc Laren

    2002-01-01

    How can the observations of spatial blocking (Rodrigo, Chamizo, McLaren & Mackintosh, 1997) and cue redundancy (O’Keefe and Conway, 1978) be reconciled within the framework provided by an error-correcting, connectionist account of spatial navigation? I show that an implementation of McLaren’s (1995) better beta model can serve this purpose, and examine some of the implications for spatial learning and memory.

  1. Distributed data access in the sequential access model at the D0 experiment at Fermilab

    International Nuclear Information System (INIS)

    Terekhov, Igor; White, Victoria

    2000-01-01

    The authors present the Sequential Access Model (SAM), which is the data handling system for D0, one of two primary High Energy Experiments at Fermilab. During the next several years, the D0 experiment will store a total of about 1 PByte of data, including raw detector data and data processed at various levels. The design of SAM is not specific to the D0 experiment and carries few assumptions about the underlying mass storage level; its ideas are applicable to any sequential data access. By definition, in the sequential access mode a user application needs to process a stream of data, by accessing each data unit exactly once, the order of data units in the stream being irrelevant. The units of data are laid out sequentially in files. The adopted model allows for significant optimizations of system performance, decrease of user file latency and increase of overall throughput. In particular, caching is done with the knowledge of all the files needed in the near future, defined as all the files of the already running or submitted jobs. The bulk of the data is stored in files on tape in the mass storage system (MSS) called Enstore[2] and also developed at Fermilab. (The tape drives are served by an ADIC AML/2 Automated Tape Library). At any given time, SAM has a small fraction of the data cached on disk for processing. In the present paper, the authors discuss how data is delivered onto disk and how it is accessed by user applications. They will concentrate on data retrieval (consumption) from the MSS; when SAM is used for storing of data, the mechanisms are rather symmetrical. All of the data managed by SAM is cataloged in great detail in a relational database (ORACLE). The database also serves as the persistency mechanism for the SAM servers described in this paper. Any client or server in the SAM system which needs to store or retrieve information from the database does so through the interfaces of a CORBA-based database server. The users (physicists) use the

  2. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Peter; Byegaard, Johan [Geosigma AB, Uppsala (Sweden); Dershowitz, Bill; Doe, Thomas [Golder Associates Inc., Redmond, WA (United States); Hermanson, Jan [Golder Associates AB (Sweden); Meier, Peter [ANDRA, Chatenay-Malabry (France); Tullborg, Eva-Lena [Terralogica AB (Sweden); Winberg, Anders (ed.) [Conterra AB, Partille (Sweden)

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of inter activity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes were used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  3. Final report of the TRUE Block Scale project. 1. Characterisation and model development

    International Nuclear Information System (INIS)

    Andersson, Peter; Byegaard, Johan; Dershowitz, Bill; Doe, Thomas; Hermanson, Jan; Meier, Peter; Tullborg, Eva-Lena; Winberg, Anders

    2002-04-01

    The general objectives of the TRUE Block Scale Project were to 1) increase understanding of tracer transport in a fracture network and to improve predictive capabilities, 2) assess the importance of tracer retention mechanisms (diffusion and sorption) in a fracture network, and 3) assess the link between flow and transport data as a means for predicting transport phenomena. During the period mid 1996 through mid 1999 a 200x250x100 m rock volume was characterised with the purpose of furnishing the basis for successful tracer experiments in a network of conductive structures in the block scale (10-100 m). In total five cored boreholes were drilled as part of the project in an iterative mode with a period of analysis following completion of characterisation, and with a strong component of inter activity with numerical modelling and experimental design, particularly towards the end of the characterisation. The combined use of pressure responses due to drilling and drilling records provided important early information/confirmation of the existence and location of a given structure. Verification of conductors identified from pressure responses was achieved through the use of various flow logging techniques. The usage of the Posiva difference flow log towards the end of the characterisation work enabled identification of discrete conductive fractures with a high resolution. Pressure responses collected during drilling were used to obtain a first assessment of connectivity between boreholes. The transient behaviour of the responses collected during cross-hole interference tests in packed-off boreholes were used to identify families of responses, which correlated well with the identified principal families of structures/fracture networks. The conductive geometry of the investigated rock block is made up of steeply dipping deterministic NW structures and NNW structures. High inflows in the boreholes were for the most part associated with geologically/geometrically identified

  4. Sliding contact on the interface of elastic body and rigid surface using a single block Burridge-Knopoff model

    Science.gov (United States)

    Amireghbali, A.; Coker, D.

    2018-01-01

    Burridge and Knopoff proposed a mass-spring model to explore interface dynamics along a fault during an earthquake. The Burridge-Knopoff (BK) model is composed of a series of blocks of equal mass connected to each other by springs of the same stiffness. The blocks are also attached to a rigid driver via another set of springs that pulls them at a constant velocity against a rigid substrate. Burridge and Knopoff studied the dynamics of the interface for a special case with ten blocks and a specific set of fault properties. In our study, the effects of Coulomb and rate-and-state-dependent friction laws on the dynamics of a single-block BK model are investigated. The model dynamics is formulated as a system of coupled nonlinear ordinary differential equations in state-space form, which lends itself to numerical integration methods such as the Runge-Kutta procedure. The results show that the rate-and-state-dependent friction law can trigger dynamic patterns that differ from those under the Coulomb law.
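    The sketch below integrates a single-block spring-slider of this kind with a simple static/dynamic Coulomb friction law and records stick-slip events; all parameter values are placeholders, and the rate-and-state variant discussed in the record would replace the friction branch.

```python
import numpy as np

# Single-block Burridge-Knopoff slider pulled through a spring by a driver plate
# moving at constant speed, with static/dynamic Coulomb friction. All parameter
# values are placeholders chosen only to produce stick-slip behaviour.
m, kp, vd = 1.0, 5.0, 0.1          # mass, driver-spring stiffness, driver speed
N, mu_s, mu_d = 1.0, 0.6, 0.4      # normal force, static/dynamic friction coeff.
dt, t_max = 1e-3, 200.0

x, v, t = 0.0, 0.0, 0.0
events = []                         # (time, slip size) of each stick-slip event
slip_start_x = None
while t < t_max:
    f_spring = kp * (vd * t - x)
    if v == 0.0:                                    # stuck phase
        if abs(f_spring) > mu_s * N:                # static friction exceeded
            v = 1e-9 * np.sign(f_spring)            # nudge into slipping
            slip_start_x = x
    else:                                           # slipping phase
        a = (f_spring - np.sign(v) * mu_d * N) / m
        v_new = v + a * dt
        if v_new * v <= 0.0:                        # velocity crossed zero
            v_new = 0.0                             # re-stick
            if slip_start_x is not None:
                events.append((t, x - slip_start_x))
                slip_start_x = None
        x += v * dt
        v = v_new
    t += dt

print(f"{len(events)} slip events; mean slip = "
      f"{np.mean([s for _, s in events]):.3f}")
```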

  5. Sequential Uniformly Reweighted Sum-Product Algorithm for Cooperative Localization in Wireless Networks

    OpenAIRE

    Li, Wei; Yang, Zhen; Hu, Haifeng

    2014-01-01

    Graphical models have been widely applied in solving distributed inference problems in wireless networks. In this paper, we formulate the cooperative localization problem in a mobile network as an inference problem on a factor graph. Using a sequential schedule of message updates, a sequential uniformly reweighted sum-product algorithm (SURW-SPA) is developed for mobile localization problems. The proposed algorithm combines the distributed nature of belief propagation (BP) with the improved p...

  6. Kinetic Modeling of Synthetic Wastewater Treatment by the Moving-bed Sequential Continuous-inflow Reactor (MSCR)

    Directory of Open Access Journals (Sweden)

    Mohammadreza Khani

    2016-11-01

    It was the objective of the present study to conduct a kinetic modeling of a Moving-bed Sequential Continuous-inflow Reactor (MSCR) and to develop its best prediction model. For this purpose, an MSCR consisting of an aerobic-anoxic pilot 50 l in volume and an anaerobic pilot of 20 l was prepared. The MSCR was fed a variety of organic loads and operated at different hydraulic retention times (HRT) using synthetic wastewater at input COD concentrations of 300 to 1000 mg/L with HRTs of 2 to 5 h. Based on the results and the best system operation conditions, the highest COD removal (98.6%) was obtained at COD = 500 mg/L. The three well-known first-order, second-order, and Stover-Kincannon models were utilized for the kinetic modeling of the reactor. Based on the kinetic analysis of organic removal, the Stover-Kincannon model was chosen for the kinetic modeling of the moving-bed biofilm. Given its advantageous properties in the satisfactory prediction of organic removal at different organic loads, this model is recommended for the design and operation of MSCR systems.
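    For reference, the Stover-Kincannon expression commonly fitted to attached-growth reactors of this type relates the substrate removal rate to the organic loading rate; the notation below follows common usage and is only a hedged reference form, not necessarily the exact variant used in the study.

```latex
% Q: flow rate, V: reactor (media) volume, S_i / S_e: influent / effluent substrate,
% U_max: maximum removal rate constant, K_B: saturation constant.
\[
  \frac{Q\,(S_i - S_e)}{V}
  \;=\;
  \frac{U_{\max}\,\bigl(Q\,S_i/V\bigr)}{K_B + Q\,S_i/V}
\]
% Linearised form used to fit U_max and K_B from loading data:
\[
  \frac{V}{Q\,(S_i - S_e)}
  \;=\;
  \frac{K_B}{U_{\max}}\,\frac{V}{Q\,S_i} \;+\; \frac{1}{U_{\max}}
\]
```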

  7. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  8. From spinning conformal blocks to matrix Calogero-Sutherland models

    Science.gov (United States)

    Schomerus, Volker; Sobko, Evgeny

    2018-04-01

    In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.

  9. Patient specific dynamic geometric models from sequential volumetric time series image data.

    Science.gov (United States)

    Cameron, B M; Robb, R A

    2004-01-01

    Generating patient specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the metasurface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface and allows the computation of the time point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.

  10. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
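    The controllers described above implement sequential probability ratio tests; the sketch below shows Wald's SPRT for a stream of Poisson counts, deciding between a background rate and an elevated rate. The rates, error probabilities, and simulated data are placeholders.

```python
import numpy as np

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.1):
    """Wald sequential probability ratio test on a stream of Poisson counts.
    H0: mean rate lam0 (background)   H1: mean rate lam1 > lam0 (source present)
    alpha, beta: target false-alarm and missed-detection probabilities."""
    upper = np.log((1.0 - beta) / alpha)   # cross above -> decide H1
    lower = np.log(beta / (1.0 - alpha))   # cross below -> decide H0
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        llr += k * np.log(lam1 / lam0) - (lam1 - lam0)  # per-interval LLR term
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(counts)

rng = np.random.default_rng(0)
print(sprt_poisson(rng.poisson(4.0, 200), lam0=4.0, lam1=8.0))  # background only
print(sprt_poisson(rng.poisson(8.0, 200), lam0=4.0, lam1=8.0))  # source present
```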

  11. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    Science.gov (United States)

    2015-10-18

    model-based evidence. This work resolves cross-tag using three methods (Z-test for dependent data, classical sequential analysis and Brownian motion...Slider Movement The two-facet model is used as the Inversion Model. It represents a three-axis stabilized satellite as two facets, namely a body...the sequential analysis. If is independent and has an approximately normal distribution then Brownian motion drift analysis is used. If is

  12. A sequential vesicle pool model with a single release sensor and a Ca(2+)-dependent priming catalyst effectively explains Ca(2+)-dependent properties of neurosecretion

    DEFF Research Database (Denmark)

    Walter, Alexander M; da Silva Pinheiro, Paulo César; Verhage, Matthijs

    2013-01-01

    identified. We here propose a Sequential Pool Model (SPM), assuming a novel Ca(2+)-dependent action: a Ca(2+)-dependent catalyst that accelerates both forward and reverse priming reactions. While both models account for fast fusion from the Readily-Releasable Pool (RRP) under control of synaptotagmin-1...... the simultaneous changes in release rate and amplitude seen when mutating the SNARE-complex. Finally, it can account for the loss of fast- and the persistence of slow release in the synaptotagmin-1 knockout by assuming that the RRP is depleted, leading to slow and Ca(2+)-dependent fusion from the NRP. We conclude...... that the elusive 'alternative Ca(2+) sensor' for slow release might be the upstream priming catalyst, and that a sequential model effectively explains Ca(2+)-dependent properties of secretion without assuming parallel pools or sensors....

  13. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  14. Depth geological model building: application to the 3D high resolution 'ANDRA' seismic block

    International Nuclear Information System (INIS)

    Mari, J.L.; Yven, B.

    2012-01-01

    Document available in extended abstract form only. 3D seismic blocks and logging data, mainly acoustic and density logs, are often used for geological model building in time. The geological model must be then converted from time to depth. Geostatistical approach for time-to-depth conversion of seismic horizons is often used in many geo-modelling projects. From a geostatistical point of view, the time-to-depth conversion of seismic horizons is a classical estimation problem involving one or more secondary variables. Bayesian approach [1] provides an excellent estimator which is more general than the traditional kriging with external drift(s) and fits very well to the needs for time-to-depth conversion of seismic horizons. The time-to-depth conversion of the selected seismic horizons is used to compute a time-to-depth conversion model at the time sampling rate (1 ms). The 3D depth conversion model allows the computation of an interval velocity block which is compared with the acoustic impedance block to estimate a density block as QC. Non realistic density values are edited and the interval velocity block as well as the depth conversion model is updated. The proposed procedure has been applied on a 3D data set. The dataset comes from a High Resolution 3D seismic survey recorded in France at the boundary of the Meuse and Haute-Marne departments in the vicinity of the Andra Center (National radioactive waste management Agency). The 3D design is a cross spread. The active spread is composed of 12 receiver lines with 120 stations each. The source lines are perpendicular to the receiver lines. The receiver and source line spacings are respectively 80 m and 120 m. The receiver and source point spacings are 20 m. The source is a Vibroseis source generating a signal in the 14 - 140 Hz frequency bandwidth.. The bin size is 10 x 10 m 2 . The nominal fold is 60. A conventional seismic sequence was applied to the data set. It includes amplitude recovery, deconvolution and wave

  15. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by the minimization of the Extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, and the PSNR reached 3 dB.

  16. Local persistence and blocking in the two-dimensional Blume-Capel model

    OpenAIRE

    Silva, Roberto da; Dahmen, S. R.

    2004-01-01

    In this paper we study the local persistence of the two-dimensional Blume-Capel model by extending the concept of Glauber dynamics. We verify that for any value of the ratio alpha = D/J between anisotropy D and exchange J the persistence shows a power-law behavior. In particular for alpha 0 (alpha ≠ 1) we observe the occurrence of blocking.

  17. Prospectivity Modeling of Karstic Groundwater Using a Sequential Exploration Approach in Tepal Area, Iran

    Science.gov (United States)

    Sharifi, Fereydoun; Arab-Amiri, Ali Reza; Kamkar-Rouhani, Abolghasem; Yousefi, Mahyar; Davoodabadi-Farahani, Meysam

    2017-09-01

    The purpose of this study is water prospectivity modeling (WPM) for recognizing karstic water-bearing zones by using analyses of geo-exploration data in Kal-Qorno valley, located in Tepal area, north of Iran. For this, a sequential exploration method applied on geo-evidential data to delineate target areas for further exploration. In this regard, two major exploration phases including regional and local scales were performed. In the first phase, indicator geological features, structures and lithological units, were used to model groundwater prospectivity as a regional scale. In this phase, for karstic WPM, fuzzy lithological and structural evidence layers were generated and combined using fuzzy operators. After generating target areas using WPM, in the second phase geophysical surveys including gravimetry and geoelectrical resistivity were carried out on the recognized high potential zones as a local scale exploration. Finally the results of geophysical analyses in the second phase were used to select suitable drilling locations to access and extract karstic groundwater in the study area.

  18. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.

  19. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  20. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  1. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of mean 0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique, because instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response evaluated. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels

  2. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies provide more accurate parameter estimation and state prediction than conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.
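
    As a rough illustration of how an information metric can score a candidate sampling design from ensemble statistics, the sketch below computes relative entropy between Gaussian approximations of the prior and updated parameter ensembles. The function names and the Gaussian approximation are assumptions of this sketch; the paper's metrics (SD, DFS, RE) follow its own definitions.

```python
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """KL divergence D(N1 || N2) between two multivariate Gaussians."""
    d = mu1.size
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - d + logdet2 - logdet1)

def relative_entropy_score(prior_ens, post_ens):
    """Score a candidate design by the information gain between Gaussian fits of the
    prior and posterior parameter ensembles, each given as an (n_members, n_params) array."""
    mu_pr, cov_pr = prior_ens.mean(0), np.cov(prior_ens, rowvar=False)
    mu_po, cov_po = post_ens.mean(0), np.cov(post_ens, rowvar=False)
    return gaussian_kl(mu_po, cov_po, mu_pr, cov_pr)
```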

  3. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), powerful analytical tools for sequential decision making under uncertainty that have been widely used in industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature on MDPs applied to medical decisions.
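
    For readers unfamiliar with MDPs, a minimal value-iteration sketch is given below, assuming the transition probabilities P[a, s, s'] and rewards R[s, a] are known; the arrays and discount factor are hypothetical and unrelated to the transplantation application.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP by value iteration.
    P[a, s, s'] are transition probabilities, R[s, a] immediate rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)             # optimal values and greedy policy
        V = V_new

# Tiny 2-state, 2-action example (hypothetical numbers)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.5, 0.5], [0.1, 0.9]]])     # action 1
R = np.array([[1.0, 0.0], [0.5, 2.0]])       # R[s, a]
V, policy = value_iteration(P, R)
```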

  4. Block-Parallel Data Analysis with DIY2

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-30

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
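
    The block-structured pattern described above (decompose data into blocks, iterate a computation over blocks, exchange between neighbouring blocks) can be sketched in a few lines of plain Python. This is only a toy illustration of the pattern, not DIY2's actual C++ API or runtime.

```python
# Toy sketch of block-structured data parallelism (NOT the DIY2 API):
# decompose data into blocks, run a callback over each block, and let
# neighbouring blocks exchange boundary ("halo") values between rounds.
import numpy as np

def decompose(data, n_blocks):
    return np.array_split(data, n_blocks)

def foreach(blocks, fn):
    return [fn(i, b) for i, b in enumerate(blocks)]

def exchange(blocks):
    """Collect, for each block, the last value of its left neighbour and the
    first value of its right neighbour (None at the domain boundary)."""
    halos = []
    for i in range(len(blocks)):
        left = blocks[i - 1][-1] if i > 0 else None
        right = blocks[i + 1][0] if i < len(blocks) - 1 else None
        halos.append((left, right))
    return halos

data = np.arange(16, dtype=float)
blocks = decompose(data, 4)
partial_sums = foreach(blocks, lambda i, b: b.sum())   # local computation per block
halos = exchange(blocks)                               # communication step between blocks
print(partial_sums, halos)
```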

  5. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  6. Sequential bargaining in a market with one seller and two different buyers

    DEFF Research Database (Denmark)

    Tranæs, Torben; Hendon, Ebbe

    1991-01-01

    A matching and bargaining model in a market with one seller and two buyers, differing only in their reservation price, is analyzed. No subgame perfect equilibrium exists for stationary strategies. We demonstrate the existence of inefficient equilibria in which the low buyer receives the good with large probability, even as friction becomes negligible. We investigate the relationship between the use of Nash and sequential bargaining. Nash bargaining seems applicable only when the sequential approach yields a unique stationary strategy subgame perfect equilibrium.

  7. Sequential bargaining in a market with one seller and two different buyers

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Tranæs, Torben

    1991-01-01

    A matching and bargaining model in a market with one seller and two buyers, differing only in their reservation price, is analyzed. No subgame perfect equilibrium exists for stationary strategies. We demonstrate the existence of inefficient equilibria in which the low buyer receives the good with large probability, even as friction becomes negligible. We investigate the relationship between the use of Nash and sequential bargaining. Nash bargaining seems applicable only when the sequential approach yields a unique stationary strategy subgame perfect equilibrium.

  8. Palladium(II)-Catalyzed meta-C-H Olefination: Constructing Multisubstituted Arenes through Homo-Diolefination and Sequential Hetero-Diolefination.

    Science.gov (United States)

    Bera, Milan; Maji, Arun; Sahoo, Santosh K; Maiti, Debabrata

    2015-07-13

    Divinylbenzene derivatives represent an important class of molecular building blocks in organic chemistry and materials science. Reported herein is the palladium-catalyzed synthesis of divinylbenzenes by meta-C-H olefination of sulfone-based arenes. Successful sequential olefinations in a position-selective manner provided a novel route for the synthesis of hetero-dialkenylated products, which are difficult to access using conventional methods. Additionally, 1,3,5-trialkenylated compounds can be generated upon successful removal of the directing group. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  10. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    Science.gov (United States)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

    Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.

  11. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  12. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies rather than qualitatively different learning mechanisms.

  13. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. No significant difference was also found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  14. Blocked vs. interleaved presentation and proactive interference in episodic memory.

    Science.gov (United States)

    Del Missier, Fabio; Sassano, Alessia; Coni, Valentina; Salomonsson, Martina; Mäntylä, Timo

    2018-05-01

    Although a number of theoretical accounts of proactive interference (PI) in episodic memory have been proposed, existing empirical evidence does not support conclusively a single view yet. In two experiments we tested the predictions of the temporal discrimination theory of PI against alternative accounts by manipulating the presentation schedule of study materials (lists blocked by category vs. interleaved). In line with the temporal discrimination theory, we observed a clear buildup of (and release from) PI in the blocked condition, in which all the lists of the same category were presented sequentially. In the interleaved condition, with alternating lists of different categories, a more gradual and smoother buildup of PI was observed. When participants were left free to choose their presentation schedule, they spontaneously adopted an interleaved schedule, resulting again in more gradual PI. After longer delays, we observed recency effects at the list level in overall recall and, in the blocked condition, PI-related effects. The overall pattern of findings agrees with the predictions of the temporal discrimination theory of PI, complemented with categorical processing of list items, but not with alternative accounts, shedding light on the dynamics and underpinnings of PI under diverse presentation schedules and over different time scales.

  15. Mixed logit model of intended residential mobility in renovated historical blocks in China

    NARCIS (Netherlands)

    Jiang, W.; Timmermans, H.J.P.; Li, H.; Feng, T.

    2016-01-01

    Using data from 8 historical blocks in China, the influence of socio-demographic characteristics and residential satisfaction on intended residential mobility is analysed. The results of a mixed logit model indicate that higher residential satisfaction leads to a lower intention to move house.

  16. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate the reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed to minimize load curtailment during generation inadequacy and network congestion. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
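
    A bare-bones chronological (sequential) Monte Carlo reliability loop, with hypothetical two-state generating units and a constant load, is sketched below to illustrate how indices such as LOLE and EENS are accumulated; the paper's reliability network equivalents and pseudo-sequential sampling are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-unit system: capacity (MW), hourly failure prob, hourly repair prob
units = np.array([[100, 0.001, 0.020],
                  [ 80, 0.002, 0.025],
                  [ 60, 0.004, 0.040]])
load = 180.0                      # constant hourly load (MW), for simplicity
hours, n_years = 8760, 200

lole_h, eens = 0.0, 0.0
for _ in range(n_years):
    up = np.ones(len(units), dtype=bool)
    for _ in range(hours):
        r = rng.random(len(units))
        # two-state Markov model per unit: up units may fail, down units may be repaired
        up = np.where(up, r >= units[:, 1], r < units[:, 2])
        cap = units[up, 0].sum()
        if cap < load:
            lole_h += 1
            eens += load - cap

print("LOLE (h/yr):", lole_h / n_years, " EENS (MWh/yr):", eens / n_years)
```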

  17. Static reservoir modeling of the Bahariya reservoirs for the oilfields development in South Umbarka area, Western Desert, Egypt

    Science.gov (United States)

    Abdel-Fattah, Mohamed I.; Metwalli, Farouk I.; Mesilhi, El Sayed I.

    2018-02-01

    3D static reservoir modeling of the Bahariya reservoirs using seismic and well data can be a relevant part of an overall strategy for the development of the oilfields in the South Umbarka area (Western Desert, Egypt). The seismic data are used to build the 3D grid, including fault sticks for the fault modeling, and horizon interpretations and surfaces for horizon modeling. The 3D grid is the digital representation of the structural geology of the Bahariya Formation. Once a reasonably accurate structural representation is obtained, the 3D grid is filled with facies and petrophysical properties and simulated, to gain a more precise understanding of reservoir property behavior. Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS) are the stochastic algorithms used to spatially distribute discrete reservoir properties (facies) and continuous reservoir properties (shale volume, porosity, and water saturation), respectively, within the created 3D grid throughout property modeling. The structural model of the Bahariya Formation exhibits the trapping mechanism, a fault-assisted anticlinal closure trending NW-SE. This major fault breaks the reservoirs into two major fault blocks (North Block and South Block). The petrophysical models classify the Lower Bahariya reservoir as a moderate to good reservoir, better than the Upper Bahariya reservoir in terms of facies, with good porosity and permeability, low water saturation, and moderate net to gross. The Original Oil In Place (OOIP) values of the modeled Bahariya reservoirs show hydrocarbon accumulation in economic quantity, considering the high structural dips at the central part of the South Umbarka area. The power of the 3D static modeling technique has provided considerable insight into the future prediction of Bahariya reservoir performance and production behavior.

  18. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

    Directory of Open Access Journals (Sweden)

    Christian Konrad

    2010-04-01

    Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM consisting of three paralogous cyst wall proteins (CWP1-3 via organelles termed encystation-specific vesicles (ESVs. Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

  19. Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations

    Science.gov (United States)

    Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James

    2013-01-01

    This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it-an improved and generalized version of Bayesian Blocks [Scargle 1998]-that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. (2008)] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.
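
    One publicly available implementation of this algorithm is astropy.stats.bayesian_blocks; the brief sketch below, with synthetic event times, is an assumed usage example rather than the authors' own code.

```python
import numpy as np
from astropy.stats import bayesian_blocks   # one publicly available implementation

rng = np.random.default_rng(2)

# Synthetic event data: a roughly constant rate with a brief burst in the middle
t = np.sort(np.concatenate([rng.uniform(0, 100, 200),
                            rng.uniform(48, 52, 80)]))

# Optimal piecewise-constant segmentation of the event times; p0 is the
# false-alarm probability that sets the prior on the number of blocks
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)
```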

  20. STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Scargle, Jeffrey D. [Space Science and Astrobiology Division, MS 245-3, NASA Ames Research Center, Moffett Field, CA 94035-1000 (United States); Norris, Jay P. [Physics Department, Boise State University, 2110 University Drive, Boise, ID 83725-1570 (United States); Jackson, Brad [The Center for Applied Mathematics and Computer Science, Department of Mathematics, San Jose State University, One Washington Square, MH 308, San Jose, CA 95192-0103 (United States); Chiang, James, E-mail: jeffrey.d.scargle@nasa.gov [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2013-02-20

    This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it-an improved and generalized version of Bayesian Blocks-that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.

  1. Modeling two-phase ferroelectric composites by sequential laminates

    International Nuclear Information System (INIS)

    Idiart, Martín I

    2014-01-01

    Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)

  2. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  3. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  4. [Optimization and Prognosis of Cell Radiosensitivity Enhancement in vitro and in vivo after Sequential Thermoradiactive Action].

    Science.gov (United States)

    Belkina, S V; Petin, V G

    2016-01-01

    A previously developed mathematical model of the simultaneous action of two inactivating agents has been adapted and tested to describe the results of sequential action. The possibility of applying the mathematical model to the interpretation and prognosis of the increase in radiosensitivity of tumor cells, as well as mammalian cells, after sequential action of two high temperatures or of hyperthermia and ionizing radiation is analyzed. The model predicts the value of the thermal enhancement ratio depending on the duration of thermal exposure, its greatest value, and the condition under which it is achieved.

  5. Indications of de Sitter spacetime from classical sequential growth dynamics of causal sets

    International Nuclear Information System (INIS)

    Ahmed, Maqbool; Rideout, David

    2010-01-01

    A large class of the dynamical laws for causal sets described by a classical process of sequential growth yields a cyclic universe, whose cycles of expansion and contraction are punctuated by single 'origin elements' of the causal set. We present evidence that the effective dynamics of the immediate future of one of these origin elements, within the context of the sequential growth dynamics, yields an initial period of de Sitter-like exponential expansion, and argue that the resulting picture has many attractive features as a model of the early universe, with the potential to solve some of the standard model puzzles without any fine-tuning.

  6. Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes

    Science.gov (United States)

    Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico

    2017-12-01

    Chemical compositional simulation of enhanced oil recovery and surfactant-enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of a newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows a sequential solution approach to be applied, solving the governing equations separately and implicitly. Through its application to a numerical experiment using a wettability alteration model, and comparisons with an existing chemical compositional model's numerical results, the new model has proven to be practical, reliable and stable.

  7. THE RELATIONSHIP BETWEEN STIFFNESS LOSSES AND LOSSES IN BEARINGS OF ROPE BLOCKS

    Directory of Open Access Journals (Sweden)

    V. M. Bohomaz

    2017-08-01

    Purpose. To determine the efficiency of rope blocks, it is necessary to determine the rope stiffness coefficient of the blocks, taking into account the classification group of the mechanism and the angle at which the rope wraps the block. In doing so, one should use well-tested values of the efficiency coefficients of the rope blocks, accounting for the rope wrapping angle and the analytically determined friction coefficients of the rolling bearings referred to the trunnion. Methodology. The work presents an analytical method for determining the coefficient of bearing resistance of the block when it is rotated by either the inner or the outer cage, as well as the design scheme of the block bearing. Findings. For rope blocks, the effect of the lubrication method, the operating mode of the mechanism and the rope wrapping angle on the losses in bearings was analysed, and comparative tables of losses are given. Analysis of the calculation results establishes that: (1) the main resistance affecting the efficiency of cable blocks is the resistance in the bearings; (2) the second largest component is the stiffness losses, which depend on the operating mode, the rope wrapping angle and the type of bearing lubrication; (3) the block efficiency when the inner cage rotates is higher than when the outer cage rotates, by about 3% with thick lubrication and the 1M mode; (4) for sequentially located assemblies with a rolling bearing, one should strive for a design in which the inner cage rotates; (5) with up to 5 blocks, the block bearing values recommended in the literature can be used with an error in the efficiency value of up to 10%. Originality. The authors obtained values of the resistances in the rolling bearings of rope blocks and of the stiffness losses due to the rope wrapping the block. In this case, dependences were used to determine the coefficient

  8. Approximating the imbibition and absorption behavior of a distribution of matrix blocks by an equivalent spherical block

    International Nuclear Information System (INIS)

    Zimmerman, R.W.; Bodvarsson, G.S.

    1994-03-01

    A theoretical study is presented of the effect of matrix block shape and matrix block size distribution on liquid imbibition and solute absorption in a fractured rock mass. It is shown that the behavior of an individual irregularly-shaped matrix block can be modeled with reasonable accuracy by using the results for a spherical matrix block, if one uses an effective radius a = 3V/A, where V is the volume of the block and A is its surface area. In the early-time regime of matrix imbibition, it is shown that a collection of blocks of different sizes can be modeled by a single equivalent block, with an equivalent radius of ⟨a⁻¹⟩⁻¹, where the average is taken on a volumetrically-weighted basis. In an intermediate time regime, it is shown for the case where the radii are normally distributed that the equivalent radius is reasonably well approximated by the mean radius ⟨a⟩. In the long-time limit, where no equivalent radius can be rigorously defined, an asymptotic expression is derived for the cumulative diffusion as a function of the mean and the standard deviation of the radius distribution function.
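
    The two estimates described above are simple to compute; the sketch below evaluates the single-block effective radius a = 3V/A and the volumetrically weighted harmonic-mean equivalent radius for a hypothetical set of blocks.

```python
import numpy as np

def effective_radius(volume, area):
    """Effective radius of a single irregular block, a = 3V/A."""
    return 3.0 * volume / area

def early_time_equivalent_radius(radii, volumes):
    """Early-time equivalent radius of a block collection:
    the volumetrically weighted harmonic mean <a^-1>^-1."""
    w = np.asarray(volumes) / np.sum(volumes)
    return 1.0 / np.sum(w / np.asarray(radii))

# Example with hypothetical block geometries
print(effective_radius(volume=1.0, area=6.0))        # a unit cube gives a = 0.5
radii = np.array([0.5, 1.0, 2.0])                    # block radii (m)
volumes = np.array([0.1, 1.0, 4.0])                  # block volumes (m^3)
print(early_time_equivalent_radius(radii, volumes))  # early-time equivalent radius
print(np.average(radii, weights=volumes))            # intermediate-time mean radius <a>
```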

  9. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick; Wendt, Fabian; Musial, Walter; Finucane, Z.; Hulliger, L.; Chilka, S.; Dolan, D.; Cushing, J.; O'Connell, D.; Falk, S.

    2017-06-19

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  10. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    Directory of Open Access Journals (Sweden)

    Kanghoon Jung

    2014-08-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.

  11. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    Science.gov (United States)

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498
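
    The preferential-attachment feature described in both records above can be illustrated with a toy simulation in which each option is chosen with probability proportional to a prior weight plus its past choice count; the parameters below are assumptions of the sketch, not fits to the rat data.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_choices(n_items=4, n_choices=1000, prior=1.0):
    """Toy preferential attachment in sequential choice: each item is chosen with
    probability proportional to (prior + times chosen so far)."""
    counts = np.full(n_items, prior)
    history = []
    for _ in range(n_choices):
        p = counts / counts.sum()
        c = rng.choice(n_items, p=p)
        counts[c] += 1
        history.append(c)
    return np.array(history), counts - prior

history, final_counts = simulate_choices()
print(final_counts)   # typically a highly skewed ("rich get richer") distribution
```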

  12. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    Science.gov (United States)

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.

  13. Intramolecular structures in a single copolymer chain consisting of flexible and semiflexible blocks: Monte Carlo simulation of a lattice model

    International Nuclear Information System (INIS)

    Martemyanova, Julia A; Ivanov, Victor A; Paul, Wolfgang

    2014-01-01

    We study conformational properties of a single multiblock copolymer chain consisting of flexible and semiflexible blocks. Monomer units of different blocks are equivalent in the sense of the volume interaction potential, but the intramolecular bending potential between successive bonds along the chain is different. We consider a single flexible-semiflexible regular multiblock copolymer chain with equal content of flexible and semiflexible units and vary the length of the blocks and the stiffness parameter. We perform flat histogram type Monte Carlo simulations based on the Wang-Landau approach and employ the bond fluctuation lattice model. We present here our data on different non-trivial globular morphologies which we have obtained in our model for different values of the block length and the stiffness parameter. We demonstrate that the collapse can occur in one or in two stages depending on the values of both these parameters and discuss the role of the inhomogeneity of intraglobular distributions of monomer units of both flexible and semiflexible blocks. For short block length and/or large stiffness the collapse occurs in two stages, because it goes through intermediate (meta-)stable structures, like a dumbbell shaped conformation. In such conformations the semiflexible blocks form a cylinder-like core, and the flexible blocks form two domains at both ends of such a cylinder. For long block length and/or small stiffness the collapse occurs in one stage, and in typical conformations the flexible blocks form a spherical core of a globule while the semiflexible blocks are located on the surface and wrap around this core.

  14. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...

  15. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction, has received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is mining closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...

  16. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
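
    The sequential likelihood ratio test at the core of this approach can be sketched as a classical Wald SPRT over per-photon log-likelihood ratios; the physics-based channel models that produce those likelihoods in the patented system are not reproduced here, and the thresholds below are illustrative assumptions.

```python
import numpy as np

def sprt(log_lik_ratios, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on a stream of per-event
    log-likelihood ratios log[p(event | target) / p(event | background)]."""
    upper = np.log((1 - beta) / alpha)     # declare "target" above this
    lower = np.log(beta / (1 - alpha))     # declare "not target" below this
    s = 0.0
    for k, llr in enumerate(log_lik_ratios, start=1):
        s += llr
        if s >= upper:
            return "target", k
        if s <= lower:
            return "not target", k
    return "undecided", len(log_lik_ratios)
```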

  17. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
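
    A minimal single-trial sketch of a sequential sampling (evidence accumulation) process is given below: noisy evidence is summed until it crosses a decision threshold, and raising the threshold trades time for accuracy, which is how such models capture adaptation to judgment uncertainty. The drift, threshold and labels are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def relative_judgment_trial(drift, threshold, noise=1.0, dt=0.01, max_steps=10000):
    """Single-trial evidence accumulation: noisy evidence is summed until it
    crosses +threshold ("conflict") or -threshold ("no conflict")."""
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += 1
    return ("conflict" if x > 0 else "no conflict"), t * dt

print(relative_judgment_trial(drift=0.5, threshold=1.5))
```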

  18. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    OpenAIRE

    Shiyi Zhou; Shu Da; Heng Guo; Xichao Zhang

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relat...

  19. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870+ Q- (P870+ is the oxidized, primary electron donor, a bacteriochlorophyll special pair, and Q- is the reduced, primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870+ I- (I- is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (1H or 2H) on P870, I and Q affects the ESP of the EPR spectrum of P870+ Q-, observed at two different microwave frequencies, in Fe2+-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870+ Q- radical pair interactions are the dominant source of ESP production in 2H bacterial reaction centers.

  20. Sequential nonadiabatic excitation of large molecules and ions driven by strong laser fields

    International Nuclear Information System (INIS)

    Markevitch, Alexei N.; Levis, Robert J.; Romanov, Dmitri A.; Smith, Stanley M.; Schlegel, H. Bernhard; Ivanov, Misha Yu.

    2004-01-01

    Electronic processes leading to dissociative ionization of polyatomic molecules in strong laser fields are investigated experimentally, theoretically, and numerically. Using time-of-flight ion mass spectroscopy, we study the dependence of fragmentation on laser intensity for a series of related molecules and report regular trends in this dependence on the size, symmetry, and electronic structure of a molecule. Based on these data, we develop a model of dissociative ionization of polyatomic molecules in intense laser fields. The model is built on three elements: (i) nonadiabatic population transfer from the ground electronic state to the excited-state manifold via a doorway (charge-transfer) transition; (ii) exponential enhancement of this transition by collective dynamic polarization of all electrons, and (iii) sequential energy deposition in both neutral molecules and resulting molecular ions. The sequential nonadiabatic excitation is accelerated by a counterintuitive increase of a large molecule's polarizability following its ionization. The generic theory of sequential nonadiabatic excitation forms a basis for quantitative description of various nonlinear processes in polyatomic molecules and ions in strong laser fields

  1. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network; these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: • A Monte Carlo model of the combined GB gas and electricity network was developed. • Reliability indices are calculated for the combined GB gas and electricity system. • The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. • Integrated reliability indices could be used to assess the impact of investment in energy assets

  2. Combined KHFAC + DC nerve block without onset or reduced nerve conductivity after block

    Science.gov (United States)

    Franke, Manfred; Vrabec, Tina; Wainright, Jesse; Bhadra, Niloy; Bhadra, Narendra; Kilgore, Kevin

    2014-10-01

    Objective. Kilohertz frequency alternating current (KHFAC) waveforms have been shown to provide peripheral nerve conductivity block in many acute and chronic animal models. KHFAC nerve block could be used to address multiple disorders caused by neural over-activity, including blocking pain and spasticity. However, one drawback of KHFAC block is a transient activation of nerve fibers during the initiation of the nerve block, called the onset response. The objective of this study is to evaluate the feasibility of using charge balanced direct current (CBDC) waveforms to temporarily block motor nerve conductivity distally to the KHFAC electrodes to mitigate the block onset-response. Approach. A total of eight animals were used in this study. A set of four animals were used to assess feasibility and reproducibility of a combined KHFAC + CBDC block. A following randomized study, conducted on a second set of four animals, compared the onset response resulting from KHFAC alone and combined KHFAC + CBDC waveforms. To quantify the onset, peak forces and the force-time integral were measured during KHFAC block initiation. Nerve conductivity was monitored throughout the study by comparing muscle twitch forces evoked by supra-maximal stimulation proximal and distal to the block electrodes. Each animal of the randomized study received at least 300 s (range: 318-1563 s) of cumulative DC to investigate the impact of combined KHFAC + CBDC on nerve viability. Main results. The peak onset force was reduced significantly from 20.73 N (range: 18.6-26.5 N) with KHFAC alone to 0.45 N (range: 0.2-0.7 N) with the combined CBDC and KHFAC block waveform. No change in nerve conductivity was observed after application of the combined KHFAC + CBDC block relative to KHFAC waveforms. Significance. The distal application of CBDC can significantly reduce or even completely prevent the KHFAC onset response without a change in nerve conductivity.

  3. Comparison of Nursing Student and Instructor Preferences for Block and Nonblock Clinical Models.

    Science.gov (United States)

    Rohatinsky, Noelle; Chachula, Kathryn; Sedgwick, Monique; Press, Madeline M; Compton, Roslyn M; Lane, Brenda

    2017-10-05

    Clinical experiences are the hallmark of prelicensure nursing programs and assist students with applying nursing theory into practice. The literature is limited with respect to nursing student and instructor preferences for type of clinical model to facilitate student learning. This article explores these perceptions in the nursing programs of 5 universities located in 4 Western Canadian provinces. Findings support the use of both nonblock and block clinical models throughout nursing education programs.

  4. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. In particular, the deterministic EnKF is tested because it avoids perturbing the observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the whole of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
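
    The abstract does not reproduce the filter equations; as a minimal sketch of the stochastic (perturbed-observation) EnKF analysis step that the deterministic variants in the study avoid, the following snippet updates a toy three-compartment water storage state against a single TWS-like observation. The state dimension, ensemble size and observation operator are invented for illustration.

        import numpy as np

        def enkf_update(ensemble, obs, obs_var, H, rng):
            """Stochastic EnKF analysis step with perturbed observations (sketch)."""
            n_state, n_ens = ensemble.shape
            A = ensemble - ensemble.mean(axis=1, keepdims=True)    # state anomalies
            HA = H @ A                                             # observation-space anomalies
            P_hh = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(len(obs))
            P_xh = A @ HA.T / (n_ens - 1)
            K = P_xh @ np.linalg.inv(P_hh)                         # Kalman gain
            # Perturbing the observations is the step a deterministic EnKF avoids
            obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_ens))
            return ensemble + K @ (obs_pert - H @ ensemble)

        rng = np.random.default_rng(1)
        # Toy example: three storage compartments observed only through their sum (TWS-like)
        ens = rng.normal(size=(3, 50)) + np.array([[10.0], [5.0], [2.0]])
        H = np.ones((1, 3))
        updated = enkf_update(ens, obs=np.array([18.0]), obs_var=4.0, H=H, rng=rng)
        print("analysis mean:", updated.mean(axis=1))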

  5. Modelling the sequential geographical exploitation and potential collapse of marine fisheries through economic globalization, climate change and management alternatives

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2011-07-01

    Full Text Available Global marine fisheries production has reached a maximum and may even be declining. Underlying this trend is a well-understood sequence of development, overexploitation, depletion and in some instances collapse of individual fish stocks, a pattern that can sequentially link geographically distant populations. Ineffective governance, economic considerations and climate impacts are often responsible for this sequence, although the relative contribution of each factor is contentious. In this paper we use a global bioeconomic model to explore the synergistic effects of climate variability, economic pressures and management measures in causing or avoiding this sequence. The model shows how a combination of climate-induced variability in the underlying fish population production, particular patterns of demand for fish products and inadequate management is capable of driving the world’s fisheries into development, overexploitation, collapse and recovery phases consistent with observations. Furthermore, it demonstrates how a sequential pattern of overexploitation can emerge as an endogenous property of the interaction between regional environmental fluctuations and a globalized trade system. This situation is avoidable through adaptive management measures that ensure the sustainability of regional production systems in the face of increasing global environmental change and markets. It is concluded that global management measures are needed to ensure that global food supply from marine products is optimized while protecting long-term ecosystem services across the world’s oceans.

  6. Losing a dime with a satisfied mind: positive affect predicts less search in sequential decision making.

    Science.gov (United States)

    von Helversen, Bettina; Mata, Rui

    2012-12-01

    We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved

  7. A ligand exchange strategy for one-pot sequential synthesis of (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

    Science.gov (United States)

    Zhang, Zhichao; Ye, Zhibin

    2012-08-18

    Upon the addition of an equimolar amount of 2,2'-bipyridine, a cationic Pd-diimine complex capable of facilitating "living" ethylene polymerization is switched to catalyze "living" alternating copolymerization of 4-tertbutylstyrene and CO. This unique chemistry is thus employed to synthesize a range of well-defined treelike (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

  8. Morphology-properties relationship on nanocomposite films based on poly(styrene-block-diene-block-styrene) copolymers and silver nanoparticles

    Directory of Open Access Journals (Sweden)

    2011-02-01

    Full Text Available A comparative study on the self-assembled nanostructured morphology and the rheological and mechanical properties of four different triblock copolymers, based on poly(styrene-block-diene-block-styrene) matrices, and of their respective nanocomposites with 1 wt% silver nanoparticles, is reported in this work. In order to obtain well-dispersed nanoparticles in the block copolymer matrix, dodecanethiol was used as surfactant, showing good affinity with both the nanoparticles and the polystyrene phase of the matrices, as predicted by the solubility parameters calculated based on the Hoftyzer and Van Krevelen theory. The block copolymer with the highest PS content shows the highest tensile modulus and tensile strength, but also the smallest elongation at break. When silver nanoparticles treated with surfactant were added to the block copolymer matrices, each system studied shows higher mechanical properties due to the good dispersion and the good interface of Ag nanoparticles in the matrices. Furthermore, it has been shown that semiempirical models such as the Guth and Gold equation and the Halpin-Tsai model can be used to predict the tensile modulus of the analyzed nanocomposites.

  9. Modelling sequential Biosphere systems under Climate change for radioactive waste disposal. Project BIOCLIM

    International Nuclear Information System (INIS)

    Texier, D.; Degnan, P.; Loutre, M.F.; Lemaitre, G.; Paillard, D.; Thorne, M.

    2000-01-01

    The BIOCLIM project (Modelling Sequential Biosphere systems under Climate change for Radioactive Waste Disposal) is part of the EURATOM fifth European framework programme. The project was launched in October 2000 for a three-year period. It is coordinated by ANDRA, the French national radioactive waste management agency. The project brings together a number of European radioactive waste management organisations that have national responsibilities for the safe disposal of radioactive wastes, and several highly experienced climate research teams. Waste management organisations involved are: NIREX (UK), GRS (Germany), ENRESA (Spain), NRI (Czech Republic) and ANDRA (France). Climate research teams involved are: LSCE (CEA/CNRS, France), CIEMAT (Spain), UPMETSIMM (Spain), UCL/ASTR (Belgium) and CRU (UEA, UK). The Environmental Agency for England and Wales provides a regulatory perspective. The consulting company Enviros Consulting (UK) assists ANDRA by contributing to both the administrative and scientific aspects of the project. This paper describes the project and progress to date. (authors)

  10. Double tracer autoradiographic method for sequential evaluation of regional cerebral perfusion

    International Nuclear Information System (INIS)

    Matsuda, H.; Tsuji, S.; Oba, H.; Kinuya, K.; Terada, H.; Sumiya, H.; Shiba, K.; Mori, H.; Hisada, K.; Maeda, T.

    1989-01-01

    A new double tracer autoradiographic method for the sequential evaluation of altered regional cerebral perfusion in the same animal is presented. This method is based on the sequential injection of two tracers, 99mTc-hexamethylpropyleneamine oxime and N-isopropyl-(125I)p-iodoamphetamine. This method is validated in the assessment of brovincamine effects on regional cerebral perfusion in an experimental model of chronic brain ischemia in the rat. The drug enhanced perfusion recovery in low-flow areas, selectively in surrounding areas of infarction. The results suggest that this technique is of potential use in the study of neuropharmacological effects applied during the experiment.

  11. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  12. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  13. Block effect on HCV infection by HMGB1 released from virus-infected cells: An insight from mathematical modeling

    Science.gov (United States)

    Wang, Wei; Ma, Wanbiao

    2018-06-01

    The nuclear protein high-mobility group box 1 (HMGB1) can have an active role in deoxyribonucleic acid (DNA) organization and the regulation of transcription. Based on the new findings from a recent experimental study, the blocking effect on HCV infection by HMGB1 released from virus-infected cells is investigated using a diffusive model for viral infection dynamics. In the model, the diffusion of the virus depends not only on its concentration gradient, but also on the concentration of HMGB1. The basic reproduction number, threshold dynamics, stability properties of the steady states, travelling wave solutions, and spreading speed for the proposed model are studied. We show that the HMGB1-induced blocking of HCV infection slows the spread of virus compared with random diffusion only. Numerically, it is shown that a high concentration of HMGB1 can block the spread of virus and this confirms, not only qualitatively but also quantitatively, the experimental result.
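
    As a toy illustration of the modelling idea (virus diffusion suppressed where the inhibitor is abundant), the explicit finite-difference sketch below evolves a 1-D virus density whose diffusion coefficient decreases with a fixed HMGB1 profile. The functional form and every parameter value are placeholders, not those of the cited model.

        import numpy as np

        nx, dx, dt, steps = 200, 0.1, 0.002, 4000
        x = np.arange(nx) * dx

        D0, k = 1.0, 5.0                                   # base diffusivity and inhibition strength (invented)
        hmgb1 = 2.0 * np.exp(-((x - 10.0) / 2.0) ** 2)     # fixed HMGB1 bump in mid-domain
        D = D0 / (1.0 + k * hmgb1)                         # diffusion suppressed where HMGB1 is high

        v = np.zeros(nx)
        v[:10] = 1.0                                       # initial pocket of free virus on the left

        D_half = 0.5 * (D[:-1] + D[1:])                    # diffusivity at cell interfaces
        for _ in range(steps):
            flux = -D_half * np.diff(v) / dx               # Fick's law at the interfaces
            flux = np.concatenate(([0.0], flux, [0.0]))    # zero-flux boundaries
            v -= dt * np.diff(flux) / dx                   # conservative explicit update

        print("virus mass beyond the HMGB1 barrier:", v[x > 12.0].sum() * dx)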

  14. Sequential assimilation of multi-mission dynamical topography into a global finite-element ocean model

    Directory of Open Access Journals (Sweden)

    S. Skachko

    2008-12-01

    Full Text Available This study focuses on an accurate estimation of ocean circulation via assimilation of satellite measurements of ocean dynamical topography into the global finite-element ocean model (FEOM. The dynamical topography data are derived from a complex analysis of multi-mission altimetry data combined with a referenced earth geoid. The assimilation is split into two parts. First, the mean dynamic topography is adjusted. To this end an adiabatic pressure correction method is used which reduces model divergence from the real evolution. Second, a sequential assimilation technique is applied to improve the representation of thermodynamical processes by assimilating the time varying dynamic topography. A method is used according to which the temperature and salinity are updated following the vertical structure of the first baroclinic mode. It is shown that the method leads to a partially successful assimilation approach reducing the rms difference between the model and data from 16 cm to 2 cm. This improvement of the mean state is accompanied by significant improvement of temporal variability in our analysis. However, it remains suboptimal, showing a tendency in the forecast phase of returning toward a free run without data assimilation. Both the mean difference and standard deviation of the difference between the forecast and observation data are reduced as the result of assimilation.

  15. Geologic characterization of fractures as an aid to hydrologic modeling of the SCV block at the Stripa mine

    International Nuclear Information System (INIS)

    Martel, S.

    1992-04-01

    A series of hydrologic tests have been conducted at the Stripa research mine in Sweden to develop hydrologic characterization techniques for rock masses in which fractures form the primary flow paths. The structural studies reported here were conducted to aid in the hydrologic examination of a cubic block of granite with dimensions of 150 m on a side. This block (the SCV block) is located between the 310- and 460-m depth levels at the Stripa mine. This report describes and interprets the fracture system geology at Stripa as revealed in drift exposures, checks the interpretive model against borehole records and discusses the hydrologic implication of the model, and examines the likely effects of stress redistribution around a drift (the Validation drift) on inflow to the drift along a prominent fracture zone. (72 refs.) (au)

  16. Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

    OpenAIRE

    Liang, Yuli

    2015-01-01

    This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....

  17. Block Copolymers of Macrolactones/Small Lactones by a “Catalyst-Switch” Organocatalytic Strategy. Thermal Properties and Phase Behavior

    KAUST Repository

    Ladelta, Viko

    2018-03-16

    Poly(macrolactones) (PMLs) can be considered as biodegradable alternatives of polyethylene; however, controlling the ring-opening polymerization (ROP) of macrolactone (ML) monomers remains a challenge due to their low ring strain. To overcome this problem, phosphazene (t-BuP4), a strong superbase, has to be used as catalyst. Unfortunately, the one-pot sequential block copolymerization of MLs with small lactones (SLs) is impossible since the high basicity of t-BuP4 promotes both intra- and intermolecular transesterification reactions, thus leading to random copolymers. By using ROP and the “catalyst-switch” strategy [benzyl alcohol, t-BuP4/neutralization with diphenyl phosphate/(t-BuP2)], we were able to synthesize different well-defined PML-b-PSL block copolymers (MLs: dodecalactone, ω-pentadecalactone, and ω-hexadecalactone; SLs: δ-valerolactone and ε-caprolactone). The thermal properties and the phase behavior of these block copolymers were studied by differential scanning calorimetry and X-ray diffraction spectroscopy. This study shows that the thermal properties and phase behavior of PMLs-b-PSLs are largely influenced by the PMLs block if PMLs components constitute the majority of the block copolymers.

  19. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements.

  20. Impact of sequential disorder on the scaling behavior of airplane boarding time

    Science.gov (United States)

    Baek, Yongjoo; Ha, Meesoon; Jeong, Hawoong

    2013-05-01

    The airplane boarding process is an example where disorder properties of the system are relevant to the emergence of universality classes. Based on a simple model, we present a systematic analysis of finite-size effects in boarding time, and propose a comprehensive view of the role of sequential disorder in the scaling behavior of boarding time against the plane size. Using numerical simulations and mathematical arguments, we find how the scaling behavior depends on the number of seat columns and the range of sequential disorder. Our results show that new scaling exponents can arise as disorder is localized to varying extents.
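
    The abstract does not restate the model; a commonly used minimal abstraction of single-aisle boarding (one passenger per row, unit time per aisle step, a fixed number of steps to stow luggage while blocking the aisle) can be simulated as below to watch boarding time grow with plane size under a random boarding order. The blocking rules and constants are a simplification, not the authors' model.

        import random

        def boarding_time(n_rows, stow_time=3, seed=0):
            """Cellular-automaton sketch of single-aisle boarding with a random order."""
            rng = random.Random(seed)
            queue = list(range(n_rows))              # target row of each passenger
            rng.shuffle(queue)                       # random (disordered) boarding sequence
            aisle = [None] * n_rows                  # aisle cell -> (target_row, remaining_stow) or None
            seated, t = 0, 0
            while seated < n_rows:
                t += 1
                for cell in range(n_rows - 1, -1, -1):   # move front-most passengers first
                    p = aisle[cell]
                    if p is None:
                        continue
                    target, stow = p
                    if cell == target:
                        if stow > 1:
                            aisle[cell] = (target, stow - 1)    # still stowing luggage
                        else:
                            aisle[cell] = None                  # sits down, aisle cell freed
                            seated += 1
                    elif aisle[cell + 1] is None:
                        aisle[cell + 1], aisle[cell] = p, None  # step forward one aisle cell
                if queue and aisle[0] is None:
                    aisle[0] = (queue.pop(0), stow_time)        # next passenger enters the aisle
            return t

        for rows in (20, 40, 80, 160):
            print(rows, boarding_time(rows))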

  1. Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems

    International Nuclear Information System (INIS)

    Couetoux, Adrien

    2013-01-01

    In this thesis, I studied sequential decision making problems, with a focus on the unit commitment problem. Traditionally solved by dynamic programming methods, this problem is still a challenge, due to its high dimension and to the sacrifices made on the accuracy of the model to apply state-of-the-art methods. I investigated the applicability of Monte Carlo Tree Search methods for this problem, and for other single-player, stochastic, and continuous sequential decision making problems. In doing so, I obtained a consistent and anytime algorithm that can easily be combined with existing strong heuristic solvers. (author)
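
    Full MCTS with extensions for continuous, stochastic problems, as developed in the thesis, is beyond a few lines; as a much-reduced sketch of its core ingredient (select actions with an upper confidence bound, value them by stochastic simulation), the snippet below applies UCB1 at a single decision step of a toy, invented commitment problem. It is flat Monte Carlo search, not the thesis algorithm.

        import math
        import random

        rng = random.Random(0)

        def simulate(commit_level):
            """Toy one-period cost: pay for committed capacity, then a penalty on random shortfall."""
            demand = rng.gauss(50.0, 15.0)
            shortfall = max(0.0, demand - commit_level)
            return -(0.5 * commit_level + 3.0 * shortfall)      # reward = negative cost (made-up numbers)

        actions = [30.0, 45.0, 60.0, 75.0, 90.0]
        counts, totals = [0] * len(actions), [0.0] * len(actions)

        for t in range(1, 5001):
            if 0 in counts:                                     # play every action once first
                a = counts.index(0)
            else:                                               # UCB1 with a scale-adjusted exploration constant
                a = max(range(len(actions)),
                        key=lambda i: totals[i] / counts[i] + 20.0 * math.sqrt(2 * math.log(t) / counts[i]))
            r = simulate(actions[a])
            counts[a] += 1
            totals[a] += r

        best = max(range(len(actions)), key=lambda i: totals[i] / counts[i])
        print("estimated best commitment level:", actions[best])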

  2. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra.

  3. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed.

  4. Rectangular amplitudes, conformal blocks, and applications to loop models

    Energy Technology Data Exchange (ETDEWEB)

    Bondesan, Roberto, E-mail: roberto.bondesan@cea.fr [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Jacobsen, Jesper L. [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris (France); Saleur, Hubert [Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Physics Department, USC, Los Angeles, CA 90089-0484 (United States)

    2013-02-21

    In this paper we continue the investigation of partition functions of critical systems on a rectangle initiated in [R. Bondesan, et al., Nucl. Phys. B 862 (2012) 553-575]. Here we develop a general formalism of rectangle boundary states using conformal field theory, adapted to describe geometries supporting different boundary conditions. We discuss the computation of rectangular amplitudes and their modular properties, presenting explicit results for the case of free theories. In a second part of the paper we focus on applications to loop models, discussing in detail lattice discretizations using both numerical and analytical calculations. These results allow a geometric interpretation of conformal blocks, and as an application we derive new probability formulas for self-avoiding walks.

  5. Sequential effects in judgements of attractiveness: the influences of face race and sex.

    Directory of Open Access Journals (Sweden)

    Robin S S Kramer

    Full Text Available In perceptual decision-making, a person's response on a given trial is influenced by their response on the immediately preceding trial. This sequential effect was initially demonstrated in psychophysical tasks, but has now been found in more complex, real-world judgements. The similarity of the current and previous stimuli determines the nature of the effect, with more similar items producing assimilation in judgements, while less similarity can cause a contrast effect. Previous research found assimilation in ratings of facial attractiveness, and here, we investigated whether this effect is influenced by the social categories of the faces presented. Over three experiments, participants rated the attractiveness of own-race (White) and other-race (Chinese) faces of both sexes that appeared successively. Through blocking trials by race (Experiment 1), sex (Experiment 2), or both dimensions (Experiment 3), we could examine how sequential judgements were altered by the salience of different social categories in face sequences. For sequences that varied in sex alone, own-race faces showed significantly less opposite-sex assimilation (male and female faces perceived as dissimilar), while other-race faces showed equal assimilation for opposite- and same-sex sequences (male and female faces were not differentiated). For sequences that varied in race alone, categorisation by race resulted in no opposite-race assimilation for either sex of face (White and Chinese faces perceived as dissimilar). For sequences that varied in both race and sex, same-category assimilation was significantly greater than opposite-category assimilation. Our results suggest that the race of a face represents a superordinate category relative to sex. These findings demonstrate the importance of social categories when considering sequential judgements of faces, and also highlight a novel approach for investigating how multiple social dimensions interact during decision-making.

  6. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.

  7. Using Priced Options to Solve the Exposure Problem in Sequential Auctions

    Science.gov (United States)

    Mous, Lonneke; Robu, Valentin; La Poutré, Han

    This paper studies the benefits of using priced options for solving the exposure problem that bidders with valuation synergies face when participating in multiple, sequential auctions. We consider a model in which complementary-valued items are auctioned sequentially by different sellers, who have the choice of either selling their good directly or through a priced option, after fixing its exercise price. We analyze this model from a decision-theoretic perspective and we show, for a setting where the competition is formed by local bidders, that using options can increase the expected profit for both buyers and sellers. Furthermore, we derive the equations that provide minimum and maximum bounds between which a synergy buyer's bids should fall in order for both sides to have an incentive to use the options mechanism. Next, we perform an experimental analysis of a market in which multiple synergy bidders are active simultaneously.

  8. In Vitro Model for Predicting the Protective Effect of Ultraviolet-Blocking Contact Lens in Human Corneal Epithelial Cells.

    Science.gov (United States)

    Abengózar-Vela, Antonio; Arroyo, Cristina; Reinoso, Roberto; Enríquez-de-Salamanca, Amalia; Corell, Alfredo; González-García, María Jesús

    2015-01-01

    To develop an in vitro method to determine the protective effect of UV-blocking contact lenses (CLs) in human corneal epithelial (HCE) cells exposed to UV-B radiation. SV-40-transformed HCE cells were covered with non-UV-blocking CL, UV-blocking CL or not covered, and exposed to UV-B radiation. As control, HCE cells were covered with both types of CLs or not covered, but not exposed to UV-B radiation. Cell viability at 24, 48 and 72 h, after UV-B exposure and removing CLs, was determined by alamarBlue® assay. Percentage of live, dead and apoptotic cells was also assessed by flow cytometry after 24 h of UV-B exposure. Intracellular reactive oxygen species (ROS) production after 1 h of exposure was assessed using the dye H2DCF-DA. Cell viability significantly decreased, apoptotic cells and intracellular ROS production significantly increased when UVB-exposed cells were covered with non-UV-blocking CL or not covered compared to non-irradiated cells. When cells were covered with UV-blocking CL, cell viability significantly increased and apoptotic cells and intracellular ROS production did not increase compared to exposed cells. UV-B radiation induces cell death by apoptosis, increases ROS production and decreases viable cells. UV-blocking CL is able to avoid these effects, increasing cell viability and protecting HCE cells from apoptosis and ROS production induced by UV-B radiation. This in vitro model is an alternative to in vivo methods to determine the protective effect of UV-blocking ophthalmic biomaterials because it is a quicker, cheaper and reliable model that avoids the use of animals.

  9. The Use of a Block Diagram Simulation Language for Rapid Model Prototyping

    Science.gov (United States)

    Whitlow, Johnathan E.; Engrand, Peter

    1996-01-01

    The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models developed into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.

  10. Absence of Rapid Propagation through the Purkinje Network as a Potential Cause of Line Block in the Human Heart with Left Bundle Branch Block.

    Science.gov (United States)

    Okada, Jun-Ichi; Washio, Takumi; Nakagawa, Machiko; Watanabe, Masahiro; Kadooka, Yoshimasa; Kariya, Taro; Yamashita, Hiroshi; Yamada, Yoko; Momomura, Shin-Ichi; Nagai, Ryozo; Hisada, Toshiaki; Sugiura, Seiryo

    2018-01-01

    Background: Cardiac resynchronization therapy is an effective device therapy for heart failure patients with conduction block. However, a problem with this invasive technique is the nearly 30% of non-responders. A number of studies have reported a functional line of block of cardiac excitation propagation in responders. However, this can only be detected using non-contact endocardial mapping. Further, although the line of block is considered a sign of responders to therapy, the mechanism remains unclear. Methods: Herein, we created two patient-specific heart models with conduction block and simulated the propagation of excitation based on a cell model of electrophysiology. In one model with a relatively narrow QRS width (176 ms), we modeled the Purkinje network using a thin endocardial layer with rapid conduction. To reproduce a wider QRS complex (200 ms) in the second model, we eliminated the Purkinje network, and we simulated the endocardial mapping by solving the inverse problem according to the actual mapping system. Results: We successfully observed the line of block using non-contact mapping in the model without the rapid propagation of excitation through the Purkinje network, although the excitation in the wall propagated smoothly. This model of slow conduction also reproduced the characteristic properties of the line of block, including dense isochronal lines and fractionated local electrocardiograms. Further, simulation of ventricular pacing from the lateral wall shifted the location of the line of block. By contrast, in the model with the Purkinje network, propagation of excitation in the endocardial map faithfully followed the actual propagation in the wall, without showing the line of block. Finally, switching the mode of propagation between the two models completely reversed these findings. Conclusions: Our simulation data suggest that the absence of rapid propagation of excitation through the Purkinje network is the major cause of the functional line of block.

  11. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of structural...

  12. PEO-related block copolymer surfactants

    DEFF Research Database (Denmark)

    Mortensen, K.

    2001-01-01

    Non-ionic block copolymer systems based on hydrophilic poly(ethylene oxide) and more hydrophobic co-polymer blocks are used intensively in a variety of industrial and personal applications. A brief description of the applications is presented. The physical properties of simpler model systems of such PEG-based block copolymers in aqueous suspensions are reviewed. Based on scattering experiments using either X-rays or neutrons, the phase behavior is characterized, showing that the thermo-reversible gelation is a result of micellar ordering into mesoscopic crystalline phases of cubic, hexagonal...

  13. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, material properties, or a lack of knowledge about the phenomena being modeled itself. Deterministic design optimization does not take uncertainty into account, and worst-case-scenario assumptions lead to a vastly over-conservative design. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure for optimization and iterative probabilistic assessment. This results in high computational demand. The high computational demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and reliability assessment are decoupled in each cycle. This leads to quick improvement of the design from one cycle to the next and an increase in computational efficiency. This paper demonstrates the effectiveness of the Sequential Optimization and Reliability Assessment (SORA) method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  14. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
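
    As a generic illustration of this model class (not the authors' exact implementation), the sketch below simulates a two-boundary random-walk accumulator: evidence is sampled until it crosses an upper or lower threshold, the starting point plays the role of the response-bias (criterion-like) parameter, and the boundary separation controls the speed-accuracy trade-off. All parameter values are arbitrary.

        import numpy as np

        def accumulator_trial(drift, threshold, start_bias, noise_sd=1.0, dt=0.01, rng=None):
            """One trial of a two-boundary sequential sampling (random walk) model."""
            rng = rng or np.random.default_rng()
            x, t = start_bias * threshold, 0.0        # start_bias in (0, 1); 0.5 = unbiased start
            while 0.0 < x < threshold:
                x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
                t += dt
            return (1 if x >= threshold else 0), t    # 1 = "conflict" response, plus decision time

        rng = np.random.default_rng(3)
        for label, bias in [("neutral", 0.5), ("liberal", 0.7)]:
            trials = [accumulator_trial(drift=0.4, threshold=2.0, start_bias=bias, rng=rng)
                      for _ in range(2000)]
            p_conflict = np.mean([resp for resp, _ in trials])
            mean_rt = np.mean([rt for _, rt in trials])
            print(f"{label:8s}  P(conflict) = {p_conflict:.2f}  mean RT = {mean_rt:.2f} s")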

  15. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
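
    A generic block-transform quantiser of the kind analysed in the report can be sketched in a few lines: split the image into 8x8 blocks, apply a 2-D DCT, divide by a quantisation step and round. The single uniform step size used here stands in for JPEG's frequency-dependent quantisation tables.

        import numpy as np
        from scipy.fft import dctn, idctn

        def block_transform_quantize(image, block=8, step=20.0):
            """Quantise an image block-by-block in the 2-D DCT domain (JPEG-like, simplified)."""
            h, w = image.shape
            out = np.empty_like(image, dtype=float)
            for i in range(0, h, block):
                for j in range(0, w, block):
                    tile = image[i:i + block, j:j + block].astype(float)
                    coeffs = dctn(tile, norm="ortho")
                    coeffs = np.round(coeffs / step) * step          # uniform quantisation -> distortion
                    out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
            return out

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64)).astype(float)
        rec = block_transform_quantize(img)
        print(f"quantization distortion (MSE): {np.mean((img - rec) ** 2):.1f}")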

  16. Comparing blends and blocks: Synthesis of partially fluorinated diblock polythiophene copolymers to investigate the thermal stability of optical and morphological properties

    Directory of Open Access Journals (Sweden)

    Pierre Boufflet

    2016-10-01

    Full Text Available The microstructure of the active blend layer has been shown to be a critically important factor in the performance of organic solar devices. Block copolymers provide a potentially interesting avenue for controlling this active layer microstructure in solar cell blends. Here we explore the impact of backbone fluorination in block copolymers of poly(3-octyl-4-fluorothiophene) and poly(3-octylthiophene) (F-P3OT-b-P3OT). Two block copolymers with varying block lengths were prepared via sequential monomer addition under Kumada catalyst transfer polymerisation (KCTP) conditions. We compare the behavior of the block copolymers to that of the corresponding homopolymer blends. In both types of system, we find the fluorinated segments tend to dominate the UV–visible absorption and molecular vibrational spectral features, as well as the thermal behavior. In the block copolymer case, non-fluorinated segments appear to slightly frustrate the aggregation of the more fluorinated block. However, in situ temperature-dependent Raman spectroscopy shows that the intramolecular order is more thermally stable in the block copolymer than in the corresponding blend, suggesting that such materials may be interesting for enhanced thermal stability of organic photovoltaic active layers based on similar systems.

  17. Minimum Effective Volume of Lidocaine for Ultrasound-Guided Costoclavicular Block.

    Science.gov (United States)

    Sotthisopha, Thitipan; Elgueta, Maria Francisca; Samerchua, Artid; Leurcharusmee, Prangmalee; Tiyaprasertkul, Worakamol; Gordon, Aida; Finlayson, Roderick J; Tran, De Q

    This dose-finding study aimed to determine the minimum effective volume in 90% of patients (MEV90) of lidocaine 1.5% with epinephrine 5 μg/mL for ultrasound-guided costoclavicular block. Using an in-plane technique and a lateral-to-medial direction, the block needle was positioned in the middle of the 3 cords of the brachial plexus in the costoclavicular space. The entire volume of lidocaine was deposited in this location. Dose assignment was carried out using a biased-coin-design up-and-down sequential method, where the total volume of local anesthetic administered to each patient depended on the response of the previous one. In case of failure, the next subject received a higher volume (defined as the previous volume with an increment of 2.5 mL). If the previous patient had a successful block, the next subject was randomized to a lower volume (defined as the previous volume with a decrement of 2.5 mL), with a probability of b = 0.11, or the same volume, with a probability of 1 - b = 0.89. Success was defined, at 30 minutes, as a minimal score of 14 of 16 points using a sensorimotor composite scale. Patients undergoing surgery of the elbow, forearm, wrist, or hand were prospectively enrolled until 45 successful blocks were obtained. This clinical trial was registered with ClinicalTrials.gov (ID NCT02932670). Fifty-seven patients were included in the study. Using isotonic regression and bootstrap confidence interval, the MEV90 for ultrasound-guided costoclavicular block was estimated to be 34.0 mL (95% confidence interval, 33.4-34.4 mL). All patients with a minimal composite score of 14 points at 30 minutes achieved surgical anesthesia intraoperatively. For ultrasound-guided costoclavicular block, the MEV90 of lidocaine 1.5% with epinephrine 5 μg/mL is 34 mL. Further dose-finding studies are required for other concentrations of lidocaine, other local anesthetic agents, and multiple-injection techniques.
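
    The biased-coin up-and-down assignment rule described above is easy to simulate. The sketch below draws hypothetical patients whose block success follows an assumed logistic dose-response (invented, centred so that roughly 90% succeed near 34 mL) and reproduces the assignment logic; the isotonic-regression estimator used in the study is omitted and only the raw assignments are reported.

        import math
        import random

        rng = random.Random(7)

        def true_success(volume):
            """Hypothetical probability that `volume` mL produces a complete block (invented curve)."""
            return 1.0 / (1.0 + math.exp(-(volume - 29.0) / 2.3))

        b = 0.11                  # biased-coin probability of stepping down after a success
        step = 2.5                # volume increment/decrement (mL)
        volume = 25.0             # illustrative starting volume
        assignments, successes = [], 0

        while successes < 45:                         # stop after 45 successful blocks, as in the study
            ok = rng.random() < true_success(volume)
            assignments.append((volume, ok))
            if ok:
                successes += 1
                volume = volume - step if rng.random() < b else volume
            else:
                volume = volume + step                # always step up after a failure

        print("patients enrolled:", len(assignments))
        print("last 10 assigned volumes:", [v for v, _ in assignments[-10:]])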

  18. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (null hypothesis). The null hypothesis is associated with the set of alternative hypotheses modeling the abnormalities of the systems. A decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. Applicability of this methodology is illustrated with redundant sensor data set and its performance. (authors)
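
    A stripped-down version of the two ingredients described (a kernel density estimate of nominal behaviour, plus a sequential detector run on the resulting log-likelihoods) might look as follows. The CUSUM statistic is a generic stand-in for the specific change detection and isolation algorithm of the paper, and the sensor data are synthetic.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(5)

        train = rng.normal(0.0, 1.0, 2000)            # empirical "healthy" sensor data (null hypothesis)
        kde = gaussian_kde(train)
        allowance = -kde.logpdf(train).mean() + 0.5   # slack so the statistic drifts down under the null

        # Streaming data: nominal for 300 samples, then a persistent shift (the anomaly)
        stream = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])

        threshold, cusum, alarm_at = 15.0, 0.0, None
        for k, x in enumerate(stream):
            score = -kde.logpdf(x)[0] - allowance     # low likelihood under the null -> positive score
            cusum = max(0.0, cusum + score)
            if cusum > threshold and alarm_at is None:
                alarm_at = k

        print("change introduced at sample 300, alarm raised at sample", alarm_at)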

  19. Anthropogenic Changes in Mid-latitude Storm and Blocking Activities from Observations and Climate Models

    Science.gov (United States)

    Li, D.

    2017-12-01

    Fingerprints of anthropogenic climate change can be most readily detected in the high latitudes of Northern Hemisphere, where temperature has been rising faster than the rest of the globe and sea ice cover has shrunk dramatically over recent decades. Reducing the meridional temperature gradient, this amplified warming over the high latitudes influences weather in the middle latitudes by modulating the jet stream, storms, and atmospheric blocking activities. Whether observational records have revealed significant changes in mid-latitude storms and blocking activities, however, has remained a subject of much debate. Buried deep in strong year-to-year variations, the long-term dynamic responses of the atmosphere are more difficult to identify, compared with its thermodynamic responses. Variabilities of decadal and longer timescales further obscure any trends diagnosed from satellite observations, which are often shorter than 40 years. Here, new metrics reflecting storm and blocking activities are developed using surface air temperature and pressure records, and their variations and long-term trends are examined. This approach gives an inkling of the changes in storm and blocking activities since the Industrial Revolution in regions with abundant long-term observational records, e.g. Europe and North America. The relationship between Atlantic Multi-decadal Oscillation and variations in storm and blocking activities across the Atlantic is also scrutinized. The connection between observed centennial trends and anthropogenic forcings is investigated using a hierarchy of numerical tools, from highly idealized to fully coupled atmosphere-ocean models. Pre-industrial control simulations and a set of large ensemble simulations forced by increased CO2 are analyzed to evaluate the range of natural variabilities, which paves the way to singling out significant anthropogenic changes from observational records, as well as predicting future changes in mid-latitude storm and

  20. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    International Nuclear Information System (INIS)

    Chen, W.-Y.; Tsai, J.-W.; Ju, Y.-R.; Liao, C.-M.

    2010-01-01

    The purpose of this paper was to use a quantitative systems-level approach employing a biotic ligand model-based threshold damage model to examine the physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and triggering mechanisms by carrying out a series of modeling experiments in which we used periodic pulses and sine waves as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for that of freshwater clam. We found that tilapia increase bioenergetic costs to maintain acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns also has implications for optimizing species growth, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of pulsed and fluctuating arsenic exposures.
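
    The record does not reproduce the model equations; as a generic illustration of threshold-damage dynamics driven by time-varying exposure, the sketch below integrates a first-order damage variable under a pulsed and a sine-wave arsenic concentration with the same mean, and compares how long damage stays above a threshold. The functional form and all parameter values are placeholders, not those of the cited study.

        import numpy as np

        def damage_trace(conc, k_in=0.05, k_r=0.02, dt=1.0):
            """Illustrative first-order damage dynamics: dD/dt = k_in * C(t) - k_r * D."""
            d, out = 0.0, []
            for c in conc:
                d += dt * (k_in * c - k_r * d)
                out.append(d)
            return np.array(out)

        t = np.arange(0, 240, 1.0)                        # hours
        pulsed = np.where((t % 48) < 6, 8.0, 0.0)         # 6-h pulses of 8 (arbitrary units) every 48 h
        sine = 1.0 + 1.0 * np.sin(2 * np.pi * t / 48)     # fluctuating exposure with the same mean (1.0)

        threshold = 2.0                                   # arbitrary damage threshold
        for name, conc in [("pulsed", pulsed), ("sine-wave", sine)]:
            D = damage_trace(conc)
            print(f"{name:9s} peak damage = {D.max():.2f}, hours above threshold = {(D > threshold).sum()}")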

  2. Analysis of Block OMP using Block RIP

    OpenAIRE

    Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin

    2011-01-01

    Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of the OMP algorithm (i.e., Block OMP) outperforms conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than the standard restricted isometry property (RIP), can be used for a very straightforward...
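
    Block OMP itself is short to state: at each iteration pick the block of dictionary columns with the largest aggregate correlation with the residual, then re-fit on all selected blocks by least squares. The NumPy sketch below assumes a known partition into equal-sized blocks and is not the authors' reference code.

        import numpy as np

        def block_omp(A, y, block_size, n_blocks_to_select):
            """Block Orthogonal Matching Pursuit for block-sparse recovery (minimal sketch)."""
            m, n = A.shape
            blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
            support, residual = [], y.copy()
            for _ in range(n_blocks_to_select):
                # Pick the block whose columns best explain the residual (l2 norm of correlations)
                scores = [np.linalg.norm(A[:, blk].T @ residual) for blk in blocks]
                best = int(np.argmax(scores))
                if best not in support:
                    support.append(best)
                cols = np.concatenate([blocks[b] for b in support])
                x_ls, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)   # least-squares re-fit on the support
                residual = y - A[:, cols] @ x_ls
            x = np.zeros(n)
            x[cols] = x_ls
            return x

        rng = np.random.default_rng(2)
        A = rng.normal(size=(40, 64))
        x_true = np.zeros(64)
        x_true[8:12] = rng.normal(size=4)      # two active blocks of size 4 out of 16
        x_true[40:44] = rng.normal(size=4)
        x_hat = block_omp(A, A @ x_true, block_size=4, n_blocks_to_select=2)
        print("recovery error:", np.linalg.norm(x_hat - x_true))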

  3. Immediately sequential bilateral cataract surgery: advantages and disadvantages.

    Science.gov (United States)

    Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace

    2017-01-01

    The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.

  4. 31 CFR 595.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...

  5. Effects of a combined parent-student alcohol prevention program on intermediate factors and adolescents' drinking behavior: A sequential mediation model.

    Science.gov (United States)

    Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M

    2015-08-01

    Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: increase in strict parental rule setting and adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether the parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial including 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents randomized over 4 conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. Outcome measure was amount of weekly drinking measured at age 12 to 15; baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent intervention on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is 1 of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need of involving parents in youth alcohol prevention programs, and the need to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved).
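
    The sequential ("serial") mediation effect reported here is the product of three paths: intervention -> strict rule setting (T1) -> self-control (T2) -> weekly drinking (T3). With synthetic data standing in for the trial, that path product can be estimated from three regressions as sketched below; variable names and effect sizes are invented and no trial data are used.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 3000

        # Synthetic stand-in data with made-up effect sizes
        combined = rng.integers(0, 2, n).astype(float)              # 1 = combined intervention
        rules_t1 = 0.5 * combined + rng.normal(0, 1, n)             # strict rule setting at T1
        selfctrl_t2 = 0.4 * rules_t1 + rng.normal(0, 1, n)          # adolescent self-control at T2
        drinking_t3 = (-0.3 * selfctrl_t2 - 0.2 * rules_t1
                       + 0.1 * combined + rng.normal(0, 1, n))      # weekly drinking at T3

        def ols(y, *xs):
            """OLS coefficients of y on an intercept plus the given predictors."""
            X = np.column_stack([np.ones(len(y)), *xs])
            return np.linalg.lstsq(X, y, rcond=None)[0]

        a1 = ols(rules_t1, combined)[1]                             # intervention -> rules (T1)
        d21 = ols(selfctrl_t2, rules_t1, combined)[1]               # rules (T1) -> self-control (T2)
        b2 = ols(drinking_t3, selfctrl_t2, rules_t1, combined)[1]   # self-control (T2) -> drinking (T3)

        print(f"sequential indirect effect a1*d21*b2 = {a1 * d21 * b2:.3f}")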

  6. Automation of block assignment planning using a diagram-based scenario modeling method

    Directory of Open Access Journals (Sweden)

    In Hyuck Hwang

    2014-03-01

    Full Text Available Most shipbuilding scheduling research so far has focused on the load level on the dock plan. This is because the dock is the least extendable resource in shipyards, and its overloading is difficult to resolve. However, once dock scheduling is completed, making a plan that makes the best use of the rest of the resources in the shipyard to minimize any additional cost is also important. Block assignment planning is one of the midterm planning tasks; it assigns a block to the facility (factory/shop or surface plate that will actually manufacture the block according to the block characteristics and current situation of the facility. It is one of the most heavily loaded midterm planning tasks and is carried out manually by experienced workers. In this study, a method of representing the block assignment rules using a diagram was suggested through analysis of the existing manual process. A block allocation program was developed which automated the block assignment process according to the rules represented by the diagram. The planning scenario was validated through a case study that compared the manual assignment and two automated block assignment results.

  8. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica database of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%; IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 patients (57%; IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They can potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%; IQR -4.75 to 136.5, p = 0.0674).

  9. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. The measurements were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis
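
    For readers unfamiliar with the error-spending machinery used in such designs, the sketch below evaluates two textbook Lan-DeMets spending functions (O'Brien-Fleming-type and Pocock-type) at the information fractions implied by interim looks at N = 11, 23 and 34 out of 45. The overall alpha of 0.05 and the choice of spending functions are assumptions for illustration, not the study's exact design.

```python
# Cumulative type-I error "spent" at each interim look under two common
# Lan-DeMets spending functions (illustration only; overall alpha = 0.05 assumed).
import numpy as np
from scipy.stats import norm

alpha = 0.05
n_looks = np.array([11, 23, 34, 45])      # interim and final sample sizes
t = n_looks / 45.0                        # information fractions

obf    = 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))  # O'Brien-Fleming-type
pocock = alpha * np.log(1 + (np.e - 1) * t)                      # Pocock-type

for ti, a1, a2 in zip(t, obf, pocock):
    print(f"t = {ti:.2f}:  OBF-type spent = {a1:.4f},  Pocock-type spent = {a2:.4f}")
```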

  10. Exploiting an automated microfluidic hydrodynamic sequential injection system for determination of phosphate.

    Science.gov (United States)

    Khongpet, Wanpen; Pencharee, Somkid; Puangpila, Chanida; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai; Jakmunee, Jaroon

    2018-01-15

    A microfluidic hydrodynamic sequential injection (μHSI) spectrophotometric system was designed and fabricated. The system was built by laser engraving a manifold pattern on an acrylic block and sealing it with another flat acrylic plate to form a microfluidic channel platform. The platform was incorporated with small solenoid valves to obtain a portable setup for programmable control of the liquid flow into the channel according to the HSI principle. The system was demonstrated for the determination of phosphate using a molybdenum blue method. Ascorbic acid, standard or sample, and acidic molybdate solutions were sequentially aspirated to fill the channel, forming a stacked zone before flowing to the detector. Under the optimum conditions, a linear calibration graph in the range of 0.1-6 mg P L-1 was obtained. The detection limit was 0.1 mg L-1. The system is compact (5.0 mm thick, 80 mm wide × 140 mm long), durable, portable, cost-effective, and consumes small amounts of chemicals (83 μL each of molybdate and ascorbic acid, 133 μL of the sample solution, and 1.7 mL of water carrier per run). It was applied to the determination of phosphate content in extracted soil samples. The percent recoveries of the analysis were in the range of 91.2-107.3%. The results obtained agreed well with those of the batch spectrophotometric method. Copyright © 2017 Elsevier B.V. All rights reserved.
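
    The reported linear calibration over 0.1-6 mg P L-1 is an ordinary least-squares line; the sketch below shows how such a calibration would be fitted and inverted to read back a sample concentration. The absorbance values are made up purely for illustration and are not the paper's data.

```python
# Hypothetical molybdenum-blue calibration: absorbance vs. phosphate standards.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 6.0])                 # mg P L-1 (assumed)
absorb = np.array([0.012, 0.055, 0.108, 0.215, 0.428, 0.645])   # made-up readings

slope, intercept = np.polyfit(conc, absorb, 1)    # linear calibration fit
sample_abs = 0.300
sample_conc = (sample_abs - intercept) / slope    # invert the calibration

print(f"A = {slope:.4f}*C + {intercept:.4f};  sample ≈ {sample_conc:.2f} mg P L-1")
```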

  11. Nerve Blocks

    Science.gov (United States)

    A nerve block is an injection of anesthetic and/or anti-inflammatory medication targeted toward a specific nerve or group of nerves to treat pain.

  12. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  13. Block-based approach to modelling of granulated fertilizers' quality

    DEFF Research Database (Denmark)

    Kohonen, J.; Reinikainen, S. P.; Høskuldsson, Agnar

    2009-01-01

    Fertilizer manufacturing is a customer-driven industry, where the quality of a product is a key factor in order to survive the competition. However, measuring the most important feature of granulated fertilizers, flowability, is tedious, time-consuming and thus expensive. Flowability can be defined through testing the flow rate with, e.g., a seed drill. Besides the chemical composition, flowability can be considered as one of the most important characteristics. There are numerous factors affecting the flowability of a granulated fertilizer, several of them related to the particle size distribution. The goals are to find a reliable model for flowability using this data, to find the most important variables, and to identify the effect of blocks on the quality.

  14. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives.
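
    A recurring building block in the multi-objective setting surveyed here is keeping only Pareto-undominated value vectors. As a small generic illustration (not taken from the survey itself), the function below filters a set of two-objective value vectors down to its Pareto front.

```python
# Keep only Pareto-undominated value vectors (maximisation in every objective).
def pareto_front(values):
    front = []
    for v in values:
        dominated = any(all(o >= x for o, x in zip(other, v)) and other != v
                        for other in values)
        if not dominated:
            front.append(v)
    return front

policy_values = [(10.0, 2.0), (8.0, 5.0), (7.0, 4.0), (6.0, 6.0)]
print(pareto_front(policy_values))   # [(10.0, 2.0), (8.0, 5.0), (6.0, 6.0)]
```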

  15. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing, which comprises hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processed material.

  16. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  17. Thermal-stress analysis of HTGR fuel and control rod fuel blocks in in-block carbonization and annealing furnace

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; McAfee, W.J.

    1977-01-01

    A new method for performing thermal stress analyses in structures with multiple penetrations was applied to these analyses. This method couples the development of an equivalent thermal conductivity for the blocks, a technique that has been used extensively for modeling the thermal characteristics of reactor cores, with the use of the equivalent solid plate method for stress analysis. Using this equivalent thermal conductivity, which models as one material the heat transfer characteristics of the fuel, coolant, and graphite, two-dimensional, steady-state thermal analyses of the fuel and control rod fuel blocks were performed to establish all temperature boundaries required for the stress analyses. In applying the equivalent solid plate method, the region of penetrations being modeled was replaced by a pseudo material having the same dimensions but whose material properties were adjusted to account for the penetrations. The peak stresses and strains were determined by applying stress and strain intensification factors to the calculated distributions. The condition studied was where the blocks were located near the center of the furnace. In this position, the axial surface of the block is heated near one end and cooled near the other. The approximate axial surface temperatures ranged from 1521°C at both the heated and the cooled ends to a peak of 1800°C near the center. Five specific cases were analyzed: plane (two-dimensional thermal, plane stress/strain) analyses of each end of a standard fuel block (2 cases), plane analyses of each end of a control rod fuel block (2 cases), and a two-dimensional analysis of a fuel block treated as an axisymmetric cylinder.
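
    The equivalent-conductivity idea can be illustrated with a very simple volume-fraction mixing rule; the report's actual homogenisation of fuel, coolant holes and graphite is not reproduced here, so the bounds and numbers below are only an assumed toy example.

```python
# Toy homogenisation: effective conductivity of a penetrated graphite block
# bounded by parallel and series volume-fraction mixing rules (assumed values).
k_graphite = 30.0    # W/(m K), assumed
k_coolant  = 0.25    # W/(m K), gas-filled penetrations, assumed
phi = 0.18           # volume fraction of penetrations, assumed

k_parallel = (1 - phi) * k_graphite + phi * k_coolant          # upper bound
k_series   = 1.0 / ((1 - phi) / k_graphite + phi / k_coolant)  # lower bound

print(f"k_eff between {k_series:.2f} and {k_parallel:.2f} W/(m K)")
```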

  18. Numerical modeling of block structure dynamics: Application to the Vrancea region and study of earthquakes sequences in the synthetic catalogs

    International Nuclear Information System (INIS)

    Soloviev, A.A.; Vorobieva, I.A.

    1995-08-01

    A seismically active region is represented as a system of absolutely rigid blocks divided by infinitely thin plane faults. The interaction of the blocks along the fault planes and with the underlying medium is viscous-elastic. The system of blocks moves as a consequence of prescribed motion of boundary blocks and the underlying medium. When, for some part of a fault plane, the stress surpasses a certain strength level, a stress drop ("a failure") occurs. It can cause failures in other parts of the fault planes. The failures are considered as earthquakes. As a result of the numerical simulation a synthetic earthquake catalogue is produced. This procedure is applied to the numerical modeling of the dynamics of the block structure approximating the tectonic structure of the Vrancea region. By numerical experiments, values of the model parameters were obtained which supplied the synthetic earthquake catalog with a space distribution of epicenters close to the real distribution of earthquake epicenters in the Vrancea region. The frequency-magnitude relations (Gutenberg-Richter curves) obtained for the synthetic and real catalogs have some common features. The sequences of earthquakes arising in the model are studied for some artificial structures. It is found that "foreshocks", "main shocks", and "aftershocks" could be detected among the earthquakes forming the sequences. The features of aftershocks, foreshocks, and catalogs of main shocks are analysed. (author). 5 refs, 12 figs, 16 tabs
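
    The stress-accumulation and stress-drop mechanism that generates such a synthetic catalogue can be caricatured by a toy one-dimensional analogue: each fault element is loaded at a constant rate and fails (an "earthquake") when its stress exceeds a strength threshold, passing part of the dropped stress to its neighbours. This is only a schematic stand-in for the block-and-fault model, with all parameters invented.

```python
# Toy stress-drop model producing a synthetic event catalogue (schematic only).
import numpy as np

rng = np.random.default_rng(1)
n_cells, steps = 50, 2000
stress = rng.uniform(0.0, 1.0, n_cells)
strength, loading, transfer = 1.0, 0.002, 0.4
catalog = []                                   # (time step, number of failed cells)

for t in range(steps):
    stress += loading                          # slow tectonic loading
    failed = np.where(stress >= strength)[0]
    size = 0
    while failed.size:                         # cascade of failures
        size += failed.size
        for i in failed:
            drop = stress[i]
            stress[i] = 0.0
            stress[(i - 1) % n_cells] += transfer * drop   # redistribute part of the
            stress[(i + 1) % n_cells] += transfer * drop   # drop to neighbouring cells
        failed = np.where(stress >= strength)[0]
    if size:
        catalog.append((t, size))

print(len(catalog), "synthetic events; largest size:", max(s for _, s in catalog))
```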

  19. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    Science.gov (United States)

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely because tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on the performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group conducted less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when the task was acquired before the lesion.

  20. Examining the Self-Assembly of Rod-Coil Block Copolymers via Physics Based Polymer Models and Polarized X-Ray Scattering

    Science.gov (United States)

    Hannon, Adam; Sunday, Daniel; Windover, Donald; Liman, Christopher; Bowen, Alec; Khaira, Gurdaman; de Pablo, Juan; Delongchamp, Dean; Kline, R. Joseph

    Photovoltaics, flexible electronics, and stimuli-responsive materials all require enhanced methodology to examine their nanoscale molecular orientation. The mechanical, electronic, optical, and transport properties of devices made from these materials are all a function of this orientation. The polymer chains in these materials are best modeled as semi-flexible to rigid rods. Characterizing the rigidity and molecular orientation of these polymers non-invasively is currently being pursued by using polarized resonant soft X-ray scattering (P-RSoXS). In this presentation, we show recent work on implementing such a characterization process using a rod-coil block copolymer system in the rigid-rod limit. We first demonstrate how we have used physics-based models such as self-consistent field theory (SCFT) in non-polarized RSoXS work to fit scattering profiles for thin-film coil-coil PS-b-PMMA block copolymer systems. We then show that, by using a wormlike chain partition function in the SCFT formalism to model the rigid-rod block, the methodology can be used there as well to extract the molecular orientation of the rod block from a simulated P-RSoXS experiment. The results from this work show the potential of the technique to extract thermodynamic and morphological sample information.

  1. Impact-Contact Analysis of Prismatic Graphite Blocks Using Abaqus

    International Nuclear Information System (INIS)

    Kang, Ji Ho; Kim, Gyeong Ho; Choi, Woo Seok

    2010-12-01

    Graphite blocks are important core components of the high temperature gas-cooled reactor. As these blocks are simply stacked in an array, collisions among neighboring components may occur during earthquakes or accidents. The final objective of the research project is to develop a reliable seismic model of the stacked graphite blocks from which their behavior can be predicted and by which they can be designed to have sufficient strength to maintain their structural integrity during the anticipated occurrences. The work summarized in this report is a first step toward that goal and is dedicated to building a realistic impact-contact dynamics model of the graphite block using a commercial FEM package, Abaqus. The developed model will be further used to assist in building a reliable lumped dynamics model of these stacked graphite components.

  2. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    Full Text Available One of the unconventional features of Wittgenstein's Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g., how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g., the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein's own explanation of the numbering system, anaphoric references within the Tractatus, and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  3. A physical model of laser-assisted blocking of blood flow: I. Rectangular radiation pulses

    CSIR Research Space (South Africa)

    Zheltov, GI

    2007-03-01

    Full Text Available as to the investigation of destructive changes in these objects [1–16]. Various models were considered in these studies: a model of skin as a set of plane layers with different optical and physical properties (epidermis, dermis, blood layer) [1–9], a similar model... conditions of minimal damage to adjacent healthy tissues. The necessity of local blocking of the blood flow arises, e.g., upon dissection of tissues (stanching blood flow), upon treatment of vascular malformations (including those of diabetic origin...

  4. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administer another random test item. The framework of minimax sequential decision theory is used to derive these optimal rules.
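
    One simple way to see the classify-or-continue structure of a sequential mastery test (though not the paper's minimax-optimal rules) is Wald's sequential probability ratio test: after each item, the log-likelihood ratio of the master versus nonmaster success probabilities is compared against two thresholds. The success probabilities and error rates below are assumed for illustration.

```python
# SPRT-style sequential mastery classification (illustrative, not the minimax rule).
import math

p_master, p_nonmaster = 0.8, 0.5      # assumed item-success probabilities
alpha, beta = 0.05, 0.05              # assumed error rates
upper = math.log((1 - beta) / alpha)  # classify as master above this
lower = math.log(beta / (1 - alpha))  # classify as nonmaster below this

def classify(responses):
    llr = 0.0
    for i, correct in enumerate(responses, start=1):
        p1 = p_master if correct else 1 - p_master
        p0 = p_nonmaster if correct else 1 - p_nonmaster
        llr += math.log(p1 / p0)
        if llr >= upper:
            return f"master after {i} items"
        if llr <= lower:
            return f"nonmaster after {i} items"
    return "continue testing"

print(classify([1, 1, 1, 1, 1, 1, 1, 1]))   # terminates before all items are used
```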

  5. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling could reduce quarter-hourly price volatility by a factor of close to four.

  6. Changes in the Intensity and Frequency of Atmospheric Blocking and Associated Heat Waves During Northern Summer Over Eurasia in the CMIP5 Model Simulations

    Science.gov (United States)

    Kim, Kyu-Myong; Lau, K. M.; Wu, H. T.; Kim, Maeng-Ki; Cho, Chunho

    2012-01-01

    The Russia heat wave and wild fires of the summer of 2010 were the most extreme weather event in the history of the country. Studies show that the root cause of the 2010 Russia heat wave/wild fires was an atmospheric blocking event which started to develop at the end of June and peaked around late July and early August. Atmospheric blocking in the summer of 2010 was anomalous in terms of its size, duration, and location, which shifted to the east of the normal location. This and other similar continental-scale severe summertime heat waves and blocking events in recent years have raised the question of whether such events are occurring more frequently and with higher intensity in a warmer climate induced by greenhouse gases. We studied the spatial and temporal distributions of the occurrence and intensity of atmospheric blocking and associated heat waves for northern summer over Eurasia based on CMIP5 model simulations. To examine the global-warming-induced change of atmospheric blocking and heat waves, experiments for a high emissions scenario (RCP8.5) and a medium mitigation scenario (RCP4.5) are compared to the 20th century simulations (historical). Most models simulate the mean distributions of blocking reasonably well, including major blocking centers over Eurasia, the northern Pacific, and the northern Atlantic. However, the models tend to underestimate the number of blocking events compared to the MERRA and NCEP/DOE reanalyses, especially in western Siberia. Models also reproduced the associated heat waves in terms of the shift in the probability distribution function of near-surface temperature. Seven out of eight models used in this study show that the frequency of atmospheric blocking over Europe will likely decrease in a warmer climate, but slightly increase over western Siberia. This spatial pattern resembles the blocking in the summer of 2010, indicating the possibility of more frequent occurrences of heat waves in western Siberia.
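
    A common way to detect the blocking events discussed above is a Tibaldi-Molteni-style geopotential height gradient index; the sketch below applies that standard criterion to a made-up 500 hPa height profile at a single longitude. The thresholds are the usual literature values, and the CMIP5 analysis itself is of course far more involved than this illustration.

```python
# Tibaldi/Molteni-style instantaneous blocking check at one longitude
# (synthetic Z500 values in metres, keyed by latitude in degrees north).
def blocked(z500):
    for d in (-4, 0, 4):                       # small latitude shifts of the test points
        phi_n, phi_0, phi_s = 80 + d, 60 + d, 40 + d
        ghgs = (z500[phi_0] - z500[phi_s]) / (phi_0 - phi_s)
        ghgn = (z500[phi_n] - z500[phi_0]) / (phi_n - phi_0)
        if ghgs > 0 and ghgn < -10:            # m per degree latitude
            return True
    return False

# Made-up profile with a high-latitude ridge (reversed meridional gradient).
z = {36: 5760, 40: 5750, 44: 5740, 56: 5700, 60: 5780, 64: 5790,
     76: 5640, 80: 5600, 84: 5560}
print(blocked(z))   # True for this synthetic profile
```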

  7. Modeling and Optimization of Compressive Strength of Hollow Sandcrete Block with Rice Husk Ash Admixture

    Directory of Open Access Journals (Sweden)

    2016-11-01

    Full Text Available The paper presents the report of an investigation into the model development and optimization of the compressive strength of 55/45 to 70/30 cement/Rice Husk Ash (RHA) hollow sandcrete blocks. The low cost and local availability of RHA, a pozzolanic material, make it ripe for exploitation. The study applies Scheffe's optimization approach to obtain a mathematical model of the form f(x1, x2, x3, x4), where the xi are the proportions of the concrete components, viz: cement, RHA, sand and water. Scheffe's experimental design techniques are followed to mould various hollow block samples measuring 450 mm x 225 mm x 150 mm, which are tested for 28-day strength. The task involved experimentation and design, applying the second-order polynomial characterization process of the simplex lattice method. The model adequacy is checked using the control factors. Finally, software is prepared to handle the design computation process, taking the desired property of the mix and generating the optimal mix ratios. Conversely, any mix ratio can be specified and the attainable strength obtained.
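
    The Scheffe second-order mixture polynomial referred to above has the form y = sum(bi*xi) + sum(bij*xi*xj) over the component proportions. The snippet below fits such a quadratic mixture model by least squares to made-up strength data for the four components (cement, RHA, sand, water); the mix proportions and responses are purely hypothetical.

```python
# Fit a Scheffe quadratic mixture model y = sum(bi*xi) + sum(bij*xi*xj)
# to hypothetical 28-day strength data for 4 components (proportions sum to 1).
import itertools
import numpy as np

X = np.array([                      # cement, RHA, sand, water proportions (made up)
    [0.28, 0.05, 0.55, 0.12], [0.25, 0.08, 0.55, 0.12],
    [0.30, 0.04, 0.54, 0.12], [0.26, 0.07, 0.55, 0.12],
    [0.27, 0.06, 0.55, 0.12], [0.29, 0.05, 0.54, 0.12],
    [0.24, 0.09, 0.55, 0.12], [0.28, 0.06, 0.54, 0.12],
    [0.26, 0.05, 0.56, 0.13], [0.27, 0.07, 0.54, 0.12],
    [0.25, 0.06, 0.56, 0.13], [0.29, 0.06, 0.53, 0.12],
])
y = np.array([3.4, 2.9, 3.6, 3.0, 3.2, 3.5, 2.7, 3.3, 3.1, 3.1, 3.0, 3.5])  # N/mm^2

pairs = list(itertools.combinations(range(4), 2))
design = np.hstack([X] + [(X[:, i] * X[:, j])[:, None] for i, j in pairs])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

print("linear terms:", np.round(coef[:4], 2))
print("interaction terms:", dict(zip(pairs, np.round(coef[4:], 2))))
```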

  8. Sequential fragmentation of Pleistocene forests in an East Africa biodiversity hotspot: chameleons as a model to track forest history.

    Directory of Open Access Journals (Sweden)

    G John Measey

    Full Text Available The Eastern Arc Mountains (EAM) are an example of naturally fragmented tropical forests, which contain one of the highest known concentrations of endemic plants and vertebrates. Numerous paleo-climatic studies have not provided direct evidence for the ancient presence of Pleistocene forests, particularly in the regions in which savannah presently occurs. Knowledge of the last period when forests connected the EAM would provide a sound basis for hypothesis testing of vicariance and dispersal models of speciation. Dated phylogenies have revealed complex patterns throughout the EAM, so we investigated divergence times of forest fauna on four montane isolates in close proximity to determine whether forest break-up was most likely to have been simultaneous or sequential, using population genetics of a forest-restricted arboreal chameleon, Kinyongia boehmei. We used mitochondrial and nuclear genetic sequence data and mutation rates from a fossil-calibrated phylogeny to estimate divergence times between montane isolates using a coalescent approach. We found that chameleons on all mountains are most likely to have diverged sequentially within the Pleistocene, from 0.93-0.59 Ma (95% HPD 0.22-1.84 Ma). In addition, post-hoc tests on chameleons on the largest montane isolate suggest a population expansion ∼182 Ka. Sequential divergence is most likely to have occurred after the last of three wet periods within the arid Plio-Pleistocene era, but was not correlated with inter-montane distance. We speculate that forest connection persisted due to riparian corridors regardless of proximity, highlighting their importance in the region's historic dispersal events. The population expansion coincides with nearby volcanic activity, which may also explain the relative paucity of the Taita's endemic fauna. Our study shows that forest chameleons are an apposite group to track forest fragmentation, with the inference that forest extended between some EAM during the Pleistocene.

  9. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    The qualities and defects of sequential and multichannel X-ray spectrometers are evaluated. The multichannel X-ray spectrometer has a time-coherency advantage and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are very variable. In this last case, the sequential X-ray spectrometer again proves very useful. [fr]

  10. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    Science.gov (United States)

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This also holds for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with a low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
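
    For context, the Cumulative Prospect Theory valuation mentioned above combines a power value function with an inverse-S probability weighting function. The sketch below computes the CPT value and certainty equivalent of a simple two-outcome gain prospect using the Tversky and Kahneman (1992) parameter estimates; it is a generic illustration, not the authors' fitted model.

```python
# CPT value of a two-outcome gain prospect (x1 > x2 >= 0) with TK-1992 parameters.
alpha, gamma = 0.88, 0.61          # value-function curvature, weighting parameter

def v(x):                          # value function for gains
    return x ** alpha

def w(p):                          # inverse-S probability weighting (gains)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(x1, p1, x2):         # prospect: x1 with probability p1, else x2
    return w(p1) * v(x1) + (1 - w(p1)) * v(x2)

value = cpt_value(100.0, 0.5, 0.0)             # 50% chance of 100, else 0
certainty_equivalent = value ** (1 / alpha)    # invert the value function
print(round(value, 2), round(certainty_equivalent, 2))
```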

  11. Ultrasound-guided approach for axillary brachial plexus, femoral nerve, and sciatic nerve blocks in dogs.

    Science.gov (United States)

    Campoy, Luis; Bezuidenhout, Abraham J; Gleed, Robin D; Martin-Flores, Manuel; Raw, Robert M; Santare, Carrie L; Jay, Ariane R; Wang, Annie L

    2010-03-01

    To describe an ultrasound-guided technique and the anatomical basis for three clinically useful nerve blocks in dogs. Prospective experimental trial. Four hound-cross dogs aged 2 +/- 0 years (mean +/- SD) weighing 30 +/- 5 kg and four Beagles aged 2 +/- 0 years and weighing 8.5 +/- 0.5 kg. Axillary brachial plexus, femoral, and sciatic combined ultrasound/electrolocation-guided nerve blocks were performed sequentially and bilaterally using a lidocaine solution mixed with methylene blue. Sciatic nerve blocks were not performed in the hounds. After the blocks, the dogs were euthanatized and each relevant site dissected. Axillary brachial plexus block: Landmark blood vessels and the roots of the brachial plexus were identified by ultrasound in all eight dogs. Anatomical examination confirmed the relationship between the four ventral nerve roots (C6, C7, C8, and T1) and the axillary vessels. Three roots (C7, C8, and T1) were adequately stained bilaterally in all dogs. Femoral nerve block: Landmark blood vessels (femoral artery and femoral vein), the femoral and saphenous nerves and the medial portion of the rectus femoris muscle were identified by ultrasound in all dogs. Anatomical examination confirmed the relationship between the femoral vessels, femoral nerve, and the rectus femoris muscle. The femoral nerves were adequately stained bilaterally in all dogs. Sciatic nerve block: Ultrasound landmarks (semimembranosus muscle, the fascia of the biceps femoris muscle and the sciatic nerve) could be identified in all of the dogs. In the four Beagles, anatomical examination confirmed the relationship between the biceps femoris muscle, the semimembranosus muscle, and the sciatic nerve. In the Beagles, all but one of the sciatic nerves were stained adequately. Ultrasound-guided needle insertion is an accurate method for depositing local anesthetic for axillary brachial plexus, femoral, and sciatic nerve blocks.

  12. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poorly performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into their stepped-wedge cluster randomised trial designs.
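
    To make the underlying design concrete: in a stepped-wedge cluster randomised trial every cluster starts in the control condition and crosses over to the intervention at a randomly assigned step, so the cluster-by-period treatment matrix is a staircase of ones. A minimal sketch with arbitrary numbers of clusters and periods is given below.

```python
# Cluster-by-period treatment indicator matrix for a stepped-wedge design.
import numpy as np

clusters, steps = 8, 4                       # assumed trial size
periods = steps + 1                          # baseline period plus one per step
rng = np.random.default_rng(42)
crossover = rng.permutation(np.repeat(np.arange(1, steps + 1), clusters // steps))

X = np.zeros((clusters, periods), dtype=int)
for c, step in enumerate(crossover):
    X[c, step:] = 1                          # cluster c is treated from its step onward

print(X)   # each row switches from 0 to 1 and never switches back
```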

  13. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study examined for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as the aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to the control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little difference among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters, while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially, with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  15. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
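
    The Rao-Blackwellization idea described here (sample the discrete regime sequence with particles, handle the conditionally linear-Gaussian state exactly with a Kalman filter) can be shown in a stripped-down filter; the paper's actual contribution concerns smoothing and rejuvenation, which this sketch omits. The model, parameters and data below are all invented.

```python
# Rao-Blackwellized particle filter for a toy switching linear-Gaussian model:
# regime r_t in {0, 1} switches the observation noise; x_t is linear-Gaussian.
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 200                      # time steps, particles
a, q = 0.95, 0.1                     # state transition and process noise variance
r_obs_var = np.array([0.1, 2.0])     # regime-dependent observation noise variance
p_switch = 0.05                      # probability of changing regime per step

# Simulate data from the model (ground truth).
x, r, y = 0.0, 0, np.empty(T)
for t in range(T):
    r = r if rng.random() > p_switch else 1 - r
    x = a * x + np.sqrt(q) * rng.normal()
    y[t] = x + np.sqrt(r_obs_var[r]) * rng.normal()

# RBPF: each particle carries a regime plus a Kalman mean/variance for x.
reg = rng.integers(0, 2, N)
m = np.zeros(N)                      # Kalman means
P = np.ones(N)                       # Kalman variances
for t in range(T):
    switch = rng.random(N) < p_switch
    reg = np.where(switch, 1 - reg, reg)            # propagate regimes
    m_pred, P_pred = a * m, a * a * P + q           # Kalman prediction
    S = P_pred + r_obs_var[reg]                     # innovation variance per particle
    w = np.exp(-0.5 * (y[t] - m_pred) ** 2 / S) / np.sqrt(S)  # predictive likelihood
    w /= w.sum()
    K = P_pred / S                                  # Kalman gain
    m = m_pred + K * (y[t] - m_pred)                # Kalman update
    P = (1 - K) * P_pred
    idx = rng.choice(N, size=N, p=w)                # multinomial resampling
    reg, m, P = reg[idx], m[idx], P[idx]

print("filtered state estimate at final time:", m.mean())
```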

  16. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  17. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  18. A Comparison of Ultimate Loads from Fully and Sequentially Coupled Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Damiani, Rick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-11-14

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  19. Structures of PEP–PEO Block Copolymer Micelles: Effects of Changing Solvent and PEO Length and Comparison to a Thermodynamic Model

    DEFF Research Database (Denmark)

    Jensen, Grethe Vestergaard; Shi, Qing; Deen, G. Roshan

    2012-01-01

    Structures of poly(ethylene propylene)–poly(ethylene oxide) (PEP–PEO) block copolymer micelles were determined from small-angle X-ray scattering and static light scattering and compared to predictions from a thermodynamic model. Both the corona block length and the solvent water–ethanol ratio were changed, leading to a thorough test of this model. With increasing ethanol fraction, the PEP core–solvent interfacial tension decreases, and the solvent quality for PEO changes. The weight-average block masses were 5.0 kDa for PEP and 2.8–49 kDa for PEO. For the lowest PEO molar mass and samples in pure water (except for the highest PEO molar mass), the micelles were cylindrical; for other conditions they were spherical. The structural parameters can be reasonably well described by the thermodynamic model by Zhulina et al. [Macromolecules 2005, 38 (12), 5330–5351]; however, they have a stronger...

  20. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied, 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regards to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  1. Sequential Exposure of Bortezomib and Vorinostat is Synergistic in Multiple Myeloma Cells

    Science.gov (United States)

    Nanavati, Charvi; Mager, Donald E.

    2018-01-01

    Purpose To examine the combination of bortezomib and vorinostat in multiple myeloma cells (U266) and xenografts, and to assess the nature of their potential interactions with semi-mechanistic pharmacodynamic models and biomarkers. Methods U266 proliferation was examined for a range of bortezomib and vorinostat exposure times and concentrations (alone and in combination). A non-competitive interaction model was used with interaction parameters that reflect the nature of drug interactions after simultaneous and sequential exposures. p21 and cleaved PARP were measured using immunoblotting to assess critical biomarker dynamics. For xenografts, data were extracted from literature and modeled with a PK/PD model with an interaction parameter. Results Estimated model parameters for simultaneous in vitro and xenograft treatments suggested additive drug effects. The sequence of bortezomib preincubation for 24 hours, followed by vorinostat for 24 hours, resulted in an estimated interaction term significantly less than 1, suggesting synergistic effects. p21 and cleaved PARP were also up-regulated the most in this sequence. Conclusions Semi-mechanistic pharmacodynamic modeling suggests synergistic pharmacodynamic interactions for the sequential administration of bortezomib followed by vorinostat. Increased p21 and cleaved PARP expression can potentially explain mechanisms of their enhanced effects, which require further PK/PD systems analysis to suggest an optimal dosing regimen. PMID:28101809

  2. Use of sequential infiltration synthesis to improve the pattern transfer of PS-b-PLA DSA (Conference Presentation)

    Science.gov (United States)

    Evangelio Araujo, Laura; Fernández-Regúlez, Marta; Chevalier, Xavier; Nicolet, Célia; Cayrefourcq, Ian; Navarro, Christophe; Fleury, Guillaume; Pérez-Murano, Francesc

    2017-03-01

    Directed self-assembly of block copolymers is considered one of the candidates to fulfill the requirements of the next technological nodes [1,2]. Polymer domains are aligned by using a chemical and/or a topographical pre-pattern in which surfaces preferential to one of the two blocks, or neutral wetting areas, are created. In particular, polystyrene-block-poly(methyl methacrylate) (PS-b-PMMA) has been extensively studied during the last years, showing strong capabilities to define periodic nanostructures. However, the relatively low Flory-Huggins parameter and the resulting low segregation strength of PS-b-PMMA systems limit their achievable resolution to around 11 nm [3]. The application of block copolymers to sub-10 nm technologies requires the development of the new block copolymer generation known as high-χ block copolymers. Additionally, an important requirement for their integration into the lithography roadmap is the capability to selectively remove one of the two blocks. The etch contrast between the two domains is typically low due to their organic chemistry. In this sense, selective sequential infiltration synthesis by ALD into one of the blocks can be used in order to incorporate an inorganic material. The formed organic/inorganic blend better withstands the plasma etching used to remove the non-infiltrated organic block. In this contribution, we show the use of a high-χ polystyrene-b-poly(lactic acid) (PS-b-PLA) lamellar block copolymer for line/space applications. PS-b-PLA has a larger Flory-Huggins parameter (χ = 0.218 at room temperature [4]) compared with PS-b-PMMA, allowing a size reduction of the self-assembled domains. The method to control the orientation of the polymer domains is similar to the one typically used for PS-b-PMMA. Chemical contrast and the subsequent alignment of the polymer domains are achieved by the definition of a chemical pre-pattern on a random copolymer PS-r-PMMA (48% PS) (figure 1). The polymer brush is grafted on the substrate and then

  3. The use of Simulink block diagram to solve mathematical models and control equations

    Directory of Open Access Journals (Sweden)

    N.M. Ghasem

    2003-12-01

    Full Text Available In this paper, a Simulink block diagram is used to solve a model consisting of a set of ordinary differential and algebraic equations to control the temperature inside a simple stirred tank heater. The flexibility of the Simulink block diagram gives students a better understanding of control systems. Simulink also allows solution of mathematical models and easy visualization of the system variables. A polyethylene fluidized bed reactor is considered as an industrial example, and the effect of the proportional, integral and derivative control policy is presented for comparison.
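
    The same kind of stirred-tank-heater temperature-control model can be solved outside Simulink as well; as a rough Python equivalent (all plant and controller parameters assumed), the sketch below integrates the tank energy balance together with a PI controller acting on the heater duty.

```python
# Stirred tank heater with PI temperature control (assumed parameters),
# integrated with scipy instead of a Simulink block diagram.
import numpy as np
from scipy.integrate import solve_ivp

F, V = 0.001, 1.0           # m^3/s feed flow, m^3 tank volume
rho, cp = 1000.0, 4186.0    # water density (kg/m^3) and heat capacity (J/kg/K)
T_in, T_set = 20.0, 60.0    # inlet and setpoint temperatures (deg C)
Kc, tau_i = 20000.0, 200.0  # PI controller gain (W/K) and integral time (s)

def rhs(t, z):
    T, integral = z
    error = T_set - T
    Q = max(Kc * (error + integral / tau_i), 0.0)        # heater duty, W (no cooling)
    dTdt = (F / V) * (T_in - T) + Q / (rho * cp * V)     # tank energy balance
    return [dTdt, error]                                 # second state integrates the error

sol = solve_ivp(rhs, (0.0, 2000.0), [20.0, 0.0], max_step=1.0)
print(f"temperature after 2000 s: {sol.y[0, -1]:.1f} deg C")
```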

  4. Coastal protection using topological interlocking blocks

    Science.gov (United States)

    Pasternak, Elena; Dyskin, Arcady; Pattiaratchi, Charitha; Pelinovsky, Efim

    2013-04-01

    Coastal protection systems mainly rely on the self-weight of armour blocks to ensure their stability. We propose a system of interlocking armour blocks, which form plate-shaped assemblies. The shape and the position of the blocks are chosen in such a way as to impose kinematic constraints that prevent the blocks from being removed from the assembly. The topological interlocking shapes include simple convex blocks such as platonic solids, the most practical being tetrahedra, cubes and octahedra. Another class of topological interlocking blocks is the so-called osteomorphic blocks, which form plate-like assemblies tolerant to random block removal (almost 25% of blocks need to be removed for the assembly to lose integrity). Both classes require peripheral constraint, which can be provided either by the weight of the blocks or by post-tensioned internal cables. The interlocking assemblies provide increased stability because lifting one block involves lifting (and bending) the whole assembly. We model the effect of interlocking by introducing an equivalent additional self-weight of the armour blocks. This additional self-weight is proportional to the critical pressure needed to cause bending of the interlocking assembly when it loses stability. Using a beam approximation we find an equivalent stability coefficient for interlocking. It is found to be greater than the stability coefficient of a structure with similar blocks without interlocking. In the case when the peripheral constraint is provided by the weight of the blocks and for a slope angle of 45°, the effective stability coefficient for a structure of 100 blocks is 33% higher than that for a similar structure without interlocking. A further increase in the stability coefficient can be reached by a specially constructed peripheral constraint system, for instance by using post-tensioned cables.

  5. AUTOMATIC 3D BUILDING MODEL GENERATION FROM LIDAR AND IMAGE DATA USING SEQUENTIAL MINIMUM BOUNDING RECTANGLE

    Directory of Open Access Journals (Sweden)

    E. Kwak

    2012-07-01

    Full Text Available The Digital Building Model is an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of using single-source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources to obtain 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most used data processing strategies are data-driven and model-driven ones. Theoretically one can model any shape of building using data-driven approaches, but practically this leaves the question of how to impose constraints and set the rules during the generation process. Due to the complexity of implementing data-driven approaches, model-based approaches draw the attention of researchers. However, the major drawback of model-based approaches is that the establishment of representative models involves a manual process that requires human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and sequentially adjusting them to combine the advantages of image and LiDAR datasets.
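
    The minimum bounding rectangle step at the heart of the approach can be sketched independently of the full pipeline: rotate the footprint's convex hull through each hull-edge orientation, take the axis-aligned box in that frame, and keep the smallest-area candidate. The footprint coordinates below are arbitrary, and this is not the authors' implementation.

```python
# Minimum-area bounding rectangle of a 2-D point set via convex-hull edge rotation.
import numpy as np
from scipy.spatial import ConvexHull

def min_bounding_rectangle(points):
    hull = points[ConvexHull(points).vertices]
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    angles = np.arctan2(edges[:, 1], edges[:, 0])
    best_area, best_angle = np.inf, 0.0
    for theta in angles:                       # align each hull edge with the x-axis
        c, s = np.cos(-theta), np.sin(-theta)
        rot = hull @ np.array([[c, -s], [s, c]]).T
        extent = rot.max(axis=0) - rot.min(axis=0)
        area = extent[0] * extent[1]
        if area < best_area:
            best_area, best_angle = area, theta
    return best_area, np.degrees(best_angle)

# Arbitrary rectangular building footprint, rotated by ~30 degrees and slightly noisy.
rng = np.random.default_rng(3)
base = np.array([[0, 0], [20, 0], [20, 8], [0, 8]], dtype=float)
ang = np.radians(30)
Rb = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
footprint = base @ Rb.T + rng.normal(scale=0.05, size=base.shape)

area, orientation = min_bounding_rectangle(footprint)
print(f"MBR area ≈ {area:.1f} m^2, orientation ≈ {orientation:.1f} deg")
```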

  6. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    Directory of Open Access Journals (Sweden)

    D. Herckenrath

    2013-10-01

    Full Text Available Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM and electrical resistivity tomography (ERT data. In a sequential hydrogeophysical inversion (SHI a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI. In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to a SHI these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden.

  7. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  8. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    Science.gov (United States)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of freedom in terms of structure, there is a possibility of deriving novel structures which would be quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Secondly, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.

  9. The micellization and dissociation transitions of thermo-, pH- and sugar-sensitive block copolymer investigated by laser light scattering

    Directory of Open Access Journals (Sweden)

    Y. C. Tang

    2012-08-01

    Full Text Available A triple-stimuli responsive polymer, poly(3-acrylamidophenylboronic acid)-b-poly(N-isopropylacrylamide) (PAAPBA-b-PNIPAM), has been synthesized by reversible addition-fragmentation chain transfer polymerization. The temperature-, pH-, and fructose-induced micellization and dissociation transitions of the block copolymer were investigated by a combination of static and dynamic laser light scattering. The PAAPBA-b-PNIPAM copolymer self-assembles into micelles with the PAAPBA block as the core and PNIPAM as the shell in low-pH aqueous solution at room temperature. Increasing the temperature causes the micelles to shrink due to the dehydration of PNIPAM segments at pH 6.2. After the elevation of solution pH from 6.2 to 10.0, the increase in the hydrophilicity of the PAAPBA block leads to an expulsion of unimers from the micelles. In addition, fructose addition further enhances the dissociation of micelles. Our experiments demonstrate that the micelle-to-unimer transition proceeds via the step-by-step sequential expulsion of individual chains.

  10. The impact of comorbid body dysmorphic disorder on the response to sequential pharmacological trials for obsessive-compulsive disorder.

    Science.gov (United States)

    Diniz, Juliana B; Costa, Daniel Lc; Cassab, Raony Cc; Pereira, Carlos Ab; Miguel, Euripedes C; Shavitt, Roseli G

    2014-06-01

    Our aim was to investigate the impact of comorbid body dysmorphic disorder (BDD) on the response to sequential pharmacological trials in adult obsessive-compulsive disorder (OCD) patients. The sequential trial initially involved fluoxetine monotherapy followed by one of three randomized, add-on strategies: placebo, clomipramine or quetiapine. We included 138 patients in the initial phase of fluoxetine, up to 80 mg or the maximum tolerated dosage, for 12 weeks. We invited 70 non-responders to participate in the add-on trial; as 54 accepted, we allocated 18 to each treatment group and followed them for an additional 12 weeks. To evaluate the combined effects of sex, age, age at onset, initial severity, type of augmentation and BDD on the response to sequential treatments, we constructed a model using generalized estimating equations (GEE). Of the 39 patients who completed the study (OCD-BDD, n = 13; OCD-non-BDD, n = 26), the OCD-BDD patients were less likely to be classified as responders than the OCD-non-BDD patients (Pearson Chi-Square = 4.4; p = 0.036). In the GEE model, BDD was not significantly associated with a worse response to sequential treatments (z-robust = 1.77; p = 0.07). The predictive potential of BDD regarding sequential treatment strategies for OCD did not survive when the analyses were controlled for other clinical characteristics. © The Author(s) 2013.
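
    A hedged sketch of fitting a binary response with generalized estimating equations in statsmodels is shown below. The file name, column names and formula are hypothetical stand-ins for the covariates named in the abstract, not the authors' exact model specification.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per assessment, with a
# binary responder indicator and the covariates mentioned in the abstract.
df = pd.read_csv("ocd_trial_long.csv")   # assumed columns: responder, sex, age,
                                         # age_onset, baseline_severity,
                                         # augmentation, bdd, patient_id

model = smf.gee(
    "responder ~ sex + age + age_onset + baseline_severity + C(augmentation) + bdd",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())   # robust (sandwich) z-statistics, analogous to the reported z-robust
```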

  11. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency ... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes. ... Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...
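
    To illustrate the kind of competitive-growth simulation described, the sketch below integrates four competing strains under sequential cycling versus combination dosing. All growth, kill and cycling parameters are made up for illustration and will not reproduce the paper's pharmacokinetic model or its quantitative results.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Strains: susceptible, tet-resistant, amp-resistant, doubly resistant.
# Growth rates include a small fitness cost per resistance; the kill table gives
# each drug's per-hour kill rate on each strain. All values are illustrative.
growth = np.array([0.70, 0.65, 0.65, 0.60])
kill = {"tet": np.array([0.9, 0.0, 0.9, 0.0]),
        "amp": np.array([0.9, 0.9, 0.0, 0.0])}
K = 1e9  # shared carrying capacity

def active_drugs(t, strategy, cycle_h=24.0):
    if strategy == "combination":
        return ("tet", "amp")
    return ("tet",) if int(t // cycle_h) % 2 == 0 else ("amp",)  # sequential cycling

def rhs(t, y, strategy):
    crowd = 1.0 - y.sum() / K
    total_kill = sum(kill[d] for d in active_drugs(t, strategy))
    return y * (growth * crowd - total_kill)

y0 = [1e8, 1e4, 1e4, 1e1]
for strategy in ("sequential", "combination"):
    sol = solve_ivp(rhs, (0.0, 240.0), y0, args=(strategy,), max_step=0.5)
    print(strategy, "double-resistant fraction:", sol.y[3, -1] / sol.y[:, -1].sum())
```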

  12. Cutaneous Sensory Block Area, Muscle-Relaxing Effect, and Block Duration of the Transversus Abdominis Plane Block

    DEFF Research Database (Denmark)

    Støving, Kion; Rothe, Christian; Rosenstock, Charlotte V

    2015-01-01

    BACKGROUND AND OBJECTIVES: The transversus abdominis plane (TAP) block is a widely used nerve block. However, basic block characteristics are poorly described. The purpose of this study was to assess the cutaneous sensory block area, muscle-relaxing effect, and block duration. METHODS: Sixteen healthy volunteers were randomized to receive an ultrasound-guided unilateral TAP block with 20 mL 7.5 mg/mL ropivacaine and placebo on the contralateral side. Measurements were performed at baseline and 90 minutes after performing the block. The cutaneous sensory block area was mapped and separated into a medial and lateral part by a vertical line through the anterior superior iliac spine. We measured muscle thickness of the 3 lateral abdominal muscle layers with ultrasound in the relaxed state and during maximal voluntary muscle contraction. The volunteers reported the duration of the sensory block...

  13. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have ... the sensitive fraction of the commensal flora. Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were ... (how frequently antibiotics are alternated in a sequential treatment) of the two drugs was dependent upon the order in which the two drugs were used. Conclusion: Sequential treatment was more effective in preventing the growth of resistant strains when compared to the combination treatment. The cycling...

  14. Tectonic drivers of the Wrangell block: Insights on fore-arc sliver processes from 3-D geodynamic models of Alaska

    Science.gov (United States)

    Haynie, K. L.; Jadamec, M. A.

    2017-07-01

    Intracontinental shear zones can play a key role in understanding how plate convergence is manifested in the upper plate in regions of oblique subduction. However, the relative role of the driving forces from the subducting plate and the resisting force from within intracontinental shear zones is not well understood. Results from high-resolution, geographically referenced, instantaneous 3-D geodynamic models of flat slab subduction at the oblique convergent margin of Alaska are presented. These models investigate how viscosity and length of the Denali fault intracontinental shear zone as well as coupling along the plate boundary interface modulate motion of the Wrangell block fore-arc sliver and slip across the Denali fault. Models with a weak Denali fault (10^17 Pa s) and strong plate coupling (10^21 Pa s) were found to produce the fastest motions of the Wrangell block (~10 mm/yr). The 3-D models predict along-strike variation in motion along the Denali fault, changing from dextral strike-slip motion in the eastern segment to oblique convergence toward the fault apex. Models further show that the flat slab drives oblique motion of the Wrangell block and contributes to 20% (models with a short fault) and 28% (models with a long fault) of the observed Quaternary slip rates along the Denali fault. The 3-D models provide insight into the general processes of fore-arc sliver mechanics and also offer a 3-D framework for interpreting hazards in regions of flat slab subduction.

  15. Investigation of local load effect on damping characteristics of synchronous generator using transfer-function block-diagram model

    Directory of Open Access Journals (Sweden)

    Pichai Aree

    2005-07-01

    Full Text Available The transfer-function block-diagram model of the single-machine infinite-bus power system has been a popular analytical tool amongst power engineers for explaining and assessing synchronous generator dynamic behaviors. In previous studies, the effects of local load together with the damper circuit on generator damping have not been addressed, because neither of them was integrated into this model. Since the model only accounts for the generator main field circuit, it may not always yield a realistic damping assessment due to the lack of damper circuit representation. This paper presents an extended transfer-function block-diagram model, which includes one of the q-axis damper circuits as well as the local load. This allows a more realistic investigation of the local load effect on the generator damping. The extended model is applied to assess the generator dynamic performance. The results show that the damping power components, mostly derived from the q-axis damper and the field circuits, can be improved according to the local load. The frequency response method is employed to carry out the fundamental analysis.

  16. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
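
    C-quence itself is a GUI application; as a rough illustration of the underlying idea, the few lines of Python below count how often a user-specified categorical pattern occurs in a coded interaction sequence. The event codes and pattern are invented for the example.

```python
def count_pattern(sequence, pattern):
    """Count (possibly overlapping) occurrences of a categorical pattern."""
    n, m = len(sequence), len(pattern)
    return sum(sequence[i:i + m] == pattern for i in range(n - m + 1))

# Hypothetical turn-by-turn codes from a face-to-face interaction.
codes = ["gaze", "smile", "speak", "gaze", "smile", "nod", "gaze", "smile", "speak"]
pattern = ["gaze", "smile", "speak"]
rate = count_pattern(codes, pattern) / len(codes)
print(count_pattern(codes, pattern), rate)   # 2 occurrences, rate per coded event
```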

  17. 31 CFR 594.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY GLOBAL TERRORISM SANCTIONS REGULATIONS General Definitions § 594.301 Blocked account; blocked property. The terms blocked account and...

  18. Confirmatory Analysis of Simultaneous, Sequential, and Achievement Factors on the K-ABC at 11 Age Levels Ranging from 2 1/2 to 12 1/2 years.

    Science.gov (United States)

    Willson, Victor L.; And Others

    1985-01-01

    Presents results of confirmatory factor analysis of the Kaufman Assessment Battery for Children, which is based on the underlying theoretical model of sequential, simultaneous, and achievement factors. Found support for the two-factor, simultaneous and sequential processing model. (MCF)

  19. Measuring sustainability by Energy Efficiency Analysis for Korean Power Companies: A Sequential Slacks-Based Efficiency Measure

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2014-03-01

    Full Text Available Improving energy efficiency has been widely regarded as one of the most cost-effective ways to improve sustainability and mitigate climate change. This paper presents a sequential slack-based efficiency measure (SSBM) application to model total-factor energy efficiency with undesirable outputs. This approach simultaneously takes into account the sequential environmental technology, total input slacks, and undesirable outputs for energy efficiency analysis. We conduct an empirical analysis of energy efficiency incorporating greenhouse gas emissions of Korean power companies during 2007–2011. The results indicate that most of the power companies are not performing at high energy efficiency. Sequential technology has a significant effect on the energy efficiency measurements. Some policy suggestions based on the empirical results are also presented.

  20. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
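
    A minimal bootstrap sequential importance sampling/resampling (particle) filter for a scalar state-space model is sketched below. The exponential-growth state equation and Poisson counts are stand-ins, the parameter is treated as known, and the kernel smoothing of parameters described in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate counts from a simple stochastic exponential-growth population model.
T, lam, sigma = 25, 1.03, 0.05
true_n = np.empty(T); true_n[0] = 200.0
for t in range(1, T):
    true_n[t] = true_n[t - 1] * lam * np.exp(rng.normal(0.0, sigma))
counts = rng.poisson(true_n)            # observation process: Poisson counts

# Bootstrap SISR for the latent states, with the growth parameter held fixed.
P = 5000
particles = rng.uniform(100.0, 400.0, P)
estimates = []
for t in range(T):
    if t > 0:   # propagate particles through the state equation
        particles = particles * lam * np.exp(rng.normal(0.0, sigma, P))
    # importance weights from the Poisson observation density (up to a constant)
    logw = counts[t] * np.log(particles) - particles
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimates.append(np.sum(w * particles))
    # multinomial resampling to combat weight degeneracy
    particles = particles[rng.choice(P, P, p=w)]

print(np.round(estimates[-5:], 1), np.round(true_n[-5:], 1))
```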

  1. 3-D crustal-scale gravity model of the San Rafael Block and Payenia volcanic province in Mendoza, Argentina

    Directory of Open Access Journals (Sweden)

    Daniel Richarte

    2018-01-01

    Based on gravimetric and magnetic data, together with isostatic and elastic thickness analyses, we modeled the crustal structure of the area. The information obtained has allowed us to understand the crust where the SRB and the Payenia volcanic province are located. Bouguer anomalies indicate that the SRB presents higher densities to the north of Cerro Nevado, and Moho calculations suggest depths for this block of between 40 and 50 km. Elastic thickness determinations indicate that the crust supporting the San Rafael Block has values of approximately 10 km, which is enough to support the block loading. However, in the Payenia region, elastic thickness values are close to zero due to the regional temperature increase.

  2. Comparison of periodontal ligament injection and inferior alveolar nerve block in mandibular primary molars pulpotomy: a randomized control trial.

    Science.gov (United States)

    Haghgoo, Roza; Taleghani, Ferial

    2015-05-01

    Inferior alveolar nerve block is a common technique for anesthesia of the primary mandibular molars. A number of disadvantages have been shown to be associated with this technique. Periodontal ligament (PDL) injection could be considered as an alternative to inferior alveolar nerve block. The aim of this study was to evaluate the effectiveness of PDL injection for anesthesia in primary molar pulpotomy compared with mandibular block. This study was performed using a sequential double-blind randomized trial design. Eighty children aged 3-7 years who required pulpotomy in symmetrical mandibular primary molars were selected. The teeth of these children were anesthetized with periodontal injection on one side of the mandible and block anesthesia on the other. Pulpotomy was performed on each patient during the same appointment. Signs of discomfort, including hand and body tension, eye movement, verbal complaint, and crying (SEM scale), were evaluated by a dental assistant who was blinded to the treatment allocation of the patients. Finally, the data were analyzed using the exact Fisher test and Pearson Chi-squared exact test. Success rates were 88.75% and 91.25% in the PDL injection and nerve block groups, respectively. There was no statistically significant difference between the two techniques (P = 0.250). Results showed that PDL injection can be used as an alternative to nerve block in pulpotomy of the mandibular primary molars.
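
    To illustrate the reported comparison, the sketch below runs an exact test on a 2x2 table. The counts are back-calculated from the reported success rates (88.75% and 91.25% of 80 teeth each, read as 71 and 73 successes) and are therefore an assumption; the trial's paired split-mouth design also means the published exact p-value need not match this unpaired sketch.

```python
from scipy.stats import fisher_exact

# Success/failure counts assumed from the reported rates: 71/80 for the PDL
# injection side and 73/80 for the nerve block side.
table = [[71, 80 - 71],
         [73, 80 - 73]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)   # unpaired 2x2 comparison of the two success rates
```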

  3. Examining Age-Related Movement Representations for Sequential (Fine-Motor) Finger Movements

    Science.gov (United States)

    Gabbard, Carl; Cacola, Priscila; Bobbio, Tatiana

    2011-01-01

    Theory suggests that imagined and executed movement planning relies on internal models for action. Using a chronometry paradigm to compare the movement duration of imagined and executed movements, we tested children aged 7-11 years and adults on their ability to perform sequential finger movements. Underscoring this tactic was our desire to gain a…

  4. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that the regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, where infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Ground reaction curve based upon block theory

    International Nuclear Information System (INIS)

    Yow, J.L. Jr.; Goodman, R.E.

    1985-09-01

    Discontinuities in a rock mass can intersect an excavation surface to form discrete blocks (keyblocks) which can be unstable. Once a potentially unstable block is identified, the forces affecting it can be calculated to assess its stability. The normal and shear stresses on each block face before displacement are calculated using elastic theory and are modified in a nonlinear way by discontinuity deformations as the keyblock displaces. The stresses are summed into resultant forces to evaluate block stability. Since the resultant forces change with displacement, successive increments of block movement are examined to see whether the block ultimately becomes stable or fails. Two-dimensional (2D) and three-dimensional (3D) analytic models for the stability of simple pyramidal keyblocks were evaluated. Calculated stability is greater for 3D analyses than for 2D analyses. Calculated keyblock stability increases with larger in situ stress magnitudes, larger lateral stress ratios, and larger shear strengths. Discontinuity stiffness controls block displacement more strongly than it does stability itself. Large keyblocks are less stable than small ones, and stability increases as blocks become more slender.
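
    As a much simplified illustration of the force balance involved, the sketch below checks a single-plane sliding keyblock with Mohr-Coulomb strength. It is a one-face limit-equilibrium stand-in for the multi-face, displacement-dependent block-theory calculation in the report, and all numbers are invented.

```python
import numpy as np

def sliding_factor_of_safety(weight_kN, dip_deg, cohesion_kPa, friction_deg,
                             area_m2, clamping_stress_kPa=0.0):
    """Limit-equilibrium check for a keyblock sliding on one discontinuity plane."""
    dip = np.radians(dip_deg)
    driving = weight_kN * np.sin(dip)                              # shear force along plane
    normal = weight_kN * np.cos(dip) + clamping_stress_kPa * area_m2
    resisting = cohesion_kPa * area_m2 + normal * np.tan(np.radians(friction_deg))
    return resisting / driving

# Illustrative numbers only: 500 kN block on a 55-degree plane, 2 m^2 contact,
# 20 kPa cohesion, 35-degree friction angle, plus 50 kPa of in situ clamping stress.
print(round(sliding_factor_of_safety(500, 55, 20, 35, 2.0, 50.0), 2))
```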

  6. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  7. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  8. Guidelines for Vehicle Robbery Prevention using Remote Blocking Signals

    Directory of Open Access Journals (Sweden)

    Narong Sangwaranatee

    2016-01-01

    Full Text Available In this paper, a radio-signal remote sensing device was used to control vehicle door locking in a field-trial experiment. The switching of the "On" and "Off" remote signals used to control the vehicle doors was investigated, with the aim of protecting against blocking signals used to commit remote vehicle crime in vulnerable places. The results obtained show that signal blocking by another remote control could be achieved over 5 meters, 10 meters and 15 meters. The proposed models and test results show that Vehicle Brand A Model No. 1 could be blocked 83.33 percent of the time, Brand A Model No. 2 83.33 percent, Brand B Model No. 1 40 percent, Brand B Model No. 2 60 percent, Brand C Model No. 1 83.33 percent, and Brand C Model No. 2 83.33 percent. Remote controls for general vehicles use radio waves at frequencies of 315 and 433 MHz, and criminals can use interference signals at these frequencies to form blocking (jamming) signals so that the vehicle can be robbed.

  9. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

  10. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  11. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming, such as loops, branches or recursion, are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the programming language used, and in a generic way.

  12. Membrane fouling mechanism of biofilm-membrane bioreactor (BF-MBR): Pore blocking model and membrane cleaning.

    Science.gov (United States)

    Zheng, Yi; Zhang, Wenxiang; Tang, Bing; Ding, Jie; Zheng, Yi; Zhang, Zhien

    2018-02-01

    The biofilm membrane bioreactor (BF-MBR) is considered an important wastewater treatment technology that incorporates the advantages of both biofilm and MBR processes and can alleviate membrane fouling with respect to the conventional activated sludge MBR. To be efficient, however, it necessitates the establishment of proper methods for the assessment of membrane fouling. Four Hermia membrane blocking models were adopted to quantify and evaluate the membrane fouling of BF-MBR. The experiments were conducted under various operational conditions, including membrane types, agitation speeds and transmembrane pressure (TMP). Good agreement between the cake formation model and the experimental data was found, confirming the validity of the Hermia models for assessing the membrane fouling of BF-MBR and indicating that a cake layer deposits on the membrane. Moreover, the influences of membrane type, agitation speed and transmembrane pressure on the Hermia pore blocking coefficient of the cake layer were investigated. In addition, the permeability recovery after membrane cleaning at various operational conditions was studied. This work confirms that, unlike the conventional activated sludge MBR, BF-MBR possesses a low degree of membrane fouling and a higher membrane permeability recovery after cleaning. Copyright © 2017 Elsevier Ltd. All rights reserved.
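
    For constant-pressure filtration, the cake-formation case of the Hermia framework predicts that 1/J^2 grows linearly with time. The sketch below fits that form to synthetic flux data; it illustrates the kind of model-versus-data comparison described, not the authors' actual data or coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hermia cake-filtration model in linearized form for constant-pressure
# operation: 1/J^2 = 1/J0^2 + Kc * t. The data below are synthetic.
t = np.linspace(0, 120, 25)                         # min
J0_true, Kc_true = 60.0, 4e-5                       # flux in L/(m^2 h), model coefficient
J = 1.0 / np.sqrt(1.0 / J0_true**2 + Kc_true * t)
J *= 1.0 + 0.01 * np.random.default_rng(0).normal(size=t.size)   # measurement noise

def cake_model(t, J0, Kc):
    return 1.0 / np.sqrt(1.0 / J0**2 + Kc * t)

(J0_fit, Kc_fit), _ = curve_fit(cake_model, t, J, p0=(50.0, 1e-5))
r2 = 1.0 - np.sum((J - cake_model(t, J0_fit, Kc_fit))**2) / np.sum((J - J.mean())**2)
print(J0_fit, Kc_fit, r2)   # a high R^2 is the kind of agreement that supports cake-layer fouling
```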

  13. MODELING STRATEGIES FOR THE ANALYSIS OF EXPERIMENTS IN AUGMENTED BLOCK DESIGN IN CLONAL TESTS OF Eucalyptus spp.

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo Rodrigues Prado

    2013-08-01

    Full Text Available http://dx.doi.org/10.5902/1980509810546 The objective of this work was to compare analysis strategies for experiments with a large number of clones and a reduced number of seedlings to be evaluated. Data on girth at breast height from two seasons of evaluation, 30 and 90 months, from a clonal test of Eucalyptus were analyzed in three locations. The experiments were carried out in the augmented block design with 400 regular clones distributed in 20 blocks and with four common clones (controls). Each plot consisted of five plants spaced 3 x 3 meters. Individual statistical analyses were carried out by season and location, a combined analysis by location at each season, and a combined analysis involving the three locations and the two seasons. Each analysis was carried out according to two models: augmented design (AD) and one-way classification (OWC). The variance components, the heritability, Spearman's rank correlation and the coincidence indexes in the clone selection under the two models were estimated. It was found that the augmented block design and the one-way classification provide similar results in Eucalyptus clone evaluation. The coincidence indexes between the two models in the clone selection were, in general, high, showing values of 100% in the combined analyses by location at 90 months. The Spearman's rank…

  14. ["Habitual" left branch block alternating with 2 "disguised" bracnch block].

    Science.gov (United States)

    Lévy, S; Jullien, G; Mathieu, P; Mostefa, S; Gérard, R

    1976-10-01

    Two cases of alternating left bundle branch block and "masquerading block" (with left bundle branch morphology in the standard leads and right bundle branch block morphology in the precordial leads) were studied by serial tracings and His bundle electrocardiography. In case 1, the "masquerading" block was associated with a first-degree AV block related to a prolongation of the HV interval. This case is, to our knowledge, the first case of alternating bundle branch block in which His bundle activity was recorded in man. In case 2, the patient had atrial fibrillation, and His bundle recordings were performed while different degrees of left bundle branch block were present. The mechanism of the alternation and the concept of "masquerading" block are discussed. It is suggested that this type of block represents a right bundle branch block associated with severe lesions of the "left system".

  15. Programs for the calculi of blocks permeabilities

    International Nuclear Information System (INIS)

    Gomez Hernandez, J.J.; Sovero Sovero, H.F.

    1993-01-01

    This report concerns the stochastic analysis of radionuclide transport. Block permeability values are necessary to build a numerical model of the flow and transport problems in ground soils. The determination of block values as a function of grid values is the objective of this program.
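
    As a rough illustration of deriving a block permeability from grid-cell values, the sketch below computes the classical arithmetic, geometric and harmonic averages, which bracket the true effective value. It is generic upscaling arithmetic with invented data, not the program described in the report.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical log-normally distributed cell permeabilities inside one block (mD).
cells = np.exp(rng.normal(np.log(100.0), 1.0, size=(10, 10, 5)))

k_arith = cells.mean()                       # upper bound (flow parallel to layers)
k_harm = 1.0 / (1.0 / cells).mean()          # lower bound (flow across layers)
k_geom = np.exp(np.log(cells).mean())        # common heuristic for random media

print(f"arithmetic {k_arith:.1f}  geometric {k_geom:.1f}  harmonic {k_harm:.1f} mD")
```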

  16. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics vary from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison; whether or not data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions for when similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)

  17. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio test (SPRT) to early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied to both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied to mean values. (author).
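
    For reference, a minimal sketch of the classical Wald SPRT for the mean of a Gaussian signal with known variance is given below; the special one-sided parametrization proposed by the authors (and the Chien-Adams modification) is not reproduced here.

```python
import numpy as np

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Classical Wald SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.
    Returns ('H0' | 'H1' | 'continue', number of samples used)."""
    upper = np.log((1.0 - beta) / alpha)      # accept H1 when LLR >= upper
    lower = np.log(beta / (1.0 - alpha))      # accept H0 when LLR <= lower
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu0 + mu1))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)

rng = np.random.default_rng(0)
print(sprt_gaussian_mean(rng.normal(0.4, 1.0, 500), mu0=0.0, mu1=0.5, sigma=1.0))
```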

  18. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve the problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation in such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme that does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. As compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. Such inherent nature of the EnKF suggests an operational merit as a satellite data assimilation system, due to the limitation of bias correction methods currently available.
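
    To illustrate what a sequential (time-evolving) ensemble update looks like, the sketch below implements one stochastic EnKF analysis step with perturbed observations on a toy soil-moisture state vector. Dimensions, covariances and the observation operator are placeholders, not the study's land-surface model; by contrast, an EnOI scheme would compute the same gain from a fixed, pre-computed ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, obs, H, obs_err_std):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_members = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    P_hh = HXp @ HXp.T / (n_members - 1) + np.diag(np.full(n_obs, obs_err_std**2))
    P_xh = X @ HXp.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)                        # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (n_obs, n_members))
    return ensemble + K @ (perturbed - HX)

# Toy example: three soil-moisture layers, only the surface layer observed.
ens = rng.normal(0.20, 0.03, size=(3, 50))                # prior ensemble
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_update(ens, obs=np.array([0.28]), H=H, obs_err_std=0.02)
print(ens.mean(axis=1).round(3), analysis.mean(axis=1).round(3))
```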

  19. Seamount characteristics and mine-site model applied to exploration- and mining-lease-block selection for cobalt-rich ferromanganese crusts

    Science.gov (United States)

    Hein, James R.; Conrad, Tracey A.; Dunham, Rachel E.

    2009-01-01

    Regulations are being developed through the International Seabed Authority (ISBA) for the exploration and mining of cobalt-rich ferromanganese crusts. This paper lays out geologic and geomorphologic criteria that can be used to determine the size and number of exploration and mine-site blocks that will be the focus of much discussion within the ISBA Council deliberations. The surface areas of 155 volcanic edifices in the central equatorial Pacific were measured and used to develop a mine-site model. The mine-site model considers areas above 2,500 m water depth as permissive, and narrows the general area available for exploration and mining to 20% of that permissive area. It is calculated that about eighteen 100 km2 exploration blocks, each composed of five 20 km2 contiguous sub-blocks, would be adequate to identify a 260 km2 20-year mine site; the mine site would be composed of thirteen of the 20 km2 sub-blocks. In this hypothetical example, the 260 km2 mine site would be spread over four volcanic edifices and comprise 3.7% of the permissive area of the four edifices and 0.01% of the total area of those four edifices. The eighteen 100 km2 exploration blocks would be selected from a limited geographic area. That confinement area is defined as having a long dimension of not more than 1,000 km and an area of not more than 300,000 km2.

  20. The sequential application of macroalgal biosorbents for the bioremediation of a complex industrial effluent.

    Directory of Open Access Journals (Sweden)

    Joel T Kidgell

    Full Text Available Fe-treated biochar and raw biochar produced from macroalgae are effective biosorbents of metalloids and metals, respectively. However, the treatment of complex effluents that contain both metalloid and metal contaminants presents a challenging scenario. We test a multiple-biosorbent approach to bioremediation using Fe-biochar and biochar to remediate both metalloids and metals from the effluent from a coal-fired power station. First, a model was derived from published data for this effluent to predict the biosorption of 21 elements by Fe-biochar and biochar. The modelled outputs were then used to design biosorption experiments using Fe-biochar and biochar, both simultaneously and in sequence, to treat effluent containing multiple contaminants in excess of water quality criteria. The waste water was produced during ash disposal at an Australian coal-fired power station. The application of Fe-biochar and biochar, either simultaneously or sequentially, resulted in a more comprehensive remediation of metalloids and metals compared to either biosorbent used individually. The most effective treatment was the sequential use of Fe-biochar to remove metalloids from the waste water, followed by biochar to remove metals. Al, Cd, Cr, Cu, Mn, Ni, Pb, Zn were reduced to the lowest concentration following the sequential application of the two biosorbents, and their final concentrations were predicted by the model. Overall, 17 of the 21 elements measured were remediated to, or below, the concentrations that were predicted by the model. Both metalloids and metals can be remediated from complex effluent using biosorbents with different characteristics but derived from a single feedstock. Furthermore, the extent of remediation can be predicted for similar effluents using additive models.

  1. The sequential application of macroalgal biosorbents for the bioremediation of a complex industrial effluent.

    Science.gov (United States)

    Kidgell, Joel T; de Nys, Rocky; Paul, Nicholas A; Roberts, David A

    2014-01-01

    Fe-treated biochar and raw biochar produced from macroalgae are effective biosorbents of metalloids and metals, respectively. However, the treatment of complex effluents that contain both metalloid and metal contaminants presents a challenging scenario. We test a multiple-biosorbent approach to bioremediation using Fe-biochar and biochar to remediate both metalloids and metals from the effluent from a coal-fired power station. First, a model was derived from published data for this effluent to predict the biosorption of 21 elements by Fe-biochar and biochar. The modelled outputs were then used to design biosorption experiments using Fe-biochar and biochar, both simultaneously and in sequence, to treat effluent containing multiple contaminants in excess of water quality criteria. The waste water was produced during ash disposal at an Australian coal-fired power station. The application of Fe-biochar and biochar, either simultaneously or sequentially, resulted in a more comprehensive remediation of metalloids and metals compared to either biosorbent used individually. The most effective treatment was the sequential use of Fe-biochar to remove metalloids from the waste water, followed by biochar to remove metals. Al, Cd, Cr, Cu, Mn, Ni, Pb, Zn were reduced to the lowest concentration following the sequential application of the two biosorbents, and their final concentrations were predicted by the model. Overall, 17 of the 21 elements measured were remediated to, or below, the concentrations that were predicted by the model. Both metalloids and metals can be remediated from complex effluent using biosorbents with different characteristics but derived from a single feedstock. Furthermore, the extent of remediation can be predicted for similar effluents using additive models.

  2. Superintegrability of d-dimensional conformal blocks

    International Nuclear Information System (INIS)

    Isachenkov, Mikhail

    2016-02-01

    We observe that conformal blocks of scalar 4-point functions in a d-dimensional conformal field theory can be mapped to eigenfunctions of a 2-particle hyperbolic Calogero-Sutherland Hamiltonian. The latter describes two coupled Poeschl-Teller particles. Their interaction, whose strength depends smoothly on the dimension d, is known to be superintegrable. Our observation enables us to exploit the rich mathematical literature on Calogero-Sutherland models in deriving various results for conformal field theory. These include an explicit construction of conformal blocks in terms of Heckman-Opdam hypergeometric functions and a remarkable duality that relates the blocks of theories in different dimensions.
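
    To make the statement concrete, the display below shows the generic one-variable hyperbolic Poeschl-Teller eigenvalue problem of the kind referred to. The couplings a and b are placeholders standing in for the combinations of conformal dimensions and of d fixed by the Casimir equation, so the precise form here is schematic rather than the paper's result.

```latex
% Schematic hyperbolic Poeschl-Teller eigenvalue problem; a and b are
% placeholders for couplings fixed by the conformal data and the dimension d.
\[
  \left( -\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}
         + \frac{a(a-1)}{\sinh^{2} x}
         - \frac{b(b+1)}{\cosh^{2} x} \right) \psi_{\lambda}(x)
  \;=\; \lambda^{2}\, \psi_{\lambda}(x)
\]
```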

  3. Superintegrability of d-dimensional conformal blocks

    Energy Technology Data Exchange (ETDEWEB)

    Isachenkov, Mikhail [Weizmann Institute of Science, Rehovot (Israel). Dept. of Particle Physics and Astronomy; Schomerus, Volker [DESY Theory Group, Hamburg (Germany)

    2016-02-15

    We observe that conformal blocks of scalar 4-point functions in a d-dimensional conformal field theory can mapped to eigenfunctions of a 2-particle hyperbolic Calogero-Sutherland Hamiltonian. The latter describes two coupled Poeschl-Teller particles. Their interaction, whose strength depends smoothly on the dimension d, is known to be superintegrable. Our observation enables us to exploit the rich mathematical literature on Calogero-Sutherland models in deriving various results for conformal field theory. These include an explicit construction of conformal blocks in terms of Heckman-Opdam hypergeometric functions and a remarkable duality that relates the blocks of theories in different dimensions.

  4. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

  5. E-Block: A Tangible Programming Tool with Graphical Blocks

    Directory of Open Access Journals (Sweden)

    Danli Wang

    2013-01-01

    Full Text Available This paper designs a tangible programming tool, E-Block, for children aged 5 to 9 to experience a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines the programming blocks with sensors as the input and enables children to write programs to complete tasks in the computer. The symbol on each programming block's surface is used to help children understand the function of the block. The sequence information is transferred to the computer by microcomputers and then translated into semantic information. The system applies wireless and infrared technologies and provides the user with feedback on both the screen and the programming blocks. Preliminary user studies using observation and user interview methods are presented for E-Block's prototype. The test results show that E-Block is attractive to children and easy to learn and use. The project also highlights potential advantages of using single chip microcomputer (SCM) technology to develop tangible programming tools for children.

  6. Final report of the TRUE Block Scale project. 3. Modelling of flow and transport

    Energy Technology Data Exchange (ETDEWEB)

    Poteri, Antti [VTT Processes, Helsinki (Finland); Billaux, Daniel [Itasca Consultants SA, Ecully (France); Dershowitz, William [Golder Associates Inc., Redmond, WA (United States); Gomez-Hernandez, J. Jaime [Univ. Politecnica de Valencia (Spain). Dept. of Hydrahulic and Environmental Engineering; Cvetkovic, Vladimir [Royal Inst. of Tech., Stockholm (Sweden). Div. of Water Resources Engineering; Hautojaervi, Aimo [Posiva Oy, Olkiluoto (Finland); Holton, David [Serco Assurance, Harwell (United Kingdom); Medina, Agustin [UPC, Barcelona (Spain); Winberg, Anders (ed.) [Conterra AB, Uppsala (Sweden)

    2002-12-01

    A series of tracer experiments were performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single hole and cross-hole) were performed. Based on the characterisation data a hydro-structural model of the investigated rock volume was constructed, including deterministic structures and a stochastic background fracture population, and their material properties. In addition, a generic microstructure conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was preceded by formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydro-structural model, the effects of heterogeneity and block scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of the number of different types of immobile zones

  7. Final report of the TRUE Block Scale project. 3. Modelling of flow and transport

    International Nuclear Information System (INIS)

    Poteri, Antti; Billaux, Daniel; Dershowitz, William; Gomez-Hernandez, J. Jaime; Holton, David; Medina, Agustin; Winberg, Anders

    2002-12-01

    A series of tracer experiments were performed as part of the TRUE Block Scale experiment over length scales ranging from 10 to 100 m. The in situ experimentation was preceded by a comprehensive iterative characterisation campaign - the results from one borehole were used to update descriptive models and provide the basis for continued characterisation. Apart from core drilling, various types of laboratory investigations, core logging, borehole TV imaging and various types of hydraulic tests (single hole and cross-hole) were performed. Based on the characterisation data a hydro-structural model of the investigated rock volume was constructed, including deterministic structures and a stochastic background fracture population, and their material properties. In addition, a generic microstructure conceptual model of the investigated structures was developed. Tracer tests with radioactive sorbing tracers performed in three flow paths were preceded by various pre-tests, including tracer dilution tests, which were used to select suitable configurations of tracer injection and pumping in the established borehole array. The in situ experimentation was preceded by formulation of basic questions and associated hypotheses to be addressed by the tracer tests and the subsequent evaluation. The hypotheses addressed the validity of the hydro-structural model, the effects of heterogeneity and block scale retention. Model predictions and subsequent evaluation modelling were performed using a wide variety of model concepts. These included stochastic continuum, discrete feature network and channel network models formulated in 3D, which also solved the flow problem. In addition, two 'single channel' approaches (Posiva Streamtube and LaSAR extended to the block scale) were employed. A common basis for transport was formulated. The difference between the approaches was found in how heterogeneity is accounted for, both in terms of the number of different types of immobile zones included

  8. Behavioral preference in sequential decision-making and its association with anxiety.

    Science.gov (United States)

    Zhang, Dandan; Gu, Ruolei

    2018-06-01

    In daily life, people often make consecutive decisions before the ultimate goal is reached (i.e., sequential decision-making). However, this kind of decision-making has been largely overlooked in the literature. The current study investigated whether behavioral preference would change during sequential decisions, and the neural processes underlying the potential changes. For this purpose, we revised the classic balloon analogue risk task and recorded the electroencephalograph (EEG) signals associated with each step of decision-making. Independent component analysis performed on EEG data revealed that four EEG components elicited by periodic feedback in the current step predicted participants' decisions (gamble vs. no gamble) in the next step. In order of time sequence, these components were: bilateral occipital alpha rhythm, bilateral frontal theta rhythm, middle frontal theta rhythm, and bilateral sensorimotor mu rhythm. According to the information flows between these EEG oscillations, we proposed a brain model that describes the temporal dynamics of sequential decision-making. Finally, we found that the tendency to gamble (as well as the power intensity of bilateral frontal theta rhythms) was sensitive to the individual level of trait anxiety in certain steps, which may help understand the role of emotion in decision-making. © 2018 Wiley Periodicals, Inc.

  9. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  10. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...

  11. Visual short-term memory for sequential arrays.

    Science.gov (United States)

    Kumar, Arjun; Jiang, Yuhong

    2005-04-01

    The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

  12. E-Block: A Tangible Programming Tool with Graphical Blocks

    OpenAIRE

    Danli Wang; Yang Zhang; Shengyong Chen

    2013-01-01

    This paper designs a tangible programming tool, E-Block, for children aged 5 to 9 to experience a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines the programming blocks with sensors as the input and enables children to write programs to complete tasks in the computer. The symbol on each programming block's surface is used to help children understand the function of the block. The sequence information is transfer...

  13. The target-to-foils shift in simultaneous and sequential lineups.

    Science.gov (United States)

    Clark, Steven E; Davey, Sherrie L

    2005-04-01

    A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.

  14. Introgression of a Block of Genome Under Infinitesimal Selection.

    Science.gov (United States)

    Sachdeva, Himani; Barton, Nicholas H

    2018-06-12

    Adaptive introgression is common in nature and can be driven by selection acting on multiple, linked genes. We explore the effects of polygenic selection on introgression under the infinitesimal model with linkage. This model assumes that the introgressing block has an effectively infinite number of loci, each with an infinitesimal effect on the trait under selection. The block is assumed to introgress under directional selection within a native population that is genetically homogeneous. We use individual-based simulations and a branching process approximation to compute various statistics of the introgressing block, and explore how these depend on parameters such as the map length and initial trait value associated with the introgressing block, the genetic variability along the block, and the strength of selection. Our results show that the introgression dynamics of a block under infinitesimal selection are qualitatively different from the dynamics of neutral introgression. We also find that in the long run, surviving descendant blocks are likely to have intermediate lengths, and clarify how their length is shaped by the interplay between linkage and infinitesimal selection. Our results suggest that it may be difficult to distinguish the long-term introgression of a block of genome with a single strongly selected locus from the introgression of a block with multiple, tightly linked and weakly selected loci. Copyright © 2018, Genetics.

  15. Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization

    KAUST Repository

    Peng, Chengbin

    2014-12-01

    Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges, respectively. When certain nodes are highly connected with each other, they form a cluster, which is called a community in our context. It is usually assumed that each node belongs to one community only, but evidence from biology and social networks reveals that communities often overlap with each other; in other words, one node can belong to multiple communities. In light of that, mixed membership stochastic block models (MMB) have been developed to model networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in and out connections and one probability matrix. When the probability of connections between nodes in different communities is significantly small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithm can detect overlapping communities whether or not the number of communities is known. Experiments show that our algorithm can achieve a better community detection performance than the traditional NMF algorithm. © 2014 IEEE.
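
    As a rough illustration of the NMF connection described in this record (and not the authors' constrained algorithm), the sketch below factorizes a toy symmetric adjacency matrix with an off-the-shelf NMF and reads soft community memberships off the factor rows. The graph, the number of communities and the 0.3 membership threshold are all invented example values.

      # Hedged sketch: overlapping-community detection via plain NMF on an
      # adjacency matrix (illustrative only; not the constrained MMB
      # inference algorithm from the record above).
      import numpy as np
      from sklearn.decomposition import NMF

      # Toy undirected graph: two triangles sharing node 2, so node 2 overlaps.
      edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
      n = 5
      A = np.zeros((n, n))
      for i, j in edges:
          A[i, j] = A[j, i] = 1.0

      k = 2  # assumed number of communities
      model = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
      W = model.fit_transform(A)                     # n x k membership-like factor
      memberships = W / (W.sum(axis=1, keepdims=True) + 1e-12)

      for node, m in enumerate(memberships):
          labels = np.where(m > 0.3)[0]              # arbitrary "belongs to" threshold
          print(f"node {node}: communities {labels.tolist()}, weights {np.round(m, 2)}")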

  16. Mobility of radionuclides based on sequential extraction of soils

    International Nuclear Information System (INIS)

    Salbu, B.; Oughton, D.H.; Lien, H.N.; Oestby, G.; Strand, P.

    1992-01-01

    Since 1989, core samples of soil and vegetation from semi-natural pastures have been collected at selected sites in Norway during the growing season. The activity concentrations in soil and vegetation, as well as the transfer coefficients, vary significantly between regions, within regions and even within sampling plot areas. In order to differentiate between mobile and inert fractions of radioactive and stable isotopes of Cs and Sr in soils, samples were extracted sequentially using agents with increasing dissolution power. The reproducibility of the sequential extraction technique is good and the data obtained seem most informative. As the distribution patterns for radioactive and stable isotopes of Cs and Sr are similar, a high degree of isotopic exchange is indicated. Based on the easily leachable fractions, mobility factors are calculated. In general, the mobility of 90Sr is higher than that of 137Cs. Mobility factors are not significantly influenced by seasonal variations, but a decrease in the mobile fraction in soil with time is indicated. Mobility factors should be considered useful for modelling purposes. (au)

  17. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces are capable of reflecting light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
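
    The record does not reproduce its equations here, so the sketch below uses the commonly quoted lossless equal-power rule R_k = 1/(N - k + 1) for the k-th of N splitters in series; that rule, and the five-splitter example, are assumptions for illustration and may differ from the exact equations in the paper.

      # Hedged sketch: reflectivities for a chain of N beam splitters that tap
      # equal power from one input beam, assuming lossless splitters and the
      # textbook rule R_k = 1/(N - k + 1).
      def equal_power_reflectivities(n_splitters):
          return [1.0 / (n_splitters - k + 1) for k in range(1, n_splitters + 1)]

      def tapped_powers(reflectivities, p_in=1.0):
          powers, remaining = [], p_in
          for r in reflectivities:
              powers.append(remaining * r)   # power reflected out at this splitter
              remaining *= (1.0 - r)         # power transmitted to the next one
          return powers

      R = equal_power_reflectivities(5)
      print([round(r, 3) for r in R])                 # [0.2, 0.25, 0.333, 0.5, 1.0]
      print([round(p, 3) for p in tapped_powers(R)])  # every tap receives 0.2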

  18. Basal ganglia and cortical networks for sequential ordering and rhythm of complex movements

    Directory of Open Access Journals (Sweden)

    Jeffery G. Bednark

    2015-07-01

    Full Text Available Voluntary actions require the concurrent engagement and coordinated control of complex temporal (e.g. rhythm) and ordinal motor processes. Using high-resolution functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA), we sought to determine the degree to which these complex motor processes are dissociable in basal ganglia and cortical networks. We employed three different finger-tapping tasks that differed in the demand on the sequential temporal rhythm or sequential ordering of submovements. Our results demonstrate that sequential rhythm and sequential order tasks were partially dissociable based on activation differences. The sequential rhythm task activated a widespread network centered around the SMA and basal-ganglia regions including the dorsomedial putamen and caudate nucleus, while the sequential order task preferentially activated a fronto-parietal network. There was also extensive overlap between sequential rhythm and sequential order tasks, with both tasks commonly activating bilateral premotor, supplementary motor, and superior/inferior parietal cortical regions, as well as regions of the caudate/putamen of the basal ganglia and the ventro-lateral thalamus. Importantly, within the cortical regions that were active for both complex movements, MVPA could accurately classify different patterns of activation for the sequential rhythm and sequential order tasks. In the basal ganglia, however, overlapping activation for the sequential rhythm and sequential order tasks, which was found in classic motor circuits of the putamen and ventro-lateral thalamus, could not be accurately differentiated by MVPA. Overall, our results highlight the convergent architecture of the motor system, where complex motor information that is spatially distributed in the cortex converges into a more compact representation in the basal ganglia.

  19. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

  20. A Bayesian Optimal Design for Sequential Accelerated Degradation Testing

    Directory of Open Access Journals (Sweden)

    Xiaoyang Li

    2017-07-01

    Full Text Available When optimizing an accelerated degradation testing (ADT) plan, the initial values of the unknown model parameters must be pre-specified. However, it is usually difficult to obtain their exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues; for example, the ADT implemented according to the optimal plan consumes too many testing resources, or too few accelerated degradation data are obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is conducted first, based on the initial prior information, to quickly generate degradation data. Then, the data collected under the higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the optimization process, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector's ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design can guarantee more stable and precise estimations of different reliability measures.

  1. Parametric Optimization Design of Brake Block Based on Reverse Engineering

    Directory of Open Access Journals (Sweden)

    Jin Hua-wei

    2017-01-01

    Full Text Available As one of the key parts of an automotive brake, the brake block has a direct impact on the safety and comfort of a car. The brake block of a disc brake is modeled with reverse parameterization using reverse engineering software, and the reconstructed model is then analyzed and optimized with CAE software. The scanned point cloud is processed in Geomagic Studio, the CAD model of the brake block is reconstructed with the parametric surface functions of the software, and the model is then analyzed and optimized in Workbench. The example shows that reconstructing the CAD model of parts with this reverse parameterization method is quick and significantly reduces the part redesign and development cycle.

  2. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O^+ + C^+ + S^+ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO^2+ or CS^2+, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS^3+ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  3. Non-perturbative topological strings and conformal blocks

    NARCIS (Netherlands)

    Cheng, M.C.N.; Dijkgraaf, R.; Vafa, C.

    2011-01-01

    We give a non-perturbative completion of a class of closed topological string theories in terms of building blocks of dual open strings. In the specific case where the open string is given by a matrix model these blocks correspond to a choice of integration contour. We then apply this definition to

  4. Morphological and physical characterization of poly(styrene-isobutylene-styrene) block copolymers and ionomers thereof

    Science.gov (United States)

    Baugh, Daniel Webster, III

    Poly(styrene-isobutylene-styrene) block copolymers made by living cationic polymerization using a difunctional initiator and the sequential monomer addition technique were analyzed using curve-resolution software in conjunction with high-resolution GPC. Fractional precipitation and selective solvent extraction were applied to a representative sample in order to confirm the identity of contaminating species. The latter were found to be low molecular weight polystyrene homopolymer, diblock copolymer, and higher molecular weight segmented block copolymers formed by intermolecular electrophilic aromatic substitution linking reactions occurring late in the polymerization of the styrene outer blocks. Solvent-cast films of poly(styrene-isobutylene-styrene) (PS-PIB-PS) block copolymers and block ionomers were analyzed using small-angle X-ray scattering (SAXS) and transmission electron microscopy (TEM). Four block copolymer samples with center block molecular weights of 52,000 g/mol and PS volume fractions (φsbPS) ranging from 0.17 to 0.31 were studied. All samples exhibited hexagonally packed cylinders of PS within the PIB matrix. Cylinder spacing was in the range 32 to 36 nm for most samples, while cylinder diameters varied from 14 to 21 nm. Porod analysis of the scattering data indicated the presence of isolated phase mixing and sharp phase boundaries. PS-PIB-PS block copolymers and ionomers therefrom were analyzed using dynamic mechanical analysis (DMA) and tensile testing. The study encompassed five block copolymer samples with similar PIB center blocks with molecular weights of approximately 52,000 g/mol and PS weight fractions ranging from 0.127 to 0.337. Ionomers were prepared from two of these materials by lightly sulfonating the PS outer blocks. Sulfonation levels varied from 1.7 to 4.7 mol % and the sodium and potassium neutralized forms were compared to the parent block copolymers. Dynamic mechanical analysis (DMA) of the block copolymer films indicated the existence

  5. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  6. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. According to the traits mentioned above, and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets using frequency concepts and search-space partition theory, and the second task is to construct frequent sequences using a depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves an excellent speedup factor and efficiency.
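
    The SPP algorithm itself is not reproduced in this record, so the sketch below only shows the serial baseline idea it is compared against: a brute-force frequent-sequence counter in the spirit of AprioriAll. The toy sequence database, support threshold and maximum pattern length are invented.

      # Hedged sketch: tiny brute-force frequent-sequence counting in the
      # spirit of AprioriAll (the serial baseline mentioned above), not the
      # parallel SPP algorithm itself.
      from itertools import permutations

      def is_subsequence(pattern, sequence):
          it = iter(sequence)
          return all(item in it for item in pattern)   # in-order containment test

      def frequent_sequences(db, min_support, max_len=3):
          items = sorted({x for seq in db for x in seq})
          frequent = {}
          for length in range(1, max_len + 1):
              for pattern in permutations(items, length):   # order-sensitive candidates
                  support = sum(is_subsequence(pattern, seq) for seq in db)
                  if support >= min_support:
                      frequent[pattern] = support
          return frequent

      db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"], ["a", "b", "c", "d"]]
      for pat, sup in sorted(frequent_sequences(db, min_support=3).items()):
          print(pat, sup)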

  7. Ricci time in the Lemaître-Tolman model and the block universe

    Science.gov (United States)

    Elmahalawy, Yasser; Hellaby, Charles; Ellis, George F. R.

    2015-10-01

    It is common to think of our universe according to the "block universe" concept, which says that spacetime consists of many "stacked" three-surfaces, labelled by some kind of proper time, S. Standard ideas do not distinguish past and future, but Ellis' "evolving block universe" tries to make a fundamental distinction. One proposal for this proper time is the proper time measured along the timelike Ricci eigenlines, starting from the big bang. This work investigates the shape of the "Ricci time" surfaces relative to the null surfaces. We use the Lemaître-Tolman metric as our inhomogeneous spacetime model, and we find the necessary and sufficient conditions for these constant-S surfaces to be spacelike or timelike. Furthermore, we look at the effect of strong gravity domains by determining the location of timelike S regions relative to apparent horizons. We find that constant Ricci time surfaces are always spacelike near the big bang, while at late times (near the crunch or the extreme far future), they are only timelike under special circumstances. At intermediate times, timelike S regions are common unless the variation of the bang time is restricted. The regions where these surfaces become timelike are often adjacent to apparent horizons, but always outside them, and in particular timelike S regions do not occur inside the horizons of black-hole-like models.

  8. Pharmacokinetic studies of neuromuscular blocking agents: Good Clinical Research Practice (GCRP)

    DEFF Research Database (Denmark)

    Viby-Mogensen, J.; Østergaard, D.; Donati, F.

    2000-01-01

    Good Clinical Research Practice (GCRP), neuromuscular blocking agents, pharmacokinetics, pharmacokinetic/pharmacodynamic modeling, population pharmacokinetics, statistics, study design

  9. Random sequential adsorption: from continuum to lattice and pre-patterned substrates

    International Nuclear Information System (INIS)

    Cadilhe, A; Araujo, N A M; Privman, Vladimir

    2007-01-01

    The random sequential adsorption (RSA) model has served as a paradigm for diverse phenomena in physical chemistry, as well as in other areas such as biology, ecology, and sociology. In the present work, we survey aspects of the RSA model with emphasis on the approach to and properties of jammed states obtained for large times in continuum deposition versus that on lattice substrates, and on pre-patterned surfaces. The latter model has been of recent interest in the context of efforts to use pre-patterning as a tool to improve self-assembly in micro- and nanoscale surface structure engineering
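
    As a concrete illustration of the lattice case surveyed in this record, the sketch below runs random sequential adsorption of dimers on a one-dimensional lattice until the jammed state; the lattice size and seed are arbitrary, and the comparison with the classical 1 - e^(-2) jamming coverage for lattice dimers is included only as a sanity check.

      # Hedged sketch: random sequential adsorption (RSA) of dimers on a 1-D
      # lattice, run to the jammed state where no further dimer fits.
      import random

      def rsa_dimers_1d(n_sites=10_000, seed=0):
          rng = random.Random(seed)
          occupied = [False] * n_sites
          # Attempting each left edge once, in random order, reaches the same
          # jammed state as endless random attempts (blocked sites never free up).
          left_edges = list(range(n_sites - 1))
          rng.shuffle(left_edges)
          for left in left_edges:
              if not occupied[left] and not occupied[left + 1]:
                  occupied[left] = occupied[left + 1] = True
          return sum(occupied) / n_sites

      coverage = rsa_dimers_1d()
      print(f"jammed coverage ~ {coverage:.4f}")   # near 1 - e^(-2) = 0.8647 for dimers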

  10. Normative personality trait development in adulthood: A 6-year cohort-sequential growth model.

    Science.gov (United States)

    Milojev, Petar; Sibley, Chris G

    2017-03-01

    The present study investigated patterns of normative change in personality traits across the adult life span (19 through 74 years of age). We examined change in extraversion, agreeableness, conscientiousness, neuroticism, openness to experience and honesty-humility using data from the first 6 annual waves of the New Zealand Attitudes and Values Study (N = 10,416; 61.1% female, average age = 49.46). We present a cohort-sequential latent growth model assessing patterns of mean-level change due to both aging and cohort effects. Extraversion decreased as people aged, with the most pronounced declines occurring in young adulthood, and then again in old age. Agreeableness, indexed with a measure focusing on empathy, decreased in young adulthood and remained relatively unchanged thereafter. Conscientiousness increased among young adults then leveled off and remained fairly consistent for the rest of the adult life span. Neuroticism and openness to experience decreased as people aged. However, the models suggest that these latter effects may also be partially due to cohort differences, as older people showed lower levels of neuroticism and openness to experience more generally. Honesty-humility showed a pronounced and consistent increase across the adult life span. These analyses of large-scale longitudinal national probability panel data indicate that different dimensions of personality follow distinct developmental processes throughout adulthood. Our findings also highlight the importance of young adulthood (up to about the age of 30) in personality trait development, as well as continuing change throughout the adult life span. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Additive Manufacturing of Molds for Fabrication of Insulated Concrete Block

    Energy Technology Data Exchange (ETDEWEB)

    Love, Lonnie J. [ORNL; Lloyd, Peter D. [ORNL

    2018-02-01

    ORNL worked with concrete block manufacturer, NRG Insulated Block, to demonstrate additive manufacturing of a multi-component block mold for its line of insulated blocks. Solid models of the mold parts were constructed from existing two-dimensional drawings and the parts were fabricated on a Stratasys Fortus 900 using ULTEM 9085. Block mold parts were delivered to NRG and installed on one of their fabrication lines. While form and fit were acceptable, the molds failed to function during NRG’s testing.

  12. Ship Block Transportation Scheduling Problem Based on Greedy Algorithm

    Directory of Open Access Journals (Sweden)

    Chong Wang

    2016-05-01

    Full Text Available Ship block transportation problems are crucial issues to address in reducing construction costs and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks must be transported within the planning horizon. This process leads to three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of these penalty times. First, the ship block transportation problem is presented with a generalization of the block transportation restrictions to multi-type transporters. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a heuristic based on a greedy algorithm is proposed to assign blocks to available transporters and to sequence the blocks for each transporter simultaneously. Finally, numerical experiments are used to validate the model, and the results show that the proposed algorithm is effective in realizing the efficient use of transporters in shipyards. Numerical simulation results demonstrate the promising application of the proposed method to efficiently improve the utilization of transporters and to reduce the cost of ship block logistics for shipyards.
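
    The paper's own heuristic is not reproduced in this record; the sketch below only illustrates the greedy idea on a toy one-dimensional yard, always picking the (transporter, block) pair with the lowest combined empty-travel, loaded-travel and workload-balance cost. All positions, distances and the balance weight are invented.

      # Hedged sketch: a toy greedy assignment of blocks to transporters,
      # not the paper's exact model or cost structure.
      def greedy_assign(blocks, transporters, dist, balance_weight=1.0):
          # blocks: {name: (pickup, dropoff)}, transporters: {name: start position}
          schedule = {t: [] for t in transporters}
          position = dict(transporters)
          busy_time = {t: 0.0 for t in transporters}   # accumulated travel per transporter
          remaining = set(blocks)
          while remaining:
              def cost(t, b):
                  empty = dist(position[t], blocks[b][0])    # empty travel to pickup
                  loaded = dist(*blocks[b])                  # loaded travel to drop-off
                  return empty + loaded + balance_weight * busy_time[t]
              t, b = min(((t, b) for t in transporters for b in remaining),
                         key=lambda tb: cost(*tb))
              schedule[t].append(b)
              busy_time[t] += dist(position[t], blocks[b][0]) + dist(*blocks[b])
              position[t] = blocks[b][1]                     # transporter ends at drop-off
              remaining.remove(b)
          return schedule

      dist = lambda a, b: abs(a - b)                         # 1-D yard for simplicity
      blocks = {"B1": (2, 9), "B2": (5, 1), "B3": (8, 4)}
      transporters = {"T1": 0, "T2": 6}
      print(greedy_assign(blocks, transporters, dist))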

  13. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  14. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  15. Poly(ferrocenylsilane)-block-Polylactide Block Copolymers

    NARCIS (Netherlands)

    Roerdink, M.; van Zanten, Thomas S.; Hempenius, Mark A.; Zhong, Zhiyuan; Feijen, Jan; Vancso, Gyula J.

    2007-01-01

    A PFS/PLA block copolymer was studied to probe the effect of strong surface interactions on pattern formation in PFS block copolymer thin films. Successful synthesis of PFS-b-PLA was demonstrated. Thin films of these polymers show phase separation to form PFS microdomains in a PLA matrix, and

  16. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita; Richtarik, Peter

    2018-01-01

    We study the problem of minimizing the sum of three convex functions: a differentiable, twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods, matching the best known bounds in all special cases. We establish ${\cal O}(1/\epsilon)$, ${\cal O}(1/\sqrt{\epsilon})$ and ${\cal O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  17. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita

    2018-02-12

    We study the problem of minimizing the sum of three convex functions: a differentiable, twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods, matching the best known bounds in all special cases. We establish ${\cal O}(1/\epsilon)$, ${\cal O}(1/\sqrt{\epsilon})$ and ${\cal O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  18. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    Science.gov (United States)

    Molnar, P.

    2012-04-01

    Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps, and the release of that sediment in avalanche-like pulses when steps collapse, lead to complex nonlinear threshold-driven dynamics in sediment transport, which have been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems can be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics of sediment input. The hypothesis is that this state, in which sediment-transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model, yet it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focussing on the magnification of intermittency and
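
    In the spirit of the cellular automaton described in this record (but with invented rules, thresholds and probabilities rather than the authors'), the sketch below injects one grain per time step into a 1-D chain of cells, blocks moving grains with some probability, and releases stored grains as a pulse once a cell's storage exceeds a collapse threshold, which is enough to produce intermittent, pulsed output from a steady input.

      # Hedged sketch: a minimal 1-D cellular-automaton caricature of grain
      # blocking and pulsed release in a step-pool channel.  All parameters
      # are invented; this is not the model used in the record above.
      import random

      def simulate(n_cells=50, steps=5000, collapse_threshold=4, p_block=0.6, seed=1):
          rng = random.Random(seed)
          storage = [0] * n_cells          # grains temporarily stored in each "step"
          output = []                      # grains leaving the downstream end per step
          for _ in range(steps):
              moving = 1                   # one grain injected at the top each step
              for i in range(n_cells):
                  if moving and rng.random() < p_block:
                      storage[i] += moving # moving grains get blocked, building the step
                      moving = 0
                  if storage[i] >= collapse_threshold:
                      moving += storage[i] # step collapses: stored grains leave as a pulse
                      storage[i] = 0
              output.append(moving)        # whatever is still moving exits the reach
          return output

      out = simulate()
      print("mean output:", sum(out) / len(out))
      print("largest pulse:", max(out),
            "fraction of zero-output steps:", sum(o == 0 for o in out) / len(out))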

  19. A spring-block analogy for the dynamics of stock indexes

    Science.gov (United States)

    Sándor, Bulcsú; Néda, Zoltán

    2015-06-01

    A spring-block chain placed on a running conveyor belt is considered for modeling stylized facts observed in the dynamics of stock indexes. Individual stocks are modeled by the blocks, while the stock-stock correlations are introduced via simple elastic forces acting in the springs. The dragging effect of the moving belt corresponds to the expected economic growth. The spring-block system produces collective behavior and avalanche like phenomena, similar to the ones observed in stock markets. An artificial index is defined for the spring-block chain, and its dynamics is compared with the one measured for the Dow Jones Industrial Average. For certain parameter regions the model reproduces qualitatively well the dynamics of the logarithmic index, the logarithmic returns, the distribution of the logarithmic returns, the avalanche-size distribution and the distribution of the investment horizons. A noticeable success of the model is that it is able to account for the gain-loss asymmetry observed in the inverse statistics. Our approach has mainly a pedagogical value, bridging between a complex socio-economic phenomena and a basic (mechanical) model in physics.
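
    The record's conveyor-belt model is not specified here in enough detail to reproduce, so the sketch below falls back on a generic stick-slip caricature of a spring-block chain (an Olami-Feder-Christensen-style stress-redistribution update) just to show how uniform driving plus local relaxation yields avalanches of many sizes; the chain length, the transfer coefficient alpha and the thresholds are invented.

      # Hedged sketch: a stick-slip caricature of a driven spring-block chain
      # (OFC-type update rule), not the specific conveyor-belt model above.
      import random

      def spring_block_avalanches(n_blocks=100, steps=20000, alpha=0.2, seed=3):
          rng = random.Random(seed)
          stress = [rng.random() for _ in range(n_blocks)]
          avalanche_sizes = []
          for _ in range(steps):
              # uniform "belt drive": raise every block to just below failure
              gap = 1.0 - max(stress)
              stress = [s + gap for s in stress]
              # relax: blocks above threshold slip and pass stress to neighbours
              size = 0
              unstable = [i for i, s in enumerate(stress) if s >= 1.0]
              while unstable:
                  i = unstable.pop()
                  size += 1
                  s = stress[i]
                  stress[i] = 0.0
                  for j in (i - 1, i + 1):
                      if 0 <= j < n_blocks:
                          stress[j] += alpha * s
                          if stress[j] >= 1.0 and j not in unstable:
                              unstable.append(j)
              avalanche_sizes.append(size)
          return avalanche_sizes

      sizes = spring_block_avalanches()
      print("mean avalanche size:", sum(sizes) / len(sizes))
      print("largest avalanche:", max(sizes))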

  20. High-performance serial block-face SEM of nonconductive biological samples enabled by focal gas injection-based charge compensation.

    Science.gov (United States)

    Deerinck, T J; Shone, T M; Bushong, E A; Ramachandra, R; Peltier, S T; Ellisman, M H

    2018-05-01

    A longstanding limitation of imaging with serial block-face scanning electron microscopy is specimen surface charging. This charging is largely due to the difficulties in making biological specimens and the resins in which they are embedded sufficiently conductive. Local accumulation of charge on the specimen surface can result in poor image quality and distortions. Even minor charging can lead to misalignments between sequential images of the block-face due to image jitter. Typically, variable-pressure SEM is used to reduce specimen charging, but this results in a significant reduction to spatial resolution, signal-to-noise ratio and overall image quality. Here we show the development and application of a simple system that effectively mitigates specimen charging by using focal gas injection of nitrogen over the sample block-face during imaging. A standard gas injection valve is paired with a precisely positioned but retractable application nozzle, which is mechanically coupled to the reciprocating action of the serial block-face ultramicrotome. This system enables the application of nitrogen gas precisely over the block-face during imaging while allowing the specimen chamber to be maintained under high vacuum to maximise achievable SEM image resolution. The action of the ultramicrotome drives the nozzle retraction, automatically moving it away from the specimen area during the cutting cycle of the knife. The device described was added to a Gatan 3View system with minimal modifications, allowing high-resolution block-face imaging of even the most charge prone of epoxy-embedded biological samples. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  1. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, over two seasons with the cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares, and a 5% security level was adopted for its elaboration. The type I and type II error rates used were 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the negative binomial distribution fit the data best up to 85 DAE (Phase I), and from then on the Poisson distribution gave the best fit (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
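
    The decision boundaries quoted in this record translate directly into code; the sketch below assumes the usual reading of such boundaries (cumulative count at or below S0: stop, no control needed; at or above S1: stop and apply control; otherwise keep sampling), and the example numbers are invented.

      # Sketch of the decision rule implied by the boundary equations quoted
      # above (Phase I up to 85 DAE, Phase II afterwards).  The interpretation
      # of the lower/upper boundaries is the standard one and is assumed here.
      def decision(cum_attacked_squares, n_samples, phase="I"):
          a, b = (5.1743, 0.5730) if phase == "I" else (4.2479, 0.5771)
          s0 = -a + b * n_samples   # lower boundary: stop, infestation below threshold
          s1 = a + b * n_samples    # upper boundary: stop, apply control
          if cum_attacked_squares <= s0:
              return "stop: infestation below threshold"
          if cum_attacked_squares >= s1:
              return "stop: apply control measures"
          return "continue sampling"

      # Example: 20 plants sampled, 14 with attacked squares, before 85 DAE.
      print(decision(cum_attacked_squares=14, n_samples=20, phase="I"))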

  2. Analysis of natural stone block pavements in urban shared areas

    Directory of Open Access Journals (Sweden)

    Pablo Zoccali

    2018-06-01

    Full Text Available This paper analysed and verified an existing stone block pavement in an urban shared area. Fatigue and rutting verifications were performed for the bound and unbound pavement materials, respectively, using analytical curves available in the literature. The commercial finite element (FE) software Abaqus® was used to calculate the response of the pavement when subjected to different loading, construction and geometrical configurations (i.e. type of analysis, shape and size of meshes, boundary conditions, and bonding contacts between the pavement layers). At the end of this study, a static model of a structure with hexahedral blocks having sides of 0.02 m, with fully bonded layers and restrained horizontal displacements on the model sides, was implemented to evaluate the maximum tensile stress induced in a block when the load is applied at its centre. This analysis highlighted the need for rigorous criteria for a correct design, in order to avoid inappropriate and expensive use of road materials. Keywords: Block pavement, Commercial vehicle loads, Finite element model, Hexagonal stone block, Pedestrian pavement, Shared area

  3. Mechanical behavior analysis of small-scale modeling of ceramic block masonry structures: geometries effect

    Directory of Open Access Journals (Sweden)

    E. Rizzatti

    Full Text Available This paper presents the experimental results of a research program on ceramic block masonry under compression. Four different block geometries were investigated. Two of them had circular hollows with different net areas. The third had two rectangular hollows, and the last had rectangular hollows and double central webs. The prisms and walls were built with two mortar types, 1:1:6 (I) and 1:0.5:4 (II) (proportions by volume of cement:lime:sand). One-third scale blocks were used to test blocks, prisms and walls in compression. It was possible to conclude that the block with double central webs gave the best compressive strength results, showing it to be more efficient. The mortar did not influence the compressive strength of the prisms and walls.

  4. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  5. Statistical analysis of dose heterogeneity in circulating blood: Implications for sequential methods of total body irradiation

    International Nuclear Information System (INIS)

    Molloy, Janelle A.

    2010-01-01

    Purpose: Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these ''sequential'' techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Methods: Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. Results: The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than ±10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times

  6. Statistical analysis of dose heterogeneity in circulating blood: implications for sequential methods of total body irradiation.

    Science.gov (United States)

    Molloy, Janelle A

    2010-11-01

    Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than +/- 10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times. However, the EUD was
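
    As a rough numerical companion to the analytic set-up described in the two records above, the sketch below integrates the dose to a single blood voxel that oscillates sinusoidally with a ~78 s circulation period while a 10 cm Gaussian dose cloud sweeps a 183 cm patient at constant speed, then compares voxels starting at different circulation phases. The Gaussian width, the oscillation amplitude and the unnormalized dose scale are invented, and the study's refinements (random dispersion, stochastic binomial/trinomial models) are not reproduced.

      # Hedged sketch of the analytic set-up described above; parameters
      # other than the 78 s period, 10 cm beam length and 183 cm patient
      # length are invented illustration values.
      import math

      def blood_voxel_dose(treatment_time=300.0, period=78.0, phase=0.0,
                           patient_length=183.0, beam_length=10.0, dt=0.05):
          sigma = beam_length / 4.0                # assumed Gaussian width
          amplitude = patient_length / 2.0         # voxel sweeps the whole body
          speed = patient_length / treatment_time  # beam sweep speed (cm/s)
          dose, t = 0.0, 0.0
          while t < treatment_time:
              beam_pos = speed * t
              voxel_pos = amplitude * (1 + math.sin(2 * math.pi * t / period + phase))
              dose += math.exp(-0.5 * ((voxel_pos - beam_pos) / sigma) ** 2) * dt
              t += dt
          return dose

      doses = [blood_voxel_dose(phase=p)
               for p in (k * 2 * math.pi / 36 for k in range(36))]
      mean = sum(doses) / len(doses)
      spread = (max(doses) - min(doses)) / mean
      print(f"relative dose spread across circulation phases: {spread:.1%}")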

  7. Extending the Simultaneous-Sequential Paradigm to Measure Perceptual Capacity for Features and Words

    Science.gov (United States)

    Scharff, Alec; Palmer, John; Moore, Cathleen M.

    2011-01-01

    In perception, divided attention refers to conditions in which multiple stimuli are relevant to an observer. To measure the effect of divided attention in terms of perceptual capacity, we introduce an extension of the simultaneous-sequential paradigm. The extension makes predictions for fixed-capacity models as well as for unlimited-capacity…

  8. A Bullet-Block Experiment that Explains the Chain Fountain

    Science.gov (United States)

    Pantaleone, J.; Smith, R.

    2018-05-01

    It is common in science for two phenomena to appear to be very different, but in fact follow from the same basic principles. Here we consider such a case, the connection between the chain fountain and a bullet-block collision experiment. When an upward moving bullet strikes a wooden block resting on a horizontal table, the block will rise to a higher height when the bullet strikes near the end of the block. This is because the quickly rotating block experiences an additional upward "reaction" force from its contact with the table. Such a reaction force also explains the chain fountain. When a chain falls from a pile in a container to the floor below, the chain rises up above the container. This rise occurs because the quickly rotating links in the container push off of the surface beneath them. We derive a model that accurately describes our measurements in the bullet-block experiment, and then use this same model to calculate an approximate expression for the distance the chain rises above the container. More extensive discussions of the chain fountain are available elsewhere.

  9. Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.

    Science.gov (United States)

    Carrière, Olivier; Hermand, Jean-Pierre

    2012-04-01

    Geoacoustic characterization of wide areas through inversion requires easily deployable configurations including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment and an acoustic propagation code in the measurement model. Data from MREA/BP07 sea trials are tested consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones 5-m spaced drifting over 0.7-1.6 km range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is more accurate but also more efficient. Due to frequency diversity, the processing of modulated signals produces a more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core logging P-wave velocity, and previous inversion results with fixed geometries.
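
    As a minimal illustration of the sequential-filtering formulation (not the geoacoustic measurement model, which requires an acoustic propagation code), the sketch below tracks a slowly drifting scalar parameter with a random-walk Kalman filter; the process and measurement noise levels and the synthetic "truth" are invented.

      # Hedged sketch: a scalar Kalman filter with a random-walk state model,
      # standing in for the sequential Bayesian filtering used above.
      import random

      def kalman_track(true_values, q=0.01, r=0.25, seed=0):
          rng = random.Random(seed)
          x_est, p_est = 0.0, 1.0            # initial state estimate and variance
          estimates = []
          for x_true in true_values:
              # measurement: truth corrupted by Gaussian noise of variance r
              z = x_true + rng.gauss(0.0, r ** 0.5)
              # predict: random-walk model keeps the mean, inflates the variance
              x_pred, p_pred = x_est, p_est + q
              # update with the Kalman gain
              k = p_pred / (p_pred + r)
              x_est = x_pred + k * (z - x_pred)
              p_est = (1.0 - k) * p_pred
              estimates.append(x_est)
          return estimates

      # Slowly varying "geoacoustic parameter" tracked over 50 transmissions.
      truth = [1.5 + 0.01 * t for t in range(50)]
      est = kalman_track(truth)
      print(f"final estimate {est[-1]:.3f} vs truth {truth[-1]:.3f}")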

  10. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  11. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  12. Digital integrated circuits design-for-test using Simulink and Stateflow

    CERN Document Server

    Perelroyzen, Evgeni

    2006-01-01

    INTRODUCTION. SIMULINK®: DYNAMIC SYSTEM SIMULATION FOR MATLAB® (Introduction; Creating a Model; Running a Simulation; Analyzing Simulation Results; Subsystems: Using Masks to Customize Blocks; Reference Blocks; Simulink Debugger). STATEFLOW®: CREATING FINITE STATE MACHINE MODELS (Introduction; Creating Charts; Entering a Stateflow Diagram; Defining Events and Data; Defining Stateflow Interfaces; Exploring and Searching; Debugging). FAULT MODELING AND SIMULATION (Fault Modeling; Fault Simulation). TESTABILITY ANALYSIS METHODS (Combinational Controllability and Observability Analysis Models; Sequential Controllability and Observab

  13. Block and sub-block boundary strengthening in lath martensite

    NARCIS (Netherlands)

    Du, C.; Hoefnagels, J.P.M.; Vaes, R.; Geers, M.G.D.

    2016-01-01

    Well-defined uniaxial micro-tensile tests were performed on lath martensite single block specimens and multi-block specimens with different number of block boundaries parallel to the loading direction. Detailed slip trace analyses consistently revealed that in the {110}<111> slip system with the

  14. On the Eigenvalues and Eigenvectors of Block Triangular Preconditioned Block Matrices

    KAUST Repository

    Pestana, Jennifer

    2014-01-01

    Block lower triangular matrices and block upper triangular matrices are popular preconditioners for 2×2 block matrices. In this note we show that a block lower triangular preconditioner gives the same spectrum as a block upper triangular preconditioner and that the eigenvectors of the two preconditioned matrices are related. © 2014 Society for Industrial and Applied Mathematics.
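
    The statement in this record can be checked numerically in a few lines, under the assumption that the block lower and block upper triangular preconditioners share the same diagonal blocks (here the exact (1,1) block and a common stand-in S for the (2,2) block); the block sizes and random test matrices are arbitrary.

      # Hedged numerical check: with shared diagonal blocks, the block lower
      # and block upper triangular preconditioners yield the same spectrum.
      import numpy as np

      rng = np.random.default_rng(0)
      n1, n2 = 6, 4
      A11 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)   # keep it invertible
      A12 = rng.standard_normal((n1, n2))
      A21 = rng.standard_normal((n2, n1))
      A22 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
      S = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)     # (2,2) preconditioner block

      A = np.block([[A11, A12], [A21, A22]])
      P_lower = np.block([[A11, np.zeros((n1, n2))], [A21, S]])
      P_upper = np.block([[A11, A12], [np.zeros((n2, n1)), S]])

      eig_lower = np.sort_complex(np.linalg.eigvals(np.linalg.solve(P_lower, A)))
      eig_upper = np.sort_complex(np.linalg.eigvals(np.linalg.solve(P_upper, A)))
      print(np.allclose(eig_lower, eig_upper))   # True: identical spectra (up to round-off)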

  15. Powder wastes confinement block and manufacturing process of this block

    International Nuclear Information System (INIS)

    Dagot, L.; Brunel, G.

    1996-01-01

    This invention concerns a powder waste containment block and a process for manufacturing it. In this block, the waste powder is encapsulated in a thermosetting polymer, for example an epoxy resin, and the encapsulated resin is then dispersed in cement. The block can contain between 45 and 55% waste, between 18 and 36% polymer and between 14 and 32% cement, by mass. Such a containment block can be used for radioactive waste storage. (O.M.). 4 refs

  16. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  17. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Full Text Available Abstract Background: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods: We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results: The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in

  18. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
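
    To make the scheme concrete, here is a small sketch (not from the paper) that allocates participants to two arms in balanced blocks whose sizes are drawn at random from a small set, so that the next assignment stays hard to predict; the block-size set, arm labels and seed are example choices.

      # Hedged sketch: blocked randomization of two arms ("T" treatment,
      # "C" control) with randomly selected block sizes.
      import random

      def blocked_randomization(n_participants, block_sizes=(2, 4, 6), seed=42):
          rng = random.Random(seed)
          allocation = []
          while len(allocation) < n_participants:
              size = rng.choice(block_sizes)          # randomly selected block size
              block = ["T"] * (size // 2) + ["C"] * (size // 2)
              rng.shuffle(block)                      # balanced within the block
              allocation.extend(block)
          return allocation[:n_participants]

      alloc = blocked_randomization(20)
      print("".join(alloc), " T:", alloc.count("T"), " C:", alloc.count("C"))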

  19. Population Blocks.

    Science.gov (United States)

    Smith, Martin H.

    1992-01-01

    Describes an educational game called "Population Blocks" that is designed to illustrate the concept of exponential growth of the human population and some potential effects of overpopulation. The game material consists of wooden blocks; 18 blocks are painted green (representing land), 7 are painted blue (representing water); and the remaining…

  20. Social Influences in Sequential Decision Making.

    Directory of Open Access Journals (Sweden)

    Markus Schöbel

    Full Text Available People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  1. Social Influences in Sequential Decision Making

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448

  2. Social Influences in Sequential Decision Making.

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  3. Modeling of Interface and Internal Disorder Applied to XRD Analysis of Ag-Based Nano-Multilayers.

    Science.gov (United States)

    Ariosa, Daniel; Cancellieri, Claudia; Araullo-Peters, Vicente; Chiodi, Mirco; Klyatskina, Elizaveta; Janczak-Rusch, Jolanta; Jeurgens, Lars P H

    2018-06-07

    Multilayered structures are a promising route to tailor electronic, magnetic, optical, and/or mechanical properties and durability of functional materials. Sputter deposition at room temperature, being an out-of-equilibrium process, introduces structural defects and confers an intrinsic thermodynamic instability on these nanosystems. As-deposited materials exhibit a large amount of internal atomic displacements within each constituent block as well as severe interface roughness between different layers. To access and characterize the internal multilayer disorder and its thermal evolution, X-ray diffraction investigation and analysis are performed systematically on differently grown Ag-Ge/aluminum nitride (AlN) multilayer samples (co-deposited, and sequentially deposited with and without radio-frequency (RF) bias), both as deposited and after high-temperature annealing treatment. We report here on model calculations based on a kinematic formalism describing the displacement disorder both within the multilayer blocks and at the interfaces to reproduce the experimental X-ray diffraction intensities. Mixing and displacements at the interface are found to be considerably reduced after thermal treatment for co- and sequentially deposited Ag-Ge/AlN samples. The application of an RF bias during the deposition causes the highest interface mixing and introduces random intercalates in the AlN layers. X-ray analysis is contrasted with transmission electron microscopy pictures to validate the approach.
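
    A toy kinematic calculation may help make the modelling idea concrete. The sketch below is not the authors' formalism: it collapses each layer to a single effective scattering plane, uses invented parameter values and units, and only mirrors the general ingredients named above (random displacements within the blocks, additional randomness at the interfaces, coherent summation of scattered amplitudes).

    ```python
    import numpy as np

    def kinematic_intensity(q, d_A=2.0, d_B=3.0, n_periods=20,
                            sigma_intra=0.05, sigma_interface=0.2,
                            f_A=1.0, f_B=2.0, seed=0, n_realisations=200):
        """Specular kinematic intensity of an A/B multilayer (toy model):
        one effective scattering plane per layer, Gaussian displacement
        disorder inside the blocks (sigma_intra) and extra randomness at the
        interfaces (sigma_interface). All values are illustrative (nm)."""
        rng = np.random.default_rng(seed)
        I = np.zeros_like(q)
        for _ in range(n_realisations):
            z, amp, pos = [], [], 0.0
            for _ in range(n_periods):
                pos += rng.normal(0.0, sigma_interface)   # interface roughness
                for d, f in ((d_A, f_A), (d_B, f_B)):
                    z.append(pos + rng.normal(0.0, sigma_intra))  # intra-block disorder
                    amp.append(f)
                    pos += d
            z, amp = np.array(z), np.array(amp)
            F = (amp * np.exp(1j * np.outer(q, z))).sum(axis=1)   # coherent sum
            I += np.abs(F) ** 2
        return I / n_realisations

    q = np.linspace(0.5, 3.0, 1000)    # scattering vector in 1/nm, illustrative
    intensity = kinematic_intensity(q)
    ```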

  4. A randomised, controlled, double-blind trial of ultrasound-guided phrenic nerve block to prevent shoulder pain after thoracic surgery

    DEFF Research Database (Denmark)

    Blichfeldt-Eckhardt, M R; Laursen, C B; Berg, H

    2016-01-01

    to receive an ultrasound-guided supraclavicular phrenic nerve block with 10 ml ropivacaine or 10 ml saline (placebo) immediately following surgery. A nerve catheter was subsequently inserted and treatment continued for 3 days. The study drug was pharmaceutically pre-packed in sequentially numbered identical...... vials assuring that all participants, healthcare providers and data collectors were blinded. The primary outcome was the incidence of unilateral shoulder pain within the first 6 h after surgery. Pain was evaluated using a numeric rating scale. Nine of 38 patients in the ropivacaine group and 26 of 38...

  5. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased the temperature rise from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found for the standard drills, with smaller increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with the size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  6. The determinants of the block premium and of private benefits of control

    NARCIS (Netherlands)

    Albuquerque, R.A.; Schroth, E.J.

    2008-01-01

    We study the determinants of private benefits of control in negotiated block transactions. We estimate the block pricing model in Burkart, Gromb, and Panunzi (2000) explicitly dealing with the existence of both block premia and block discounts in the data. We find evidence that the occurrence of

  7. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    Science.gov (United States)

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. CTDIvol identical to the initial 4-slice MDCT was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and nucleus caudatus with signal-to-noise ratios (SNR) being subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher than that of the 4-slice MDCT sequential acquisitions. SNR was higher in spiral than in sequential cCT datasets, with a mean SNR improvement of 61.65% (Bonferroni-corrected p < 0.05). Spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.
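
    For context on the objective metric, a minimal sketch of an ROI-based SNR computation follows. The mean/standard-deviation definition and the ROI handling are assumptions of this sketch; the study does not state its exact SNR formula.

    ```python
    import numpy as np

    def roi_snr(image, center, half_size=5):
        """Signal-to-noise ratio of a square region of interest (ROI):
        mean value divided by its standard deviation.
        NOTE: the mean/SD definition is an assumption of this sketch; the
        study does not specify how SNR was computed."""
        r, c = center
        roi = image[r - half_size:r + half_size + 1,
                    c - half_size:c + half_size + 1]
        return roi.mean() / roi.std(ddof=1)

    # Toy example with synthetic ROIs standing in for white matter and caudate.
    rng = np.random.default_rng(0)
    ct_slice = rng.normal(30.0, 4.0, size=(256, 256))   # HU-like values
    print(roi_snr(ct_slice, center=(100, 100)))
    print(roi_snr(ct_slice, center=(128, 140)))
    ```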

  8. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    Science.gov (United States)

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost-effective compared to sequential surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost-effective when compared to sequential surgery.

  9. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess and compare parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations lead us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support of all three claims stated in part I. (paper)
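
    As a concrete illustration of comparing approximate posterior samples with an MCMC reference, the sketch below evaluates two off-the-shelf distances on a single parameter's marginal. These measures are stand-ins chosen here; the record does not name the four measures actually used, and the sample values are synthetic.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance, ks_2samp

    def marginal_distances(approx_samples, mcmc_samples):
        """Compare the marginal distribution of one parameter produced by an
        approximate joint state-parameter method against an MCMC reference.
        The two measures used here (1-D Wasserstein distance, two-sample KS
        statistic) are stand-ins, not the paper's four measures."""
        w = wasserstein_distance(approx_samples, mcmc_samples)
        ks = ks_2samp(approx_samples, mcmc_samples).statistic
        return w, ks

    rng = np.random.default_rng(1)
    mcmc = rng.normal(0.0, 1.0, size=5000)           # reference posterior samples
    sequential = rng.normal(0.05, 1.02, size=200)    # e.g. ensemble from a sequential method
    simultaneous = rng.normal(0.3, 1.4, size=200)    # e.g. ensemble from a simultaneous method
    print("sequential  :", marginal_distances(sequential, mcmc))
    print("simultaneous:", marginal_distances(simultaneous, mcmc))
    ```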

  10. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  11. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
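
    In notation chosen here (not quoted from the note), the problem class and a generic trust-region QP subproblem can be written as follows; the note's specific trust-region update and globalization details are not reproduced.

    ```latex
    \begin{align*}
    \min_{x \in \mathbb{R}^n} \; & f(x) && f \ \text{convex and nonlinear} \\
    \text{s.t.} \; & A x \le b && \text{(linear inequalities)} \\
                   & c(x) = 0 && \text{(nonlinear equalities)}
    \end{align*}
    % At iterate x_k, the SQP step d solves the trust-region quadratic subproblem
    \begin{align*}
    \min_{d} \; & \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d \\
    \text{s.t.} \; & A (x_k + d) \le b, \qquad
                     c(x_k) + \nabla c(x_k)^{\top} d = 0, \qquad
                     \lVert d \rVert \le \Delta_k,
    \end{align*}
    ```

    where $B_k$ approximates the Hessian of the Lagrangian and $\Delta_k$ is the trust-region radius.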

  12. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm designed especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.
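
    The sketch below illustrates the chunk-by-chunk learning style described above. It is not OSPVM itself: it keeps the data mean, the projection vectors and the number of hidden nodes fixed after an initial chunk and only updates the output weights by recursive least squares (in the spirit of OS-ELM, from which projection-based variants derive); all parameter names are illustrative.

    ```python
    import numpy as np

    class OnlineSequentialSketch:
        """Chunk-by-chunk learner: fixed PCA projection + sigmoid hidden layer,
        output weights updated by recursive least squares (OS-ELM style).
        Simplified stand-in for OSPVM: the projection vectors, hidden-node
        count and data mean are NOT updated online here, unlike the paper."""

        def __init__(self, X0, T0, n_components=10, n_hidden=50, ridge=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            self.mean = X0.mean(axis=0)                        # fixed data mean (simplification)
            Xc = X0 - self.mean
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA projection from first chunk
            self.proj = Vt[:n_components].T                    # (d, n_components)
            self.W = rng.normal(size=(n_components, n_hidden)) # random input weights
            self.b = rng.normal(size=n_hidden)
            H0 = self._hidden(X0)                              # T0: 2-D targets, e.g. one-hot labels
            self.P = np.linalg.inv(H0.T @ H0 + ridge * np.eye(n_hidden))
            self.beta = self.P @ H0.T @ T0

        def _hidden(self, X):
            Z = (X - self.mean) @ self.proj @ self.W + self.b
            return 1.0 / (1.0 + np.exp(-Z))                    # sigmoid activations

        def partial_fit(self, Xk, Tk):
            """Recursive least-squares update with a new chunk (Xk, Tk)."""
            Hk = self._hidden(Xk)
            K = np.linalg.inv(np.eye(len(Xk)) + Hk @ self.P @ Hk.T)
            self.P = self.P - self.P @ Hk.T @ K @ Hk @ self.P
            self.beta = self.beta + self.P @ Hk.T @ (Tk - Hk @ self.beta)

        def predict(self, X):
            return self._hidden(X) @ self.beta
    ```

    A model built from an initial chunk can then absorb later chunks via partial_fit(Xk, Tk) without revisiting earlier data.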

  13. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Full Text Available We propose a simple online learning algorithm designed especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  14. Simulation modeling analysis of sequential relations among therapeutic alliance, symptoms, and adherence to child-centered play therapy between a child with autism spectrum disorder and two therapists.

    Science.gov (United States)

    Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura

    2017-07-01

    This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.

  15. Meso-scale Modeling of Block Copolymers Self-Assembly in Casting Solutions for Membrane Manufacture

    KAUST Repository

    Moreno Chaparro, Nicolas

    2016-05-01

    Isoporous membranes manufactured from diblock copolymer are successfully produced at laboratory scale under controlled conditions. Because of the complex phenomena involved, membrane preparation requires trial and error methodologies to find the optimal conditions, leading to a considerable demand of resources. Experimental insights demonstrate that the self-assembly of the block copolymers in solution has an effect on the final membrane structure. Nevertheless, the complete understanding of these multi-scale phenomena is elusive. Herein we use the coarse-grained method Dissipative Particle Dynamics to study the self-assembly of block copolymers that are used for the preparation of the membranes. To simulate representative time and length scales, we introduce a framework for model reduction of polymer chain representations for dissipative particle dynamics, which preserves the properties governing the phase equilibria. We reduce the number of degrees of freedom by accounting for the correlation between beads in fine-grained models via power laws and the consistent scaling of the simulation parameters. The coarse-graining models are consistent with the experimental evidence, showing a morphological transition of the aggregates as the polymer concentration and solvent affinity change. We show that hexagonal packing of the micelles can occur in solution within different windows of polymer concentration depending on the solvent affinity. However, the shape and size dispersion of the micelles determine the characteristic arrangement. We describe the order of crew-cut micelles using a rigid-sphere approximation and propose different phase parameters that characterize the emergence of monodisperse-spherical micelles in solution. Additionally, we investigate the effect of blending asymmetric diblock copolymers (AB/AC) over the properties of the membranes. We observe that the co-assembly mechanism localizes the AC molecules at the interface of A and B domains, and induces

  16. Bootstrapped efficiency measures of oil blocks in Angola

    International Nuclear Information System (INIS)

    Barros, C.P.; Assaf, A.

    2009-01-01

    This paper investigates the technical efficiency of Angolan oil blocks over the period 2002-2007. A double-bootstrap data envelopment analysis (DEA) model is adopted, composed in the first stage of a DEA variable-returns-to-scale (VRS) model, followed in the second stage by a bootstrapped truncated regression. Results show that, on average, technical efficiency fluctuated over the study period, but deep and ultradeep oil blocks generally maintained a consistent efficiency level. Policy implications are derived.
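
    To make the first-stage efficiency measure concrete, here is a hedged sketch of an input-oriented DEA-VRS linear program. The bootstrap correction and the second-stage truncated regression from the paper are not shown, and the block data below are invented purely for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_vrs_efficiency(X, Y, o):
        """Input-oriented DEA efficiency of unit `o` under variable returns to
        scale (only the first stage of a double-bootstrap procedure; the
        bootstrap itself is not reproduced here).
        X: (n, m) inputs, Y: (n, s) outputs. Returns theta in (0, 1]."""
        n, m = X.shape
        s = Y.shape[1]
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        # inputs:  sum_j lambda_j x_ji - theta * x_oi <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # outputs: -sum_j lambda_j y_jr <= -y_or
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)     # VRS: sum lambda = 1
        bounds = [(0, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=bounds, method="highs")
        return res.x[0]

    # Illustrative data for four hypothetical oil blocks (inputs: wells drilled,
    # capital; output: production). Values are made up.
    X = np.array([[10., 5.], [12., 6.], [8., 7.], [15., 4.]])
    Y = np.array([[100.], [120.], [90.], [110.]])
    print([round(dea_vrs_efficiency(X, Y, o), 3) for o in range(len(X))])
    ```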

  17. Establishing an Appropriate Level of Detail (LoD) for a Building Information Model (BIM) - West Block, Parliament Hill, Ottawa, Canada

    Science.gov (United States)

    Fai, S.; Rafeiro, J.

    2014-05-01

    In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for Gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.

  18. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    Directory of Open Access Journals (Sweden)

    Closas Pau

    2012-10-01

    Full Text Available Abstract Background Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite Influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data becomes available. Binary epidemic detection of weekly incidence rates is assessed by Kolmogorov-Smirnov test on the absolute difference between the empirical and the cumulative density function of the estimated exponential distribution with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617 estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season) and sequentially updated. Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: 50−10 (2008-2009 season), 38−50 (2009-2010 season), weeks 50−9 (2010-2011 season) and weeks 3 to 12 for the current 2011-2012 season. Conclusions Real medical data was used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could
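
    A simplified reading of the detection rule can be sketched as follows. The warm-up length, the windowing and the exact test construction are assumptions of this sketch rather than details taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import kstest

    def sequential_epidemic_flags(weekly_rates, alpha=0.05, warmup=10):
        """Flag weeks whose incidence is inconsistent with an exponential model
        of non-epidemic activity. Simplified reading of the method: lambda is
        the MLE (1/mean) over weeks so far labelled non-epidemic, and each new
        week is tested with a one-sample Kolmogorov-Smirnov test against
        Exp(lambda). Warm-up length and windowing are assumptions."""
        non_epidemic = list(weekly_rates[:warmup])        # training sequence
        flags = [False] * warmup
        for rate in weekly_rates[warmup:]:
            lam = 1.0 / np.mean(non_epidemic)             # MLE of the decaying factor
            sample = np.append(non_epidemic, rate)
            # KS test of the empirical CDF against the fitted exponential CDF
            pvalue = kstest(sample, "expon", args=(0.0, 1.0 / lam)).pvalue
            epidemic = pvalue < alpha
            flags.append(epidemic)
            if not epidemic:                              # keep updating lambda sequentially
                non_epidemic.append(rate)
        return flags
    ```

    Updating λ only from weeks labelled non-epidemic mirrors the sequential parameter update described above; different windowing choices would change the sensitivity of the test.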

  19. A tribo-mechanical analysis of PVA-based building-blocks for implementation in a 2-layered skin model.

    Science.gov (United States)

    Morales Hurtado, M; de Vries, E G; Zeng, X; van der Heide, E

    2016-09-01

    Poly(vinyl alcohol) hydrogel (PVA) is a well-known polymer widely used in the medical field due to its biocompatibility and easy manufacturing. In this work, the tribo-mechanical properties of PVA-based blocks are studied to evaluate their suitability as a part of a structure simulating the length scale dependence of human skin. Thus, blocks of pure PVA and PVA mixed with Cellulose (PVA-Cel) were synthesised via freezing/thawing cycles and their mechanical properties were determined by Dynamic Mechanical Analysis (DMA) and creep tests. The dynamic tests yielded elastic moduli between 38 and 50 kPa for the PVA and PVA-Cel, respectively. The fitting of the creep compliance tests in the SLS model confirmed the viscoelastic behaviour of the samples with retardation times of 23 and 16 seconds for the PVA and PVA-Cel, respectively. Micro-indentation tests were also performed, and the results indicated elastic moduli in the same range as the dynamic tests. Specifically, values between 45-55 and 56-81 kPa were obtained for the PVA and PVA-Cel samples, respectively. The tribological results indicated friction coefficients of 0.55 at low forces for the PVA, decreasing to 0.13 at higher forces. The PVA-Cel blocks showed lower friction even at low forces with values between 0.2 and 0.07. The implementation of these building blocks in the design of a 2-layered skin model (2LSM) is also presented in this work. The 2LSM was stamped with four different textures and their surface properties were evaluated. The hydration of the 2LSM was also evaluated with a corneometer and the results indicated a gradient of hydration comparable to that of human skin. Copyright © 2016 Elsevier Ltd. All rights reserved.
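
    The retardation times quoted above come from fitting a creep-compliance model; the record does not give the exact SLS parameterisation, so the sketch below uses the common form J(t) = J0 + J1(1 − exp(−t/τ)) on synthetic data purely to illustrate how τ is extracted.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sls_creep_compliance(t, J0, J1, tau):
        """Creep compliance of a standard linear solid (SLS): instantaneous
        compliance J0 plus a delayed part with retardation time tau.
        This parameterisation is an assumption; the record does not spell
        out the exact form used in the study."""
        return J0 + J1 * (1.0 - np.exp(-t / tau))

    # Synthetic creep data loosely resembling the PVA measurements (tau ~ 23 s).
    t = np.linspace(0.0, 120.0, 200)                                 # seconds
    rng = np.random.default_rng(3)
    J_true = sls_creep_compliance(t, J0=20e-6, J1=8e-6, tau=23.0)    # 1/Pa
    J_meas = J_true + rng.normal(0.0, 0.2e-6, size=t.size)

    popt, _ = curve_fit(sls_creep_compliance, t, J_meas, p0=(15e-6, 5e-6, 10.0))
    print("fitted retardation time: %.1f s" % popt[2])
    ```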

  20. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First a set of B-mode image lines using a single focal point in both transmit and receive...... is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF...