WorldWideScience

Sample records for modeling non-negative sequential

  1. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
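
    To make the sequential (serial) mediation idea concrete, here is a minimal numpy sketch of the kind of analysis described above: X → M1 → M2 → Y, with the indirect effect a1·d21·b2 and a percentile bootstrap confidence interval. The variable names and simulated data are illustrative placeholders, not the study's data or code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 351  # same size as the study's sample; the data below are simulated, not the study's

    # Simulated standardized variables: X = work-family conflict, M1 = negative affect,
    # M2 = perceived stress, Y = mental health (all placeholders)
    X = rng.normal(size=n)
    M1 = 0.4 * X + rng.normal(size=n)
    M2 = 0.3 * X + 0.5 * M1 + rng.normal(size=n)
    Y = -0.2 * X - 0.2 * M1 - 0.4 * M2 + rng.normal(size=n)

    def ols(y, *preds):
        """OLS coefficients of y on an intercept plus the given predictors."""
        Z = np.column_stack([np.ones(len(y))] + list(preds))
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    def serial_indirect(X, M1, M2, Y):
        a1 = ols(M1, X)[1]          # X -> M1
        d21 = ols(M2, X, M1)[2]     # M1 -> M2, controlling for X
        b2 = ols(Y, X, M1, M2)[3]   # M2 -> Y, controlling for X and M1
        return a1 * d21 * b2        # sequential indirect effect X -> M1 -> M2 -> Y

    # Percentile bootstrap 95% CI for the sequential indirect effect
    boot = []
    for _ in range(2000):
        i = rng.integers(0, n, n)
        boot.append(serial_indirect(X[i], M1[i], M2[i], Y[i]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {serial_indirect(X, M1, M2, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    ```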

  2. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    Science.gov (United States)

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  3. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    Directory of Open Access Journals (Sweden)

    Shiyi Zhou

    2018-04-01

    Full Text Available After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  4. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

    Full Text Available This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs, and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 individual usable questionnaires were returned and thus qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducts structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met goodness-of-fit criteria, thus the interpretation of the sequential models fits with the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual

  5. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    OpenAIRE

    Ciptono, Wakhid Slamet

    2006-01-01

    This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may ...

  6. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    OpenAIRE

    Shiyi Zhou; Shu Da; Heng Guo; Xichao Zhang

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relat...

  7. The sequential pathway between trauma-related symptom severity and cognitive-based smoking processes through perceived stress and negative affect reduction expectancies among trauma exposed smokers.

    Science.gov (United States)

    Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J

    2016-10-01

    Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. Methods: The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on trauma-related symptom severity and criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.

  8. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues or imperfectness of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positive restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
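
    As a toy illustration of the general idea (not the authors' optoacoustic algorithm), the sketch below contrasts unconstrained least squares with SciPy's non-negative least squares on a small synthetic forward model; the matrix sizes and noise level are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)

    # Toy linear forward model A and a non-negative "absorption" vector x (flattened image)
    A = rng.normal(size=(80, 40))                  # stand-in for the model matrix
    x_true = np.clip(rng.normal(size=40), 0, None)
    b = A @ x_true + 0.05 * rng.normal(size=80)    # noisy measurements

    # Unconstrained least squares can return unphysical negative values
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Non-negative least squares enforces x >= 0
    x_nn, _ = nnls(A, b)

    print("negative entries, unconstrained:", int((x_ls < 0).sum()))
    print("negative entries, non-negative: ", int((x_nn < 0).sum()))
    print("errors:", np.linalg.norm(x_ls - x_true), np.linalg.norm(x_nn - x_true))
    ```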

  9. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  10. Negative affect and smoking motives sequentially mediate the effect of panic attacks on tobacco-relevant processes.

    Science.gov (United States)

    Farris, Samantha G; Zvolensky, Michael J; Blalock, Janice A; Schmidt, Norman B

    2014-05-01

    Empirical work has documented a robust and consistent relation between panic attacks and smoking behavior. Theoretical models posit smokers with panic attacks may rely on smoking to help them manage chronically elevated negative affect due to uncomfortable bodily states, which may explain higher levels of nicotine dependence and quit problems. The current study examined the effects of panic attack history on nicotine dependence, perceived barriers for quitting, smoking inflexibility when emotionally distressed, and expired carbon monoxide among 461 treatment-seeking smokers. A multiple mediator path model was evaluated to examine the indirect effects of negative affect and negative affect reduction motives as mediators of the panic attack-smoking relations. Panic attack history was indirectly related to greater levels of nicotine dependence (b = 0.039, CI95% = 0.008, 0.097), perceived barriers to smoking cessation (b = 0.195, CI95% = 0.043, 0.479), smoking inflexibility/avoidance when emotionally distressed (b = 0.188, CI95% = 0.041, 0.445), and higher levels of expired carbon monoxide (b = 0.071, CI95% = 0.010, 0.230) through the sequential effects of negative affect and negative affect smoking motives. The present results provide empirical support for the sequential mediating role of negative affect and smoking motives for negative affect reduction in the relation between panic attacks and a variety of smoking variables in treatment-seeking smokers. These mediating variables are likely important processes to address in smoking cessation treatment, especially in panic-vulnerable smokers.

  11. Strong-field non-sequential ionization: The vector momentum distribution of multiply charged Ne ions

    International Nuclear Information System (INIS)

    Rottke, H.; Trump, C.; Wittmann, M.; Korn, G.; Becker, W.; Hoffmann, K.; Sandner, W.; Moshammer, R.; Feuerstein, B.; Dorn, A.; Schroeter, C.D.; Ullrich, J.; Schmitt, W.

    2000-01-01

    COLTRIMS (COLd Target Recoil-Ion Momentum Spectroscopy) was used to measure the vector momentum distribution of Ne^n+ (n=1,2,3) ions formed in ultrashort (30 fsec) high-intensity (≅10^15 W/cm^2) laser pulses with center wavelength at 795 nm. To a high degree of accuracy the length of the Ne^n+ ion momentum vector is equal to the length of the total momentum vector of the n photoelectrons released, with both vectors pointing in opposite directions. At a light intensity where non-sequential ionization of the atom dominates, the Ne^2+ and Ne^3+ momentum distributions show distinct maxima at 4.0 a.u. and 7.5 a.u. along the polarization axis of the linearly polarized light beam. First, this is a clear signature of non-sequential multiple ionization. Second, it indicates that instantaneous emission of two (or more) electrons at electric field strength maxima of the light wave can be ruled out as the main mechanism of non-sequential strong-field multiple ionization. In contrast, this experimental result is in accordance with the kinematical constraints of the 'rescattering model'

  12. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  13. On affine non-negative matrix factorization

    DEFF Research Database (Denmark)

    Laurberg, Hans; Hansen, Lars Kai

    2007-01-01

    We generalize the non-negative matrix factorization (NMF) generative model to incorporate an explicit offset. Multiplicative estimation algorithms are provided for the resulting sparse affine NMF model. We show that the affine model has improved uniqueness properties and leads to more accurate id...
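
    For context, the sketch below shows the standard Lee–Seung multiplicative updates for the Euclidean NMF cost on synthetic data. It is a baseline sketch only, not the sparse affine NMF algorithm of the paper, which additionally estimates an explicit non-negative offset term; the dimensions and iteration count are illustrative.

    ```python
    import numpy as np

    def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
        """Standard Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.
        The affine model discussed above additionally adds an explicit non-negative
        offset to W H; this sketch shows only the baseline NMF updates."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, r))
        H = rng.random((r, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Tiny demo on a synthetic non-negative matrix of rank 3
    rng = np.random.default_rng(1)
    V = rng.random((30, 3)) @ rng.random((3, 40))
    W, H = nmf_multiplicative(V, r=3)
    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```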

  14. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
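
    A minimal sketch of the kind of probabilistic multivariate sensitivity analysis described above, on a toy two-state MDP: sample the uncertain transition probabilities jointly, re-solve the MDP by value iteration for each sample, and report how often the base-case optimal policy remains optimal. All numbers (rewards, Beta distributions, discount factor) are invented for illustration and are not from any real decision model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 2, 2, 0.9

    def solve_mdp(P, R, tol=1e-6):
        """Infinite-horizon discounted value iteration; returns the optimal policy."""
        V = np.zeros(n_states)
        while True:
            Q = R + gamma * P @ V            # P has shape (actions, states, states)
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return Q.argmax(axis=0)
            V = V_new

    # Base-case parameters (illustrative toy numbers)
    R = np.array([[1.0, 0.0],      # reward of action 0 in states 0, 1
                  [0.6, 0.8]])     # reward of action 1 in states 0, 1
    def transitions(p_stay0, p_stay1):
        return np.array([[[p_stay0, 1 - p_stay0], [0.2, 0.8]],
                         [[p_stay1, 1 - p_stay1], [0.5, 0.5]]])

    base_policy = solve_mdp(transitions(0.9, 0.6), R)

    # Probabilistic sensitivity analysis: sample the uncertain transition probabilities
    # jointly and record how often the base-case policy stays optimal.
    agree, n_samples = 0, 1000
    for _ in range(n_samples):
        policy = solve_mdp(transitions(rng.beta(9, 1), rng.beta(6, 4)), R)
        agree += np.array_equal(policy, base_policy)
    print(f"confidence in base-case policy: {agree / n_samples:.2%}")
    ```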

  15. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  16. Shifted Non-negative Matrix Factorization

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2007-01-01

    Non-negative matrix factorization (NMF) has become a widely used blind source separation technique due to its part based representation and ease of interpretability. We currently extend the NMF model to allow for delays between sources and sensors. This is a natural extension for spectrometry data...

  17. Effect of cryoablation sequential chemotherapy on patients with advanced non-small cell lung cancer

    Directory of Open Access Journals (Sweden)

    Shu-Hui Yao

    2016-03-01

    Full Text Available Objective: To evaluate the effect of cryoablation sequential chemotherapy on patients with advanced non-small cell lung cancer. Methods: A total of 39 cases with advanced non-small cell lung cancer who received cryoablation sequential chemotherapy and 39 cases with advanced non-small cell lung cancer who received chemotherapy alone were selected and enrolled in the sequential group and control group, respectively; disease progression and survival of the two groups were followed up, and contents of tumor markers and angiogenesis molecules in serum as well as contents of T-lymphocyte subsets in peripheral blood were detected. Results: Progression-free survival and median overall survival (mOS) of the sequential group were longer than those of the control group, and cumulative cases of tumor progression at various points in time were significantly less than those of the control group (P<0.05); 1 month after treatment, serum tumor markers CEA, CYFRA21-1 and NSE contents, serum angiogenesis molecules PCDGF, VEGF and HDGF contents as well as CD3+CD4-CD8+CD28-T cell content in peripheral blood of the sequential group were significantly lower than those of the control group (P<0.05), and contents of CD3+CD4+CD8-T cell and CD3+CD4-CD8+CD28+T cell in peripheral blood were higher than those of the control group (P<0.05). Conclusions: Cryoablation sequential chemotherapy can improve the prognosis of patients with advanced non-small cell lung cancer, delay disease progression, prolong survival time, inhibit angiogenesis and improve immune function.

  18. Polarization control of direct (non-sequential) two-photon double ionization of He

    International Nuclear Information System (INIS)

    Pronin, E A; Manakov, N L; Marmo, S I; Starace, Anthony F

    2007-01-01

    An ab initio parametrization of the doubly-differential cross section (DDCS) for two-photon double ionization (TPDI) from an s² subshell of an atom in a ¹S₀ state is presented. Analysis of the elliptic dichroism (ED) effect in the DDCS for TPDI of He and its comparison with the same effect in the concurrent process of sequential double ionization shows their qualitative and quantitative differences, thus providing a means to control and to distinguish sequential and non-sequential processes by measuring the relative ED parameter

  19. Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines.

    Science.gov (United States)

    Biedka, Mathew M; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan

    2017-01-06

    Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal a time switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented, which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multiple octaves of frequency.

  20. Simultaneous and Sequential Feature Negative Discriminations: Elemental Learning and Occasion Setting in Human Pavlovian Conditioning

    Science.gov (United States)

    Baeyens, Frank; Vervliet, Bram; Vansteenwegen, Debora; Beckers, Tom; Hermans, Dirk; Eelen, Paul

    2004-01-01

    Using a conditioned suppression task, we investigated simultaneous (XA-/A+) vs. sequential (X → A-/A+) Feature Negative (FN) discrimination learning in humans. We expected the simultaneous discrimination to result in X (or alternatively the XA configuration) becoming an inhibitor acting directly on the US, and the sequential…

  1. MODELS OF COVARIANCE FUNCTIONS OF GAUSSIAN RANDOM FIELDS ESCAPING FROM ISOTROPY, STATIONARITY AND NON NEGATIVITY

    Directory of Open Access Journals (Sweden)

    Pablo Gregori

    2014-03-01

    Full Text Available This paper presents a survey of recent advances in modeling of space or space-time Gaussian Random Fields (GRF), tools of Geostatistics at hand for the understanding of special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariances between some pairs of locations are evident. We show some strategies in order to escape from these restrictions, on the basis of rich classes of well-known stationary or isotropic non-negative covariance models, and through suitable operations, like linear combinations, generalized means, or with particular Fourier transforms.
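
    One classical construction of a valid stationary model that nevertheless takes negative values is the exponentially damped cosine ("hole-effect") covariance; the short sketch below simply evaluates it at a few lags. The parameter values are illustrative, and the validity claim in the comment is stated for one dimension only (the model is a product of an exponential and a cosine covariance).

    ```python
    import numpy as np

    def hole_effect_cov(h, sigma2=1.0, a=1.0, b=3.0):
        """Exponentially damped cosine covariance C(h) = sigma2 * exp(-|h|/a) * cos(b*h).
        In one dimension this is a valid stationary covariance (a product of an
        exponential and a cosine covariance), yet it takes negative values at some lags."""
        h = np.asarray(h, dtype=float)
        return sigma2 * np.exp(-np.abs(h) / a) * np.cos(b * h)

    lags = np.linspace(0.0, 4.0, 9)
    for h, c in zip(lags, hole_effect_cov(lags)):
        print(f"C({h:4.1f}) = {c:+.3f}")   # several lags show negative covariance values
    ```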

  2. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
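
    A minimal sketch of the comparison discussed above: the same binary totalistic rule applied to a one-dimensional ring of nodes with perfectly synchronous (parallel) updates versus a fixed sequential sweep, starting from the same configuration. The particular rule and lattice size are illustrative assumptions, not the ones studied in the paper.

    ```python
    import numpy as np

    def step_parallel(x, rule):
        """Synchronous update: every node reads the old configuration."""
        left, right = np.roll(x, 1), np.roll(x, -1)
        return rule(left + x + right)

    def step_sequential(x, rule):
        """Sequential sweep in fixed node order: each node sees already-updated neighbours."""
        x = x.copy()
        n = len(x)
        for i in range(n):
            s = x[(i - 1) % n] + x[i] + x[(i + 1) % n]
            x[i] = rule(s)
        return x

    # Illustrative non-linear totalistic rule on {0,1}: a node becomes 1 iff exactly one
    # of the three cells in its neighbourhood (including itself) is 1.
    rule = lambda s: (np.asarray(s) == 1).astype(int)

    rng = np.random.default_rng(0)
    x0 = rng.integers(0, 2, 16)
    xp, xs = x0.copy(), x0.copy()
    for _ in range(5):
        xp = step_parallel(xp, rule)
        xs = step_sequential(xs, rule)
    print("parallel:  ", xp)
    print("sequential:", xs)  # generally differs: interleaving does not reproduce synchrony
    ```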

  3. Non-negative Matrix Factorization for Binary Data

    DEFF Research Database (Denmark)

    Larsen, Jacob Søgaard; Clemmensen, Line Katrine Harder

    We propose the Logistic Non-negative Matrix Factorization for decomposition of binary data. Binary data are frequently generated in e.g. text analysis, sensory data, market basket data etc. A common method for analysing non-negative data is the Non-negative Matrix Factorization, though this is in theory not appropriate for binary data, and thus we propose a novel Non-negative Matrix Factorization based on the logistic link function. Furthermore we generalize the method to handle missing data. The formulation of the method is compared to a previously proposed method (Tome et al., 2015). We compare the performance of the Logistic Non-negative Matrix Factorization to Least Squares Non-negative Matrix Factorization and Kullback-Leibler (KL) Non-negative Matrix Factorization on sets of binary data: a synthetic dataset, a set of student comments on their professors collected in a binary term-document matrix...
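
    A generic sketch of how a logistic NMF can be formulated (not the authors' algorithm): each binary entry is modeled as Bernoulli with success probability sigmoid((WH)_ij), and W, H ≥ 0 are fit by projected gradient descent on the cross-entropy loss. The dimensions, step size, and synthetic data are illustrative assumptions.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def logistic_nmf(X, r, iters=500, lr=0.01, seed=0):
        """Fit X_ij ~ Bernoulli(sigmoid((W H)_ij)) with W, H >= 0 by projected gradient
        descent on the cross-entropy loss. Generic sketch, not the paper's algorithm."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = 0.1 * rng.random((m, r))
        H = 0.1 * rng.random((r, n))
        for _ in range(iters):
            P = sigmoid(W @ H)
            G = P - X                          # gradient of the cross-entropy w.r.t. W @ H
            W = np.clip(W - lr * G @ H.T, 0.0, None)
            H = np.clip(H - lr * W.T @ G, 0.0, None)
        return W, H

    # Synthetic binary data with low-rank non-negative structure
    rng = np.random.default_rng(1)
    logits = rng.random((40, 3)) @ rng.random((3, 60)) - 0.5
    X = (sigmoid(logits) > rng.random((40, 60))).astype(float)
    W, H = logistic_nmf(X, r=3)
    P = sigmoid(W @ H)
    print("mean cross-entropy:",
          -np.mean(X * np.log(P + 1e-9) + (1 - X) * np.log(1 - P + 1e-9)))
    ```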

  4. Observation of non-classical correlations in sequential measurements of photon polarization

    International Nuclear Information System (INIS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F

    2016-01-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength. (paper)

  5. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  6. Non-Saccharomyces Yeasts Nitrogen Source Preferences: Impact on Sequential Fermentation and Wine Volatile Compounds Profile

    Directory of Open Access Journals (Sweden)

    Antoine Gobert

    2017-11-01

    Full Text Available Nitrogen sources in the must are important for yeast metabolism, growth, and performance, and wine volatile compounds profile. Yeast assimilable nitrogen (YAN) deficiencies in grape must are one of the main causes of stuck and sluggish fermentation. The nitrogen requirement of Saccharomyces cerevisiae metabolism has been described in detail. However, the YAN preferences of non-Saccharomyces yeasts remain unknown despite their increasingly widespread use in winemaking. Furthermore, the impact of nitrogen consumption by non-Saccharomyces yeasts on YAN availability, alcoholic performance and volatile compounds production by S. cerevisiae in sequential fermentation has been little studied. With a view to improving the use of non-Saccharomyces yeasts in winemaking, we studied the use of amino acids and ammonium by three strains of non-Saccharomyces yeasts (Starmerella bacillaris, Metschnikowia pulcherrima, and Pichia membranifaciens) in grape juice. We first determined which nitrogen sources were preferentially used by these yeasts in pure cultures at 28 and 20°C (because few data are available). We then carried out sequential fermentations at 20°C with S. cerevisiae, to assess the impact of the non-Saccharomyces yeasts on the availability of assimilable nitrogen for S. cerevisiae. Finally, 22 volatile compounds were quantified in sequential fermentation and their levels compared with those in pure cultures of S. cerevisiae. We report here, for the first time, that non-Saccharomyces yeasts have specific amino-acid consumption profiles. Histidine, methionine, threonine, and tyrosine were not consumed by S. bacillaris, aspartic acid was assimilated very slowly by M. pulcherrima, and glutamine was not assimilated by P. membranifaciens. By contrast, cysteine appeared to be a preferred nitrogen source for all non-Saccharomyces yeasts. In sequential fermentation, these specific profiles of amino-acid consumption by non-Saccharomyces yeasts may account for

  7. Non-Saccharomyces Yeasts Nitrogen Source Preferences: Impact on Sequential Fermentation and Wine Volatile Compounds Profile

    Science.gov (United States)

    Gobert, Antoine; Tourdot-Maréchal, Raphaëlle; Morge, Christophe; Sparrow, Céline; Liu, Youzhong; Quintanilla-Casas, Beatriz; Vichi, Stefania; Alexandre, Hervé

    2017-01-01

    Nitrogen sources in the must are important for yeast metabolism, growth, and performance, and wine volatile compounds profile. Yeast assimilable nitrogen (YAN) deficiencies in grape must are one of the main causes of stuck and sluggish fermentation. The nitrogen requirement of Saccharomyces cerevisiae metabolism has been described in detail. However, the YAN preferences of non-Saccharomyces yeasts remain unknown despite their increasingly widespread use in winemaking. Furthermore, the impact of nitrogen consumption by non-Saccharomyces yeasts on YAN availability, alcoholic performance and volatile compounds production by S. cerevisiae in sequential fermentation has been little studied. With a view to improving the use of non-Saccharomyces yeasts in winemaking, we studied the use of amino acids and ammonium by three strains of non-Saccharomyces yeasts (Starmerella bacillaris, Metschnikowia pulcherrima, and Pichia membranifaciens) in grape juice. We first determined which nitrogen sources were preferentially used by these yeasts in pure cultures at 28 and 20°C (because few data are available). We then carried out sequential fermentations at 20°C with S. cerevisiae, to assess the impact of the non-Saccharomyces yeasts on the availability of assimilable nitrogen for S. cerevisiae. Finally, 22 volatile compounds were quantified in sequential fermentation and their levels compared with those in pure cultures of S. cerevisiae. We report here, for the first time, that non-Saccharomyces yeasts have specific amino-acid consumption profiles. Histidine, methionine, threonine, and tyrosine were not consumed by S. bacillaris, aspartic acid was assimilated very slowly by M. pulcherrima, and glutamine was not assimilated by P. membranifaciens. By contrast, cysteine appeared to be a preferred nitrogen source for all non-Saccharomyces yeasts. In sequential fermentation, these specific profiles of amino-acid consumption by non-Saccharomyces yeasts may account for some of the

  8. Three-dimensional classical-ensemble modeling of non-sequential double ionization

    International Nuclear Information System (INIS)

    Haan, S.L.; Breen, L.; Tannor, D.; Panfili, R.; Ho, Phay J.; Eberly, J.H.

    2005-01-01

    Full text: We have been using 1d ensembles of classical two-electron atoms to simulate helium atoms that are exposed to pulses of intense laser radiation. In this talk we discuss the challenges in setting up a 3d classical ensemble that can mimic the quantum ground state of helium. We then report studies in which each one of 500,000 two-electron trajectories is followed in 3d through a ten-cycle (25 fs) 780 nm laser pulse. We examine double-ionization yield for various intensities, finding the familiar knee structure. We consider the momentum spread of outcoming electrons in directions both parallel and perpendicular to the direction of laser polarization, and find results that are consistent with experiment. We examine individual trajectories and recollision processes that lead to double ionization, considering the best phases of the laser cycle for recollision events and looking at the possible time delay between recollision and emergence. We consider also the number of recollision events, and find that multiple recollisions are common in the classical ensemble. We investigate which collisional processes lead to various final electron momenta. We conclude with comments regarding the ability of classical mechanics to describe non-sequential double ionization, and a quick summary of similarities and differences between 1d and 3d classical double ionization using energy-trajectory comparisons. Refs. 3 (author)

  9. Non-cross resistant sequential single agent chemotherapy in first-line advanced non-small cell lung cancer patients: Results of a phase II study

    NARCIS (Netherlands)

    V. Surmont; J.G.J.V. Aerts (Joachim); K.Y. Tan; F.M.N.H. Schramel (Franz); R. Vernhout (Rene); H.C. Hoogsteden (Henk); R.J. van Klaveren (Rob)

    2009-01-01

    Background: Sequential chemotherapy can maintain dose intensity and preclude cumulative toxicity by increasing drug diversity. Purpose: To investigate the toxicity and efficacy of the sequential regimen of gemcitabine followed by paclitaxel in first-line advanced stage non-small cell

  10. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  11. Single-channel source separation using non-negative matrix factorization

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard

    -determined and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published papers. The first part introduces the single-channel source separation problem as well as non-negative matrix factorization and provides a comprehensive review of existing approaches, applications, and practical algorithms. This serves to provide context for the second part, the published papers, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging.

  12. Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization

    KAUST Repository

    Peng, Chengbin

    2014-12-01

    Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges respectively. When certain nodes are highly connected with each other, those nodes form a cluster, which is called a community in our context. It is usually assumed that each node belongs to one community only, but evidence in biology and social networks reveals that the communities often overlap with each other. In other words, one node can probably belong to multiple communities. In light of that, mixed membership stochastic block models (MMB) have been developed to model those networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in and out connections and one probability matrix. When the probability of connections for nodes between communities is significantly small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithms can detect overlapping communities regardless of whether the number of communities is known. Experiments show that our algorithm can achieve a better community detection performance than the traditional NMF algorithm. © 2014 IEEE.

  13. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    Science.gov (United States)

    Vazifedan, Turaj; Shitan, Mahendran

    Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time, and so on. When the values of the observations are large it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), Poisson regression, and INAR(1) models, and the suitability of these models was assessed using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
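
    A small simulation sketch of the INAR(1) model with binomial thinning described above, X_t = α ∘ X_{t-1} + ε_t with Poisson innovations; the parameter values are illustrative and the series is simulated, not the polio data.

    ```python
    import numpy as np

    def simulate_inar1(alpha, lam, n, seed=0):
        """INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t,
        where alpha o X_{t-1} ~ Binomial(X_{t-1}, alpha) and eps_t ~ Poisson(lam)."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n, dtype=int)
        x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
        for t in range(1, n):
            survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning of the previous count
            x[t] = survivors + rng.poisson(lam)        # plus new (innovation) counts
        return x

    x = simulate_inar1(alpha=0.5, lam=1.0, n=168)      # 168 months, like Jan 1970 - Dec 1983
    print("mean:", x.mean(), "(theoretical stationary mean:", 1.0 / (1 - 0.5), ")")
    print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1], "(theory: 0.5)")
    ```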

  14. Reduction of Non-stationary Noise using a Non-negative Latent Variable Decomposition

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Larsen, Jan

    2008-01-01

    We present a method for suppression of non-stationary noise in single channel recordings of speech. The method is based on a non-negative latent variable decomposition model for the speech and noise signals, learned directly from a noisy mixture. In non-speech regions an overcomplete basis is learned for the noise that is then used to jointly estimate the speech and the noise from the mixture. We compare the method to the classical spectral subtraction approach, where the noise spectrum is estimated as the average over non-speech frames. The proposed method significantly outperforms

  15. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  16. Sequential fermentation using non-Saccharomyces yeasts for the reduction of alcohol content in wine

    Directory of Open Access Journals (Sweden)

    Ciani Maurizio

    2014-01-01

    Full Text Available Over the last few decades there has been a progressive increase in wine ethanol content due to global climate change and modified wine styles that involved viticulture and oenology practices. Among the different approaches and strategies to reduce alcohol content in wine we propose a sequential fermentation using immobilized non-Saccharomyces wine yeasts. Preliminary results showed that sequential fermentations with Hanseniaspora osmophila, Hanseniaspora uvarum, Metschnikowia pulcherrima, Starmerella bombicola and Saccharomyces cerevisiae strains showed an ethanol reduction when compared with pure S. cerevisiae fermentation trials.

  17. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    Science.gov (United States)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Applications of data assimilation techniques have been widely used to improve upon the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters" provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
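
    A minimal sketch of the dual state-parameter idea on a toy linear-Gaussian state-space model: each particle carries both the state and an uncertain parameter, weights come from the observation likelihood, and a small parameter jitter after resampling stands in (crudely) for the kernel smoothing mentioned above. The toy model, noise levels, and jitter size are assumptions for illustration, not the storage function model from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy state-space model: x_t = a * x_{t-1} + process noise, y_t = x_t + obs noise
    a_true, q, r, T = 0.8, 0.5, 0.5, 100
    x, ys = 0.0, []
    for _ in range(T):
        x = a_true * x + rng.normal(0, q)
        ys.append(x + rng.normal(0, r))

    # Dual state-parameter particles: each particle holds (state, parameter a)
    N = 2000
    states = rng.normal(0, 1, N)
    params = rng.uniform(0.0, 1.0, N)             # prior over the unknown parameter a

    for y in ys:
        # Propagate each particle's state with its own parameter
        states = params * states + rng.normal(0, q, N)
        # Weight by the observation likelihood and resample (SIR)
        logw = -0.5 * ((y - states) / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)
        states, params = states[idx], params[idx]
        # Small jitter on parameters: a crude stand-in for kernel smoothing,
        # which keeps parameter diversity after resampling
        params = np.clip(params + rng.normal(0, 0.02, N), 0.0, 1.0)

    print("posterior mean of a:", round(params.mean(), 3), "(true value:", a_true, ")")
    ```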

  18. The importance of examining movements within the US health care system: sequential logit modeling

    Directory of Open Access Journals (Sweden)

    Lee Chioun

    2010-09-01

    Full Text Available Abstract Background: Utilization of specialty care may not be a discrete, isolated behavior but rather, a behavior of sequential movements within the health care system. Although patients may often visit their primary care physician and receive a referral before utilizing specialty care, prior studies have underestimated the importance of accounting for these sequential movements. Methods: The sample included 6,772 adults aged 18 years and older who participated in the 2001 Survey on Disparities in Quality of Care, sponsored by the Commonwealth Fund. A sequential logit model was used to account for movement in all stages of utilization: use of any health services (i.e., first stage), having a perceived need for specialty care (i.e., second stage), and utilization of specialty care (i.e., third stage). In the sequential logit model, all stages are nested within the previous stage. Results: Gender, race/ethnicity, education and poor health had significant explanatory effects with regard to use of any health services and having a perceived need for specialty care; however, racial/ethnic, gender, and educational disparities were not present in utilization of specialty care. After controlling for use of any health services and having a perceived need for specialty care, inability to pay for specialty care via income (AOR = 1.334, CI = 1.10 to 1.62) or health insurance (unstable insurance: AOR = 0.26, CI = 0.14 to 0.48; no insurance: AOR = 0.12, CI = 0.07 to 0.20) were significant barriers to utilization of specialty care. Conclusions: Use of a sequential logit model to examine utilization of specialty care resulted in a detailed representation of utilization behaviors and patient characteristics that impact these behaviors at all stages within the health care system. After controlling for sequential movements within the health care system, the biggest barrier to utilizing specialty care is the inability to pay, while racial, gender, and educational disparities
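
    The nested structure can be sketched as stage-wise logistic regressions, each fit only on the subsample that passed the previous stage (used any care → perceived a need → used specialty care). The covariates, coefficients, and outcomes below are simulated placeholders, not the Commonwealth Fund survey data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 6772                                   # same size as the survey sample; data are simulated

    # Simulated binary covariates: ability to pay, poor health (illustrative only)
    X = np.column_stack([rng.integers(0, 2, n), rng.integers(0, 2, n)])

    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    bern = lambda p: (rng.random(len(p)) < p).astype(int)

    stage1 = bern(sigmoid(0.5 + 0.3 * X[:, 0] + 0.8 * X[:, 1]))            # used any health services
    stage2 = stage1 * bern(sigmoid(-0.5 + 0.2 * X[:, 0] + 1.0 * X[:, 1]))  # perceived need, given stage 1
    stage3 = stage2 * bern(sigmoid(-0.2 + 1.2 * X[:, 0]))                  # used specialty care, given stage 2

    # Sequential logit: each stage is a logit fit on those who passed the previous stage
    def fit_stage(mask, outcome):
        model = LogisticRegression().fit(X[mask], outcome[mask])
        return model.coef_[0]

    print("stage 1 coefficients:", fit_stage(np.ones(n, bool), stage1))
    print("stage 2 coefficients:", fit_stage(stage1 == 1, stage2))
    print("stage 3 coefficients:", fit_stage(stage2 == 1, stage3))
    ```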

  19. Studies of the wavelength dependence of non-sequential double ionization of xenon in strong fields

    International Nuclear Information System (INIS)

    Kaminski, P.; Wiehle, R.; Kamke, W.; Helm, H.; Witzele, B.

    2005-01-01

    Full text: The non-sequential double ionization of noble gases in strong fields is still a process which is not completely understood. The most challenging question is: what is the dominant physical process behind the knee structure in the yield of doubly charged ions which are produced in the focus of an ultrashort laser pulse as a function of intensity? Numerous studies can be explained with the so-called rescattering model, where an electron is freed by the strong laser field and then driven back to its parent ion due to the oscillation of the field. Through this backscattering process it is possible to kick out a second electron. However, in the low-intensity or multiphoton (MPI) region this model predicts that the first electron cannot gain enough energy in the oscillating electric field to further ionize or excite the ion. We present experimental results for xenon in the MPI region which show a significant contribution of doubly charged ions. A Ti:sapphire laser system (800 nm, 100 fs) is used to ionize the atoms. The coincident detection of the momentum distribution of the photoelectrons with an imaging spectrometer and the time of flight spectrum of the ions allows a detailed view into the ionization process. For the first time we also show a systematic study of the wavelength dependence (780-830 nm and 1180-1550 nm) of the non-sequential double ionization. The ratio Xe²⁺/Xe⁺ shows a surprising oscillatory behavior with varying wavelength. Ref. 1 (author)

  20. Fast and accurate non-sequential protein structure alignment using a new asymmetric linear sum assignment heuristic.

    Science.gov (United States)

    Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi

    2016-02-01

    The three-dimensional tertiary structure of a protein at near-atomic resolution provides insight into its function and evolution. As protein structure determines its function, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-square distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version is freely available at: http://sparks-lab.org yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given. We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
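
    For background, a minimal sketch of the classical Gibbs sampler that the sequential Gibbs extension builds on: sampling a zero-mean bivariate Gaussian by alternately drawing each coordinate from its full conditional. The target distribution and correlation are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
        """Classical Gibbs sampler for a zero-mean bivariate Gaussian with correlation rho:
        alternately draw x1 | x2 and x2 | x1 from their (Gaussian) full conditionals."""
        rng = np.random.default_rng(seed)
        x1, x2 = 0.0, 0.0
        cond_sd = np.sqrt(1.0 - rho ** 2)
        samples = np.empty((n_samples, 2))
        for i in range(n_samples):
            x1 = rng.normal(rho * x2, cond_sd)   # x1 | x2 ~ N(rho * x2, 1 - rho^2)
            x2 = rng.normal(rho * x1, cond_sd)   # x2 | x1 ~ N(rho * x1, 1 - rho^2)
            samples[i] = x1, x2
        return samples

    s = gibbs_bivariate_normal(rho=0.8)
    print("sample correlation:", np.corrcoef(s[:, 0], s[:, 1])[0, 1])  # close to 0.8
    ```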

  2. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  3. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
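
    The sequential baseline that this parallelization targets can be sketched as a 1-D unconditional sequential Gaussian simulation: grid nodes are visited along a random path and each value is drawn from its simple-kriging distribution given the nodes simulated so far. The covariance model, grid size, and neighbour limit below are illustrative assumptions; the path re-arrangement and parallel stages of the paper are not shown.

```python
import numpy as np

def sgs_1d(n=50, corr_len=10.0, sill=1.0, max_neighbors=8, seed=0):
    """Unconditional sequential Gaussian simulation on a 1-D grid (sketch)."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)   # exponential covariance
    values = np.full(n, np.nan)
    path = rng.permutation(n)                               # random visiting path
    for node in path:
        known = np.where(~np.isnan(values))[0]
        if known.size == 0:
            mean, var = 0.0, sill
        else:
            # keep only the closest previously simulated nodes as conditioning data
            known = known[np.argsort(np.abs(known - node))][:max_neighbors]
            C = cov(known[:, None] - known[None, :])
            c = cov(known - node)
            w = np.linalg.solve(C, c)                        # simple kriging weights
            mean = w @ values[known]
            var = max(sill - w @ c, 1e-12)
        values[node] = rng.normal(mean, np.sqrt(var))        # draw and freeze the node
    return values

print(sgs_1d()[:10])
```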

  4. How quantum are non-negative wavefunctions?

    International Nuclear Information System (INIS)

    Hastings, M. B.

    2016-01-01

    We consider wavefunctions which are non-negative in some tensor product basis. We study what possible teleportation can occur in such wavefunctions, giving a complete answer in some cases (when one system is a qubit) and partial answers elsewhere. We use this to show that a one-dimensional wavefunction which is non-negative and has zero correlation length can be written in a “coherent Gibbs state” form, as explained later. We conjecture that such holds in higher dimensions. Additionally, some results are provided on possible teleportation in general wavefunctions, explaining how Schmidt coefficients before measurement limit the possible Schmidt coefficients after measurement, and on the absence of a “generalized area law” [D. Aharonov et al., in Proceedings of Foundations of Computer Science (FOCS) (IEEE, 2014), p. 246; e-print arXiv.org:1410.0951] even for Hamiltonians with no sign problem. One of the motivations for this work is an attempt to prove a conjecture about ground state wavefunctions which have an “intrinsic” sign problem that cannot be removed by any quantum circuit. We show a weaker version of this, showing that the sign problem is intrinsic for commuting Hamiltonians in the same phase as the double semion model under the technical assumption that TQO-2 holds [S. Bravyi et al., J. Math. Phys. 51, 093512 (2010)].

  5. How quantum are non-negative wavefunctions?

    Energy Technology Data Exchange (ETDEWEB)

    Hastings, M. B. [Station Q, Microsoft Research, Santa Barbara, California 93106-6105, USA and Quantum Architectures and Computation Group, Microsoft Research, Redmond, Washington 98052 (United States)

    2016-01-15

    We consider wavefunctions which are non-negative in some tensor product basis. We study what possible teleportation can occur in such wavefunctions, giving a complete answer in some cases (when one system is a qubit) and partial answers elsewhere. We use this to show that a one-dimensional wavefunction which is non-negative and has zero correlation length can be written in a “coherent Gibbs state” form, as explained later. We conjecture that such holds in higher dimensions. Additionally, some results are provided on possible teleportation in general wavefunctions, explaining how Schmidt coefficients before measurement limit the possible Schmidt coefficients after measurement, and on the absence of a “generalized area law” [D. Aharonov et al., in Proceedings of Foundations of Computer Science (FOCS) (IEEE, 2014), p. 246; e-print arXiv.org:1410.0951] even for Hamiltonians with no sign problem. One of the motivations for this work is an attempt to prove a conjecture about ground state wavefunctions which have an “intrinsic” sign problem that cannot be removed by any quantum circuit. We show a weaker version of this, showing that the sign problem is intrinsic for commuting Hamiltonians in the same phase as the double semion model under the technical assumption that TQO-2 holds [S. Bravyi et al., J. Math. Phys. 51, 093512 (2010)].

  6. Non-chiral, molecular model of negative Poisson ratio in two dimensions

    International Nuclear Information System (INIS)

    Wojciechowski, K W

    2003-01-01

    A two-dimensional model of tri-atomic molecules (in which 'atoms' are distributed on the vertices of equilateral triangles, and which are further referred to as cyclic trimers) is solved exactly in the static (zero-temperature) limit for nearest-neighbour site-site interactions. It is shown that the cyclic trimers form a mechanically stable and elastically isotropic non-chiral phase of negative Poisson ratio. The properties of the system are illustrated by three examples of atom-atom interaction potentials: (i) the purely repulsive (n-inverse-power) potential, (ii) the purely attractive (n-power) potential and (iii) the Lennard-Jones potential, which has both a repulsive and an attractive part. The analytic form of the dependence of the Poisson ratio on the interatomic potential is obtained. It is shown that the Poisson ratio depends, in a universal way, only on the trimer anisotropy parameter both (1) in the limit of n → ∞ for cases (i) and (ii), and (2) at zero external pressure for any potential with a doubly differentiable minimum, of which case (iii) is an example.

  7. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  8. A non-negative Wigner-type distribution

    International Nuclear Information System (INIS)

    Cartwright, N.D.

    1976-01-01

    The Wigner function, which is commonly used as a joint distribution for non-commuting observables, is shown to be non-negative in all quantum states when smoothed with a gaussian whose variances are greater than or equal to those of the minimum uncertainty wave packet. (Auth.)

  9. ALGORITMA PARALEL ODD EVEN TRANSPOSITION PADA MODEL JARINGAN NON-LINIER

    Directory of Open Access Journals (Sweden)

    Ernastuti .

    2012-05-01

    Full Text Available Odd-even transposition is a parallel algorithm developed from the sequential "bubble sort" algorithm and designed specifically for the linear (homogeneous) array network model. For n data elements, the time complexity of bubble sort is O(n²), whereas odd-even transposition running on n processors is O(n), an n-fold speedup over the sequential algorithm. The k-dimensional hypercube is a non-linear (non-homogeneous) network model consisting of n = 2^k processors, each of degree k. The Fibonacci cube and the extended Lucas cube are hypercube sub-network models with fewer than 2^k processors and maximum processor degree k. This paper shows how the odd-even transposition algorithm can also be run on the non-linear cluster network models hypercube, Fibonacci cube, and extended Lucas cube with time complexity O(n).
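
    For reference, a sequential emulation of the odd-even transposition scheme is sketched below. In the parallel formulations discussed above each processor holds one element and the alternating compare-and-swap phases run concurrently, so n phases give O(n) parallel time.

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort (sequential emulation of the parallel algorithm)."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # 0: even phase, 1: odd phase
        for i in range(start, n - 1, 2):  # each pair would be handled by one processor
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```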

  10. Models of sequential decision making in consumer lending

    OpenAIRE

    Kanshukan Rajaratnam; Peter A. Beling; George A. Overstreet

    2016-01-01

    Abstract In this paper, we introduce models of sequential decision making in consumer lending. From the definition of adverse selection in static lending models, we show that homogeneous borrowers take up offers at different instances of time when faced with a sequence of loan offers. We postulate that bounded rationality and diverse decision heuristics used by consumers drive the decisions they make about credit offers. Under that postulate, we show how observation of early decisions in a sequence...

  11. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating posterior parameter distributions with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capability and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (considering first parameter uncertainty only and then parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models within the SMC framework in future work.
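
    The skeleton that PEM-SMC builds on can be illustrated with a minimal tempered resample-move SMC sketch: particles are reweighted as the likelihood is gradually switched on, resampled, and refreshed with a Metropolis-Hastings move. The bimodal target, temperature schedule, and step sizes below are our own illustrative choices; the genetic and differential-evolution moves of the actual algorithm are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(x):
    # bimodal "likelihood": mixture of two narrow Gaussians (unnormalised)
    return np.logaddexp(-0.5 * ((x - 3) / 0.3) ** 2, -0.5 * ((x + 3) / 0.3) ** 2)

def log_prior(x):
    return -0.5 * (x / 5.0) ** 2          # broad Gaussian prior

N = 2000
betas = np.linspace(0.0, 1.0, 21)         # tempering schedule
x = rng.normal(0.0, 5.0, size=N)          # particles drawn from the prior

for b_prev, b in zip(betas[:-1], betas[1:]):
    logw = (b - b_prev) * log_like(x)                  # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]                  # multinomial resampling
    # one Metropolis-Hastings move per particle targeting prior * likelihood^b
    prop = x + 0.5 * rng.normal(size=N)
    log_acc = (log_prior(prop) + b * log_like(prop)) - (log_prior(x) + b * log_like(x))
    accept = np.log(rng.uniform(size=N)) < log_acc
    x = np.where(accept, prop, x)

print("fraction of particles in each mode:", np.mean(x > 0), np.mean(x < 0))
```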

  12. About a sequential method for non destructive testing of structures by mechanical vibrations

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2001-01-01

    The presence and growth of cracks, voids or fields of pores under applied forces or environmental actions can produce a meaningful lowering of the proper frequencies of the normal modes of mechanical vibration in structures. A quite general expression for the square of a mode's proper frequency, as a functional of the displacement field, the density field and the elastic moduli fields, is used as a starting point. The effect of defects on frequency is modeled as equivalent changes in the density and elastic moduli fields, introducing the concept of a region of influence of each defect. An approximate expression is obtained which relates the relative lowering in the square of a mode's proper frequency to the position, size, shape and orientation of defects in the mode displacement field. Some simple examples of structural elements with cracks or fields of pores are considered. The connection with linear elastic fracture mechanics is briefly exemplified. A sequential method is proposed for non-destructive testing of structures using mechanical vibrations combined with properly chosen local non-destructive testing methods.

  13. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems.

    Science.gov (United States)

    Harari, Colin M; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T; Lubner, Meghan G; Hinshaw, J Louis; Ziemlewicz, Timothy; Brace, Christopher L

    2016-01-01

    To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. On average, ablations created by using the simultaneous power delivery technique were larger than those created with the sequential technique. Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas. Overall, simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015.

  14. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, for which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  15. Wind Noise Reduction using Non-negative Sparse Coding

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Larsen, Jan; Hsiao, Fu-Tien

    2007-01-01

    We introduce a new speaker-independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.

  16. Sequential Path Model for Grain Yield in Soybean

    Directory of Open Access Journals (Sweden)

    Mohammad SEDGHI

    2010-09-01

    Full Text Available This study was performed to determine some physiological traits that affect soybean grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was done in order to evaluate associations among grain yield and related traits by ordering the various variables in first-, second- and third-order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44 and 47% of the variation in grain yield based on the 2004, 2005, and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length and intercepted radiation at the middle layer of the canopy were placed in the third-order path. From the results it was concluded that number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.

  17. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
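
    The Youden Index step referred to above amounts to picking the threshold that maximizes J = sensitivity + specificity - 1 along the ROC curve; a minimal sketch follows. Function and variable names are ours, and no attempt is made to reproduce the decision-tree training or the density-tracking parts of the algorithm.

```python
import numpy as np

def youden_index(scores, labels, thresholds=None):
    """Return (best_threshold, best_J) where J = sensitivity + specificity - 1."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    if thresholds is None:
        thresholds = np.unique(scores)
    best_t, best_j = None, -np.inf
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# toy classifier scores with 0/1 ground truth
print(youden_index([0.1, 0.4, 0.35, 0.8, 0.7], [0, 0, 1, 1, 1]))
```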

  18. mTOR in breast cancer: differential expression in triple-negative and non-triple-negative tumors.

    LENUS (Irish Health Repository)

    Walsh, S

    2012-04-01

    Triple-negative breast cancer (TNBC) is defined by the absence of estrogen receptors (ER), progesterone receptors (PR) and overexpression of HER2. Targeted therapy is currently unavailable for this subgroup of breast cancer patients. mTOR controls cancer cell growth, survival and invasion and is thus a potential target for the treatment of patients with TNBC. Using immunohistochemistry, mTOR and p-mTOR were measured in 89 TNBCs and 99 non-TNBCs. While mTOR expression was confined to the tumor cell cytoplasm, p-mTOR staining was located in the nucleus, the perinuclear area and the cytoplasm. Potentially important was our finding that nuclear p-mTOR was found more frequently in triple-negative than in non-triple-negative cancers (p < 0.001). These results suggest that mTOR may play a more important role in the progression of TNBC compared with non-TNBC. Based on these findings, we conclude that mTOR may be a new target for the treatment of triple-negative breast cancer.

  19. Second-line rescue triple therapy with levofloxacin after failure of non-bismuth quadruple "sequential" or "concomitant" treatment to eradicate H. pylori infection.

    Science.gov (United States)

    Gisbert, Javier P; Molina-Infante, Javier; Marin, Alicia C; Vinagre, Gemma; Barrio, Jesus; McNicholl, Adrian Gerald

    2013-06-01

    Non-bismuth quadruple "sequential" and "concomitant" regimens, including a proton pump inhibitor (PPI), amoxicillin, clarithromycin and a nitroimidazole, are increasingly used as first-line treatments for Helicobacter pylori infection. Eradication with rescue regimens may be challenging after failure of key antibiotics such as clarithromycin and nitroimidazoles. The aim was to evaluate the efficacy and tolerability of a second-line levofloxacin-containing triple regimen (PPI-amoxicillin-levofloxacin) in the eradication of H. pylori after failure of a non-bismuth quadruple-containing treatment. This was a prospective multicenter study of patients in whom a non-bismuth quadruple regimen, administered either sequentially (PPI + amoxicillin for 5 days followed by PPI + clarithromycin + metronidazole for 5 more days) or concomitantly (PPI + amoxicillin + clarithromycin + metronidazole for 10 days), had previously failed. The rescue regimen consisted of levofloxacin (500 mg b.i.d.), amoxicillin (1 g b.i.d.) and a PPI (standard dose b.i.d.) for 10 days. Eradication was confirmed with the (13)C-urea breath test 4-8 weeks after therapy. Compliance was determined through questioning and recovery of empty medication envelopes, and the incidence of adverse effects was evaluated by means of a questionnaire. 100 consecutive patients were included (mean age 50 years, 62% females, 12% peptic ulcer and 88% dyspepsia): 37 after "sequential" and 63 after "concomitant" treatment failure. All patients took all medications correctly. Overall, per-protocol and intention-to-treat H. pylori eradication rates were 75.5% (95% CI 66-85%) and 74% (65-83%). Intention-to-treat cure rates after "sequential" and "concomitant" failure were 74.4% and 71.4%, respectively. Adverse effects were reported in six (6%) patients, all of them mild. Ten-day levofloxacin-containing triple therapy constitutes an encouraging second-line strategy in patients with previous non-bismuth quadruple "sequential" or "concomitant" treatment failure.

  20. Employees’ Perceptions of Corporate Social Responsibility and Job Performance: A Sequential Mediation Model

    Directory of Open Access Journals (Sweden)

    Inyong Shin

    2016-05-01

    Full Text Available In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees' perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance, sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees' perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees' perceptions of CSR. Limitations of this study and future research directions are discussed.

  1. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors in STR second generation sequencing, SGS, analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated, negative binomial model. The model is a two component model...
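
    A minimal version of the named distribution family can be written down directly: a negative binomial truncated at zero and then inflated at one. The parameterization below follows scipy's nbinom, and the parameter names are ours, not the paper's.

```python
from scipy.stats import nbinom

def oiztnb_pmf(k, r, p, pi_one):
    """PMF of a one-inflated, zero-truncated negative binomial at integer k >= 1.

    r, p parameterise the underlying NB (as in scipy.stats.nbinom); pi_one is the
    extra probability mass placed on the value 1.
    """
    if k < 1:
        return 0.0
    ztnb = nbinom.pmf(k, r, p) / (1.0 - nbinom.pmf(0, r, p))    # zero truncation
    return pi_one * (k == 1) + (1.0 - pi_one) * ztnb             # one inflation

# The pmf sums to 1 over k = 1, 2, ...
print(sum(oiztnb_pmf(k, r=2.0, p=0.4, pi_one=0.3) for k in range(1, 200)))
```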

  2. Insurance-markets Equilibrium with Sequential Non-convex Straight-time and Over-time Labor Supply

    OpenAIRE

    Vasilev, Aleksandar

    2016-01-01

    This note describes the lottery- and insurance-market equilibrium in an economy with non-convex straight-time and overtime employment. In contrast to Hansen and Sargent (1988), the overtime decision is a sequential one. This requires two separate insurance markets to operate, one for straight-time work and one for overtime. In addition, given that the labor choice for regular and overtime hours is made in succession, the insurance market for overtime needs to open once the insurance market ...

  3. Sequential Logic Model Deciphers Dynamic Transcriptional Control of Gene Expressions

    Science.gov (United States)

    Yeo, Zhen Xuan; Wong, Sum Thai; Arjunan, Satya Nanda Vel; Piras, Vincent; Tomita, Masaru; Selvarajoo, Kumar; Giuliani, Alessandro; Tsuchiya, Masa

    2007-01-01

    Background Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factors activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. Methodology Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. Principal Findings SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during transition from specification to differentiation in wild type endo16 expression profile, SLM reveals three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest detailed mechanism of R switch activity where indirect dependency occurs in between UI activity and R switch during specification to differentiation stage. Conclusions/Significance The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet providing rich biological

  4. Sequential logic model deciphers dynamic transcriptional control of gene expressions.

    Directory of Open Access Journals (Sweden)

    Zhen Xuan Yeo

    Full Text Available BACKGROUND: Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factor activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. METHODOLOGY: Based on the concept of the finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expression. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs and the dependency and cooperativity, such as synergy effects, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. PRINCIPAL FINDINGS: SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx, is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during the transition from specification to differentiation in the wild type endo16 expression profile, SLM reveals that three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and that additional activities of binding sites are required. Further analyses suggest a detailed mechanism of R switch activity in which indirect dependency occurs between UI activity and the R switch during the specification to differentiation stage. CONCLUSIONS/SIGNIFICANCE: The sequential logic formalism allows for a simplification of regulation network dynamics, going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet
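
    To make the finite-state-machine viewpoint concrete, here is a toy sequential-logic sketch in which the next expression state is a discrete function of the current state and of binary binding-site activities (UI, R, Otx). The transition rules and state labels are invented for illustration only; they are not the regulatory logic identified for endo16.

```python
# Toy sequential logic model: expression state at time t+1 is a discrete function of
# the current state and the binary activities of three binding sites.
# NOTE: these transition rules are invented for illustration, not taken from the paper.

def next_state(state, ui, r, otx):
    if not ui and not otx:
        return "OFF"
    if ui and not r:
        return "HIGH" if otx else "LOW"   # in this toy rule set, R acts as a switch
    return state                           # otherwise hold the current state

state = "OFF"
inputs = [(1, 0, 0), (1, 0, 1), (1, 1, 1), (0, 0, 0)]   # (UI, R, Otx) over time
for ui, r, otx in inputs:
    state = next_state(state, ui, r, otx)
    print((ui, r, otx), "->", state)
```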

  5. Patient specific dynamic geometric models from sequential volumetric time series image data.

    Science.gov (United States)

    Cameron, B M; Robb, R A

    2004-01-01

    Generating patient specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the metasurface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface and allows the computation of the time point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.

  6. The impact of comorbid body dysmorphic disorder on the response to sequential pharmacological trials for obsessive-compulsive disorder.

    Science.gov (United States)

    Diniz, Juliana B; Costa, Daniel Lc; Cassab, Raony Cc; Pereira, Carlos Ab; Miguel, Euripedes C; Shavitt, Roseli G

    2014-06-01

    Our aim was to investigate the impact of comorbid body dysmorphic disorder (BDD) on the response to sequential pharmacological trials in adult obsessive-compulsive disorder (OCD) patients. The sequential trial initially involved fluoxetine monotherapy followed by one of three randomized, add-on strategies: placebo, clomipramine or quetiapine. We included 138 patients in the initial phase of fluoxetine, up to 80 mg or the maximum tolerated dosage, for 12 weeks. We invited 70 non-responders to participate in the add-on trial; as 54 accepted, we allocated 18 to each treatment group and followed them for an additional 12 weeks. To evaluate the combined effects of sex, age, age at onset, initial severity, type of augmentation and BDD on the response to sequential treatments, we constructed a model using generalized estimating equations (GEE). Of the 39 patients who completed the study (OCD-BDD, n = 13; OCD-non-BDD, n = 26), the OCD-BDD patients were less likely to be classified as responders than the OCD-non-BDD patients (Pearson Chi-Square = 4.4; p = 0.036). In the GEE model, BDD was not significantly associated with a worse response to sequential treatments (z-robust = 1.77; p = 0.07). The predictive potential of BDD regarding sequential treatment strategies for OCD did not survive when the analyses were controlled for other clinical characteristics. © The Author(s) 2013.

  7. Non-Cross Resistant Sequential Single Agent Chemotherapy in First-Line Advanced Non-Small Cell Lung Cancer Patients: Results of a Phase II Study

    Directory of Open Access Journals (Sweden)

    V. Surmont

    2009-01-01

    Full Text Available Background. Sequential chemotherapy can maintain dose intensity and preclude cumulative toxicity by increasing drug diversity. Purpose. To investigate the toxicity and efficacy of the sequential regimen of gemcitabine followed by paclitaxel in first-line advanced stage non-small cell lung cancer (NSCLC) patients with good performance status (PS). Patients and methods. Gemcitabine 1250 mg/m2 was administered on days 1 and 8 of courses 1 and 2; paclitaxel 150 mg/m2 on days 1 and 8 of courses 3 and 4. The primary endpoint was response rate (RR); secondary endpoints were toxicity and time to progression (TTP). Results. Of the 21 patients (median age 56, range 38–80 years; 62% males, 38% females), 10% (2/21) had stage IIIB and 90% (19/21) stage IV disease; 15% were PS 0 and 85% PS 1. 20% of patients had a partial response, 30% stable disease, and 50% progressive disease. Median TTP was 12 weeks (range 6–52 weeks), median overall survival (OS) was 8 months (range 1–27 months), and 1-year survival was 33%. One patient had grade 3 hematological toxicity, and 2 patients had grade 3 peripheral neuropathy. Conclusions. Sequential administration of gemcitabine followed by paclitaxel in first-line treatment of advanced NSCLC had a favourable toxicity profile and a median TTP and OS comparable with other sequential trials, and might therefore be a treatment option for NSCLC patients with high ERCC1 expression.

  8. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  9. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety...

  10. Multi-view clustering via multi-manifold regularized non-negative matrix factorization.

    Science.gov (United States)

    Zong, Linlin; Zhang, Xianchao; Zhao, Long; Yu, Hong; Zhao, Qianli

    2017-04-01

    Non-negative matrix factorization based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, non-negative matrix factorization fails to preserve the locally geometrical structure of the data space. In this paper, we propose a multi-manifold regularized non-negative matrix factorization framework (MMNMF) which can preserve the locally geometrical structure of the manifolds for multi-view clustering. MMNMF incorporates consensus manifold and consensus coefficient matrix with multi-manifold regularization to preserve the locally geometrical structure of the multi-view data space. We use two methods to construct the consensus manifold and two methods to find the consensus coefficient matrix, which leads to four instances of the framework. Experimental results show that the proposed algorithms outperform existing non-negative matrix factorization based algorithms for multi-view clustering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Mathematical Model for the Sequential Action of Radiation and Heat on Yeast Cells

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Lee, Yun Jong; Kim, Su Hyoun; Nili, Mohammad; Zhurakovskaya, Galina P.; Petin, Vladislav G.

    2009-01-01

    It is well known that the synergistic interaction of hyperthermia with ionizing radiation and other agents is widely used in hyperthermic oncology. Interaction between two agents may be considered synergistic or antagonistic when the effect produced is greater or smaller than the sum of the two single responses. It has long been considered that the mechanism of synergistic interaction of hyperthermia and ionizing radiation may be brought about by an inhibition of the repair of sublethal and potentially lethal damage at the cellular level. The inhibition of the recovery process after combined treatments cannot be considered as a reason for the synergy, but rather would be the expected and predicted consequence of the production of irreversible damage. On this basis, a simple mathematical model of the synergistic interaction of two agents acting simultaneously has been proposed. However, the model has not been applied to predict the degree of interaction of heat and ionizing radiation after their sequential action. Extension of the model to the sequential treatment of heat and ionizing radiation is of interest for theoretical and practical reasons. Thus, the purpose of the present work is to suggest the simplest mathematical model that would be able to account for the results obtained and for currently available experimental information on the sequential action of radiation and heat.

  12. Concurrent versus Sequential Chemoradiotherapy with Cisplatin and Vinorelbine in Locally Advanced Non-Small Cell Lung Cancer: A Randomized Study

    Czech Academy of Sciences Publication Activity Database

    Zatloukal, P.; Petruželka, L.; Zemanová, M.; Havel, L.; Janků, F.; Judas, L.; Kubík, A.; Křepela, E.; Fiala, P.; Pecen, Ladislav

    2004-01-01

    Roč. 46, - (2004), s. 87-98 ISSN 0169-5002 Institutional research plan: CEZ:AV0Z1030915 Keywords : concurrent chemoradiotherapy * sequential chemoradiotherapy * locally advanced non-small cell lung cancer * cisplatin * vinorelbine Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.914, year: 2004

  13. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  14. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together, and the maintenance of multimodality events in sequential order, are the key components of any sequential memory—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified network, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
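
    The type of dynamics referred to above can be reproduced with a few lines of numerical integration of generalized Lotka-Volterra equations, dx_i/dt = x_i (g_i - sum_j rho_ij x_j). The connection matrix, growth rates, and step sizes below are generic illustrative values (an asymmetric inhibition cycle that produces sequential switching), not the published model's parameters.

```python
import numpy as np

def glv_trajectory(rho, growth, x0, dt=0.01, steps=20000):
    """Euler integration of generalized Lotka-Volterra dynamics (sketch)."""
    x = np.array(x0, dtype=float)
    out = [x.copy()]
    for _ in range(steps):
        x = x + dt * x * (growth - rho @ x)
        x = np.clip(x, 0.0, None)          # keep activities non-negative
        out.append(x.copy())
    return np.array(out)

# Three competing modes with asymmetric inhibition produce sequential switching.
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
growth = np.ones(3)
traj = glv_trajectory(rho, growth, x0=[0.3, 0.1, 0.01])
print(traj[-1])
```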

  15. Specific and non-specific match effects in negative priming.

    Science.gov (United States)

    Labossière, Danielle I; Leboe-McGowan, Jason P

    2018-01-01

    The negative priming effect occurs when withholding a response to a stimulus impairs generation of subsequent responding to a same or a related stimulus. Our goal was to use the negative priming procedure to obtain insights about the memory representations generated by ignoring vs. attending/responding to a prime stimulus. Across three experiments we observed that ignoring a prime stimulus tends to generate higher identity-independent, non-specific repetition effects, owing to an overlap in the coarse perceptual form of a prime distractor and a probe target. By contrast, attended repetition effects generate predominantly identity-specific sources of facilitation. We use these findings to advocate for using laboratory phenomena to illustrate general principles that can be of practical use to non-specialists. In the case of the negative priming procedure, we propose that the procedure provides a useful means for investigating attention/memory interactions, even if the specific cause (or causes) of negative priming effects remain unresolved. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  17. When good is stickier than bad: Understanding gain/loss asymmetries in sequential framing effects.

    Science.gov (United States)

    Sparks, Jehan; Ledgerwood, Alison

    2017-08-01

    Considerable research has demonstrated the power of the current positive or negative frame to shape people's current judgments. But humans must often learn about positive and negative information as they encounter that information sequentially over time. It is therefore crucial to consider the potential importance of sequencing when developing an understanding of how humans think about valenced information. Indeed, recent work looking at sequentially encountered frames suggests that some frames can linger outside the context in which they are first encountered, sticking in the mind so that subsequent frames have a muted effect. The present research builds a comprehensive account of sequential framing effects in both the loss and the gain domains. After seeing information about a potential gain or loss framed in positive terms or negative terms, participants saw the same issue reframed in the opposing way. Across 5 studies and 1566 participants, we find accumulating evidence for the notion that in the gain domain, positive frames are stickier than negative frames for novel but not familiar scenarios, whereas in the loss domain, negative frames are always stickier than positive frames. Integrating regulatory focus theory with the literatures on negativity dominance and positivity offset, we develop a new and comprehensive account of sequential framing effects that emphasizes the adaptive value of positivity and negativity biases in specific contexts. Our findings highlight the fact that research conducted solely in the loss domain risks painting an incomplete and oversimplified picture of human bias and suggest new directions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. WE-G-207-02: Full Sequential Projection Onto Convex Sets (FS-POCS) for X-Ray CT Reconstruction

    International Nuclear Information System (INIS)

    Liu, L; Han, Y; Jin, M

    2015-01-01

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with much reduced projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set – the L2 norm of the difference between the observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set – the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS); it is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: For the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images as a function of the number of iterations is used as the convergence measure. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS
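
    The sequential-projection idea can be illustrated on a toy problem by alternating a Kaczmarz/ART sweep (projections onto the individual data-constraint hyperplanes) with projection onto the non-negativity orthant. The TV/ROF projection used by FS-POCS is omitted here, and the matrix sizes and iteration counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined toy system A x = b with a non-negative ground truth.
n, m = 60, 30
A = rng.normal(size=(m, n))
x_true = np.clip(rng.normal(size=n), 0, None)
b = A @ x_true

x = np.zeros(n)
for sweep in range(200):
    # (1) data-fidelity step: Kaczmarz/ART sweep, i.e. projections onto each
    #     row hyperplane {x : a_i . x = b_i}
    for i in range(m):
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    # (2) non-negativity step: projection onto the orthant {x >= 0}
    x = np.clip(x, 0.0, None)
    # FS-POCS additionally projects onto a total-variation ball by solving an ROF
    # problem at this point; that step is omitted in this sketch.

print("residual:", np.linalg.norm(A @ x - b))
```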

  19. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  20. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)

  1. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin; Wong, Kachun; Rockwood, Alyn; Zhang, Xiangliang; Jiang, Jinling; Keyes, David E.

    2012-01-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc
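
    As background for the constrained variants studied above, the standard unconstrained multiplicative updates (Lee-Seung style, Frobenius-norm objective) can be sketched as follows; matrix sizes, rank, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def nmf_multiplicative(V, k, iters=500, eps=1e-9, seed=0):
    """Multiplicative updates minimising ||V - W H||_F^2 with W, H >= 0 (sketch)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; stays non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; stays non-negative
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 15)))
W, H = nmf_multiplicative(V, k=4)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```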

  2. Algorithm for Non-proportional Loading in Sequentially Linear Analysis

    NARCIS (Netherlands)

    Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

    2016-01-01

    Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

  3. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  4. Testing multi-alternative decision models with non-stationary evidence.

    Science.gov (United States)

    Tsetsos, Konstantinos; Usher, Marius; McClelland, James L

    2011-01-01

    Recent research has investigated the process of integrating perceptual evidence toward a decision, converging on a number of sequential sampling choice models, such as variants of race and diffusion models and the non-linear leaky competing accumulator (LCA) model. Here we study extensions of these models to multi-alternative choice, considering how well they can account for data from a psychophysical experiment in which the evidence supporting each of the alternatives changes dynamically during the trial, in a way that creates temporal correlations. We find that participants exhibit a tendency to choose an alternative whose evidence profile is temporally anti-correlated with (or dissimilar from) that of other alternatives. This advantage of the anti-correlated alternative is well accounted for in the LCA, and provides constraints that challenge several other models of multi-alternative choice.
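
    A minimal version of the non-linear leaky competing accumulator mentioned above is sketched below; it accepts a time-varying evidence matrix so that anti-correlated evidence profiles like those in the experiment can be fed in directly. Parameter values and names are generic placeholders, not the values fitted in the study.

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1, dt=0.1,
              threshold=1.0, max_steps=2000, seed=0):
    """One trial of a leaky competing accumulator for multi-alternative choice.

    `inputs` is a (steps x alternatives) array of momentary evidence.
    Returns (choice index, step of threshold crossing or max_steps).
    """
    rng = np.random.default_rng(seed)
    n_alt = inputs.shape[1]
    x = np.zeros(n_alt)
    for t in range(min(max_steps, inputs.shape[0])):
        others = x.sum() - x                       # lateral inhibition from competitors
        dx = dt * (inputs[t] - leak * x - inhibition * others)
        x = np.clip(x + dx + noise * np.sqrt(dt) * rng.normal(size=n_alt), 0, None)
        if x.max() >= threshold:
            return int(x.argmax()), t
    return int(x.argmax()), max_steps

steps = 2000
evidence = np.column_stack([np.full(steps, 0.4),
                            0.4 + 0.2 * np.sin(np.linspace(0, 20, steps)),
                            0.4 - 0.2 * np.sin(np.linspace(0, 20, steps))])
print(lca_trial(evidence))
```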

  5. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
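
    The event-based part of lag sequential analysis starts from a table of lag-1 transition counts compared with the counts expected under independence; a minimal sketch is given below with invented gaze categories. The study's actual coding scheme and significance tests are not reproduced.

```python
from collections import Counter

def lag1_transition_table(events):
    """Count lag-1 transitions in a categorical event sequence (event-based lag).

    Returns {(a, b): (observed count, count expected under independence)}.
    """
    pairs = Counter(zip(events[:-1], events[1:]))
    totals = Counter(events[:-1])       # how often each behaviour occurs as "given"
    freq = Counter(events[1:])          # how often each behaviour occurs as "target"
    n = len(events) - 1
    return {(a, b): (obs, round(totals[a] * freq[b] / n, 2))
            for (a, b), obs in pairs.items()}

# Toy gaze stream: C = clinician looks at patient, P = patient looks at clinician, O = other
gaze = list("CCPOCPPCOCPPCCPOCPP")
for transition, (obs, exp) in sorted(lag1_transition_table(gaze).items()):
    print(transition, "observed:", obs, "expected:", exp)
```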

  6. Lead isotopes combined with a sequential extraction procedure for source apportionment in the dry deposition of Asian dust and non-Asian dust

    International Nuclear Information System (INIS)

    Lee, Pyeong-Koo; Yu, Soonyoung

    2016-01-01

    Lead isotopic compositions were determined in leachates that were generated using sequential extractions of dry deposition samples of Asian dust (AD) and non-Asian dust (NAD) and Chinese desert soils, and used to apportion Pb sources. Results showed significant differences in 206Pb/207Pb and 206Pb/204Pb isotopic compositions in non-residual fractions between the dry deposition samples and the Chinese desert soils, while 206Pb/207Pb and 206Pb/204Pb isotopic compositions in the residual fraction of the dry deposition of AD and NAD were similar to the mean 206Pb/207Pb and 206Pb/204Pb in the residual fraction of the Alashan Plateau soil. These results indicate that the geogenic materials of the dry deposition of AD and NAD were largely influenced by the Alashan Plateau soil, while the secondary sources of the dry deposition were different from those of the Chinese desert soils. In particular, the lead isotopic compositions in non-residual fractions of the dry deposition were homogeneous, which implies that the four non-residual fractions (F1 to F4) shared the same primary anthropogenic origin. 206Pb/207Pb values and the predominant wind directions in the study area suggested that airborne particulates from heavily industrialized Chinese cities were one of the main Pb sources. Source apportionment calculations showed that the average proportion of anthropogenic Pb in the dry deposition of AD and NAD was 87% and 95%, respectively, in total Pb extraction, 92% and 97% in non-residual fractions, and 15% and 49% in the residual fraction. Approximately 81% and 80% of the anthropogenic Pb was contributed by coal combustion in China in the dry deposition of AD and NAD, respectively, while the remainder was derived from industrial Pb contamination. These results suggest that sequential extractions combined with Pb isotope analysis are a useful tool for discriminating anthropogenic and geogenic origins in highly contaminated AD and NAD.

  7. Development and sensitivity analysis of a fully-kinetic model of sequential reductive dechlorination in subsurface

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen

    2010-01-01

    Chlorinated hydrocarbons originating from point sources are amongst the most prevalent contaminants of ground water and often represent a serious threat to groundwater-based drinking water resources. Natural attenuation of contaminant plumes can play a major role in contaminated site management, and natural degradation of chlorinated solvents frequently occurs in the subsurface through sequential reductive dechlorination. However, the occurrence and performance of natural sequential reductive dechlorination strongly depend on environmental factors such as redox conditions, presence of fermenting organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvent degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data...

  8. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  9. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
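
    The abstract states only that the equal-power equations are presented; a commonly used closed form (an assumption here, not reproduced from the paper) is that the k-th of N lossless splitters in a chain should reflect R_k = 1/(N - k + 1) of the power reaching it, so that every tap carries the same power. A minimal sketch under that assumption:

```python
# Illustrative sketch, assuming the standard equal-power rule R_k = 1/(N - k + 1)
# for a chain of N lossless beam splitters; not code from the paper.

def equal_power_reflectivities(n_splitters):
    return [1.0 / (n_splitters - k + 1) for k in range(1, n_splitters + 1)]

if __name__ == "__main__":
    N = 5
    refl = equal_power_reflectivities(N)   # [0.2, 0.25, 0.333..., 0.5, 1.0]
    power, taps = 1.0, []
    for r in refl:
        taps.append(power * r)             # power reflected out at this splitter
        power *= 1.0 - r                   # power transmitted onward
    print(refl)
    print(taps)                            # every tap carries 1/N of the input power
```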

  10. [Negative symptoms in patients with non schizophrenic psychiatric disorders].

    Science.gov (United States)

    Donnoli, Vicente F; Moroni, María V; Cohen, Diego; Chisari Rocha, Liliana; Marleta, María; Sepich Dalmeida, Tomás; Bonani, Matías; D'Alessio, Luciana

    2011-01-01

    The presence of negative symptoms (NS) in clinical entities other than schizophrenia was considered in this work, using a dimensional approach to negative symptoms. The objective was to determine the presence and distribution of NS in a population of patients with non-schizophrenic psychiatric disorders attending ambulatory treatment at public hospitals. Patients meeting DSM-IV diagnostic criteria for different disorders (affective, eating, substance abuse, anxiety and personality disorders) and patients meeting ILAE diagnostic criteria for temporal lobe epilepsy were included. All patients were assessed with the PANSS negative symptom subscale for schizophrenia. Student's t-test was used to determine differences in the frequency of NS among psychiatric disorders. 106 patients were included: 60 women and 46 men, aged 38 +/- 12.1 years. Ninety percent of patients had a low NS score (mean 11.6, range 9.38-14.29). Emotional withdrawal and passive social withdrawal were more frequent in eating disorders than in affective disorders or epilepsy. Emotional withdrawal was more frequent in substance use disorders than in epilepsy. According to this study, negative symptoms are present at low to moderate intensity in non-schizophrenic psychiatric entities and in temporal lobe epilepsy.

  11. Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations.

    Science.gov (United States)

    Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K; Rockstuhl, Thomas; Thingujam, Nutankumar S

    2013-01-01

    Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. By way of summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.

  12. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  13. A novel non-sequential hydrogen-pulsed deep reactive ion etching of silicon

    International Nuclear Information System (INIS)

    Gharooni, M; Mohajerzadeh, A; Sandoughsaz, A; Khanof, S; Mohajerzadeh, S; Asl-Soleimani, E

    2013-01-01

    A non-sequential pulsed-mode deep reactive ion etching of silicon is reported that employs continuous etching and passivation based on SF6 and H2 gases. The passivation layer, as an important step for deep vertical etching of silicon, is feasible by hydrogen pulses in proper time-slots. By adjusting the etching parameters such as plasma power, H2 and SF6 flows and hydrogen pulse timing, the process can be controlled for minimum underetch and high etch-rate at the same time. High-aspect-ratio features can be realized with low-density plasma power and by controlling the reaction chemistry. The so-called reactive ion etching lag has been minimized by operating the reactor at higher pressures. X-ray photoelectron spectroscopy and scanning electron microscopy have been used to study the formation of the passivation layer and the passivation mechanism. (paper)

  14. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs come at their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica database of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They might potentially be able to reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come at their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).

  15. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that the sequential regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence where infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...) was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Modeling non-isothermal multiphase multi-species reactive chemical transport in geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Tianfu Xu; Gerard, F.; Pruess, K.; Brimhall, G.

    1997-07-01

    The assessment of mineral deposits, the analysis of hydrothermal convection systems, the performance of radioactive, urban and industrial waste disposal, the study of groundwater pollution, and the understanding of natural groundwater quality patterns all require modeling tools that can consider both the transport of dissolved species as well as their interactions with solid (or other) phases in geologic media and engineered barriers. Here, a general multi-species reactive transport formulation has been developed, which is applicable to homogeneous and/or heterogeneous reactions that can proceed either subject to local equilibrium conditions or kinetic rates under non-isothermal multiphase flow conditions. Two numerical solution methods, the direct substitution approach (DSA) and sequential iteration approach (SIA) for solving the coupled complex subsurface thermo-physical-chemical processes, are described. An efficient sequential iteration approach, which solves transport of solutes and chemical reactions sequentially and iteratively, is proposed for the current reactive chemical transport computer code development. The coupled flow (water, vapor, air and heat) and solute transport equations are also solved sequentially. The existing multiphase flow code TOUGH2 and geochemical code EQ3/6 are used to implement this SIA. The flow chart of the coupled code TOUGH2-EQ3/6, required modifications of the existing codes and additional subroutines needed are presented.
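
    The sequential iteration approach (SIA) described above solves solute transport and chemical reactions one after the other within each time step and iterates the pair to convergence. The sketch below illustrates that loop structure only; the toy diffusion and first-order-decay operators are hypothetical stand-ins, not the TOUGH2 or EQ3/6 interfaces.

```python
import numpy as np

# Schematic SIA loop: transport, then chemistry, iterated until the coupled
# estimate stops changing. Toy operators only (explicit 1-D diffusion and
# first-order decay); not TOUGH2-EQ3/6 code.

def sia_time_step(c_old, transport, chemistry, dt, tol=1e-10, max_iter=50):
    c_guess = c_old.copy()
    for _ in range(max_iter):
        c_transported = transport(c_old, dt)      # 1) transport step
        c_reacted = chemistry(c_transported, dt)  # 2) chemistry step
        if np.max(np.abs(c_reacted - c_guess)) < tol:
            break                                 # 3) coupled iteration converged
        c_guess = c_reacted
    return c_reacted

def toy_transport(c, dt, d=0.1):
    c_new = c.copy()
    c_new[1:-1] += d * dt * (c[2:] - 2.0 * c[1:-1] + c[:-2])  # explicit diffusion
    return c_new

def toy_chemistry(c, dt, k=0.05):
    return c * np.exp(-k * dt)                                # first-order decay

if __name__ == "__main__":
    conc = np.zeros(21)
    conc[10] = 1.0                                            # initial solute pulse
    for _ in range(100):
        conc = sia_time_step(conc, toy_transport, toy_chemistry, dt=0.5)
    print(conc.round(4))
```

    Because the toy operators are uncoupled, the loop converges after a single pass; in a real reactive-transport code the transport step would also carry reaction source terms from the latest chemistry estimate, which is what makes the iteration necessary.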

  17. Negative symptoms in first episode non-affective psychosis.

    Science.gov (United States)

    Malla, Ashok K; Takhar, Jatinder J; Norman, Ross M G; Manchanda, Rahul; Cortese, Leonard; Haricharan, Raj; Verdi, Mary; Ahmed, Rashid

    2002-06-01

    To determine the prevalence of negative symptoms and to examine secondary sources of influence on negative symptoms and the role of specific negative symptoms in delay associated with seeking treatment in first episode non-affective psychosis. One hundred and ten patients who met Diagnostic and Statistical Manual-IV (DSM-IV) criteria for a first episode of schizophrenia spectrum psychoses were rated for assessment of negative, positive, depressive and extrapyramidal symptoms, the premorbid adjustment scale, and demographic and clinical characteristics including duration of untreated psychosis (DUP). Alogia/flat affect and avolition/anhedonia were strongly influenced by parkinsonian and depressive symptoms, respectively. A substantial proportion (26.8%) of patients showed at least a moderate level of negative symptoms not confounded by depression and Parkinsonism. DUP was related only to avolition/anhedonia, while flat affect/alogia was related to male gender, diagnosis of schizophrenia, age of onset and the length of the prodrome. Negative symptoms that are independent of the influence of positive symptoms, depression and extrapyramidal symptoms (EPS) are present in a substantial proportion of first episode psychosis patients, and delay in seeking treatment is associated mainly with avolition and anhedonia.

  18. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Full Text Available Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.

  19. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  20. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  1. Anti-tumor activity of high-dose EGFR tyrosine kinase inhibitor and sequential docetaxel in wild type EGFR non-small cell lung cancer cell nude mouse xenografts

    Science.gov (United States)

    Tang, Ning; Zhang, Qianqian; Fang, Shu; Han, Xiao; Wang, Zhehai

    2017-01-01

    Treatment of non-small-cell lung cancer (NSCLC) with wild-type epidermal growth factor receptor (EGFR) is still a challenge. This study explored antitumor activity of high-dose icotinib (an EGFR tyrosine kinase inhibitor) plus sequential docetaxel against wild-type EGFR NSCLC cell-generated nude mouse xenografts. Nude mice were subcutaneously injected with wild-type EGFR NSCLC A549 cells and divided into different groups for 3-week treatment. Tumor xenograft volumes were monitored and recorded, and at the end of experiments, tumor xenografts were removed for Western blot and immunohistochemical analyses. Compared to control groups (negative control, regular-dose icotinib [IcoR], high-dose icotinib [IcoH], and docetaxel [DTX]) and regular icotinib dose (60 mg/kg) with docetaxel, treatment of mice with a high-dose (1200 mg/kg) of icotinib plus sequential docetaxel for 3 weeks (IcoH-DTX) had an additive effect on suppression of tumor xenograft size and volume (P Icotinib-containing treatments markedly reduced phosphorylation of EGFR, mitogen activated protein kinase (MAPK), and protein kinase B (Akt), but only the high-dose icotinib-containing treatments showed an additive effect on CD34 inhibition (P icotinib plus docetaxel had a similar effect on mouse weight loss (a common way to measure adverse reactions in mice), compared to the other treatment combinations. The study indicates that high-dose icotinib plus sequential docetaxel (IcoH-DTX) has an additive effect on suppressing the growth of wild-type EGFR NSCLC cell nude mouse xenografts, possibly through microvessel density reduction. Future clinical trials are needed to confirm the findings of this study. PMID:27852073

  2. Negative optical spin torque wrench of a non-diffracting non-paraxial fractional Bessel vortex beam

    International Nuclear Information System (INIS)

    Mitri, F.G.

    2016-01-01

    An absorptive Rayleigh dielectric sphere in a non-diffracting non-paraxial fractional Bessel vortex beam experiences a spin torque. The axial and transverse radiation spin torque components are evaluated in the dipole approximation using the radiative correction of the electric field. Particular emphasis is given on the polarization as well as changing the topological charge α and the half-cone angle of the beam. When α is zero, the axial spin torque component vanishes. However, when α becomes a real positive number, the vortex beam induces left-handed (negative) axial spin torque as the sphere shifts off-axially from the center of the beam. The results show that a non-diffracting non-paraxial fractional Bessel vortex beam is capable of inducing a spin reversal of an absorptive Rayleigh sphere placed arbitrarily in its path. Potential applications are yet to be explored in particle manipulation, rotation in optical tweezers, optical tractor beams, and the design of optically-engineered metamaterials to name a few areas. - Highlights: • Optical nondiffracting nonparaxial fractional Bessel vortex beam is considered. • Negative spin torque on an absorptive dielectric Rayleigh sphere is predicted numerically. • Negative spin torque occurs as the sphere departs from the center of the beam.

  3. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.
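
    The paper's model is a 2-D deconvolution extension of NMF (NMF2D), which is not reproduced here; as background, the following is a minimal sketch of plain NMF with the standard multiplicative updates applied to a non-negative magnitude spectrogram.

```python
import numpy as np

# Minimal background sketch: plain NMF with multiplicative updates (Euclidean
# cost), V ~ W @ H. The paper's NMF2D model, which also shifts components in
# time and log-frequency, is not implemented here.

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral templates
    return W, H

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).normal(size=(64, 100)))  # stand-in spectrogram
    W, H = nmf(V, rank=5)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))          # relative residual
```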

  4. Towards non-sequential double ionization of Ne and Ar using a femtosecond laser oscillator.

    Science.gov (United States)

    Liu, Yunquan; Tschuch, Sebastian; Dürr, Martin; Rudenko, Artem; Moshammer, Robert; Ullrich, Joachim; Siegel, Martin; Morgner, Uwe

    2007-12-24

    We report on first proof-of-principle results on non-sequential double ionization of argon and neon achieved by using a newly developed long-cavity Ti:sapphire femtosecond oscillator with a pulse duration of 45 fs and a repetition rate of 6.2 MHz combined with a dedicated reaction microscope. Under optimized experimental conditions, peak intensities larger than 2.3x10^14 W/cm^2 have been achieved. Ion momentum distributions were recorded for both rare gases and show significantly different features for single as well as for double ionization. For single ionization of neon, a spike of zero-momentum electrons is found when decreasing the laser intensity towards the lowest ionization rate we can measure, which is attributed to a non-resonant ionization channel. As to double ionization, the longitudinal momentum distribution for Ne2+ displays a clear double-hump structure, whereas this feature is found to be smoothed out with a maximum at zero momentum for Ar2+.

  5. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    Science.gov (United States)

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than the ones obtained with models previously published. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduce bias and length of credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment if data are collected on the treated neonate.

  6. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing

    2016-02-23

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. As the increasing availability of massive clinical diagnosis data, especially the medical images of brain scan, it is highly significant to precisely identify and predict the potential AD's progression based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective to simulate and predict the disease progression compared with the existing methods.

  7. Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data

    KAUST Repository

    Xie, Qing; Wang, Su; Zhu, Jia; Zhang, Xiangliang

    2016-01-01

    Alzheimer's Disease (AD) is currently attracting much attention in elders' care. As the increasing availability of massive clinical diagnosis data, especially the medical images of brain scan, it is highly significant to precisely identify and predict the potential AD's progression based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information in different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in progress to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective to simulate and predict the disease progression compared with the existing methods.

  8. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
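
    To make the sequential importance sampling/resampling (SISR) machinery concrete, below is a minimal sketch for a scalar log-abundance state-space model (random walk with drift, Gaussian observation error). The model, parameter values, and data are hypothetical; the authors' stage-structured models, priors, and kernel smoothing for parameters are not reproduced.

```python
import numpy as np

# Minimal SISR (particle filter) sketch for a scalar state-space model of
# log-abundance: x_t = x_{t-1} + r + process noise, y_t = x_t + obs noise.
# Illustrates the propagate/weight/resample cycle only.

def sisr_filter(y, n_particles=1000, r=0.02, sd_proc=0.1, sd_obs=0.2, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(y[0], 1.0, n_particles)   # initial log-abundance cloud
    estimates = []
    for obs in y:
        # propagate particles through the process model
        particles = particles + r + rng.normal(0.0, sd_proc, n_particles)
        # weight by the observation likelihood
        log_w = -0.5 * ((obs - particles) / sd_obs) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # resample to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

if __name__ == "__main__":
    true_x = np.cumsum(np.full(25, 0.02)) + 5.0
    y = true_x + np.random.default_rng(1).normal(0, 0.2, 25)
    print(sisr_filter(y)[:5])   # filtered log-abundance estimates
```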

  9. Negative parity non-strange baryons

    International Nuclear Information System (INIS)

    Stancu, F.; Stassart, P.

    1991-01-01

    Our previous study is extended to negative parity baryon resonances up to J = 9/2⁻. The framework is a semi-relativistic constituent quark model. The quark-quark interaction contains Coulomb plus linear confinement terms and short-distance spin-spin and tensor terms. It is emphasized that a linear confinement potential gives too large a mass to the D35(1930) resonance. (orig.)

  10. Efficacy and safety of sequential versus quadruple therapy as second-line treatment for helicobacter pylori infection-A randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Daniela Munteanu

    Full Text Available Quadruple therapy is recommended as second-line treatment for Helicobacter pylori eradication failure. However, high cost, multiple side effects, and low adherence rates are major drawbacks to its routine use. Our aim was to compare the efficacy and safety of sequential versus quadruple regimens as second-line treatment for persistent Helicobacter pylori infection. A prospective, randomized, open-label trial was conducted at a large academic, tertiary care center in Israel. Patients who previously failed a standard triple-therapy eradication course were randomly assigned (1:1) to receive a 10-day sequential therapy course or a 14-day quadruple regimen. Compliance and adverse events were evaluated by telephone questionnaires. The primary endpoint for analysis was the rate of Helicobacter pylori eradication, as defined by either a negative 13C-urea breath test or stool antigen test 4-16 weeks after treatment, assessed under the non-inferiority hypothesis. The trial was terminated prematurely due to low recruitment rates. See S1 Checklist for the CONSORT checklist. One hundred and one patients were randomized. Per modified intention-to-treat analysis, the eradication rate was 49% in the sequential versus 42.5% in the quadruple regimen group (p-value for non-inferiority 0.02). Forty-two (84.0%) versus 33 (64.7%) patients completed treatment in the sequential and quadruple groups, respectively (p = 0.027). Gastrointestinal side effects were more common in the quadruple regimen group. Sequential treatment, when used as a second-line regimen, was non-inferior to the standard-of-care quadruple regimen in achieving Helicobacter pylori eradication, and was associated with better compliance and fewer adverse effects. Both treatment protocols failed to show an adequate eradication rate in the population of Southern Israel. ClinicalTrials.gov NCT01481844.

  11. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
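
    As a concrete reference point for the data-assimilation step mentioned above, the following is a minimal sketch of one Ensemble Kalman Filter analysis step with perturbed observations. The state vector, observation operator, and numbers are hypothetical; augmenting the state with uncertain parameters is one simple (though not rigorous) route to the joint state/parameter problem discussed in the abstract, and the groundwater model itself is not represented here.

```python
import numpy as np

# Minimal sketch of one Ensemble Kalman Filter (EnKF) analysis step with
# perturbed observations, the data-assimilation building block mentioned in
# the abstract. All quantities below are hypothetical toy values.

def enkf_analysis(ensemble, y_obs, H, obs_var, rng):
    n_state, n_ens = ensemble.shape
    # forecast statistics from the ensemble
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean
    P = A @ A.T / (n_ens - 1)
    R = obs_var * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # update each member against a perturbed copy of the observations
    updated = np.empty_like(ensemble)
    for i in range(n_ens):
        y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), len(y_obs))
        updated[:, i] = ensemble[:, i] + K @ (y_pert - H @ ensemble[:, i])
    return updated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = rng.normal(10.0, 1.0, size=(3, 50))          # e.g. heads at 3 locations, 50 members
    H = np.array([[1.0, 0.0, 0.0]])                    # observe the first location only
    print(enkf_analysis(ens, np.array([10.5]), H, obs_var=0.04, rng=rng).mean(axis=1))
```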

  12. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    Science.gov (United States)

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.

  13. A non-symmetric pillar[5]arene based on triazole-linked 8-oxyquinolines as a sequential sensor for thorium(IV) followed by fluoride ions.

    Science.gov (United States)

    Fang, Yuyu; Li, Caixia; Wu, Lei; Bai, Bing; Li, Xing; Jia, Yiming; Feng, Wen; Yuan, Lihua

    2015-09-07

    A novel non-symmetric pillar[5]arene bearing triazole-linked 8-oxyquinolines at one rim was synthesized and demonstrated as a sequential fluorescence sensor for thorium(IV) followed by fluoride ions with high sensitivity and selectivity.

  14. Alternative theories of the non-linear negative mass instability

    International Nuclear Information System (INIS)

    Channell, P.J.

    1974-01-01

    A theory of the non-linear negative mass instability is extended to include resistance. The basic assumption is explained physically and an alternative theory is offered. The two theories are compared computationally. 7 refs., 8 figs

  15. Kinetic modeling of particle dynamics in H− negative ion sources (invited)

    International Nuclear Information System (INIS)

    Hatayama, A.; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T.; Miyamoto, K.; Fukano, A.; Mizuno, T.

    2014-01-01

    Progress in the kinetic modeling of particle dynamics in H− negative ion source plasmas and their comparisons with experiments are reviewed, and discussed with some new results. Main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high power H− ion beams: (i) Effects of non-equilibrium features of EEDF (electron energy distribution function) on H− production, and (ii) extraction physics of H− ions and beam optics

  16. Decomposing the time-frequency representation of EEG using non-negative matrix and multi-way factorization

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Parnas, Josef

    2006-01-01

    We demonstrate how non-negative matrix factorization (NMF) can be used to decompose the inter trial phase coherence (ITPC) of multi-channel EEG to yield a unique decomposition of time-frequency signatures present in various degrees in the recording channels. The NMF optimization is easily generalized to a parallel factor (PARAFAC) model to form a non-negative multi-way factorization (NMWF). While the NMF can examine subject-specific activities, the NMWF can effectively extract the most similar activities across subjects and/or conditions. The methods are tested on a proprioceptive stimulus consisting of a weight change in a handheld load. While somatosensory gamma oscillations have previously only been evoked by electrical stimuli, we hypothesized that a natural proprioceptive stimulus would also be able to evoke gamma oscillations. ITPC maxima were determined by visual inspection...

  17. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
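
    The PHMM itself is only described at a high level in the abstract; as a reference for the underlying machinery, below is a minimal sketch of the scaled forward algorithm for a discrete-observation hidden Markov model. The transition matrix, emission matrix, and observation coding are hypothetical, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of the scaled forward algorithm for a discrete-observation
# hidden Markov model, the basic machinery a popularity-based HMM builds on.
# A, B, pi, and the observation symbols below are hypothetical toy values.

def forward_log_likelihood(obs, A, B, pi):
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    log_like = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        scale = alpha.sum()
        log_like += np.log(scale)
        alpha /= scale
    return log_like

if __name__ == "__main__":
    A = np.array([[0.9, 0.1], [0.2, 0.8]])            # hidden-state transitions
    B = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])  # emission probabilities
    pi = np.array([0.5, 0.5])
    observed = [0, 0, 1, 2, 2, 1, 0]                  # e.g. coarse weekly rank movements
    print(forward_log_likelihood(observed, A, B, pi))
```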

  18. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    Directory of Open Access Journals (Sweden)

    Closas Pau

    2012-10-01

    Full Text Available Abstract Background: Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods: This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical and the cumulative density function of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results: The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617 estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season) and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50−10 (2008-2009 season), weeks 38−50 (2009-2010 season), weeks 50−9 (2010-2011 season) and weeks 3 to 12 for the current 2011-2012 season. Conclusions: Real medical data were used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could
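
    A minimal sketch of the detection idea described above, under the stated model: non-epidemic weekly incidence rates are treated as exponential, the rate λ is the maximum likelihood estimate (the reciprocal of the sample mean) and is updated sequentially, and a week is flagged as epidemic when a Kolmogorov-Smirnov test rejects the fitted exponential. The data and significance threshold below are synthetic, not the Diagnosticat records.

```python
import numpy as np
from scipy import stats

# Sketch only: exponential model of non-epidemic weekly incidence rates,
# sequential update of the MLE rate, and a KS test against the fitted
# exponential to flag epidemic weeks. Synthetic data, illustrative threshold.

def detect_epidemic_weeks(training_rates, new_rates, alpha=0.05):
    rates = list(training_rates)
    flags = []
    for week_rate in new_rates:
        lam = 1.0 / np.mean(rates)                    # MLE of the exponential rate
        _, p_value = stats.kstest(rates + [week_rate], "expon", args=(0, 1.0 / lam))
        epidemic = p_value < alpha
        flags.append(epidemic)
        if not epidemic:
            rates.append(week_rate)                   # update the non-epidemic model
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.exponential(scale=1.0 / 3.86, size=30)   # non-epidemic-like weeks
    incoming = np.concatenate([rng.exponential(1.0 / 3.86, 5), [2.0, 2.5, 3.0]])
    print(detect_epidemic_weeks(training, incoming))
```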

  20. Non-existence of Normal Tokamak Equilibria with Negative Central Current

    International Nuclear Information System (INIS)

    Hammett, G.W.; Jardin, S.C.; Stratton, B.C.

    2003-01-01

    Recent tokamak experiments employing off-axis, non-inductive current drive have found that a large central current hole can be produced. The current density is measured to be approximately zero in this region, though in principle there was sufficient current-drive power for the central current density to have gone significantly negative. Recent papers have used a large aspect-ratio expansion to show that normal MHD equilibria (with axisymmetric nested flux surfaces, non-singular fields, and monotonic peaked pressure profiles) can not exist with negative central current. We extend that proof here to arbitrary aspect ratio, using a variant of the virial theorem to derive a relatively simple integral constraint on the equilibrium. However, this constraint does not, by itself, exclude equilibria with non-nested flux surfaces, or equilibria with singular fields and/or hollow pressure profiles that may be spontaneously generated

  1. Non-negative Tensor Factorization with missing data for the modeling of gene expressions in the Human Brain

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Mørup, Morten

    2014-01-01

    Non-negative Tensor Factorization (NTF) has become a prominent tool for analyzing high dimensional multi-way structured data. In this paper we set out to analyze gene expression across brain regions in multiple subjects based on data from the Allen Human Brain Atlas [1] with more than 40 % data m...

  2. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  3. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    Directory of Open Access Journals (Sweden)

    Kanghoon Jung

    2014-08-01

    Full Text Available A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.

  4. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    Science.gov (United States)

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498

  5. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems

    Directory of Open Access Journals (Sweden)

    Christelle Garnier

    2008-05-01

    Full Text Available We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.

  6. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    Science.gov (United States)

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
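
    The argument above hinges on the point where a subject's Ability A equals a dilemma's Difficulty D. Under the standard Rasch form (assumed here; the paper's exact parameterization is not reproduced), the probability of a utilitarian response is a logistic function of A − D, so at A = D both responses are equally likely and response-probability effects on RT cancel:

```python
import numpy as np

# Illustrative only: standard Rasch/logistic form assumed for the probability
# of a utilitarian response given subject Ability A and item Difficulty D.

def p_utilitarian(ability, difficulty):
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

print(p_utilitarian(0.7, 0.7))   # 0.5 at the A = D point
print(p_utilitarian(1.5, 0.7))   # > 0.5: utilitarian response more likely
```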

  7. Negative Saturation Approach for Non-Isothermal Compositional Two-Phase Flow Simulations

    NARCIS (Netherlands)

    Salimi, H.; Wolf, K.H.; Bruining, J.

    2011-01-01

    This article deals with developing a solution approach, called the non-isothermal negative saturation (NegSat) solution approach. The NegSat solution approach solves efficiently any non-isothermal compositional flow problem that involves phase disappearance, phase appearance, and phase transition.

  8. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

    Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate...
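
    The core idea of an Emerging Sequential Pattern is a sequential pattern whose support differs sharply between activity classes. The sketch below illustrates that definition on toy event sequences with a simple growth-rate threshold; it is not the paper's mining algorithm, and the sequences and threshold are hypothetical.

```python
# Illustrative sketch of the Emerging Sequential Pattern (ESP) idea: a pattern
# is "emerging" when its support in one activity class greatly exceeds its
# support in another. Toy data and threshold; not the paper's mining algorithm.

def supports(pattern, sequence):
    """True if `pattern` occurs as a (possibly non-contiguous) subsequence."""
    it = iter(sequence)
    return all(event in it for event in pattern)

def support(pattern, sequences):
    return sum(supports(pattern, s) for s in sequences) / len(sequences)

def is_emerging(pattern, class_a, class_b, min_growth=3.0):
    s_a, s_b = support(pattern, class_a), support(pattern, class_b)
    growth = s_a / s_b if s_b > 0 else float("inf")
    return growth >= min_growth

if __name__ == "__main__":
    making_tea = [["kettle", "cup", "teabag", "pour"], ["kettle", "teabag", "cup", "pour"]]
    cleaning = [["tap", "sponge", "cup"], ["sponge", "tap", "cup"]]
    print(is_emerging(("kettle", "pour"), making_tea, cleaning))   # True
```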

  9. Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation. We show that our approach achieves faster convergence per iteration and timestep (wall-clock) than Gibbs sampling and non-probabilistic approaches, and does not require additional...... samples to estimate the posterior. We show that, in particular for matrix tri-factorisation, convergence is difficult, but our variational Bayesian approach offers a fast solution, allowing the tri-factorisation approach to be used more effectively....

  10. Identification of necessary and sufficient conditions for real non-negativeness of rational matrices

    International Nuclear Information System (INIS)

    Saeed, K.

    1982-12-01

    The necessary and sufficient conditions for real non-negativeness of rational matrices have been identified. A programmable algorithm is developed and is given with its computer flow chart. This algorithm can be used as a general solution to test the real non-negativeness of rational matrices. The computer program assures the feasibility of the suggested algorithm. (author)

  11. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as a sequential context effect, as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between perceptual behavior and diagnostic decisions on previous cases and the diagnostic decision on the current case. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience with conducting blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of the gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
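
    The record quantifies perceptual behavior by the box-counting (Minkowski-Bouligand) fractal dimension of the gaze scanpath. The sketch below is a generic box-counting estimator for a 2-D point set; the synthetic scanpath, the scaling of coordinates to the unit square, and the chosen box sizes are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def box_counting_dimension(points, n_scales=8):
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a 2-D point set.

    points : (N, 2) array of gaze coordinates, assumed scaled into the unit square.
    """
    points = np.asarray(points, dtype=float)
    sizes = 2.0 ** -np.arange(1, n_scales + 1)           # box edge lengths 1/2, 1/4, ...
    counts = []
    for eps in sizes:
        idx = np.floor(points / eps).astype(int)         # box index of each point
        counts.append(len({tuple(row) for row in idx}))  # number of occupied boxes
    # slope of log N(eps) versus log(1/eps) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# toy scanpath: a jittered random walk inside the unit square (illustrative only)
rng = np.random.default_rng(1)
path = np.clip(np.cumsum(0.02 * rng.standard_normal((2000, 2)), axis=0) + 0.5, 0, 0.999)
print("estimated fractal dimension:", round(box_counting_dimension(path), 2))
```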

  12. Intrinsic spin-relaxation induced negative tunnel magnetoresistance in a single-molecule magnet

    Science.gov (United States)

    Xie, Haiqing; Wang, Qiang; Xue, Hai-Bin; Jiao, HuJun; Liang, J.-Q.

    2013-06-01

    We investigate theoretically the effects of intrinsic spin-relaxation on the spin-dependent transport through a single-molecule magnet (SMM), which is weakly coupled to ferromagnetic leads. The tunnel magnetoresistance (TMR) is obtained by means of the rate-equation approach including not only the sequential but also the cotunneling processes. It is shown that the TMR is strongly suppressed by the fast spin-relaxation in the sequential region and can vary from a large positive value to a slightly negative one in the cotunneling region. Moreover, with an external magnetic field along the easy axis of the SMM, a large negative TMR is found when the relaxation strength increases. Finally, in the high bias voltage limit the TMR for negative bias is slightly larger than its characteristic value in the sequential region; however, for positive bias it can become negative owing to the fast spin-relaxation.

  13. [Sequential monitoring of renal transplant with aspiration cytology].

    Science.gov (United States)

    Manfro, R C; Gonçalves, L F; de Moura, L A

    1998-01-01

    To evaluate the utility of kidney aspiration cytology in the sequential monitoring of acute rejection in renal transplant patients. Thirty patients underwent 376 aspirations. The clinical diagnoses were independently established. The representativity of the samples reached 82.7%. The total corrected increment index and the number of immunoactivated cells were higher during acute rejection as compared to normal allograft function, acute tubular necrosis, and cyclosporine nephrotoxicity. The diagnostic performance for acute rejection was: sensitivity 71.8%, specificity 87.3%, positive predictive value 50.9%, negative predictive value 94.9%, and accuracy 84.9%. The false-positive results were mainly related to cytomegalovirus infection or to the administration of OKT3. In 10 of 11 false-negative results, incipient immunoactivation was present, alerting to the possibility of acute rejection. Kidney aspiration cytology is a useful tool for the sequential monitoring of acute rejection in renal transplant patients. The best results are obtained when the aspiration cytology findings are analyzed together with the clinical data.
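
    Predictive values depend on how common rejection is in the monitored samples, which is why sensitivity and specificity alone do not determine PPV and NPV. The small helper below applies Bayes' rule to the reported sensitivity and specificity; the 15% prevalence is a hypothetical value chosen only for illustration, not a figure from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Reported test characteristics; the 15% prevalence of acute rejection among
# the sampled episodes is an assumed value used only to illustrate the calculation.
ppv, npv = predictive_values(0.718, 0.873, 0.15)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```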

  14. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case, are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter...... that show that the method produces accurate, robust and rapid segmentations in a cross-sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.

  15. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In the case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimates. The suppression of negative estimates effectively improves the performance, especially in situations with poor a priori information, where the method is more prone to generating negative values.
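
    The sketch below illustrates, in a deliberately simplified form, the two ideas highlighted in the record: sequentially assimilating a scalar gamma dose-rate observation into an emission-rate state vector with a background error covariance, and suppressing negative estimates with artificial zero-observations of large variance. The two-nuclide state, the covariance with an assumed a priori nuclide ratio, the detector response and all numbers are illustrative assumptions, not the JRodos implementation.

```python
import numpy as np

def assimilate(x, P, H, y, r):
    """One sequential Kalman / regularized least-squares update for a scalar observation y = H x + noise(variance r)."""
    S = float(H @ P @ H.T) + r             # innovation variance
    K = (P @ H.T).ravel() / S              # gain vector, shape (n,)
    x = x + K * (y - float(H @ x))
    P = P - np.outer(K, (H @ P).ravel())
    return x, P

def suppress_negatives(x, P, big_var=1e2):
    """Pull negative emission-rate components back toward zero with an artificial
    zero-observation of large variance, updating state and covariance consistently."""
    for i in np.where(x < 0)[0]:
        Hi = np.zeros((1, x.size)); Hi[0, i] = 1.0
        x, P = assimilate(x, P, Hi, 0.0, big_var)
    return x, P

# illustrative two-nuclide problem: prior emission rates, a background error
# covariance whose off-diagonal term encodes an assumed a priori nuclide ratio,
# and the dose-rate response of a single detector (all values are assumptions)
x = np.array([1.0, 2.0])
P = np.array([[1.0, 0.8], [0.8, 2.0]])
H = np.array([[0.3, 0.1]])
x, P = assimilate(x, P, H, y=0.05, r=0.01)   # assimilate one gamma dose-rate measurement
x, P = suppress_negatives(x, P)
print(x)
```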

  16. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In the case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimates. The suppression of negative estimates effectively improves the performance, especially in situations with poor a priori information, where the method is more prone to generating negative values.

  17. Thinking while drinking: Fear of negative evaluation predicts drinking behaviors of students with social anxiety.

    Science.gov (United States)

    Villarosa-Hurlocker, Margo C; Whitley, Robert B; Capron, Daniel W; Madson, Michael B

    2018-03-01

    College students with social anxiety disorder experience more alcohol-related negative consequences, regardless of the amount of alcohol they consume. Social anxiety refers to psychological distress and physiological arousal in social situations due to an excessive fear of negative evaluation by others. The current study examined within-group differences in alcohol-related negative consequences of students who met or exceeded clinically-indicated social anxiety symptoms. In particular, we tested a sequential mediation model of the cognitive (i.e., fear of negative evaluation) and behavioral (protective behavioral strategies) mechanisms for the link between social anxiety disorder subtypes (i.e., interaction and performance-type) and alcohol-related negative consequences. Participants were 412 traditional-age college student drinkers who met or exceeded the clinically-indicated threshold for social anxiety disorder and completed measures of fear of negative evaluation, protective behavioral strategies (controlled consumption and serious harm reduction), and alcohol-related negative consequences. Fear of negative evaluation and serious harm reduction strategies sequentially accounted for the relationship between interaction social anxiety disorder and alcohol-related negative consequences, such that students with more severe interaction social anxiety symptoms reported more fear of negative evaluation, which was related to more serious harm reduction strategies, which predicted fewer alcohol-related negative consequences. Future directions and implications are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  19. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  20. Prevalence of Germline Mutations in Genes Engaged in DNA Damage Repair by Homologous Recombination in Patients with Triple-Negative and Hereditary Non-Triple-Negative Breast Cancers.

    Directory of Open Access Journals (Sweden)

    Pawel Domagala

    Full Text Available This study sought to assess the prevalence of common germline mutations in several genes engaged in the repair of DNA double-strand breaks by homologous recombination in patients with triple-negative breast cancers and hereditary non-triple-negative breast cancers. Tumors deficient in this type of DNA damage repair are known to be especially sensitive to DNA cross-linking agents (e.g., platinum drugs) and to poly(ADP-ribose) polymerase (PARP) inhibitors. Genetic testing was performed for 36 common germline mutations in genes engaged in the repair of DNA by homologous recombination, i.e., BRCA1, BRCA2, CHEK2, NBN, ATM, PALB2, BARD1, and RAD51D, in 202 consecutive patients with triple-negative breast cancers and hereditary non-triple-negative breast cancers. Thirty-five (22.2%) of 158 patients in the triple-negative group carried mutations in genes involved in DNA repair by homologous recombination, while 10 (22.7%) of the 44 patients in the hereditary non-triple-negative group carried such mutations. Mutations in BRCA1 were most frequent in patients with triple-negative breast cancer (18.4%), and mutations in CHEK2 were most frequent in patients with hereditary non-triple-negative breast cancers (15.9%). In addition, in the triple-negative group, mutations in CHEK2, NBN, and ATM (3.8% combined) were found, while mutations in BRCA1, NBN, and PALB2 (6.8% combined) were identified in the hereditary non-triple-negative group. Identifying mutations in genes engaged in DNA damage repair by homologous recombination other than BRCA1/2 can substantially increase the proportion of patients with triple-negative breast cancer and hereditary non-triple-negative breast cancer who may be eligible for therapy using PARP inhibitors and platinum drugs.

  1. The economic, environmental and public health impacts of new power plants: a sequential inter industry model integrated with GIS data

    Energy Technology Data Exchange (ETDEWEB)

    Avelino, Andre F.T.; Hewings, Geoffrey J.D.; Guilhoto, Joaquim J.M. [Universidade de Sao Paulo (FEA/USP), SE (Brazil). Fac. de Administracao e Contabilidade

    2010-07-01

    The electrical sector is responsible for a considerable amount of greenhouse gas emissions worldwide, but it is also the sector on which modern society depends the most for maintaining quality of life and for the functioning of economic and social activities. Invariably, even CO2 emission-free power plants have some indirect environmental impacts due to the economic effects they produce during their life cycle (construction, operation and maintenance, and decommissioning). Thus, sustainability issues should always be considered in energy planning, by evaluating the balance of positive/negative externalities on different areas of the country. This study aims to introduce a social-environmental economic model, based on a Regional Sequential Interindustry Model (SIM) integrated with geoprocessing data, in order to identify economic, pollution and public health impacts at the state level for energy planning analysis. The model is based on the Impact Pathway Approach methodology, using geoprocessing to locate social-environmental variables for dispersion and health evaluations. The final goal is to provide an auxiliary tool for policy makers to assess energy planning scenarios in Brazil. (author)

  2. [Mathematical modeling of synergistic interaction of sequential thermoradiation action on mammalian cells].

    Science.gov (United States)

    Belkina, S V; Semkina, M A; Kritskiĭ, R O; Petin, V G

    2010-01-01

    Data obtained by other authors for mammalian cells treated by sequential action of ionizing radiation and hyperthermia were used to estimate the dependence of synergistic enhancement ratio on the ratio of damages induced by these agents. Experimental results were described and interpreted by means of the mathematical model of synergism in accordance with which the synergism is expected to result from the additional lethal damage arising from the interaction of sublesions induced by both agents.

  3. Comparison of concurrent chemoradiotherapy versus sequential radiochemotherapy in patients with completely resected non-small cell lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hwan Ik; Noh, O Kyu; Oh, Young Taek; Chun, Mi Son; Kim, Sang Won; Cho, O Yeon; Heo, Jae Sung [Ajou University School of Medicine, Suwon (Korea, Republic of)

    2016-09-15

    Our institution has implemented two different adjuvant protocols in treating patients with non-small cell lung cancer (NSCLC): chemotherapy followed by concurrent chemoradiotherapy (CT-CCRT) and sequential postoperative radiotherapy (PORT) followed by postoperative chemotherapy (POCT). We aimed to compare the clinical outcomes between the two adjuvant protocols. From March 1997 to October 2012, 68 patients were treated with CT-CCRT (n = 25) or sequential PORT followed by POCT (RT-CT; n = 43). The CT-CCRT protocol consisted of 2 cycles of cisplatin-based POCT followed by PORT concurrently with 2 cycles of POCT. The RT-CT protocol consisted of PORT followed by 4 cycles of cisplatin-based POCT. PORT was administered using conventional fractionation with a dose of 50.4–60 Gy. We compared the outcomes between the two adjuvant protocols and analyzed the clinical factors affecting survival. Median follow-up time was 43.9 months (range, 3.2 to 74.0 months), and the 5-year overall survival (OS), locoregional recurrence-free survival (LRFS), and distant metastasis-free survival (DMFS) were 53.9%, 68.2%, and 51.0%, respectively. There were no significant differences in OS (p = 0.074), LRFS (p = 0.094), and DMFS (p = 0.490) between the two protocols. In multivariable analyses, adjuvant protocol remained a significant prognostic factor for LRFS, favouring CT-CCRT over RT-CT (hazard ratio [HR] = 3.506, p = 0.046), but not for OS (HR = 0.647, p = 0.229). The CT-CCRT protocol improved LRFS more than the RT-CT protocol in patients with completely resected NSCLC, but did not improve OS. Further studies are warranted to evaluate the benefit of the CCRT strategy compared with the sequential strategy.

  4. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  5. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, multinomial resampling and systematic resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with systematic resampling successfully decreases the model estimation error by 23%.
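
    For readers unfamiliar with the stochastic EnKF mentioned above, the sketch below shows a single analysis step in which observations are perturbed before the update (the step that deterministic square-root variants avoid). The three-compartment state, the observation operator summing compartments into a TWS-like quantity, and all numbers are toy assumptions, not the W3RA/GRACE setup.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step.

    X : (n, N) forecast ensemble of model states (e.g., water storage compartments)
    y : (m,)   observation vector (e.g., a GRACE TWS value)
    H : (m, n) observation operator
    R : (m, m) observation error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturb observations per member (deterministic/square-root EnKFs skip this)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# toy example: 3 storage compartments whose sum is observed as TWS (assumption)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50)) + np.array([[10.0], [5.0], [2.0]])
H = np.ones((1, 3))
R = np.array([[0.5]])
Xa = enkf_analysis(X, np.array([18.0]), H, R, rng)
print("prior TWS mean:", (H @ X).mean(), "posterior TWS mean:", (H @ Xa).mean())
```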

  6. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  7. Sequential Exposure of Bortezomib and Vorinostat is Synergistic in Multiple Myeloma Cells

    Science.gov (United States)

    Nanavati, Charvi; Mager, Donald E.

    2018-01-01

    Purpose To examine the combination of bortezomib and vorinostat in multiple myeloma cells (U266) and xenografts, and to assess the nature of their potential interactions with semi-mechanistic pharmacodynamic models and biomarkers. Methods U266 proliferation was examined for a range of bortezomib and vorinostat exposure times and concentrations (alone and in combination). A non-competitive interaction model was used with interaction parameters that reflect the nature of drug interactions after simultaneous and sequential exposures. p21 and cleaved PARP were measured using immunoblotting to assess critical biomarker dynamics. For xenografts, data were extracted from literature and modeled with a PK/PD model with an interaction parameter. Results Estimated model parameters for simultaneous in vitro and xenograft treatments suggested additive drug effects. The sequence of bortezomib preincubation for 24 hours, followed by vorinostat for 24 hours, resulted in an estimated interaction term significantly less than 1, suggesting synergistic effects. p21 and cleaved PARP were also up-regulated the most in this sequence. Conclusions Semi-mechanistic pharmacodynamic modeling suggests synergistic pharmacodynamic interactions for the sequential administration of bortezomib followed by vorinostat. Increased p21 and cleaved PARP expression can potentially explain mechanisms of their enhanced effects, which require further PK/PD systems analysis to suggest an optimal dosing regimen. PMID:28101809

  8. A model for negative ion extraction and comparison of negative ion optics calculations to experimental results

    International Nuclear Information System (INIS)

    Pamela, J.

    1990-10-01

    Negative ion extraction is described by a model which includes electron diffusion across transverse magnetic fields in the sheath. This model allows a 2-dimensional approximation of the problem. It is used to introduce electron space charge effects into a 2-D particle trajectory code designed for negative ion optics calculations. Another physical effect, the stripping of negative ions on neutral gas atoms, has also been included in our model; it is found to play an important role in negative ion optics. The comparison with three sets of experimental data from very different negative ion accelerators shows that our model is capable of accurate predictions

  9. Non-oral gram-negative facultative rods in chronic periodontitis microbiota

    NARCIS (Netherlands)

    van Winkelhoff, Arie J; Rurenga, Patrick; Wekema-Mulder, Gepke J; Singadji, Zadnach; Rams, Thomas E

    OBJECTIVE: The subgingival prevalence of gram-negative facultative rods not usually inhabiting or indigenous to the oral cavity (non-oral GNFR), as well as selected periodontal bacterial pathogens, were evaluated by culture in untreated and treated chronic periodontitis patients. METHODS:

  10. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    Objective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  12. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  13. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters...... with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical...

  14. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Fiandaca, G.; Auken, Esben

    2013-01-01

    with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical...... hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI) a groundwater model is calibrated with geophysical data by coupling groundwater model parameters...

  15. Analysis of hypoglycemic events using negative binomial models.

    Science.gov (United States)

    Luo, Junxiang; Qu, Yongming

    2013-01-01

    Negative binomial regression is a standard model for analyzing hypoglycemic events in diabetes clinical trials. Adjusting for baseline covariates could potentially increase the estimation efficiency of negative binomial regression. However, adjusting for covariates raises concerns about model misspecification, under which negative binomial regression is not robust because of its strong model assumptions. Some literature suggests correcting the standard error of the maximum likelihood estimator by introducing an overdispersion factor, which can be estimated by the deviance or Pearson chi-square. We proposed conducting the negative binomial regression using sandwich estimation to calculate the covariance matrix of the parameter estimates together with Pearson overdispersion correction (denoted by NBSP). In this research, we compared several commonly used negative binomial model options with our proposed NBSP. Simulations and real data analyses showed that NBSP is the most robust to model misspecification, and that estimation efficiency is improved by adjusting for baseline hypoglycemia. Copyright © 2013 John Wiley & Sons, Ltd.
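
    As a rough illustration of the NBSP idea (not the authors' exact procedure), the sketch below fits a negative binomial GLM with robust sandwich standard errors and reports a Pearson-based overdispersion estimate, using statsmodels. The simulated counts, covariates, and dispersion parameter are assumptions made only for the example.

```python
import numpy as np
import statsmodels.api as sm

# simulated hypoglycemia-like count data (purely illustrative)
rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)                      # treatment arm indicator
base = rng.poisson(2.0, n)                         # baseline hypoglycemia count
mu = np.exp(0.5 + 0.3 * base - 0.4 * treat)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))     # overdispersed event counts
X = sm.add_constant(np.column_stack([treat, base]))

# negative binomial GLM; sandwich (robust) standard errors guard against
# misspecification, and the Pearson statistic gauges residual overdispersion
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
res = model.fit(cov_type="HC1")
print(res.summary())
print("Pearson overdispersion:", res.pearson_chi2 / res.df_resid)
```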

  16. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    Science.gov (United States)

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price in one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  17. Sequential super-stereotypy of an instinctive fixed action pattern in hyper-dopaminergic mutant mice: a model of obsessive compulsive disorder and Tourette's

    Directory of Open Access Journals (Sweden)

    Houchard Kimberly R

    2005-02-01

    Full Text Available Abstract Background Excessive sequential stereotypy of behavioral patterns (sequential super-stereotypy) in Tourette's syndrome and obsessive compulsive disorder (OCD) is thought to involve dysfunction in nigrostriatal dopamine systems. In sequential super-stereotypy, patients become trapped in overly rigid sequential patterns of action, language, or thought. Some instinctive behavioral patterns of animals, such as the syntactic grooming chain pattern of rodents, have sufficiently complex and stereotyped serial structure to detect potential production of overly-rigid sequential patterns. A syntactic grooming chain is a fixed action pattern that serially links up to 25 grooming movements into 4 predictable phases that follow 1 syntactic rule. New mutant mouse models allow gene-based manipulation of brain function relevant to sequential patterns, but no current animal model of spontaneous OCD-like behaviors has so far been reported to exhibit sequential super-stereotypy in the sense of a whole complex serial pattern that becomes stronger and excessively rigid. Here we used a hyper-dopaminergic mutant mouse to examine whether an OCD-like behavioral sequence in animals shows sequential super-stereotypy. Knockdown mutation of the dopamine transporter gene (DAT) causes extracellular dopamine levels in the neostriatum of these adult mutant mice to rise to 170% of wild-type control levels. Results We found that the serial pattern of this instinctive behavioral sequence becomes strengthened as an entire entity in hyper-dopaminergic mutants, and more resistant to interruption. Hyper-dopaminergic mutant mice have stronger and more rigid syntactic grooming chain patterns than wild-type control mice. Mutants showed sequential super-stereotypy in the sense of having more stereotyped and predictable syntactic grooming sequences, and were also more likely to resist disruption of the pattern en route, by returning after a disruption to complete the pattern from the

  18. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model and its use in probabilistic risk assessments (PRAs), depending on the available data, is given. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs

  19. Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process

    Science.gov (United States)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted, with eight sequential runs each replicated three times. The TSK fuzzy predictive model achieved an accuracy rate of 99%, which suggests that the model is a suitable and practical method for the non-linear laser lathing process.

  20. Approximate L0 constrained Non-negative Matrix and Tensor Factorization

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2008-01-01

    Non-negative matrix factorization (NMF), i.e. V = WH where V, W and H are all non-negative, has become a widely used blind source separation technique due to its part-based representation. The NMF decomposition is not in general unique, and a part-based representation is not guaranteed. However...... constraint. In general, solving for a given L0 norm is an NP-hard problem, thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1/2)||V - WH||^2 + lambda*||H||_1. An open problem is to control the degree of sparsity imposed. We here demonstrate that a full regularization......, the L1 regularization strength lambda that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download....
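
    For intuition about the L1-penalized objective quoted above, the sketch below uses standard multiplicative updates for min (1/2)||V - WH||^2 + lambda*||H||_1 with W, H >= 0; it is a generic sketch, not the NLARS algorithm of the record, and the matrix sizes, rank and lambda are arbitrary. Raising lambda drives more entries of H toward zero, which is the sparsity-control behaviour the abstract discusses.

```python
import numpy as np

def sparse_nmf(V, k, lam=0.1, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates for min 0.5*||V - WH||_F^2 + lam*||H||_1 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # the L1 penalty enters the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((30, 20)))
W, H = sparse_nmf(V, k=5, lam=0.5)
print("fraction of (near-)zero entries in H:", np.mean(H < 1e-4))
```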

  1. Sequential cancer immunotherapy: targeted activity of dimeric TNF and IL-8

    Science.gov (United States)

    Adrian, Nicole; Siebenborn, Uta; Fadle, Natalie; Plesko, Margarita; Fischer, Eliane; Wüest, Thomas; Stenner, Frank; Mertens, Joachim C.; Knuth, Alexander; Ritter, Gerd; Old, Lloyd J.; Renner, Christoph

    2009-01-01

    Polymorphonuclear neutrophils (PMNs) are potent effectors of inflammation and their attempts to respond to cancer are suggested by their systemic, regional and intratumoral activation. We previously reported on the recruitment of CD11b+ leukocytes due to tumor site-specific enrichment of TNF activity after intravenous administration of a dimeric TNF immunokine with specificity for fibroblast activation protein (FAP). However, TNF-induced chemo-attraction and extravasation of PMNs from blood into the tumor is a multistep process essentially mediated by interleukin 8. With the aim to amplify the TNF-induced and IL-8-mediated chemotactic response, we generated immunocytokines by N-terminal fusion of a human anti-FAP scFv fragment with human IL-8 (IL-8(72)) and its N-terminally truncated form IL-8(3-72). Due to the dramatic difference in chemotaxis induction in vitro, we favored the mature chemokine fused to the anti-FAP scFv for further investigation in vivo. BALB/c nu/nu mice were simultaneously xenografted with FAP-positive or -negative tumors and extended chemo-attraction of PMNs was only detectable in FAP-expressing tissue after intravenous administration of the anti-FAP scFv-IL-8(72) construct. As TNF-activated PMNs are likewise producers and primary targets for IL-8, we investigated the therapeutic efficacy of co-administration of both effectors: Sequential application of scFv-IL-8(72) and dimeric IgG1-TNF fusion proteins significantly enhanced anti-tumor activity when compared either to a single effector treatment regimen or sequential application of non-targeted cytokines, indicating that the tumor-restricted sequential application of IL-8(72) and TNF is a promising approach for cancer therapy. PMID:19267427

  2. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Full Text Available Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
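
    The two binding operators compared in the record can be demonstrated in a few lines of numpy: circular convolution (as used in holographic reduced representations, with correlation via the involution as an approximate inverse) and random permutation (permute one operand, then superpose). The vector dimensionality and the permute-and-superpose formulation are illustrative assumptions; they are not taken verbatim from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048
a, b = rng.normal(0, 1 / np.sqrt(d), (2, d))      # random "environment" vectors

# --- circular convolution binding (holographic reduced representation) ---
bound_conv = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)
a_inv = np.concatenate([a[:1], a[1:][::-1]])      # involution of a, approximate inverse
b_hat = np.fft.irfft(np.fft.rfft(a_inv) * np.fft.rfft(bound_conv), n=d)
print("conv unbinding similarity:",
      np.dot(b_hat, b) / (np.linalg.norm(b_hat) * np.linalg.norm(b)))

# --- random permutation binding ---
perm = rng.permutation(d)
inv_perm = np.argsort(perm)
bound_perm = a + b[perm]                           # permute one operand, then superpose
b_hat_perm = (bound_perm - a)[inv_perm]            # exact recovery when the other operand is known
print("perm unbinding exact:", np.allclose(b_hat_perm, b))
```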

  3. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  4. Obsessive-compulsive symptoms and negative affect during tobacco withdrawal in a non-clinical sample of African American smokers.

    Science.gov (United States)

    Bello, Mariel S; Pang, Raina D; Chasson, Gregory S; Ray, Lara A; Leventhal, Adam M

    2017-05-01

    The association between obsessive-compulsive (OC) symptomatology and smoking is poorly understood, particularly in African Americans-a group subject to smoking- and OC-related health disparities. In a non-clinical sample of 253 African American smokers, we tested the negative reinforcement model of OC-smoking comorbidity, purporting that smokers with higher OC symptoms experience greater negative affect (NA) and urge to smoke for NA suppression upon acute tobacco abstinence. Following a baseline visit involving OC assessment, participants completed two counterbalanced experimental visits (non-abstinent vs. 16-h tobacco abstinence) involving affect, smoking urge, and nicotine withdrawal assessment. OC symptom severity predicted larger abstinence-provoked increases in overall NA, anger, anxiety, depression, fatigue, urge to smoke to suppress NA, and composite nicotine withdrawal symptom index. African American smokers with elevated OC symptoms appear to be vulnerable to negative reinforcement-mediated smoking motivation and may benefit from cessation treatments that diminish NA or the urge to quell NA via smoking. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the sequential product A ∘ B = A^(1/2) B A^(1/2) was proposed as a model for sequential quantum measurements. A nice investigation of properties of the sequential product has been carried out in [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied
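
    As a quick numerical sanity check of the definition above, the snippet below builds two random effects and verifies that their sequential product A ∘ B = A^(1/2) B A^(1/2) again has spectrum in [0, 1], i.e. is itself an effect. The random-effect construction and dimension are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import sqrtm

def random_effect(dim, rng):
    """Random quantum effect: a positive operator with spectrum in [0, 1]."""
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U = np.linalg.qr(M)[0]                          # random unitary
    return U @ np.diag(rng.uniform(0, 1, dim)) @ U.conj().T

rng = np.random.default_rng(0)
A, B = random_effect(4, rng), random_effect(4, rng)
seq = sqrtm(A) @ B @ sqrtm(A)                       # sequential product A o B

eig = np.linalg.eigvalsh((seq + seq.conj().T) / 2)  # Hermitize against round-off
print("spectrum of A o B lies in [0, 1]:",
      eig.min() >= -1e-10 and eig.max() <= 1 + 1e-10)
```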

  6. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns

  7. Kinetic modeling of particle dynamics in H⁻ negative ion sources (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Hatayama, A., E-mail: akh@ppl.appi.keio.ac.jp; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T. [Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522 (Japan); Miyamoto, K. [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima 772-8502 (Japan); Fukano, A. [Monozukuri Department, Tokyo Metropolitan College of Industrial Technology, Shinagawa, Tokyo 140-0011 (Japan); Mizuno, T. [Department of Management Science, College of Engineering, Tamagawa University, Machida, Tokyo 194-8610 (Japan)

    2014-02-15

    Progress in the kinetic modeling of particle dynamics in H⁻ negative ion source plasmas and their comparisons with experiments are reviewed, and discussed with some new results. Main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high power H⁻ ion beams: (i) effects of non-equilibrium features of the EEDF (electron energy distribution function) on H⁻ production, and (ii) extraction physics of H⁻ ions and beam optics.

  8. Sequential application of non-pharmacological interventions reduces the severity of labour pain, delays use of pharmacological analgesia, and improves some obstetric outcomes: a randomised trial

    Directory of Open Access Journals (Sweden)

    Rubneide Barreto Silva Gallo

    2018-01-01

    Trial registration: NCT01389128. [Gallo RBS, Santana LS, Marcolin AC, Duarte G, Quintana SM (2018) Sequential application of non-pharmacological interventions reduces the severity of labour pain, delays use of pharmacological analgesia, and improves some obstetric outcomes: a randomised trial. Journal of Physiotherapy 64: 33–40]

  9. Behavioral Modeling of WSN MAC Layer Security Attacks: A Sequential UML Approach

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    is the vulnerability to security attacks/threats. The performance and behavior of a WSN are vastly affected by such attacks. In order to be able to better address the vulnerabilities of WSNs in terms of security, it is important to understand the behavior of the attacks. This paper addresses the behavioral modeling...... of medium access control (MAC) security attacks in WSNs. The MAC layer is responsible for energy consumption, delay and channel utilization of the network and attacks on this layer can introduce significant degradation of the individual sensor nodes due to energy drain and in performance due to delays....... The behavioral modeling of attacks will be beneficial for designing efficient and secure MAC layer protocols. The security attacks are modeled using a sequential diagram approach of Unified Modeling Language (UML). Further, a new attack definition, specific to hybrid MAC mechanisms, is proposed....

  10. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  11. Validation studies on indexed sequential modeling for the Colorado River Basin

    International Nuclear Information System (INIS)

    Labadie, J.W.; Fontane, D.G.; Salas, J.D.; Ouarda, T.

    1991-01-01

    This paper reports on a method called indexed sequential modeling (ISM) that has been developed by the Western Area Power Administration to estimate reliable levels of project dependable power capacity (PDC) and applied to several federal hydro systems in the Western U.S. The validity of ISM in relation to more commonly accepted stochastic modeling approaches is analyzed by applying it to the Colorado River Basin using the Colorado River Simulation System (CRSS) developed by the U.S. Bureau of Reclamation. Performance of ISM is compared with results from input of stochastically generated data using the LAST Applied Stochastic Techniques Package. Results indicate that output generated from ISM synthetically generated sequences display an acceptable correspondence with results obtained from final convergent stochastically generated hydrology for the Colorado River Basin
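
    For readers unfamiliar with index-sequential generation, the sketch below shows the usual construction: one synthetic trace per possible starting year of the historical record, wrapping around at the end of the record. Whether this matches the Western Area Power Administration's exact implementation is not stated in the record, and the flow values are arbitrary toy numbers.

```python
import numpy as np

def index_sequential_traces(history, trace_length):
    """Form index-sequential traces by cycling through the historical record,
    one trace per possible starting year (wrapping around at the end)."""
    history = np.asarray(history, dtype=float)
    n = len(history)
    return np.array([np.roll(history, -start)[:trace_length] for start in range(n)])

# toy annual flow record (arbitrary units); each row below is one hydrologic trace
flows = np.array([12.1, 9.4, 15.2, 11.0, 8.7, 13.3, 10.5, 14.8])
traces = index_sequential_traces(flows, trace_length=5)
print(traces.shape)            # (8, 5): 8 traces of 5 years each
print(traces[0], traces[6])    # the 7th trace wraps around to the start of the record
```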

  12. Modeling the stepping mechanism in negative lightning leaders

    Science.gov (United States)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepped manner through a mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible because the model accounts for the influence on the potential of the space charge injected into the discharge gap by the streamer corona. The model also takes into account an asymmetry between the properties of negative and positive streamers, based on the fact, well established by numerous laboratory measurements, that positive streamers need roughly half the electric field that negative ones do in order to appear and propagate. Extinction of a conducting channel is also taken into account as a possible path of its evolution, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  13. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. Type I and type II error rates of 0.05 were used, as recommended for studies with insects. Frequency-distribution fitting was divided into two phases: up to 85 DAE the negative binomial distribution fit the data best (Phase I), and thereafter the Poisson distribution fit best (Phase II). The equations that define decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is approximately 39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
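
    The helper below evaluates the decision lines quoted in the abstract in the usual sequential-sampling way: stop below the lower line (infestation below threshold), stop above the upper line (control recommended), keep sampling in between. The interpretation of N as the number of plants examined and the cumulative count as attacked squares, as well as the example numbers, are assumptions made for illustration.

```python
def boll_weevil_decision(n_samples, cum_attacked, phase="I"):
    """Sequential sampling decision using the Phase I / Phase II lines from the study.

    n_samples    : number of sample units (plants) examined so far
    cum_attacked : cumulative number of squares with feeding/oviposition punctures
    Returns 'below threshold', 'continue sampling', or 'control recommended'.
    """
    a, b = (5.1743, 0.5730) if phase == "I" else (4.2479, 0.5771)
    lower = -a + b * n_samples      # S0 line
    upper = a + b * n_samples       # S1 line
    if cum_attacked <= lower:
        return "below threshold"
    if cum_attacked >= upper:
        return "control recommended"
    return "continue sampling"

# illustrative use: after 20 plants and 18 attacked squares in Phase I
print(boll_weevil_decision(20, 18))   # 'control recommended' (upper line is about 16.6)
```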

  14. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is declared as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed along with the first results of our research.
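
    A generic sketch (my own illustration, not the authors' processor) of the sequential-update idea: each arriving event updates a posterior over a "source present" hypothesis, and a decision is declared once a probability threshold is crossed. The likelihood functions, energies and threshold below are hypothetical.

        # Hedged illustration of sequential Bayesian detection: the posterior is updated
        # event by event and a decision is made as soon as the evidence is strong enough,
        # rather than after a fixed dwell time.
        def sequential_detector(events, like_source, like_background,
                                prior=0.5, threshold=0.99):
            p = prior
            for k, e in enumerate(events, start=1):
                ls, lb = like_source(e), like_background(e)
                p = p * ls / (p * ls + (1.0 - p) * lb)   # Bayes update with this datum
                if p >= threshold:
                    return k, p                          # detection declared early
            return None, p                               # no detection in this stream

        # Toy usage with made-up likelihoods for photopeak vs. background energies (keV).
        events = [661.7, 120.0, 661.5, 662.1, 661.9]
        in_peak = lambda e: 0.6 if abs(e - 661.7) < 2.0 else 0.4
        in_bkg = lambda e: 0.1 if abs(e - 661.7) < 2.0 else 0.9
        print(sequential_detector(events, in_peak, in_bkg))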

  15. A sequential threshold cure model for genetic analysis of time-to-event data

    DEFF Research Database (Denmark)

    Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.

    2011-01-01

    In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific...... pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study...... survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection...

  16. Neutron stars in non-linear coupling models

    International Nuclear Information System (INIS)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo

    2001-01-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  17. Neutron stars in non-linear coupling models

    Energy Technology Data Exchange (ETDEWEB)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z. [Rio Grande do Sul Univ., Porto Alegre, RS (Brazil); Malheiro, Manuel [Universidade Federal Fluminense, Niteroi, RJ (Brazil); Chiapparini, Marcelo [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    2001-07-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ≈ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  18. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Guiseppe, Cavaliere; Rahbæk, Anders; Taylor, A.M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice....

  19. Affective reactions and context-dependent processing of negations

    Directory of Open Access Journals (Sweden)

    Enrico Rubaltelli

    2008-12-01

    Three experiments demonstrate how the processing of negations is contingent on the evaluation context in which the negative information is presented. In addition, the strategy used to process the negations induced different affective reactions toward the stimuli, leading to inconsistency of preference. Participants were presented with stimuli described by either stating the presence of positive features (explicitly positive alternative) or negating the presence of negative features (non-negative alternative). Alternatives were presented for either joint (JE) or separate (SE) evaluation. Experiment 1 showed that the non-negative stimuli were judged less attractive than the positive ones in JE but not in SE. Experiment 2 revealed that the non-negative stimuli induced a less clear and less positive feeling when they were paired with explicitly positive stimuli rather than evaluated separately. Non-negative options were also found less easy to judge than the positive ones in JE but not in SE. Finally, Experiment 3 showed that people process negations using two different models depending on the evaluation mode. Through a memory task, we found that in JE people process the non-negative attributes as negations of negative features, whereas in SE they directly process the non-negative attributes as positive features.

  20. Hierarchical Bayesian analysis of outcome- and process-based social preferences and beliefs in Dictator Games and sequential Prisoner's Dilemmas.

    Science.gov (United States)

    Aksoy, Ozan; Weesie, Jeroen

    2014-05-01

    In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  2. Carbapenem-Resistant Non-Glucose-Fermenting Gram-Negative Bacilli: the Missing Piece to the Puzzle

    Science.gov (United States)

    Gniadek, Thomas J.; Carroll, Karen C.

    2016-01-01

    The non-glucose-fermenting Gram-negative bacilli Pseudomonas aeruginosa and Acinetobacter baumannii are increasingly acquiring carbapenem resistance. Given their intrinsic antibiotic resistance, this can cause extremely difficult-to-treat infections. Additionally, resistance gene transfer can occur between Gram-negative species, regardless of their ability to ferment glucose. Thus, the acquisition of carbapenemase genes by these organisms increases the risk of carbapenemase spread in general. Ultimately, infection control practitioners and clinical microbiologists need to work together to determine the risk carried by carbapenem-resistant non-glucose-fermenting Gram-negative bacilli (CR-NF) in their institution and what methods should be considered for surveillance and detection of CR-NF. PMID:26912753

  3. Geographically weighted negative binomial regression applied to zonal level safety performance models.

    Science.gov (United States)

    Gomes, Marcos José Timbó Lima; Cunto, Flávio; da Silva, Alan Ricardo

    2017-09-01

    Generalized Linear Models (GLMs) with a negative binomial distribution for errors have been widely used to estimate safety at the level of transportation planning. The limited ability of this technique to take spatial effects into account can be overcome through the use of local models from spatial regression techniques, such as Geographically Weighted Poisson Regression (GWPR). Although GWPR deals with spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the possible overdispersion that can be found in observations of road-traffic crashes. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form and to account for the overdispersion of the data: the first assumes a constant overdispersion parameter for all traffic zones and the second estimates it for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and local spatial GWPR and GWNBR models at the level of traffic zones in Fortaleza/Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors and land use. The models were calibrated using the frequency of injury crashes as the dependent variable, and the results showed that GWPR and GWNBR achieved a better performance than GLM for the average residuals and likelihood as well as reducing the spatial autocorrelation of the residuals, and the GWNBR model was better able to capture the spatial heterogeneity of the crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
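
    A rough sketch (my own illustration, not the authors' code) of the core of a geographically weighted count model: observations are weighted by a spatial kernel centred on one traffic zone, and a locally weighted Poisson fit is obtained by IRLS. This illustrates the GWPR case; the negative binomial version adds a local overdispersion parameter. All data and parameter names are hypothetical.

        import numpy as np

        def gaussian_kernel(d, bandwidth):
            """Spatial weights that decay with distance from the regression point."""
            return np.exp(-0.5 * (d / bandwidth) ** 2)

        def local_poisson_fit(X, y, w, n_iter=50):
            """Spatially weighted Poisson regression via IRLS for one regression point."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                eta = X @ beta
                mu = np.exp(eta)
                z = eta + (y - mu) / mu          # working response
                W = w * mu                       # IRLS weights times spatial weights
                XtW = X.T * W
                beta = np.linalg.solve(XtW @ X, XtW @ z)
            return beta

        # Toy usage: crash counts for 126 zones with one exposure covariate.
        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 10, size=(126, 2))            # zone centroids (hypothetical)
        X = np.column_stack([np.ones(126), rng.normal(size=126)])
        y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))
        d = np.linalg.norm(coords - coords[0], axis=1)        # distances to zone 0
        print(local_poisson_fit(X, y, gaussian_kernel(d, bandwidth=2.0)))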

  4. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    Science.gov (United States)

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the possibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafting schemes, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was used to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was used to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary arteries and grafts were obtained. Hemodynamic parameters were also computed, including wall shear stress (WSS) and the oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much smaller than that in Model 2. Model 1 therefore shows favourable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.
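
    For readers unfamiliar with the OSI metric mentioned above, the following sketch (my own illustration, with a synthetic wall shear stress trace) evaluates the standard definition OSI = 0.5 * (1 - |∫ τ dt| / ∫ |τ| dt) at one wall location over a cardiac cycle.

        import numpy as np

        def oscillatory_shear_index(tau, dt):
            """OSI for one wall location; tau has shape (n_steps, 3): WSS vector per time step."""
            mean_vec = np.linalg.norm(np.sum(tau, axis=0) * dt)      # |time-averaged WSS vector|
            mean_mag = np.sum(np.linalg.norm(tau, axis=1)) * dt      # time-averaged |WSS|
            return 0.5 * (1.0 - mean_vec / mean_mag)

        # Toy usage: a purely oscillating WSS trace over a 0.8 s cycle gives OSI close to 0.5.
        t = np.linspace(0.0, 0.8, 200, endpoint=False)
        tau = np.column_stack([np.sin(2 * np.pi * t / 0.8), np.zeros_like(t), np.zeros_like(t)])
        print(oscillatory_shear_index(tau, dt=t[1] - t[0]))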

  5. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on optimizing the production of the target protein. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict the optimal medium renewal time and composition.
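
    A hedged sketch of the generic kinetic-identification step mentioned above (nonlinear least squares on a macroscopic growth model); the Monod-type scheme, rate constants and variable names here are hypothetical and not the reaction scheme identified by the authors.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        # Toy macroscopic scheme: biomass X grows on substrate S with Monod kinetics.
        def rhs(t, y, mu_max, Ks, yield_xs):
            X, S = y
            mu = mu_max * S / (Ks + S)
            return [mu * X, -mu * X / yield_xs]

        def simulate(params, t_eval, y0):
            sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval, args=tuple(params))
            return sol.y.T

        def residuals(params, t_eval, y0, data):
            return (simulate(params, t_eval, y0) - data).ravel()

        # Synthetic "measurements" generated from known parameters, then refitted.
        t = np.linspace(0, 48, 13)
        data = simulate([0.05, 0.5, 0.4], t, y0=[0.1, 5.0])
        fit = least_squares(residuals, x0=[0.02, 1.0, 0.3],
                            args=(t, [0.1, 5.0], data), bounds=(1e-6, np.inf))
        print(fit.x)   # should be close to the generating parameters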

  6. Sequential Extraction Versus Comprehensive Characterization of Heavy Metal Species in Brownfield Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.

    2002-06-01

    The applicability of sequential extraction as a means to determine heavy-metal species was examined in a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.

  7. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  8. Positive and Negative Perceptions of Bumiputra And Non-Bumiputra Students on Professional Qualification

    Directory of Open Access Journals (Sweden)

    Abd. Rashid Noor Asidah

    2017-01-01

    Bumiputra and non-Bumiputra students may come from various economic backgrounds and cultures. This may influence their perception of their career choice of pursuing a professional accounting qualification. Thus, this study investigates the difference in positive and negative perceptions of Bumiputra and non-Bumiputra students on pursuing a professional qualification upon graduation. A questionnaire survey method was used to collect the data from final-year accounting students from five public and three private universities in Malaysia. Means and independent-sample t-test results were analysed. Results indicated that there are only a few significant differences between Bumiputra and non-Bumiputra students in positive and negative perceptions of becoming professional accountants. As perception frames action, these findings would be useful to the Malaysian Institute of Accountants as well as professional bodies to attract both Bumiputra and non-Bumiputra graduates to become professional accountants.

  9. Non-negative matrix factorization in texture feature for classification of dementia with MRI data

    Science.gov (United States)

    Sarwinda, D.; Bustamam, A.; Ardaneswari, G.

    2017-07-01

    This paper investigates the application of non-negative matrix factorization as a feature selection method for features derived from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process of the MRI data, seven features are obtained from the gray level co-occurrence matrix. Non-negative matrix factorization then selects the three most influential of these features. A Naïve Bayes classifier is adopted to classify dementia, i.e., Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal controls. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's disease versus normal controls. The proposed method is also compared with another feature selection method, Principal Component Analysis (PCA).
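
    A minimal sketch of one plausible way to use NMF for feature selection on non-negative texture features, as described above; the data are synthetic, and selecting features by their total component loadings is my own reading, not necessarily the authors' exact procedure.

        import numpy as np
        from sklearn.decomposition import NMF

        # Hypothetical GLCM texture features (non-negative) for a set of MRI scans:
        # columns might be contrast, correlation, energy, homogeneity, entropy, ...
        rng = np.random.default_rng(42)
        features = rng.random((60, 7))             # 60 subjects x 7 GLCM features

        # Factorize the feature matrix and rank original features by their total
        # loading across components; keep the top three as the "selected" features.
        model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
        W = model.fit_transform(features)          # subjects x components
        H = model.components_                      # components x original features
        selected = np.argsort(H.sum(axis=0))[::-1][:3]
        print("selected feature indices:", selected)

        # features[:, selected] would then feed a Naive Bayes classifier.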

  10. Direct Observations of Parenting and Real-time Negative Affect among Adolescent Smokers and Non-Smokers

    Science.gov (United States)

    Richmond, Melanie J.; Mermelstein, Robin J.; Wakschlag, Lauren S.

    2012-01-01

    Objective This longitudinal study examined how observations of parental general communication style and control with their adolescents predicted changes in negative affect over time for adolescent smokers and non-smokers. Method Participants were 9th and 10th grade adolescents (N = 111; 56.8% female) who had all experimented with cigarettes and were thus at risk for continued smoking and escalation; 36% of these adolescents (n = 40) had smoked in the past month at baseline and were considered smokers in the present analyses. Adolescents participated separately with mothers and fathers in observed parent-adolescent problem-solving discussions to assess parenting at baseline. Adolescent negative affect was assessed at baseline, 6- and 24-months via ecological momentary assessment. Results Among both smoking and non-smoking adolescents, escalating negative affect significantly increased risk for future smoking. Higher quality maternal and paternal communication predicted a decline in negative affect over 1.5 years for adolescent smokers but was not related to negative affect for non-smokers. Controlling maternal, but not paternal, parenting predicted escalation in negative affect for all adolescents. Conclusions Findings suggest that reducing negative affect among experimenting youth can reduce risk for smoking escalation. Therefore, family-based prevention efforts for adolescent smoking escalation might consider parental general communication style and control as intervention targets. However, adolescent smoking status and parent gender may moderate these effects. PMID:23153193

  11. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Xiaolei; Gao, Xin

    2013-01-01

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
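
    A small sketch (my own illustration, not the authors' algorithm) of the robustness argument behind NMF-MCC: compared with the l2 norm, the correntropy between a matrix and its reconstruction barely reacts to a single large outlier. The data and kernel width below are made up.

        import numpy as np

        def correntropy(A, B, sigma=1.0):
            """Average Gaussian kernel similarity between corresponding entries of A and B."""
            diff = (A - B).ravel()
            return np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2)))

        rng = np.random.default_rng(1)
        X = rng.random((20, 10))
        X_hat = X + rng.normal(scale=0.01, size=X.shape)    # good reconstruction
        X_hat_outlier = X_hat.copy()
        X_hat_outlier[0, 0] += 5.0                          # one corrupted entry

        print("l2, clean vs outlier:",
              np.linalg.norm(X - X_hat), np.linalg.norm(X - X_hat_outlier))
        print("correntropy, clean vs outlier:",
              correntropy(X, X_hat), correntropy(X, X_hat_outlier))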

  12. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-03-24

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.

  13. Is the negative glow plasma of a direct current glow discharge negatively charged?

    International Nuclear Information System (INIS)

    Bogdanov, E. A.; Saifutdinov, A. I.; Demidov, V. I.; Kudryavtsev, A. A.

    2015-01-01

    A classic problem in gas discharge physics is discussed: what is the sign of the charge density in the negative glow region of a glow discharge? It is shown that the traditional interpretations in textbooks on gas discharge physics, which state that the negative glow plasma is negatively charged, are based on analogies with a simple one-dimensional model of the discharge. Because real glow discharges with a positive column are always two-dimensional, the transverse (radial) term in the divergence of the electric field can produce a non-monotonic axial profile of the charge density in the plasma while its sign remains positive. A numerical calculation of a glow discharge is presented, showing a positive space charge in the negative glow under conditions where a one-dimensional model of the discharge would predict a negative space charge.

  14. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    Science.gov (United States)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  15. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to diffusion-driven adsorption from a solution onto a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
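
    A toy sketch (my own illustration) of the RSA ingredient described above: unit-length molecules are dropped at random positions on a line and kept only if they do not overlap previously adsorbed ones; each attempt corresponds to one RSA time step, which the paper then relates to physical time via diffusion.

        import numpy as np

        def random_sequential_adsorption(line_length=100.0, molecule_size=1.0,
                                         max_attempts=50_000, rng=None):
            """1D RSA: accept an attempt only if it does not overlap anything adsorbed."""
            if rng is None:
                rng = np.random.default_rng(0)
            placed = []
            for _ in range(max_attempts):
                x = rng.uniform(0.0, line_length - molecule_size)
                if all(abs(x - y) >= molecule_size for y in placed):
                    placed.append(x)
            return len(placed) * molecule_size / line_length

        # Coverage approaches the 1D jamming (Renyi) limit of about 0.7476 as attempts grow.
        print(random_sequential_adsorption())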

  16. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  17. Positive and Negative Impacts of Non-Native Bee Species around the World.

    Science.gov (United States)

    Russo, Laura

    2016-11-28

    Though they are relatively understudied, non-native bees are ubiquitous and have enormous potential economic and environmental impacts. These impacts may be positive or negative, and are often unquantified. In this manuscript, I review literature on the known distribution and environmental and economic impacts of 80 species of introduced bees. The potential negative impacts of non-native bees include competition with native bees for nesting sites or floral resources, pollination of invasive weeds, co-invasion with pathogens and parasites, genetic introgression, damage to buildings, affecting the pollination of native plant species, and changing the structure of native pollination networks. The potential positive impacts of non-native bees include agricultural pollination, availability for scientific research, rescue of native species, and resilience to human-mediated disturbance and climate change. Most non-native bee species are accidentally introduced and nest in stems, twigs, and cavities in wood. In terms of number of species, the best represented families are Megachilidae and Apidae, and the best represented genus is Megachile. The best studied genera are Apis and Bombus, and most of the species in these genera were deliberately introduced for agricultural pollination. Thus, we know little about the majority of non-native bees, accidentally introduced or spreading beyond their native ranges.

  18. Positive and Negative Impacts of Non-Native Bee Species around the World

    Directory of Open Access Journals (Sweden)

    Laura Russo

    2016-11-01

    Though they are relatively understudied, non-native bees are ubiquitous and have enormous potential economic and environmental impacts. These impacts may be positive or negative, and are often unquantified. In this manuscript, I review literature on the known distribution and environmental and economic impacts of 80 species of introduced bees. The potential negative impacts of non-native bees include competition with native bees for nesting sites or floral resources, pollination of invasive weeds, co-invasion with pathogens and parasites, genetic introgression, damage to buildings, affecting the pollination of native plant species, and changing the structure of native pollination networks. The potential positive impacts of non-native bees include agricultural pollination, availability for scientific research, rescue of native species, and resilience to human-mediated disturbance and climate change. Most non-native bee species are accidentally introduced and nest in stems, twigs, and cavities in wood. In terms of number of species, the best represented families are Megachilidae and Apidae, and the best represented genus is Megachile. The best studied genera are Apis and Bombus, and most of the species in these genera were deliberately introduced for agricultural pollination. Thus, we know little about the majority of non-native bees, accidentally introduced or spreading beyond their native ranges.

  19. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    Science.gov (United States)

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. A CTDIvol identical to that of the initial 4-slice MDCT examination was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and the nucleus caudatus, with signal-to-noise ratios (SNR) being subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher than that of the 4-slice MDCT sequential acquisitions (p < 0.05). Objective image quality was significantly improved in the spiral compared to the sequential cCT datasets, with a mean SNR improvement of 61.65% (Bonferroni-corrected p < 0.05). Spiral cCT with an advanced model IR technique thus significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.

  20. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics vary from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison; whether or not data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions for when similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)
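
    A small self-contained illustration (my own, not from the paper) of the sequential-versus-simultaneous question in its simplest setting: for a linear-Gaussian problem with independent observation errors, assimilating observations one at a time gives exactly the same posterior as assimilating them all at once; differences such as those analyzed above arise only once the forward model is nonlinear.

        import numpy as np

        def kalman_update(mean, cov, H, R, y):
            S = H @ cov @ H.T + R
            K = cov @ H.T @ np.linalg.inv(S)
            return mean + K @ (y - H @ mean), (np.eye(len(mean)) - K @ H) @ cov

        rng = np.random.default_rng(3)
        m0, C0 = np.zeros(2), np.eye(2)
        H = rng.normal(size=(3, 2))
        R = 0.5 * np.eye(3)                       # independent observation errors
        y = rng.normal(size=3)

        # Simultaneous update with all three observations at once.
        m_sim, C_sim = kalman_update(m0, C0, H, R, y)

        # Sequential updates, one observation at a time.
        m_seq, C_seq = m0, C0
        for i in range(3):
            m_seq, C_seq = kalman_update(m_seq, C_seq, H[i:i+1], R[i:i+1, i:i+1], y[i:i+1])

        print(np.allclose(m_sim, m_seq), np.allclose(C_sim, C_seq))   # True True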

  1. Dissociable roles of dopamine and serotonin transporter function in a rat model of negative urgency.

    Science.gov (United States)

    Yates, Justin R; Darna, Mahesh; Gipson, Cassandra D; Dwoskin, Linda P; Bardo, Michael T

    2015-09-15

    Negative urgency is a facet of impulsivity that reflects mood-based rash action and is associated with various maladaptive behaviors in humans. However, the underlying neural mechanisms of negative urgency are not fully understood. Several brain regions within the mesocorticolimbic pathway, as well as the neurotransmitters dopamine (DA) and serotonin (5-HT), have been implicated in impulsivity. Extracellular DA and 5-HT concentrations are regulated by DA transporters (DAT) and 5-HT transporters (SERT); thus, these transporters may be important molecular mechanisms underlying individual differences in negative urgency. The current study employed a reward omission task to model negative urgency in rats. During reward trials, a cue light signaled the non-contingent delivery of one sucrose pellet; immediately following the non-contingent reward, rats responded on a lever to earn sucrose pellets (operant phase). Omission trials were similar to reward trials, except that non-contingent sucrose was omitted following the cue light prior to the operant phase. As expected, contingent responding was higher following omission of expected reward than following delivery of expected reward, thus reflecting negative urgency. Upon completion of behavioral training, Vmax and Km were obtained from kinetic analysis of [(3)H]DA and [(3)H]5-HT uptake using synaptosomes prepared from nucleus accumbens (NAc), dorsal striatum (Str), medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC) isolated from individual rats. Vmax for DAT in NAc and for SERT in OFC were positively correlated with negative urgency scores. The current findings suggest that mood-based impulsivity (negative urgency) is associated with enhanced DAT function in NAc and SERT function in OFC. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Assessing the impact of safety monitoring on the efficacy analysis in large Phase III group sequential trials with non-trivial safety event rate.

    Science.gov (United States)

    Weng, Yanqiu; Palesch, Yuko Y; DeSantis, Stacia M; Zhao, Wenle

    2016-01-01

    In Phase III clinical trials for life-threatening conditions, some serious but expected adverse events, such as early deaths or congestive heart failure, are often treated as the secondary or co-primary endpoint, and are closely monitored by the Data and Safety Monitoring Committee (DSMC). A naïve group sequential design (GSD) for such a study is to specify univariate statistical boundaries for the efficacy and safety endpoints separately, and then implement the two boundaries during the study, even though the two endpoints are typically correlated. One problem with this naïve design, which has been noted in the statistical literature, is the potential loss of power. In this article, we develop an analytical tool to evaluate this negative impact for trials with non-trivial safety event rates, particularly when the safety monitoring is informal. Using a bivariate binary power function for the GSD with a random-effect component to account for subjective decision-making in safety monitoring, we demonstrate how, under common conditions, the power loss in the naïve design can be substantial. This tool may be helpful to entities such as the DSMCs when they wish to deviate from the prespecified stopping boundaries based on safety measures.

  3. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments intended for the logical modeling of the memory of digital devices. The cases in point are the logic-dynamical operation named venjunction and the venjunctive function, as well as the sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they fit organically into the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  4. Making Career Decisions--A Sequential Elimination Approach.

    Science.gov (United States)

    Gati, Itamar

    1986-01-01

    Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…

  5. Sequential Computerized Mastery Tests--Three Simulation Studies

    Science.gov (United States)

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…

  6. A new cellular automata model of traffic flow with negative exponential weighted look-ahead potential

    Science.gov (United States)

    Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye

    2016-10-01

    With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
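
    A toy sketch (my own illustration, not the authors' model) of the weighting idea described above: a car's chance of advancing decreases with a look-ahead potential in which occupied cells closer to the driver carry a larger negative-exponential weight exp(-decay*j). All parameter values are hypothetical.

        import numpy as np

        def step(road, look_ahead=5, beta=1.0, decay=0.7, rng=None):
            """One parallel update of a ring-road CA with a weighted look-ahead potential."""
            if rng is None:
                rng = np.random.default_rng()
            n, new = len(road), np.zeros_like(road)
            for i in np.flatnonzero(road):
                potential = sum(road[(i + 1 + j) % n] * np.exp(-decay * j)
                                for j in range(look_ahead))
                nxt = (i + 1) % n
                if road[nxt] == 0 and rng.random() < np.exp(-beta * potential):
                    new[nxt] = 1        # move forward, less likely if cells ahead are occupied
                else:
                    new[i] = 1          # stay put
            return new

        # Estimate the flux (moves per site per step) at density 0.3 on a 200-cell ring.
        rng = np.random.default_rng(0)
        road = (rng.random(200) < 0.3).astype(int)
        moves = 0
        for _ in range(500):
            new_road = step(road, rng=rng)
            moves += int(np.sum(new_road != road)) // 2   # each move changes two cells
            road = new_road
        print("flux ~", moves / (500 * len(road)))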

  7. H- production from non-cesiated converter-type negative ion sources

    International Nuclear Information System (INIS)

    van Os, C.F.A.; Leung, K.N.; Lietzke, A.F.; Stearns, J.W.; Kunkel, W.B.

    1989-11-01

    Recent results of surface produced negative ions are presented. Two low work function metal surfaces have been studied, barium and magnesium, in combination with several plasma generators; rf- and dc-filament discharges. The negative ion yield for barium is about 5 to 6 times larger than magnesium. This ratio is confirmed by model calculations on resonant charge exchange. 32 refs., 9 figs

  8. Modeling Non-homologous End Joining

    Science.gov (United States)

    Li, Yongfeng

    2013-01-01

    Non-homologous end joining (NHEJ) is the dominant DNA double strand break (DSB) repair pathway and involves several NHEJ proteins such as Ku, DNA-PKcs, XRCC4, Ligase IV and so on. Once DSBs are generated, Ku is first recruited to the DNA end, followed by other NHEJ proteins for DNA end processing and ligation. Because the break ends are ligated directly, without the need for a homologous template, NHEJ is an error-prone but efficient repair pathway. Several mechanisms have been proposed for how the efficiency of NHEJ repair is affected. The type of DNA damage is an important factor in NHEJ repair: for instance, the length of a DNA fragment may determine the recruitment efficiency of NHEJ proteins such as Ku [1], and the complexity of the DNA breaks [2] influences the choice of NHEJ proteins and of the NHEJ subpathway. On the other hand, the chromatin structure also plays a role in the accessibility of NHEJ proteins to the DNA damage site. In this talk, mathematical models of NHEJ, consisting of series of biochemical reactions that comply with the laws of chemical kinetics (e.g., mass action), will be introduced. By mathematical and numerical analysis and parameter estimation, the models are able to capture the qualitative biological features and show good agreement with experimental data. In conclusion, from the modeling viewpoint, how the NHEJ proteins are recruited will first be discussed, connecting the classical sequential model [4] with the recently proposed two-phase model [5]. It will then be addressed how the NHEJ repair pathway is affected by the length of the DNA fragment [6], the complexity of the DNA damage [7], and the chromatin structure [8].
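
    A hedged sketch of what a mass-action model of sequential protein recruitment can look like (my own toy scheme, not the models discussed in the talk): the break first binds Ku, then DNA-PKcs, and is finally ligated; rate constants and concentrations are hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy sequential recruitment: DSB + Ku -> DSB:Ku -> DSB:Ku:PKcs -> repaired.
        k_on_ku, k_on_pkcs, k_ligate = 0.5, 0.3, 0.1      # hypothetical rate constants

        def rhs(t, y):
            dsb, ku, dsb_ku, pkcs, dsb_ku_pkcs, repaired = y
            v1 = k_on_ku * dsb * ku           # Ku binds the free break end
            v2 = k_on_pkcs * dsb_ku * pkcs    # DNA-PKcs joins the Ku-bound break
            v3 = k_ligate * dsb_ku_pkcs       # processing/ligation completes repair
            return [-v1, -v1, v1 - v2, -v2, v2 - v3, v3]

        y0 = [1.0, 5.0, 0.0, 5.0, 0.0, 0.0]               # initial concentrations (a.u.)
        sol = solve_ivp(rhs, (0.0, 60.0), y0, t_eval=np.linspace(0, 60, 7))
        print(sol.y[-1])                                  # repaired fraction over time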

  9. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Background: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods: We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results: The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
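
    As a rough illustration of the required-information-size idea described above (my own sketch, in the spirit of Trial Sequential Analysis rather than a reproduction of its software), the following computes a diversity-adjusted required information size for a binary outcome; the assumed control risk, relative risk reduction and D2 value are hypothetical.

        from scipy.stats import norm

        def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
            """Diversity-adjusted required information size (total participants)
            for a meta-analysis of a binary outcome."""
            p_exp = p_control * (1.0 - rrr)           # assumed intervention-group risk
            p_bar = 0.5 * (p_control + p_exp)
            delta = p_control - p_exp
            z_a = norm.ppf(1.0 - alpha / 2.0)
            z_b = norm.ppf(1.0 - beta)
            info_size = 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / delta ** 2
            return info_size / (1.0 - diversity)      # inflate for between-trial diversity D^2

        # Example: 20% control risk, 20% relative risk reduction, D^2 = 25%.
        print(round(required_information_size(0.20, 0.20, diversity=0.25)))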

  10. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    International Nuclear Information System (INIS)

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All quantitative phase field models try to get rid of the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, due to the introduction of a diffuse interface, are analysed based on the thermodynamic state across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from equilibrium conditions but used to simulate a non-equilibrium solidification process. The interface-thickness dependence of the non-equilibrium effects and its restriction on large-scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology)

  11. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithm regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
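
    A toy sketch (my own illustration, much simpler than the procedures studied above) of a likelihood-ratio change-point detector for a Gaussian mean shift in which the unknown post-change mean is tracked by a non-anticipating online estimate, i.e., the estimate used at time t depends only on earlier observations. The threshold and step size are hypothetical.

        import numpy as np

        def online_detector(x, threshold=25.0, pre_mean=0.0, sigma=1.0, eta=0.1):
            """CUSUM-like statistic with an online, non-anticipating post-change estimate."""
            stat, mu_hat = 0.0, pre_mean
            for t, xt in enumerate(x, start=1):
                # log-likelihood ratio of "mean = mu_hat" vs "mean = pre_mean"
                llr = ((xt - pre_mean) ** 2 - (xt - mu_hat) ** 2) / (2.0 * sigma ** 2)
                stat = max(0.0, stat + llr)
                if stat >= threshold:
                    return t                          # alarm time
                mu_hat += eta * (xt - mu_hat)         # update estimate AFTER using x_t
            return None

        rng = np.random.default_rng(7)
        data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.0, 1, 100)])
        print(online_detector(data))                  # alarms some time after the change at t = 200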

  12. Negative optical spin torque wrench of a non-diffracting non-paraxial fractional Bessel vortex beam

    Science.gov (United States)

    Mitri, F. G.

    2016-10-01

    An absorptive Rayleigh dielectric sphere in a non-diffracting non-paraxial fractional Bessel vortex beam experiences a spin torque. The axial and transverse radiation spin torque components are evaluated in the dipole approximation using the radiative correction of the electric field. Particular emphasis is given on the polarization as well as changing the topological charge α and the half-cone angle of the beam. When α is zero, the axial spin torque component vanishes. However, when α becomes a real positive number, the vortex beam induces left-handed (negative) axial spin torque as the sphere shifts off-axially from the center of the beam. The results show that a non-diffracting non-paraxial fractional Bessel vortex beam is capable of inducing a spin reversal of an absorptive Rayleigh sphere placed arbitrarily in its path. Potential applications are yet to be explored in particle manipulation, rotation in optical tweezers, optical tractor beams, and the design of optically-engineered metamaterials to name a few areas.

  13. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    Science.gov (United States)

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive "learning rate" that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.

  14. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    Science.gov (United States)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems, e.g., in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation of LiFePO4 batteries are the flat characteristic of the open-circuit voltage over the battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distributions. In addition, uncertainties in the battery model parameters must be taken into account to describe battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to better handling of the hysteresis problem, the results show the benefits of the proposed method against estimation with an Extended Kalman filter.
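
    A hedged sketch of one step of a bootstrap particle filter (Sequential Monte Carlo) for SOC estimation with a deliberately simplified battery model; the open-circuit-voltage curve, noise levels, capacity and parameter names are hypothetical and not those of the cited study.

        import numpy as np

        def ocv(soc):                                 # flat-ish open-circuit-voltage curve
            return 3.2 + 0.1 * soc

        def particle_filter_step(particles, weights, current_a, dt_s, v_meas,
                                 capacity_as=2.3 * 3600, r_int=0.05, sigma_v=0.01, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            # Propagate: coulomb counting plus process noise.
            particles = particles - current_a * dt_s / capacity_as \
                        + rng.normal(0.0, 1e-4, size=particles.shape)
            particles = np.clip(particles, 0.0, 1.0)
            # Weight by the voltage measurement likelihood.
            v_pred = ocv(particles) - r_int * current_a
            weights = weights * np.exp(-0.5 * ((v_meas - v_pred) / sigma_v) ** 2)
            weights /= weights.sum()
            # Resample when the effective sample size degenerates.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
            return particles, weights

        particles = np.random.default_rng(0).uniform(0.4, 0.9, 500)
        weights = np.full(500, 1.0 / 500)
        particles, weights = particle_filter_step(particles, weights,
                                                  current_a=1.0, dt_s=1.0, v_meas=3.26)
        print("SOC estimate:", float(np.sum(weights * particles)))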

  15. The statistical decay of very hot nuclei: from sequential decay to multifragmentation

    International Nuclear Information System (INIS)

    Carlson, B.V.; Donangelo, R.; Universidad de la Republica, Montevideo; Souza, S.R.; Universidade Federal do Rio Grande do Sul; Lynch, W.G.; Steiner, A.W.; Tsang, M.B.

    2010-01-01

    At low excitation energies, the compound nucleus typically decays through the sequential emission of light particles. As the energy increases, the emission probability of heavier fragments increases until, at sufficiently high energies, several heavy complex fragments are emitted during the decay. The extent to which this fragment emission is simultaneous or sequential has been a subject of theoretical and experimental study for almost 30 years. The Statistical Multifragmentation Model, an equilibrium model of simultaneous fragment emission, uses the configurations of a statistical ensemble to determine the distribution of primary fragments of a compound nucleus. The primary fragments are then assumed to decay by sequential compound emission or Fermi breakup. As a first step toward a more unified model of these processes, we demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical version of the Statistical Multifragmentation Model. We then establish a link between this unified Fermi breakup / statistical multifragmentation model and the well-known process of compound nucleus emission, which permits simultaneous and sequential emission to be considered on the same footing. Within this unified framework, we analyze the increasing importance of simultaneous, multifragment decay with increasing excitation energy and decreasing lifetime of the compound nucleus. (author)

  16. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Full Text Available Our daily interaction with the world is plagued with situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
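
    A minimal illustration of scoring a choice sequence under a Hidden Markov Model, assuming two hypothetical strategy states ("explore" and "exploit") and made-up probabilities; the forward algorithm below computes the sequence likelihood, which is the basic ingredient for comparing stereotyped strategies.

      import numpy as np

      # Hypothetical 2-strategy HMM over choices in a binary decision tree;
      # all probabilities are illustrative, not fitted to the study's data.
      start = np.array([0.6, 0.4])              # P(explore), P(exploit) at start
      trans = np.array([[0.7, 0.3],
                        [0.2, 0.8]])            # strategy switching probabilities
      emit = np.array([[0.5, 0.5],              # explore: choices near chance
                       [0.9, 0.1]])             # exploit: strongly prefers learned branch
      obs = [0, 0, 1, 0, 0]                     # observed choices (0 = learned branch)

      # forward algorithm: probability of the observation sequence under the model
      alpha = start * emit[:, obs[0]]
      for o in obs[1:]:
          alpha = (alpha @ trans) * emit[:, o]
      print("sequence likelihood:", alpha.sum())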

  17. A Relational Account of Call-by-Value Sequentiality

    DEFF Research Database (Denmark)

    Riecke, Jon Gary; Sandholm, Anders Bo

    2002-01-01

    We construct a model for FPC, a purely functional, sequential, call-by-value language. The model is built from partial continuous functions, in the style of Plotkin, further constrained to be uniform with respect to a class of logical relations. We prove that the model is fully abstract....

  18. What do results of common sequential fractionation and single-step extractions tell us about P binding with Fe and Al compounds in non-calcareous sediments?

    Czech Academy of Sciences Publication Activity Database

    Jan, Jiří; Borovec, Jakub; Kopáček, Jiří; Hejzlar, Josef

    2013-01-01

    Roč. 47, č. 2 (2013), s. 547-557 ISSN 0043-1354 R&D Projects: GA ČR(CZ) GA206/09/1764; GA MZe(CZ) QH81012; GA MZe(CZ) QI102A265 Institutional support: RVO:60077344 Keywords: sequential fractionation * ascorbate and oxalate extraction * non-calcareous sediments Subject RIV: DA - Hydrology ; Limnology Impact factor: 5.323, year: 2013

  19. Heart rate reactivity associated to positive and negative food and non-food visual stimuli.

    Science.gov (United States)

    Kuoppa, Pekka; Tarvainen, Mika P; Karhunen, Leila; Narvainen, Johanna

    2016-08-01

    Using food as a stimulus is known to cause multiple psychophysiological reactions. Heart rate variability (HRV) is a common tool for assessing physiological reactions of the autonomic nervous system. However, the findings in HRV related to food stimuli have not been consistent. In this paper the quick changes in HRV related to positive and negative food and non-food visual stimuli are investigated. Electrocardiogram (ECG) was measured from 18 healthy females while they were stimulated with the pictures. Subjects also filled in the Three-Factor Eating Questionnaire to determine their eating behavior. The inter-beat-interval time series and the HRV parameters were extracted from the ECG. The quick changes in HRV parameters were studied by calculating the change from the baseline value (10 s window before stimulus) to the value after the onset of the stimulus (10 s window during stimulus). The paired t-test showed a significant difference between positive and negative food pictures but not between positive and negative non-food pictures. All the HRV parameters decreased for positive food pictures while they stayed the same or increased slightly for negative food pictures. The eating behavior characteristic cognitive restraint was negatively correlated with HRV parameters that describe the decrease of heart rate.

  20. Non-negative Feynman–Kac kernels in Schroedinger's interpolation problem

    International Nuclear Information System (INIS)

    Blanchard, P.; Garbaczewski, P.; Olkiewicz, R.

    1997-01-01

    The local formulations of the Markovian interpolating dynamics, which is constrained by the prescribed input-output statistics data, usually utilize strictly positive Feynman–Kac kernels. This implies that the related Markov diffusion processes admit vanishing probability densities only at the boundaries of the spatial volume confining the process. We discuss an extension of the framework to encompass singular potentials and associated non-negative Feynman–Kac-type kernels. It allows us to deal with a class of continuous interpolations admitted by general non-negative solutions of the Schroedinger boundary data problem. The resulting nonstationary stochastic processes are capable of both developing and destroying nodes (zeros) of probability densities in the course of their evolution, also away from the spatial boundaries. This observation conforms with the general mathematical theory (due to M. Nagasawa and R. Aebi) that is based on the notion of multiplicative functionals, extending in turn the well known Doob's h-transformation technique. In view of emphasizing the role of the theory of non-negative solutions of parabolic partial differential equations and the link with "Wiener exclusion" techniques used to evaluate certain Wiener functionals, we give an alternative insight into the issue, that opens a transparent route towards applications. Copyright 1997 American Institute of Physics

  1. Revisiting Bevacizumab + Cytotoxics Scheduling Using Mathematical Modeling: Proof of Concept Study in Experimental Non-Small Cell Lung Carcinoma.

    Science.gov (United States)

    Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien

    2018-01-01

    Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy when using sequential vs. concomitant scheduling of bevacizumab and chemotherapy. Combining this data with a mathematical model of tumor growth under therapy accounting for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with reduced tumor growth of 38% as compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). Alternate sequencing of 8 days failed in achieving a similar increase in efficacy, thus emphasizing the utility of modeling support to identify optimal scheduling. The model could also be a useful tool in the clinic to personally tailor regimen sequences. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  2. Bayesian network model of crowd emotion and negative behavior

    Science.gov (United States)

    Ramli, Nurulhuda; Ghani, Noraida Abdul; Hatta, Zulkarnain Ahmad; Hashim, Intan Hashimah Mohd; Sulong, Jasni; Mahudin, Nor Diana Mohd; Rahman, Shukran Abd; Saad, Zarina Mat

    2014-12-01

    The effects of overcrowding have become a major concern for event organizers. One aspect of this concern has been the idea that overcrowding can enhance the occurrence of serious incidents during events. As one of the largest Muslim religious gatherings attended by pilgrims from all over the world, Hajj has become extremely overcrowded with many incidents being reported. The purpose of this study is to analyze the nature of human emotion and negative behavior resulting from overcrowding during Hajj events from data gathered in the Malaysian Hajj Experience Survey in 2013. The sample comprised 147 Malaysian pilgrims (70 males and 77 females). Utilizing a probabilistic model called a Bayesian network, this paper models the dependence structure between different emotions and negative behaviors of pilgrims in the crowd. The model included the following variables of emotion: negative, negative comfortable, positive, positive comfortable and positive spiritual, and variables of negative behaviors: aggressive and hazardous acts. The study demonstrated that the emotions of negative, negative comfortable, positive spiritual and positive have a direct influence on aggressive behavior, whereas the emotions of negative comfortable, positive spiritual and positive have a direct influence on hazardous acts behavior. The sensitivity analysis showed that a low level of negative and negative comfortable emotions leads to a lower level of aggressive and hazardous behavior. Findings of the study can be further improved to identify the exact cause and risk factors of crowd-related incidents in preventing crowd disasters during mass gathering events.

  3. Ab-initio validation of a simple heuristic expression for the sequential-double-ionization contribution to the double ionization of helium by ultrashort XUV pulses

    International Nuclear Information System (INIS)

    Liu, Aihua; Thumm, Uwe

    2015-01-01

    We study two-photon double ionization of helium by short XUV pulses by numerically solving the time-dependent Schrodinger equation in full dimensionality within a finite-element discrete-variable-representation scheme. Based on the emission asymmetries in joint photoelectron angular distributions, we identify sequential and non-sequential contributions to two-photon double ionization for ultrashort pulses whose spectrum overlaps the sequential (ħω > 54.4 eV) and non-sequential (39.5 eV < ħω < 54.4 eV) double-ionization regimes. (paper)

  4. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
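
    The single-agent core of sequential hypothesis testing is Wald's sequential probability ratio test; the sketch below runs it for two Gaussian hypotheses with illustrative thresholds, and does not model the measurement, delay and disagreement costs that the multi-agent framework above adds.

      import numpy as np

      rng = np.random.default_rng(1)
      mu0, mu1, sigma = 0.0, 1.0, 1.0
      alpha, beta = 0.05, 0.05                                   # target error probabilities
      upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

      llr, n = 0.0, 0
      while lower < llr < upper:
          x = rng.normal(mu1, sigma)                             # data generated under H1
          # log-likelihood ratio increment log p1(x)/p0(x) for Gaussian hypotheses
          llr += (x - mu0) ** 2 / (2 * sigma ** 2) - (x - mu1) ** 2 / (2 * sigma ** 2)
          n += 1

      decision = "accept H1" if llr >= upper else "accept H0"
      print(decision, "after", n, "observations")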

  5. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p (data | parameters). To this end we suggest an algorithm which initially assumes that there is substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying the approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
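
    A schematic of the tempering idea described above, assuming a toy AR(1)-like model with very small measurement noise: an artificial measurement variance is annealed towards zero while parameter particles are reweighted and resampled. This is a simplified importance-sampling sketch, not the full SMC2-style sampler of the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def log_likelihood(theta, data, extra_var):
          # toy nearly noise-free model y_t = theta * y_{t-1} + noise; the artificial
          # variance extra_var keeps the likelihood well behaved, and annealing it to
          # zero recovers (approximately) the original problem
          resid = data[1:] - theta * data[:-1]
          var = 1e-4 + extra_var
          return -0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

      true_theta, T = 0.8, 100
      data = np.zeros(T); data[0] = 1.0
      for t in range(1, T):
          data[t] = true_theta * data[t - 1] + rng.normal(0, 0.01)

      thetas = rng.uniform(-1, 1, 2000)            # parameter particles from the prior
      prev_ll = np.zeros(len(thetas))              # stage 0 target = prior
      for extra_var in [1.0, 0.3, 0.1, 0.03, 0.01, 0.0]:
          new_ll = np.array([log_likelihood(th, data, extra_var) for th in thetas])
          logw = new_ll - prev_ll                  # incremental importance weights
          w = np.exp(logw - logw.max()); w /= w.sum()
          idx = rng.choice(len(thetas), len(thetas), p=w)          # resample
          thetas = thetas[idx] + rng.normal(0, 0.01, len(thetas))  # simple jitter move
          # (a full SMC sampler would use an MCMC move targeting the tempered posterior)
          prev_ll = np.array([log_likelihood(th, data, extra_var) for th in thetas])

      print("posterior mean of theta:", thetas.mean())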

  6. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
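
    A minimal simulation of a stable heteroclinic sequence in a generalized Lotka-Volterra rate network, with asymmetric inhibition chosen so that each saddle has a single unstable direction towards its successor; the network size and coupling values are illustrative.

      import numpy as np

      N = 5
      sigma = np.ones(N)                       # intrinsic growth rates
      rho = np.full((N, N), 1.5)               # strong mutual inhibition ...
      np.fill_diagonal(rho, 1.0)
      for i in range(N):                       # ... weakened along the chain i -> i+1
          rho[(i + 1) % N, i] = 0.5

      def deriv(a):
          # da_i/dt = a_i * (sigma_i - sum_j rho_ij a_j)
          return a * (sigma - rho @ a)

      a = np.full(N, 0.01)
      a[0] = 1.0                               # start near the first saddle
      dt, steps = 0.01, 20000
      rng = np.random.default_rng(3)
      trace = []
      for _ in range(steps):
          a += dt * deriv(a) + rng.normal(0, 1e-6, N)   # small noise keeps switching going
          a = np.clip(a, 1e-9, None)
          trace.append(np.argmax(a))

      # the dominant unit should switch sequentially 0 -> 1 -> 2 -> 3 -> 4 -> 0 ...
      print(np.array(trace)[::2000])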

  7. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  8. Non-oral gram-negative facultative rods in chronic periodontitis microbiota.

    Science.gov (United States)

    van Winkelhoff, Arie J; Rurenga, Patrick; Wekema-Mulder, Gepke J; Singadji, Zadrach M; Rams, Thomas E

    2016-05-01

    The subgingival prevalence of gram-negative facultative rods not usually inhabiting or indigenous to the oral cavity (non-oral GNFR), as well as of selected periodontal bacterial pathogens, was evaluated by culture in untreated and treated chronic periodontitis patients. Subgingival biofilm specimens from 102 untreated and 101 recently treated adults with chronic periodontitis in the Netherlands were plated onto MacConkey III and Dentaid selective media with air-5% CO2 incubation for isolation of non-oral GNFR, and onto enriched Oxoid blood agar with anaerobic incubation for recovery of selected periodontal bacterial pathogens. Suspected non-oral GNFR clinical isolates were identified to a species level with the VITEK 2 automated system. A total of 87 (42.9%) out of 203 patients yielded subgingival non-oral GNFR. Patients recently treated with periodontal mechanical debridement therapy demonstrated a greater prevalence of non-oral GNFR (57.4% vs 28.4%). Overall, 42.9% of chronic periodontitis patients yielded cultivable non-oral GNFR in periodontal pockets, particularly among those recently treated with periodontal mechanical debridement therapy. Since non-oral GNFR species may resist mechanical debridement from periodontal pockets, and are often not susceptible to many antibiotics frequently used in periodontal practice, their subgingival presence may complicate periodontal treatment in species-positive patients and increase the risk of potentially dangerous GNFR infections developing at other body sites. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Immunogenicity of simultaneous versus sequential administration of a 23-valent pneumococcal polysaccharide vaccine and a quadrivalent influenza vaccine in older individuals: A randomized, open-label, non-inferiority trial.

    Science.gov (United States)

    Nakashima, Kei; Aoshima, Masahiro; Ohfuji, Satoko; Yamawaki, Satoshi; Nemoto, Masahiro; Hasegawa, Shinya; Noma, Satoshi; Misawa, Masafumi; Hosokawa, Naoto; Yaegashi, Makito; Otsuka, Yoshihito

    2018-03-21

    It is unclear whether simultaneous administration of a 23-valent pneumococcal polysaccharide vaccine (PPSV23) and a quadrivalent influenza vaccine (QIV) produces immunogenicity in older individuals. This study tested the hypothesis that the pneumococcal antibody response elicited by simultaneous administration of PPSV23 and QIV in older individuals is not inferior to that elicited by sequential administration of PPSV23 and QIV. We performed a single-center, randomized, open-label, non-inferiority trial comprising 162 adults aged ≥65 years randomly assigned to either the simultaneous (simultaneous injections of PPSV23 and QIV) or sequential (control; PPSV23 injected 2 weeks after QIV vaccination) groups. Pneumococcal immunoglobulin G (IgG) titers of serotypes 23F, 3, 4, 6B, 14, and 19A were assessed. The primary endpoint was the serotype 23F response rate (a ≥2-fold increase in IgG concentrations 4-6 weeks after PPSV23 vaccination). With the non-inferiority margin set at 20% fewer patients, the response rate of serotype 23F in the simultaneous group (77.8%) was not inferior to that of the sequential group (77.6%; difference, 0.1%; 90% confidence interval, -10.8% to 11.1%). None of the pneumococcal IgG serotype titers were significantly different between the groups 4-6 weeks after vaccination. Simultaneous administration did not show a significant decrease in seroprotection odds ratios for H1N1, H3N2, or B/Phuket influenza strains other than B/Texas. Additionally, simultaneous administration did not increase adverse reactions. Hence, simultaneous administration of PPSV23 and QIV shows an acceptable immunogenicity that is comparable to sequential administration without an increase in adverse reactions. (This study was registered with ClinicalTrials.gov [NCT02592486]).

  10. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with the estimated ones. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated by using the EAF libraries and compared with the experimental ones. There were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  11. Non-negative factor analysis supporting the interpretation of elemental distribution images acquired by XRF

    International Nuclear Information System (INIS)

    Alfeld, Matthias; Falkenberg, Gerald; Wahabzada, Mirwaes; Bauckhage, Christian; Kersting, Kristian; Wellenreuther, Gerd

    2014-01-01

    Stacks of elemental distribution images acquired by XRF can be difficult to interpret, if they contain high degrees of redundancy and components differing in their quantitative but not qualitative elemental composition. Factor analysis, mainly in the form of Principal Component Analysis (PCA), has been used to reduce the level of redundancy and highlight correlations. PCA, however, does not yield physically meaningful representations as they often contain negative values. This limitation can be overcome, by employing factor analysis that is restricted to non-negativity. In this paper we present the first application of the Python Matrix Factorization Module (pymf) on XRF data. This is done in a case study on the painting Saul and David from the studio of Rembrandt van Rijn. We show how the discrimination between two different Co containing compounds with minimum user intervention and a priori knowledge is supported by Non-Negative Matrix Factorization (NMF).
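
    A minimal non-negative matrix factorization example in the spirit of the analysis above, using scikit-learn's NMF on a synthetic stand-in for a stack of elemental maps (the paper itself uses the pymf module); component counts and data are placeholders.

      import numpy as np
      from sklearn.decomposition import NMF

      # Synthetic stand-in for an XRF data cube: n_pixels x n_elements counts built
      # from two overlapping "compounds" that share elements in different ratios.
      rng = np.random.default_rng(4)
      n_pixels, n_elements, n_components = 2000, 8, 2
      true_abundance = rng.random((n_pixels, n_components))          # per-pixel weights
      true_spectra = np.abs(rng.random((n_components, n_elements)))  # per-compound ratios
      counts = rng.poisson(50 * true_abundance @ true_spectra)

      # Non-negative factorization: counts ~ W @ H with W, H >= 0, so both the
      # abundance maps (W) and the component signatures (H) stay physically meaningful.
      model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
      W = model.fit_transform(counts)     # pixel-wise component abundances
      H = model.components_               # elemental signature of each component
      print(W.shape, H.shape)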

  12. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  13. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture to treat sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study concerns the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed from lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take into account the sequentiality at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr

  14. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    Full Text Available We propose an optimal electric energy management scheme for a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make the optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with a physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by the ancillary internal trading between the microgrids in the cooperative community, which reduces the extra cost from unnecessary external trading by adjusting the electric energy production amounts of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  15. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
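
    A simplified sketch of the sequential measurement update, processing scalar measurements one at a time with a linear measurement model; a real MEKF would instead relinearize a quaternion attitude model and apply the reset operation discussed above, so this is only an analogue of the bookkeeping.

      import numpy as np

      def sequential_update(x, P, observations, r_scalar=0.01):
          """Apply Kalman measurement updates one observation at a time.

          observations: list of (H, z) pairs with H a length-n row and z a scalar.
          """
          for H, z in observations:
              H = np.atleast_2d(H)                      # 1 x n
              S = float(H @ P @ H.T) + r_scalar         # innovation variance
              K = (P @ H.T) / S                         # Kalman gain, n x 1
              x = x + (K * (z - float(H @ x))).ravel()
              P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      x0, P0 = np.zeros(3), np.eye(3)
      obs = [(np.array([1.0, 0.0, 0.0]), 0.2),
             (np.array([0.0, 1.0, 0.0]), -0.1)]
      x1, P1 = sequential_update(x0, P0, obs)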

  16. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Directory of Open Access Journals (Sweden)

    Fangjun Qin

    2018-05-01

    Full Text Available In this paper, a sequential multiplicative extended Kalman filter (SMEKF is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  17. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

  18. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  19. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    Science.gov (United States)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    In stock investments, investors also face the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance portfolio optimization for stocks using non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. The optimization is performed using the Lagrange multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed.
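
    Under a negative exponential utility and (approximately) normal returns, the portfolio problem reduces to a quadratic program solved with a Lagrange multiplier; the sketch below uses synthetic return moments in place of the ARMA/GARCH estimates described above.

      import numpy as np

      # Maximise mu' w - (a/2) w' Sigma w subject to 1' w = 1, which is the
      # mean-variance form of expected negative exponential utility U(W) = -exp(-a W).
      a = 3.0                                   # risk-aversion coefficient
      mu = np.array([0.010, 0.012, 0.008])      # expected returns (placeholders)
      Sigma = np.array([[0.040, 0.006, 0.004],
                        [0.006, 0.050, 0.005],
                        [0.004, 0.005, 0.030]])

      ones = np.ones(len(mu))
      Sigma_inv = np.linalg.inv(Sigma)
      # stationarity: mu - a*Sigma*w - lambda*1 = 0  =>  w = Sigma^-1 (mu - lambda*1) / a
      # the budget constraint 1'w = 1 then fixes lambda:
      lam = (ones @ Sigma_inv @ mu - a) / (ones @ Sigma_inv @ ones)
      w = Sigma_inv @ (mu - lam * ones) / a
      print("weights:", w, "sum:", w.sum())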

  20. Relative resilience to noise of standard and sequential approaches to measurement-based quantum computation

    Science.gov (United States)

    Gallagher, C. B.; Ferraro, A.

    2018-05-01

    A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.

  1. Sequential Objective Structured Clinical Examination based on item response theory in Iran

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2017-09-01

    Full Text Available Purpose In a sequential objective structured clinical examination (OSCE), all students initially take a short screening OSCE. Examinees who pass are excused from further testing, but an additional OSCE is administered to the remaining examinees. Previous investigations of sequential OSCE were based on classical test theory. We aimed to design and evaluate screening OSCEs based on item response theory (IRT). Methods We carried out a retrospective observational study. At each station of a 10-station OSCE, the students’ performance was graded on a Likert-type scale. Since the data were polytomous, the difficulty parameters, discrimination parameters, and students’ ability were calculated using a graded response model. To design several screening OSCEs, we identified the 5 most difficult stations and the 5 most discriminative ones. For each test, 5, 4, or 3 stations were selected. Normal and stringent cut-scores were defined for each test. We compared the results of each of the 12 screening OSCEs to the main OSCE and calculated the positive and negative predictive values (PPV and NPV), as well as the exam cost. Results A total of 253 students (95.1%) passed the main OSCE, while 72.6% to 94.4% of examinees passed the screening tests. The PPV values ranged from 0.98 to 1.00, and the NPV values ranged from 0.18 to 0.59. Two tests effectively predicted the results of the main exam, resulting in financial savings of 34% to 40%. Conclusion If stations with the highest IRT-based discrimination values and stringent cut-scores are utilized in the screening test, sequential OSCE can be an efficient and convenient way to conduct an OSCE.

  2. Sequential Objective Structured Clinical Examination based on item response theory in Iran.

    Science.gov (United States)

    Hejri, Sara Mortaz; Jalili, Mohammad

    2017-01-01

    In a sequential objective structured clinical examination (OSCE), all students initially take a short screening OSCE. Examinees who pass are excused from further testing, but an additional OSCE is administered to the remaining examinees. Previous investigations of sequential OSCE were based on classical test theory. We aimed to design and evaluate screening OSCEs based on item response theory (IRT). We carried out a retrospective observational study. At each station of a 10-station OSCE, the students' performance was graded on a Likert-type scale. Since the data were polytomous, the difficulty parameters, discrimination parameters, and students' ability were calculated using a graded response model. To design several screening OSCEs, we identified the 5 most difficult stations and the 5 most discriminative ones. For each test, 5, 4, or 3 stations were selected. Normal and stringent cut-scores were defined for each test. We compared the results of each of the 12 screening OSCEs to the main OSCE and calculated the positive and negative predictive values (PPV and NPV), as well as the exam cost. A total of 253 students (95.1%) passed the main OSCE, while 72.6% to 94.4% of examinees passed the screening tests. The PPV values ranged from 0.98 to 1.00, and the NPV values ranged from 0.18 to 0.59. Two tests effectively predicted the results of the main exam, resulting in financial savings of 34% to 40%. If stations with the highest IRT-based discrimination values and stringent cut-scores are utilized in the screening test, sequential OSCE can be an efficient and convenient way to conduct an OSCE.

  3. Wigner weight functions and Weyl symbols of non-negative definite linear operators

    NARCIS (Netherlands)

    Janssen, A.J.E.M.

    1989-01-01

    In this paper we present several necessary and, for radially symmetric functions, necessary and sufficient conditions for a function of two variables to be a Wigner weight function (Weyl symbol of a non-negative definite linear operator of L2(R)). These necessary conditions are in terms of spread

  4. Methamphetamine-alcohol interactions in murine models of sequential and simultaneous oral drug-taking.

    Science.gov (United States)

    Fultz, Elissa K; Martin, Douglas L; Hudson, Courtney N; Kippin, Tod E; Szumlinski, Karen K

    2017-08-01

    A high degree of co-morbidity exists between methamphetamine (MA) addiction and alcohol use disorders and both sequential and simultaneous MA-alcohol mixing increases risk for co-abuse. As little preclinical work has focused on the biobehavioral interactions between MA and alcohol within the context of drug-taking behavior, we employed simple murine models of voluntary oral drug consumption to examine how prior histories of either MA- or alcohol-taking influence the intake of the other drug. In one study, mice with a 10-day history of binge alcohol-drinking [5,10, 20 and 40% (v/v); 2h/day] were trained to self-administer oral MA in an operant-conditioning paradigm (10-40mg/L). In a second study, mice with a 10-day history of limited-access oral MA-drinking (5, 10, 20 and 40mg/L; 2h/day) were presented with alcohol (5-40% v/v; 2h/day) and then a choice between solutions of 20% alcohol, 10mg/L MA or their mix. Under operant-conditioning procedures, alcohol-drinking mice exhibited less MA reinforcement overall, than water controls. However, when drug availability was not behaviorally-contingent, alcohol-drinking mice consumed more MA and exhibited greater preference for the 10mg/L MA solution than drug-naïve and combination drug-experienced mice. Conversely, prior MA-drinking history increased alcohol intake across a range of alcohol concentrations. These exploratory studies indicate the feasibility of employing procedurally simple murine models of sequential and simultaneous oral MA-alcohol mixing of relevance to advancing our biobehavioral understanding of MA-alcohol co-abuse. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas such as security, surveillance, etc. With respect to the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.
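
    A stripped-down non-negative sparse-representation classifier: each test sample is fit by non-negative least squares against per-class dictionaries and assigned to the class with the smallest reconstruction error. The dictionaries are random placeholders, and the ℓ1 penalty and aspect-sector atom selection of the full ADNNLS method are omitted.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(5)
      d, n_atoms = 64, 20
      # hypothetical per-class dictionaries of training atoms (columns)
      dictionaries = {c: np.abs(rng.random((d, n_atoms))) for c in ("class_A", "class_B")}
      test = dictionaries["class_B"] @ np.abs(rng.random(n_atoms))   # sample from class_B

      errors = {}
      for label, D in dictionaries.items():
          coeffs, _ = nnls(D, test)                    # non-negative least squares fit
          errors[label] = np.linalg.norm(test - D @ coeffs)

      print(min(errors, key=errors.get))               # expected: class_B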

  6. Non-linear Capital Taxation Without Commitment

    OpenAIRE

    Emmanuel Farhi; Christopher Sleet; Iván Werning; Sevin Yeltekin

    2012-01-01

    We study efficient non-linear taxation of labour and capital in a dynamic Mirrleesian model incorporating political economy constraints. Policies are chosen sequentially over time, without commitment. Our main result is that the marginal tax on capital income is progressive, in the sense that richer agents face higher marginal tax rates. Copyright , Oxford University Press.

  7. Treatment Variation of Sequential versus Concurrent Chemoradiotherapy in Stage III Non-Small Cell Lung Cancer Patients in the Netherlands and Belgium.

    Science.gov (United States)

    Walraven, I; Damhuis, R A; Ten Berge, M G; Rosskamp, M; van Eycken, L; de Ruysscher, D; Belderbos, J S A

    2017-11-01

    Concurrent chemoradiotherapy (CCRT) is considered the standard treatment regimen in non-surgical locally advanced non-small cell lung cancer (NSCLC) patients and sequential chemoradiotherapy (SCRT) is recommended in patients who are unfit to receive CCRT or when the treatment volume is considered too large. In this study, we investigated the proportion of CCRT/SCRT in the Netherlands and Belgium. Furthermore, patient and disease characteristics associated with SCRT were assessed. An observational study was carried out with data from three independent national registries: the Belgian Cancer Registry (BCR), the Netherlands Cancer Registry (NCR) and the Dutch Lung Cancer Audit-Radiotherapy (DLCA-R). Differences in patient and disease characteristics between CCRT and SCRT were tested with unpaired t-tests (for continuous variables) and with chi-square tests (for categorical variables). A prognostic model was constructed to determine patient and disease parameters predictive for the choice of SCRT. This study included 350 patients from the BCR, 780 patients from the NCR and 428 patients from the DLCA-R. More than half of the stage III NSCLC patients in the Netherlands (55%) and in Belgium more than a third (35%) were treated with CCRT. In both the Dutch and Belgian population, higher age and more advanced N-stage were significantly associated with SCRT. Performance score, pulmonary function, comorbidities and tumour volume were not associated with SCRT. In this observational population-based study, a large treatment variation in non-surgical stage III NSCLC patients was observed between and within the Netherlands and Belgium. Higher age and N-stage were significantly associated with the choice for SCRT. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  8. Exposure assessment of mobile phone base station radiation in an outdoor environment using sequential surrogate modeling.

    Science.gov (United States)

    Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom

    2013-05-01

    Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km(2) for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
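
    A toy version of surrogate modeling with sequential design: a Gaussian-process surrogate is refit after each measurement and the next location is taken where the predictive uncertainty is largest. The one-dimensional exposure profile and kernel settings are hypothetical.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(6)

      def field(x):                                    # hypothetical exposure profile
          return 0.3 + 0.2 * np.sin(3 * x) + 0.1 * np.cos(7 * x)

      candidates = np.linspace(0, 5, 200).reshape(-1, 1)
      X = candidates[rng.choice(len(candidates), 5, replace=False)]   # initial design
      y = field(X).ravel()

      for _ in range(15):                              # 15 additional measurement locations
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
          gp.fit(X, y)
          mean, std = gp.predict(candidates, return_std=True)
          x_next = candidates[[np.argmax(std)]]        # most uncertain location
          X = np.vstack([X, x_next])
          y = np.append(y, field(x_next).ravel())

      print("surrogate model built from", len(X), "measurements")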

  9. Mind-to-mind heteroclinic coordination: Model of sequential episodic memory initiation

    Science.gov (United States)

    Afraimovich, V. S.; Zaks, M. A.; Rabinovich, M. I.

    2018-05-01

    Retrieval of episodic memory is a dynamical process in the large scale brain networks. In social groups, the neural patterns, associated with specific events directly experienced by single members, are encoded, recalled, and shared by all participants. Here, we construct and study the dynamical model for the formation and maintaining of episodic memory in small ensembles of interacting minds. We prove that the unconventional dynamical attractor of this process—the nonsmooth heteroclinic torus—is structurally stable within the Lotka-Volterra-like sets of equations. Dynamics on this torus combines the absence of chaos with asymptotic instability of every separate trajectory; its adequate quantitative characteristics are length-related Lyapunov exponents. Variation of the coupling strength between the participants results in different types of sequential switching between metastable states; we interpret them as stages in formation and modification of the episodic memory.

  10. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    Science.gov (United States)

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probability sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
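
    A schematic discrete event simulation skeleton for a sequential treatment strategy, using a simple event queue in which a patient is periodically assessed and switched to the next drug on non-response; the times, response rates and drug names are placeholders, not the BASDAI/BASFI-driven model of the paper.

      import heapq, random

      random.seed(7)
      strategy = ["NSAID_1", "NSAID_2", "TNF_inhibitor"]

      def simulate(strategy, horizon_months=120):
          events = [(3.0, "assess")]                   # first response assessment at month 3
          drug_idx, history = 0, [(0.0, strategy[0])]
          while events:
              t, kind = heapq.heappop(events)
              if t > horizon_months:
                  break
              if kind == "assess":
                  responded = random.random() < 0.4    # placeholder response probability
                  if not responded and drug_idx + 1 < len(strategy):
                      drug_idx += 1
                      history.append((t, strategy[drug_idx]))
                  heapq.heappush(events, (t + 3.0, "assess"))   # schedule next assessment
          return history

      print(simulate(strategy))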

  11. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to have overdispersion. On the other hand, the data might contain zero counts (excess zeros). A simulation study was conducted to create a scenario in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with different sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and a hurdle negative binomial model to the simulated data. The model fits were compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from their own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
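
    A small example of the kind of comparison described above: simulated overdispersed counts are fit with Poisson and negative binomial regressions in statsmodels and compared by AIC; the covariate, coefficients and dispersion are illustrative (the hurdle model is omitted).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 200
      traffic = rng.normal(0, 1, n)                        # hypothetical covariate
      mu = np.exp(0.5 + 0.8 * traffic)
      counts = rng.negative_binomial(n=2, p=2 / (2 + mu))  # overdispersed counts, mean mu
      X = sm.add_constant(traffic)

      poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      negbin_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      print("Poisson AIC:", poisson_fit.aic, "NegBin AIC:", negbin_fit.aic)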

  12. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and the Neyman-Pearson vs. Bayesian sequential designs are probed in particular. Finally, warnings with sequential designs are considered, especially in relation to utilitarianism

  13. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    Full Text Available The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.

  14. A Two-Factor Model Better Explains Heterogeneity in Negative Symptoms: Evidence from the Positive and Negative Syndrome Scale.

    Science.gov (United States)

    Jang, Seon-Kyeong; Choi, Hye-Im; Park, Soohyun; Jaekal, Eunju; Lee, Ga-Young; Cho, Young Il; Choi, Kee-Hong

    2016-01-01

    Acknowledging separable factors underlying negative symptoms may lead to better understanding and treatment of negative symptoms in individuals with schizophrenia. The current study aimed to test whether the negative symptoms factor (NSF) of the Positive and Negative Syndrome Scale (PANSS) would be better represented by expressive and experiential deficit factors, rather than by a single factor model, using confirmatory factor analysis (CFA). Two hundred and twenty individuals with schizophrenia spectrum disorders completed the PANSS; subsamples additionally completed the Brief Negative Symptom Scale (BNSS) and the Motivation and Pleasure Scale-Self-Report (MAP-SR). CFA results indicated that the two-factor model fit the data better than the one-factor model; however, latent variables were closely correlated. The two-factor model's fit was significantly improved by accounting for correlated residuals between N2 (emotional withdrawal) and N6 (lack of spontaneity and flow of conversation), and between N4 (passive social withdrawal) and G16 (active social avoidance), possibly reflecting common method variance. The two NSF factors exhibited differential patterns of correlation with subdomains of the BNSS and MAP-SR. These results suggest that the PANSS NSF would be better represented by a two-factor model than by a single-factor one, and support the two-factor model's adequate criterion-related validity. Common method variance among several items may be a potential source of measurement error under a two-factor model of the PANSS NSF.

  15. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  16. Androgen receptor status is a prognostic marker in non-basal triple negative breast cancers and determines novel therapeutic options.

    Directory of Open Access Journals (Sweden)

    Pierluigi Gasparini

    Full Text Available Triple negative breast cancers are a heterogeneous group of tumors characterized by poor patient survival and lack of targeted therapeutics. Androgen receptor has been associated with triple negative breast cancer pathogenesis, but its role in the different subtypes has not been clearly defined. We examined androgen receptor protein expression by immunohistochemical analysis in 678 breast cancers, including 396 triple negative cancers. Fifty matched lymph node metastases were also examined. Association of expression status with clinical (race, survival) and pathological (basal, non-basal subtype, stage, grade) features was also evaluated. In 160 triple negative breast cancers, mRNA microarray expression profiling was performed, and differences according to androgen receptor status were analyzed. In triple negative cancers the percentage of androgen receptor positive cases was lower (24.8% vs 81.6% of non-triple negative cases), especially in African American women (16.7% vs 25.5% of cancers of white women). No significant difference in androgen receptor expression was observed in primary tumors vs matched metastatic lesions. Positive androgen receptor immunoreactivity was inversely correlated with tumor grade (p<0.01) and associated with better overall patient survival (p = 0.032) in the non-basal triple negative cancer group. In the microarray study, expression of three genes (HER4, TNFSF10, CDK6) showed significant deregulation in association with androgen receptor status; e.g., CDK6, a novel therapeutic target in triple negative cancers, showed significantly higher expression level in androgen receptor negative cases (p<0.01). These findings confirm the prognostic impact of androgen receptor expression in non-basal triple negative breast cancers, and suggest targeting of new androgen receptor-related molecular pathways in patients with these cancers.

  17. S2SA preconditioning for the Sn equations with strictly non-negative spatial discretization

    International Nuclear Information System (INIS)

    Bruss, D. E.; Morel, J. E.; Ragusa, J. C.

    2013-01-01

    Preconditioners based upon sweeps and diffusion-synthetic acceleration have been constructed and applied to the zeroth and first spatial moments of the 1-D Sn transport equation using a strictly non-negative nonlinear spatial closure. Linear and nonlinear preconditioners have been analyzed. The effectiveness of various combinations of these preconditioners is compared. In one dimension, nonlinear sweep preconditioning is shown to be superior to linear sweep preconditioning, and DSA preconditioning using nonlinear sweeps in conjunction with a linear diffusion equation is found to be essentially equivalent to nonlinear sweeps in conjunction with a nonlinear diffusion equation. The ability to use a linear diffusion equation has important implications for preconditioning the Sn equations with a strictly non-negative spatial discretization in multiple dimensions. (authors)

  18. [Sequential prescriptions: Arguments for a change of therapeutic patterns in treatment resistant depressions].

    Science.gov (United States)

    Allouche, G

    2016-02-01

    Among the therapeutic strategies for treatment-resistant depression, the use of sequential prescriptions is discussed here. A number of observations, initially quite isolated, and a few controlled studies, some large-scale, have been reported that showed a definite therapeutic effect of certain sequential prescribing strategies in depression. The Sequenced Treatment Alternatives to Relieve Depression study (STAR*D) is to date the largest clinical trial exploring treatment strategies in non-psychotic resistant depression under real-life conditions with a sequential decision algorithm. The main conclusions of this study are the following: after two unsuccessful attempts, the chance of remission decreases considerably. A 12-month follow-up showed that the more treatment steps were used, the more common relapses were during this period. The pharmacological differences between psychotropic drugs did not produce clinically significant differences. The positive effect of lithium in combination with antidepressants has been known since the work of De Montigny. Antidepressants allow readjustment of the physiological sequence involving the different monoaminergic systems together. Studies combining tricyclic antidepressants with thyroid hormone T3 suggest that, in depression, decreased norepinephrine at the synaptic receptors is believed to cause hypersensitivity of these receptors. Thyroid hormones modulate the activity of adrenergic receptors; there would be a balance of activity between alpha- and beta-adrenergic receptors depending on the bioavailability of thyroid hormones. ECT may in some cases promote a pharmacological response after previous resistance, or be effective in preventing relapse. Cognitive therapy and antidepressant medications likely have an effect on different types of depression. Cognitive therapy may be of interest in a sequential pattern after effective antidepressant treatment, for treatment of residual symptoms and prevention of relapses.

  19. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  20. Comparison of peripapillary retinal nerve fiber layer loss and visual outcome in fellow eyes following sequential bilateral non-arteritic anterior ischemic optic neuropathy.

    Science.gov (United States)

    Dotan, Gad; Kesler, Anat; Naftaliev, Elvira; Skarf, Barry

    2015-05-01

    To report on the correlation of structural damage to the axons of the optic nerve and visual outcome following bilateral non-arteritic anterior ischemic optic neuropathy. A retrospective review of the medical records of 25 patients with bilateral sequential non-arteritic anterior ischemic optic neuropathy was performed. Outcome measures were peripapillary retinal nerve fiber layer thickness measured with the Stratus optical coherence tomography scanner, visual acuity and visual field loss. Median peripapillary retinal nerve fiber layer (RNFL) thickness, mean deviation (MD) of visual field, and visual acuity of initially involved NAION eyes (54.00 µm, -17.77 decibels (dB), 0.4, respectively) were comparable to the same parameters measured following development of the second NAION event in the other eye (53.70 µm, p = 0.740; -16.83 dB, p = 0.692; 0.4, p = 0.942, respectively). In patients with bilateral NAION, there was a significant correlation of peripapillary RNFL thickness (r = 0.583, p = 0.002) and MD of the visual field (r = 0.457, p = 0.042) for the pairs of affected eyes, whereas a poor correlation was found in visual acuity of these eyes (r = 0.279, p = 0.176). Peripapillary RNFL thickness following NAION was positively correlated with MD of visual field (r = 0.312, p = 0.043) and negatively correlated with logMAR visual acuity (r = -0.365, p = 0.009). In patients who experience bilateral NAION, the magnitude of RNFL loss is similar in each eye. There is a greater similarity in visual field loss than in visual acuity between the two affected eyes with NAION of the same individual.

  1. A pilot study: sequential gemcitabine/cisplatin and icotinib as induction therapy for stage IIB to IIIA non-small-cell lung adenocarcinoma

    Science.gov (United States)

    2013-01-01

    Background A phase II clinical trial previously evaluated the sequential administration of erlotinib after chemotherapy for advanced non-small-cell lung cancer (NSCLC). This current pilot study assessed the feasibility of sequential induction therapy in patients with stage IIB to IIIA NSCLC adenocarcinoma. Methods Patients received gemcitabine 1,250 mg/m2 on days 1 and 8 and cisplatin 75 mg/m2 on day 1, followed by oral icotinib (125 mg, three times a day) on days 15 to 28. A repeat computed tomography (CT) scan evaluated the response to the induction treatment after two 4-week cycles and eligible patients underwent surgical resection. The primary objective was to assess the objective response rate (ORR), while EGFR and KRAS mutations and mRNA and protein expression levels of ERCC1 and RRM1 were analyzed in tumor tissues and blood samples. Results Eleven patients, most with stage IIIA disease, completed preoperative treatment. Five patients achieved partial response according to the Response Evaluation Criteria in Solid Tumors (RECIST) criteria (ORR = 45%) and six patients underwent resection. Common toxicities included neutropenia, alanine transaminase (ALT) elevation, fatigue, dry skin, rash, nausea, alopecia and anorexia. No serious complications were recorded perioperatively. Three patients had exon 19 deletions and those with EGFR mutations were more likely to achieve a clinical response (P = 0.083). Furthermore, most cases who achieved a clinical response had low levels of ERCC1 expression and high levels of RRM1. Conclusions Two cycles of sequentially administered gemcitabine/cisplatin with icotinib as an induction treatment is a feasible and efficacious approach for stage IIB to IIIA NSCLC adenocarcinoma, which provides evidence for the further investigation of these chemotherapeutic and molecularly targeted therapies. PMID:23621919

  2. Parent-Adolescent Conflict as Sequences of Reciprocal Negative Emotion: Links with Conflict Resolution and Adolescents' Behavior Problems.

    Science.gov (United States)

    Moed, Anat; Gershoff, Elizabeth T; Eisenberg, Nancy; Hofer, Claire; Losoya, Sandra; Spinrad, Tracy L; Liew, Jeffrey

    2015-08-01

    Although conflict is a normative part of parent-adolescent relationships, conflicts that are long or highly negative are likely to be detrimental to these relationships and to youths' development. In the present article, sequential analyses of data from 138 parent-adolescent dyads (adolescents' mean age was 13.44, SD = 1.16; 52 % girls, 79 % non-Hispanic White) were used to define conflicts as reciprocal exchanges of negative emotion observed while parents and adolescents were discussing "hot," conflictual issues. Dynamic components of these exchanges, including who started the conflicts, who ended them, and how long they lasted, were identified. Mediation analyses revealed that a high proportion of conflicts ended by adolescents was associated with longer conflicts, which in turn predicted perceptions of the "hot" issue as unresolved and adolescent behavior problems. The findings illustrate advantages of using sequential analysis to identify patterns of interactions and, with some certainty, obtain an estimate of the contingent relationship between a pattern of behavior and child and parental outcomes. These interaction patterns are discussed in terms of the roles that parents and children play when in conflict with each other, and the processes through which these roles affect conflict resolution and adolescents' behavior problems.

  3. Multiplicity distributions and multiplicity correlations in sequential, off-equilibrium fragmentation process

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A new kinetic fragmentation model, the Fragmentation-Inactivation-Binary (FIB) model, is described where a dissipative process stops randomly the sequential, conservative and off-equilibrium fragmentation process. (K.A.)
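
    As a rough illustration of a sequential, conservative fragmentation process that is stopped by random inactivation, the following sketch (Python) simulates a binary splitting cascade in the spirit of the FIB model; the inactivation probability and initial mass are assumptions, and the actual FIB kinetics are more general.

      import random

      def fib_fragmentation(mass=1024, p_inactivate=0.1, min_mass=1, seed=0):
          """Sequential binary fragmentation with random inactivation.

          Active fragments are either frozen (inactivated) with probability
          p_inactivate or split into two pieces of random size; the process
          stops when no active fragments remain."""
          rng = random.Random(seed)
          active, frozen = [mass], []
          while active:
              frag = active.pop(rng.randrange(len(active)))
              if frag <= min_mass or rng.random() < p_inactivate:
                  frozen.append(frag)                # dissipation stops this branch
              else:
                  cut = rng.randint(1, frag - 1)     # conservative binary split
                  active.extend([cut, frag - cut])
          return frozen

      fragments = fib_fragmentation()
      print(len(fragments), sum(fragments))          # multiplicity and mass-conservation check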

  4. Probable C4d-negative accelerated acute antibody-mediated rejection due to non-HLA antibodies.

    Science.gov (United States)

    Niikura, Takahito; Yamamoto, Izumi; Nakada, Yasuyuki; Kamejima, Sahoko; Katsumata, Haruki; Yamakawa, Takafumi; Furuya, Maiko; Mafune, Aki; Kobayashi, Akimitsu; Tanno, Yudo; Miki, Jun; Yamada, Hiroki; Ohkido, Ichiro; Tsuboi, Nobuo; Yamamoto, Hiroyasu; Yokoo, Takashi

    2015-07-01

    We report a case of probable C4d-negative accelerated acute antibody-mediated rejection due to non-HLA antibodies. A 44-year-old male was admitted to our hospital for a kidney transplant. The donor, his wife, was an ABO minor mismatch (blood type O to A) and had Gitelman syndrome. Graft function was delayed; his serum creatinine level was 10.1 mg/dL at 3 days after transplantation. Open biopsy was performed immediately; no venous thrombosis was observed during surgery. Histology revealed moderate peritubular capillaritis and mild glomerulitis without C4d immunoreactivity. Flow cytometric crossmatching was positive, but no panel-reactive antibodies against HLA or donor-specific antibodies (DSAbs) to major histocompatibility complex class I-related chain A (MICA) were detected. Taken together, we diagnosed him with probable C4d-negative accelerated antibody-mediated rejection due to non-HLA, non-MICA antibodies. The patient was treated with steroid pulse therapy (methylprednisolone 500 mg/day for 3 days), plasma exchange, intravenous immunoglobulin (40 g/body), and rituximab (200 mg/body). Biopsy at 58 days after transplantation, at which time the S-Cr level was 1.56 mg/dL, found no evidence of rejection. This case, presented with a review of relevant literature, demonstrates that probable C4d-negative accelerated acute AMR can result from non-HLA antibodies. © 2015 Asian Pacific Society of Nephrology.

  5. Effect of hysteretic and non-hysteretic negative capacitance on tunnel FETs DC performance

    Science.gov (United States)

    Saeidi, Ali; Jazaeri, Farzan; Stolichnov, Igor; Luong, Gia V.; Zhao, Qing-Tai; Mantl, Siegfried; Ionescu, Adrian M.

    2018-03-01

    This work experimentally demonstrates that the negative capacitance effect can be used to significantly improve the key figures of merit of tunnel field effect transistor (FET) switches. In the proposed approach, a matching condition is fulfilled between a trained polycrystalline PZT capacitor and the tunnel FET (TFET) gate capacitance fabricated on a strained silicon-nanowire technology. We report a non-hysteretic switch configuration obtained by combining a homojunction TFET and a negative capacitance effect booster, suitable for logic applications, for which the on-current is increased by a factor of 100, the transconductance by 2 orders of magnitude, and the low-swing region is extended. The operation of a hysteretic negative capacitance TFET, when the matching condition for the negative capacitance is fulfilled only in a limited region of operation, is also reported and discussed. In this latter case, a limited improvement in the device performance is observed. Overall, the paper demonstrates that the main beneficial effects of negative capacitance on TFETs are the overdrive and transconductance amplification, which exactly address the most limiting performances of current TFETs.

  6. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  7. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.
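
    The abstract mentions multiplicative update rules that preserve non-negativity of the dictionary and the coefficients. The sketch below (Python) shows the classical NMF-style multiplicative updates on which such schemes are based; it is a generic illustration, not the OMRNDL algorithm itself, and the matrix sizes are arbitrary.

      import numpy as np

      def nmf_multiplicative(V, n_atoms=8, n_iter=200, eps=1e-9, seed=0):
          """Non-negative dictionary W and codes H via multiplicative updates.

          Minimizes ||V - W H||_F^2 with V, W, H >= 0; the update rules keep
          both factors non-negative, as in classical NMF (Lee & Seung)."""
          rng = np.random.default_rng(seed)
          m, n = V.shape
          W = rng.random((m, n_atoms)) + eps
          H = rng.random((n_atoms, n)) + eps
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # Toy usage: factor a random non-negative "feature x particle" matrix.
      V = np.abs(np.random.default_rng(1).random((64, 40)))
      W, H = nmf_multiplicative(V)
      print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error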

  8. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
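
    A simplified random sequential adsorption loop is sketched below (Python) for axis-aligned unit squares in a finite box without periodic boundaries; freely oriented cubes as in the study require 3-D overlap tests, so this only illustrates the acceptance/rejection kinetics and a practical stopping criterion, with box size and failure limit chosen arbitrarily.

      import random

      def rsa_squares(box=15.0, side=1.0, max_failures=5000, seed=0):
          """Random sequential adsorption of axis-aligned squares in a box.

          Squares are placed at uniformly random positions and accepted only if
          they overlap no previously placed square; placement stops after a long
          run of consecutive failures, giving a rough estimate of saturation."""
          rng = random.Random(seed)
          placed, failures = [], 0
          while failures < max_failures:
              x, y = rng.uniform(0, box - side), rng.uniform(0, box - side)
              if all(abs(x - px) >= side or abs(y - py) >= side for px, py in placed):
                  placed.append((x, y))
                  failures = 0
              else:
                  failures += 1
          return len(placed) * side * side / (box * box)

      print(rsa_squares())   # rough estimate of the saturated coverage for this finite box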

  9. Admissible solutions for a class of nonlinear parabolic problem with non-negative data

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Petzeltová, Hana; Simondon, F.

    2001-01-01

    Roč. 131, č. 5 (2001), s. 857-883 ISSN 0308-2105 R&D Projects: GA AV ČR IAA1019703 Keywords: nonlinear parabolic problem * admissible solutions * comparison principle * non-negative data Subject RIV: BA - General Mathematics Impact factor: 0.441, year: 2001

  10. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  11. Sequential effects of severe drought and defoliation on tree growth and survival in a diverse temperate mesic forest

    Science.gov (United States)

    Matthes, J. H.; Pederson, N.; David, O.; Martin-Benito, D.

    2017-12-01

    Understanding the effects of climate change and biotic disturbance within diverse temperate mesic forests is complicated by the need to scale between impacts within individuals and across species in the community. It is not clear how these impacts within individuals and across a community influence the stand- and regional-scale response. Furthermore, co-occurring or sequential disturbances can make it challenging to interpret forest responses from observational data. In the northeastern United States, the 1960s drought was perhaps the most severe period of climatic stress within the past 300 years and negatively impacted the growth of individual trees across all species, but unevenly. Additionally, in 1981 the northeast experienced an outbreak of the defoliator Lymantria dispar, which preferentially consumes oak leaves, but in that year impacted a high proportion of other species as well. To investigate the effects of drought (across functional groups) and defoliation (within a functional group), we combined a long-term tree-ring dataset from an old-growth forest within the Palmaghatt Ravine in New York with a version of the Ecosystem Demography model that includes a scheme for representing forest insects and pathogens. We explored the sequential impacts of severe drought and defoliation on tree growth, community composition, and ecosystem-atmosphere interactions (carbon, water, and heat flux). We also conducted a set of modeling experiments with climate and defoliation disturbance scenarios to bound the potential long-term response of this forest to co-occurring and sequential drought-defoliator disturbances over the next fifty years.

  12. The non-ideal associated species model applied to the system copper-indium

    International Nuclear Information System (INIS)

    Kellogg, H.H.

    1991-01-01

    The liquid copper-indium system displays complex thermochemical behavior. Deviations from Raoult's law change from positive to negative, and the integral heat of mixing also varies from positive to strongly negative and is markedly dependent on temperature. This behavior was successfully modelled, over the entire composition range and for a temperature range of 400 K, using the non-ideal associated-species concept, with InCu3 as the associated species. Independent evidence exists for association at the composition InCu3, from measurements of magnetic susceptibility, electrical resistivity and Hall effect. In this paper, the applicability of the model to other systems is discussed

  13. Sequential Modulations in a Combined Horizontal and Vertical Simon Task: Is There ERP Evidence for Feature Integration Effects?

    Science.gov (United States)

    Hoppe, Katharina; Küper, Kristina; Wascher, Edmund

    2017-01-01

    In the Simon task, participants respond faster when the task-irrelevant stimulus position and the response position are corresponding, for example on the same side, compared to when they have a non-corresponding relation. Interestingly, this Simon effect is reduced after non-corresponding trials. Such sequential effects can be explained in terms of a more focused processing of the relevant stimulus dimension due to increased cognitive control, which transfers from the previous non-corresponding trial (conflict adaptation effects). Alternatively, sequential modulations of the Simon effect can also be due to the degree of trial-to-trial repetitions and alternations of task features, which is confounded with the correspondence sequence (feature integration effects). In the present study, we used a spatially two-dimensional Simon task with vertical response keys to examine the contribution of adaptive cognitive control and feature integration processes to the sequential modulation of the Simon effect. The two-dimensional Simon task creates correspondences in the vertical as well as in the horizontal dimension. A trial-by-trial alternation of the spatial dimension, for example from a vertical to a horizontal stimulus presentation, generates a subset containing no complete repetitions of task features, but only complete alternations and partial repetitions, which are equally distributed over all correspondence sequences. In line with the assumed feature integration effects, we found sequential modulations of the Simon effect only when the spatial dimension repeated. At least for the horizontal dimension, this pattern was confirmed by the parietal P3b, an event-related potential that is assumed to reflect stimulus-response link processes. Contrary to conflict adaptation effects, cognitive control, measured by the fronto-central N2 component of the EEG, was not sequentially modulated. Overall, our data provide behavioral as well as electrophysiological evidence for feature

  14. The Role of Semantic Transfer in Clitic Drop among Simultaneous and Sequential Chinese-Spanish Bilinguals

    Science.gov (United States)

    Cuza, Alejandro; Perez-Leroux, Ana Teresa; Sanchez, Liliana

    2013-01-01

    This study examines the acquisition of the featural constraints on clitic and null distribution in Spanish among simultaneous and sequential Chinese-Spanish bilinguals from Peru. A truth value judgment task targeted the referential meaning of null objects in a negation context. Objects were elicited via two clitic elicitation tasks that targeted…

  15. Direct methods and residue type specific isotope labeling in NMR structure determination and model-driven sequential assignment

    International Nuclear Information System (INIS)

    Schedlbauer, Andreas; Auer, Renate; Ledolter, Karin; Tollinger, Martin; Kloiber, Karin; Lichtenecker, Roman; Ruedisser, Simon; Hommel, Ulrich; Schmid, Walther; Konrat, Robert; Kontaxis, Georg

    2008-01-01

    Direct methods in NMR-based structure determination start from an unassigned ensemble of unconnected gaseous hydrogen atoms. Under favorable conditions they can produce low-resolution structures of proteins. Usually a prohibitively large number of NOEs is required to solve a protein structure ab initio, but even with a much smaller set of distance restraints, low-resolution models can be obtained which resemble a protein fold. One problem is that at such low resolution and in the absence of a force field it is impossible to distinguish the correct protein fold from its mirror image. In a hybrid approach these ambiguous models have the potential to aid in the process of sequential backbone chemical shift assignment when 13Cβ and 13C' shifts are not available for sensitivity reasons. Regardless of the overall fold they enhance the information content of the NOE spectra. These, combined with residue-specific labeling and minimal triple-resonance data using 13Cα connectivity, can provide almost complete sequential assignment. Strategies for residue-type-specific labeling with customized isotope labeling patterns are of great advantage in this context. Furthermore, this approach is to some extent error-tolerant with respect to data incompleteness, limited precision of the peak picking, and structural errors caused by misassignment of NOEs.

  16. Spectral multipliers on spaces of distributions associated with non-negative self-adjoint operators

    DEFF Research Database (Denmark)

    Georgiadis, Athanasios; Nielsen, Morten

    2018-01-01

    We consider spaces of homogeneous type associated with a non-negative self-adjoint operator whose heat kernel satisfies certain upper Gaussian bounds. Spectral multipliers are introduced and studied on distributions associated with this operator. The boundedness of spectral multipliers on Besov and Triebel–Lizorkin spaces with full range of indices is established too. As an application, we obtain equivalent norm characterizations for the spaces mentioned above. Non-classical spaces as well as Lebesgue, Hardy, (generalized) Sobolev and Lipschitz spaces are also covered by our approach.

  17. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
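
    A minimal sketch of Wald's sequential probability ratio test for choosing between two hypothesized values of a normal mean with known variance is given below (Python); the error rates, hypothesized means and simulated outputs are illustrative assumptions, and the conservative/robust variants investigated in the paper modify the thresholds and distributional assumptions.

      import math
      import random

      def sprt_normal(observations, mu0, mu1, sigma, alpha=0.05, beta=0.05):
          """Wald's SPRT for H0: mean = mu0 versus H1: mean = mu1 (known sigma)."""
          a = math.log(beta / (1 - alpha))     # lower log-likelihood-ratio threshold
          b = math.log((1 - beta) / alpha)     # upper threshold
          llr, n = 0.0, 0
          for x in observations:
              n += 1
              llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
              if llr <= a:
                  return "accept H0", n
              if llr >= b:
                  return "accept H1", n
          return "no decision", n

      # Toy stream of simulation outputs with true mean 1.3 (between the hypothesized 1.0 and 1.5).
      rng = random.Random(42)
      stream = (rng.gauss(1.3, 1.0) for _ in range(10000))
      print(sprt_normal(stream, mu0=1.0, mu1=1.5, sigma=1.0))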

  18. Percutaneous CT-guided lung biopsy: sequential versus spiral scanning. A randomized prospective study

    International Nuclear Information System (INIS)

    Ghaye, B.; Dondelinger, R.F.; Dewe, W.

    1999-01-01

    The aim of this study was to evaluate, in a prospective and randomized study, spiral versus sequential scanning for the guidance of percutaneous lung biopsy. Fifty thoracic lesions occurring in 48 patients were biopsied by a senior and a junior operator. Six different time segments of the procedure were measured. Scanning mode versus length of procedure, pathological results, irradiation and complications were evaluated. Total duration of the procedure and of the first sampling was significantly longer with spiral CT for the senior operator (p < 0.004). No significant time difference was observed for the junior operator. Diameter of the lesion, depth of location, position of the patient and needle entry site did not influence the results. The sensitivity was 90.9%, specificity 100%, positive predictive value 100% and negative predictive value 60% for spiral CT, and 94.7%, 100%, 100% and 85.7% for sequential CT, respectively. Eleven pneumothoraces and ten perinodular hemorrhages were seen with spiral CT and six and ten, respectively, with sequential CT. The mean exposure was 4027 mAs for spiral CT and 2358 mAs for conventional CT. Spiral CT reduces neither procedure time nor the rate of complications. Pathological results do not differ compared with sequential CT, and the total dose of irradiation is higher with spiral scanning. (orig.)

  19. Sequential lineup presentation promotes less-biased criterion setting but does not improve discriminability.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil

    2012-06-01

    When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
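
    For readers unfamiliar with the distinction drawn here, the sketch below (Python) computes equal-variance signal-detection discriminability (d') and criterion (c) from hit and false-alarm rates; the rates are invented for illustration and the paper itself fits a more elaborate compound signal-detection model. The example is set up so that the two lineup modes differ mainly in criterion, mirroring the conclusion above.

      from statistics import NormalDist

      def sdt_measures(hit_rate, false_alarm_rate):
          """Equal-variance signal detection: discriminability d' and criterion c."""
          z = NormalDist().inv_cdf
          d_prime = z(hit_rate) - z(false_alarm_rate)
          criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
          return d_prime, criterion

      # Hypothetical identification rates: sequential lineups lower both hits and
      # false alarms, shifting the criterion while leaving d' essentially unchanged.
      print("simultaneous:", sdt_measures(hit_rate=0.60, false_alarm_rate=0.25))
      print("sequential:  ", sdt_measures(hit_rate=0.45, false_alarm_rate=0.14))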

  20. The Impact of Efflux Pump Inhibitors on the Activity of Selected Non-Antibiotic Medicinal Products against Gram-Negative Bacteria

    Directory of Open Access Journals (Sweden)

    Agnieszka E. Laudy

    2017-01-01

    Full Text Available The potential role of non-antibiotic medicinal products in the treatment of multidrug-resistant Gram-negative bacteria has recently been investigated. It is highly likely that the presence of efflux pumps is one of the reasons for the weak activity of non-antibiotics, as in the case of some non-steroidal anti-inflammatory drugs (NSAIDs), against Gram-negative rods. Eight drugs with potential non-antibiotic activity, active substance standards, and the relevant medicinal products were analysed with and without efflux pump inhibitors against 180 strains of five Gram-negative rod species by minimum inhibitory concentration (MIC) determination in the presence of 1 mM MgSO4. Furthermore, the influence of non-antibiotics on the susceptibility of clinical strains to quinolones with or without PAβN (Phe-Arg-β-naphthylamide) was investigated. The impact of PAβN on the susceptibility of bacteria to non-antibiotics suggests that amitriptyline, alendronate, nicergoline, and ticlopidine are substrates of efflux pumps in Gram-negative rods. Amitriptyline/Amitriptylinum showed the highest direct antibacterial activity, with MICs ranging from 100 to 800 mg/L against all studied species. Significant decreases in the MIC values of the other active substances (acyclovir, atorvastatin, and famotidine) tested with pump inhibitors were not observed. The investigated non-antibiotic medicinal products did not alter the MICs of quinolones against the studied clinical strains of the five species, either in the absence or in the presence of PAβN.

  1. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  2. Computation of a numerically satisfactory pair of solutions of the differential equation for conical functions of non-negative integer orders

    NARCIS (Netherlands)

    T.M. Dunster (Mark); A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2014-01-01

    We consider the problem of computing a satisfactory pair of solutions of the differential equation for Legendre functions of non-negative integer order $\mu$ and degree $-\frac12+i\tau$, where $\tau$ is a non-negative real parameter. Solutions of this equation are the conical functions

  3. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.

  4. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our belief, this is mainly attributed to the required computation of first order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between both techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
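
    As a concrete reference point for the sequential technique named above, the sketch below (Python/NumPy) performs a single stochastic Ensemble Kalman Filter analysis step on a toy three-variable state with one observed component; the ensemble size, observation operator and error statistics are assumptions and have nothing to do with the HBV setup of the study.

      import numpy as np

      def enkf_analysis(ensemble, obs, obs_operator, obs_error_std, rng):
          """One stochastic EnKF analysis step.

          ensemble: (n_members, n_state) forecast states
          obs: (n_obs,) observation vector; obs_operator: (n_obs, n_state) matrix H"""
          n_members = ensemble.shape[0]
          X = ensemble - ensemble.mean(axis=0)                 # state anomalies
          HX = ensemble @ obs_operator.T
          HA = HX - HX.mean(axis=0)                            # observation-space anomalies
          R = np.eye(len(obs)) * obs_error_std ** 2
          P_hh = HA.T @ HA / (n_members - 1) + R               # innovation covariance
          P_xh = X.T @ HA / (n_members - 1)                    # state-observation cross covariance
          K = P_xh @ np.linalg.inv(P_hh)                       # Kalman gain
          perturbed_obs = obs + rng.normal(0, obs_error_std, size=(n_members, len(obs)))
          return ensemble + (perturbed_obs - HX) @ K.T

      # Toy example: 50-member ensemble of a 3-variable state, observing the first variable.
      rng = np.random.default_rng(0)
      ens = rng.normal([1.0, 0.5, -0.2], 0.3, size=(50, 3))
      H = np.array([[1.0, 0.0, 0.0]])
      updated = enkf_analysis(ens, obs=np.array([1.4]), obs_operator=H, obs_error_std=0.1, rng=rng)
      print(ens.mean(axis=0), "->", updated.mean(axis=0))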

  5. Moving mesh generation with a sequential approach for solving PDEs

    DEFF Research Database (Denmark)

    In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are simultaneously solved and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure … a simple and robust moving mesh algorithm in one or multiple dimensions. In this study, we propose a sequential solution procedure including two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh … generation and solution interpolation). Convection terms, which appear in the physical PDEs and the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach is to keep the advantages of robustness and simplicity for the static
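
    The regridding step described above relies on equidistributing a monitor function. A minimal 1-D sketch of that idea (Python) is given below; the monitor function and the steep test profile are assumptions for illustration, and the study's WENO discretization of the physical PDEs is not shown.

      import numpy as np

      def equidistribute(x_old, monitor_values, n_nodes):
          """Generate a 1-D mesh that equidistributes a monitor function M(x).

          Nodes are placed so that the integral of M between consecutive nodes
          is constant, concentrating points where M (e.g., the solution gradient) is large."""
          # cumulative (trapezoidal) integral of the monitor function on the old mesh
          cumulative = np.concatenate(([0.0], np.cumsum(
              0.5 * (monitor_values[1:] + monitor_values[:-1]) * np.diff(x_old))))
          targets = np.linspace(0.0, cumulative[-1], n_nodes)
          return np.interp(targets, cumulative, x_old)

      # Assumed arc-length-like monitor M = sqrt(1 + u_x^2) for a steep tanh front.
      x = np.linspace(0.0, 1.0, 401)
      u = np.tanh(50 * (x - 0.5))
      ux = np.gradient(u, x)
      M = np.sqrt(1.0 + ux ** 2)
      x_new = equidistribute(x, M, n_nodes=41)
      print(np.round(x_new[18:23], 3))   # nodes cluster near the front at x = 0.5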

  6. Anti-tumor activity of high-dose EGFR tyrosine kinase inhibitor and sequential docetaxel in wild type EGFR non-small cell lung cancer cell nude mouse xenografts

    OpenAIRE

    Tang, Ning; Zhang, Qianqian; Fang, Shu; Han, Xiao; Wang, Zhehai

    2016-01-01

    Treatment of non-small-cell lung cancer (NSCLC) with wild-type epidermal growth factor receptor (EGFR) is still a challenge. This study explored antitumor activity of high-dose icotinib (an EGFR tyrosine kinase inhibitor) plus sequential docetaxel against wild-type EGFR NSCLC cells-generated nude mouse xenografts. Nude mice were subcutaneously injected with wild-type EGFR NSCLC A549 cells and divided into different groups for 3-week treatment. Tumor xenograft volumes were monitored and record...

  7. Assessing potential forest and steel inter-industry residue utilisation by sequential chemical extraction

    Energy Technology Data Exchange (ETDEWEB)

    Makela, M.

    2012-10-15

    Traditional process industries in Finland and abroad are facing an emerging waste disposal problem due to recent regulatory developments, which have increased the costs of landfill disposal and the difficulty of acquiring new sites. For large manufacturers, such as the forest and ferrous metals industries, symbiotic cooperation between formerly separate industrial sectors could enable the utilisation of waste-labeled residues in manufacturing novel residue-derived materials suitable for replacing commercial virgin alternatives. Such efforts would allow transforming the current linear resource use and disposal models into more cyclical ones and thus attain savings in valuable materials and energy resources. The work described in this thesis was aimed at utilising forest and carbon steel industry residues in the experimental manufacture of novel residue-derived materials technically and environmentally suitable for amending agricultural or forest soil properties. Single and sequential chemical extractions were used to compare the pseudo-total concentrations of trace elements in the manufactured amendment samples to relevant Finnish statutory limit values for the use of fertilizer products and to assess their respective potential availability under natural conditions. In addition, the quality of analytical work and the suitability of sequential extraction in the analysis of an industrial solid sample were respectively evaluated through the analysis of a certified reference material and by X-ray diffraction of parallel sequential extraction residues. According to the acquired data, the incorporation of both forest and steel industry residues, such as fly ashes, lime wastes, green liquor dregs, sludges and slags, led to amendment liming capacities (34.9-38.3%, Ca equiv., d.w.) comparable to relevant commercial alternatives. Only the first experimental samples showed increased concentrations of pseudo-total cadmium and chromium, of which the latter was specified as the trivalent Cr(III). Based on

  8. Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER

    Science.gov (United States)

    Rauner, D.; Kurutz, U.; Fantz, U.

    2015-04-01

    As the negative hydrogen ion density nH- is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, generally two different diagnostic methods can be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H- is measured directly, however LOS-averaged over the plasma length. At the ECR-discharge HOMER, where H- is produced in the plasma volume, laser photodetachment is applied as the standard method to measure nH-. The additional application of CRDS provides the possibility to directly obtain absolute values of nH-, thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled 0-dimensionally by balancing H- production and destruction processes. The modelled densities are adapted to the absolute measurements of nH- via CRDS, allowing collisions of H- with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H- in the plasma volume at HOMER. Furthermore, the characteristic peak of nH- observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as ne determines the volume production rate via dissociative electron attachment to vibrationally excited hydrogen molecules.

  9. Impact of Sequential Ammonia Fiber Expansion (AFEX) Pretreatment and Pelletization on the Moisture Sorption Properties of Corn Stover

    Energy Technology Data Exchange (ETDEWEB)

    Bonner, Ian J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Thompson, David N. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Teymouri, Farzaneh [Michigan Biotechnology Inst., Lansing, MI (United States); Campbell, Timothy [Michigan Biotechnology Inst., Lansing, MI (United States); Bals, Bryan [Michigan Biotechnology Inst., Lansing, MI (United States); Tumuluru, Jaya Shankar [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    Combining ammonia fiber expansion (AFEX™) pretreatment with a depot processing facility is a promising option for delivering high-value densified biomass to the emerging bioenergy industry. However, because the pretreatment process results in a high moisture material unsuitable for pelleting or storage (40% wet basis), the biomass must be immediately dried. If AFEX pretreatment results in a material that is difficult to dry, the economics of this already costly operation would be at risk. This work tests the nature of moisture sorption isotherms and thin-layer drying behavior of corn (Zea mays L.) stover at 20°C to 60°C before and after sequential AFEX pretreatment and pelletization to determine whether any negative impacts to material drying or storage may result from the AFEX process. The equilibrium moisture content to equilibrium relative humidity relationship for each of the materials was determined using dynamic vapor sorption isotherms and modeled with modified Chung-Pfost, modified Halsey, and modified Henderson temperature-dependent models as well as the Double Log Polynomial (DLP), Peleg, and Guggenheim Anderson de Boer (GAB) temperature-independent models. Drying kinetics were quantified under thin-layer laboratory testing and modeled using the Modified Page's equation. Water activity isotherms for non-pelleted biomass were best modeled with the Peleg temperature-independent equation while isotherms for the pelleted biomass were best modeled with the Double Log Polynomial equation. Thin-layer drying results were accurately modeled with the Modified Page's equation. The results of this work indicate that AFEX pretreatment results in drying properties more favorable than or equal to that of raw corn stover, and pellets of superior physical stability in storage.
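
    A small sketch of fitting the Modified Page thin-layer drying model to moisture-ratio data is shown below (Python/SciPy). The data points are synthetic and the exact parameterisation of the "modified" Page form varies between studies, so the functional form used here, MR(t) = exp(-(k*t)^n), is an assumption.

      import numpy as np
      from scipy.optimize import curve_fit

      def modified_page(t, k, n):
          """Modified Page thin-layer drying model: MR(t) = exp(-(k*t)**n)."""
          return np.exp(-(k * t) ** n)

      # Synthetic drying data (hours vs. moisture ratio); real data would come from
      # thin-layer drying experiments such as those described above.
      t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
      mr = np.array([1.00, 0.74, 0.55, 0.32, 0.19, 0.12, 0.05, 0.02])

      (k_fit, n_fit), _ = curve_fit(modified_page, t, mr, p0=(0.5, 1.0))
      print(f"k = {k_fit:.3f} 1/h, n = {n_fit:.3f}")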

  10. Strategic Path Planning by Sequential Parametric Bayesian Decisions

    Directory of Open Access Journals (Sweden)

    Baro Hyun

    2013-11-01

    Full Text Available The objective of this research is to generate a path for a mobile agent that carries sensors used for classification, where the path is to optimize strategic objectives that account for misclassification and the consequences of misclassification, and where the weights assigned to these consequences are chosen by a strategist. We propose a model that accounts for the interaction between the agent kinematics (i.e., the ability to move, informatics (i.e., the ability to process data to information, classification (i.e., the ability to classify objects based on the information, and strategy (i.e., the mission objective. Within this model, we pose and solve a sequential decision problem that accounts for strategist preferences and the solution to the problem yields a sequence of kinematic decisions of a moving agent. The solution of the sequential decision problem yields the following flying tactics: “approach only objects whose suspected identity matters to the strategy”. These tactics are numerically illustrated in several scenarios.

  11. Validation of a model for predicting smear-positive active pulmonary tuberculosis in patients with initial acid-fast bacilli smear-negative sputum

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Jun-Jun [Department of Chest Medicine, Section of Thoracic Imaging, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi City (China); Chia Nan University of Pharmacy and Science, Tainan (China); Meiho University, Pingtung (China); Pingtung Christian Hospital, Pingtung (China); Heng Chun Christian Hospital, Pingtung (China)

    2018-01-15

    The objective of this study was to develop a predictive model for final smear-positive (SP) active pulmonary tuberculosis (aPTB) in patients with initial negative acid-fast bacilli (AFB) sputum smears (iSN-SP-aPTB) based on high-resolution computed tomography (HRCT). Eighty (126, 21) patients with iSN-SP-aPTB and 402 (459, 876) patients with non-initial-positive acid-fast bacilli (non-iSP) pulmonary disease without iSN-SP-aPTB were included in the derivation (validation, prospective) cohort. HRCT characteristics were analysed, and multivariable regression and receiver operating characteristic (ROC) curve analysis were performed to develop a score predictive of iSN-SP-aPTB. In the derivation cohort, clusters of nodules/mass in the right upper lobe or left upper lobe were independent predictors of iSN-SP-aPTB, while bronchiectasis in the right middle lobe or left lingual lobe was negatively associated with iSN-SP-aPTB. A predictive score for iSN-SP-aPTB based on these findings was tested in the validation and prospective cohorts. With an ideal cut-off score = 1, the sensitivity, specificity, positive predictive value, and negative predictive value of the prediction model were 87.5% (90%, 90.5%), 99% (97.1%, 98.4%), 94.6% (81.3%, 57.5%), and 97.6% (97%, 99.8%) in the derivation (validation, prospective) cohorts, respectively. The model may help identify iSN-SP-aPTB among patients with non-iSP pulmonary diseases. Smear-positive active pulmonary tuberculosis that is initially smear-negative (iSN-SP-aPTB) is infectious. (orig.)
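
    The reported performance figures follow directly from confusion-matrix counts at the chosen cutoff. The sketch below (Python) computes sensitivity, specificity, PPV and NPV from such counts; the counts used are illustrative values roughly consistent with the derivation-cohort percentages quoted above, not data taken from the study.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Hypothetical counts at cutoff score >= 1 (approximately matching the derivation cohort).
      print(diagnostic_metrics(tp=70, fp=4, fn=10, tn=398))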

  12. Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.

    Science.gov (United States)

    Weinreb, Gabriel; Lentz, Barry R

    2007-06-01

    We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state ((A) membrane contact) through two sequential intermediate states (I(1) and I(2)) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I(1)), but content mixing signal was also observed in this state for most systems. The second intermediate state (I(2)) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but
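
    The sequential scheme A -> I1 -> I2 -> FP described above can be written as a set of first-order rate equations. The sketch below (Python/SciPy) integrates that scheme with placeholder rate constants; the published model additionally assigns each state its own probabilities of lipid mixing, content mixing and leakage, which are not modeled here.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sequential_fusion(t, y, k1, k2, k3):
          """Sequential two-intermediate scheme: A -> I1 -> I2 -> FP."""
          A, I1, I2, FP = y
          return [-k1 * A,
                  k1 * A - k2 * I1,
                  k2 * I1 - k3 * I2,
                  k3 * I2]

      # Placeholder rate constants (1/min); the fitted values in the study differ.
      k1, k2, k3 = 1.0, 0.4, 0.15
      sol = solve_ivp(sequential_fusion, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0],
                      args=(k1, k2, k3), t_eval=np.linspace(0.0, 60.0, 7))
      for t, (A, I1, I2, FP) in zip(sol.t, sol.y.T):
          print(f"t={t:4.0f} min  A={A:.2f}  I1={I1:.2f}  I2={I2:.2f}  FP={FP:.2f}")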

  13. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  14. The sequential trauma score - a new instrument for sequential mortality prediction in major trauma

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Full Text Available Abstract Background There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power were calculated. Results A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P + A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P + A + B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P + A + B1 + B2 = 0.90). The explanatory power, a tool for assessing the relative impact of each segment on mortality, is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma
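
    The staged modeling idea, adding predictor blocks phase by phase and comparing discriminative power, can be illustrated with nested logistic regressions on synthetic data (Python/scikit-learn). Predictor names, effect sizes and the simulated outcome below are assumptions, not the registry data, and the AUC values will not match those reported.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 2000
      age = rng.normal(45, 18, n)              # basic data (P)
      gcs = rng.integers(3, 16, n)             # prehospital phase (A)
      base_excess = rng.normal(-2, 4, n)       # early trauma room phase (B1)
      logit = -4.0 + 0.05 * age - 0.15 * gcs - 0.10 * base_excess   # assumed true model
      death = rng.random(n) < 1 / (1 + np.exp(-logit))

      auc = {}
      for label, cols in [("P", [age]), ("P+A", [age, gcs]), ("P+A+B1", [age, gcs, base_excess])]:
          X = np.column_stack(cols)
          model = LogisticRegression(max_iter=1000).fit(X, death)
          auc[label] = roc_auc_score(death, model.predict_proba(X)[:, 1])
      print(auc)   # AUC should increase as later-phase predictors are added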

  15. A non-linear association between self-reported negative emotional response to stress and subsequent allostatic load

    DEFF Research Database (Denmark)

    Dich, Nadya; Doan, Stacey N; Kivimäki, Mika

    2014-01-01

    … response to major life events and allostatic load, a multisystem indicator of physiological dysregulation. Study sample was 6764 British civil service workers from the Whitehall II cohort. Negative emotional response was assessed by self-report at baseline. Allostatic load was calculated using cardiovascular, metabolic and immune function biomarkers at three clinical follow-up examinations. A non-linear association between negative emotional response and allostatic load was observed: being at either extreme end of the distribution of negative emotional response increased the risk of physiological dysregulation. Allostatic load also increased with age, but the association between negative emotional response and allostatic load remained stable over time. These results provide evidence for a more nuanced understanding of the role of negative emotions in long-term physical health.

  16. Negative brain scintigrams in brain tumors

    International Nuclear Information System (INIS)

    Dalke, K.G.

    1978-01-01

    For 53 histologically verified and 2 histologically unidentified brain tumors that showed a negative scintigram, an attempt was made to find reasons for the false-negative outcome of these scintigrams. The electroencephalograms and angiograms obtained at the same time were assessed with respect to their diagnostic value and compared with the scintigraphic findings. No single cause or causal constellation could be found for the occurrence of the negative brain scintigrams. The scintigraphic representation of a tumor is likely based on a complex process, so the negativity of a brain scintigram can have many causes. The vascularisation of the tumor plays an important, but not exclusive, role. Tumor localisation is also of some importance; tumors situated in the temporal lobe or in deeper structures in particular can be negative on the scintigram. To keep the false-negative rate low when searching for intracranial tumors, it is advisable to obtain a further exposure after 2 to 4 hours in addition to the usual exposures, unless a sequential scintigraphy was performed from the beginning. (orig./MG) [de]

  17. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Sequential Elution of Essential Oil Constituents during Steam Distillation of Hops (Humulus lupulus L.) and Influence on Oil Yield and Antimicrobial Activity.

    Science.gov (United States)

    Jeliazkova, Ekaterina; Zheljazkov, Valtcho D; Kačániova, Miroslava; Astatkie, Tess; Tekwani, Babu L

    2018-06-07

    The profile and bioactivity of hops (Humulus lupulus L.) essential oil, a complex natural product extracted from cones via steam distillation, depend on genetic and environmental factors, and may also depend on the extraction process. We hypothesized that compound mixtures eluted sequentially and captured at different timeframes during the steam distillation of whole hop cones would have differential chemical and bioactivity profiles. The essential oil was collected sequentially at 8 distillation time (DT) intervals: 0-2, 2-5, 5-10, 10-30, 30-60, 60-120, 120-180, and 180-240 min. The control was a 4-h non-interrupted distillation. Nonlinear regression models described the relationship between DT and the essential oil compounds. Fractions yielded 0.035 to 0.313% essential oil, while the control yielded 1.47%. Of the total oil, 83.2% eluted during the first hour, 9.6% during the second hour, and only 7.2% during the second half of the distillation. Essential oil (EO) fractions had different chemical profiles. Monoterpenes eluted early, while sesquiterpenes eluted late. Myrcene and linalool were highest in the 0-2 min fraction; β-caryophyllene, β-copaene, β-farnesene, and α-humulene were highest in fractions from the middle of the distillation; whereas α-bergamotene, γ-muurolene, β- and α-selinene, γ- and δ-cadinene, caryophyllene oxide, humulene epoxide II, τ-cadinol, and 6-pentadecen-2-one were highest in the 120-180 or 180-240 min fractions. The Gram-negative Escherichia coli was strongly inhibited by the essential oil fractions from 2-5 min and 10-30 min, followed by the 0-2 min fraction. The strongest inhibition activity against Gram-negative Yersinia enterocolitica, and Gram-positive Clostridium perfringens, Enterococcus faecalis, and Staphylococcus aureus subsp. aureus was observed with the control essential oil. This is the first study to describe significant activity of hops essential oils against Trypanosoma brucei, a parasitic protozoan that causes African
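
    The abstract notes that nonlinear regression models described the relationship between distillation time and the eluted compounds. As a hedged sketch of that kind of fit, the example below uses an asymptotic exponential-rise form for cumulative oil yield versus time; the functional form and data points are assumptions, not the study's actual model or measurements.

```python
# Illustrative only: fitting an asymptotic nonlinear model to cumulative
# essential-oil yield versus distillation time (DT). The functional form
# y = a * (1 - exp(-b * t)) and the data points are assumptions, not the
# study's actual regression model or measurements.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([2, 5, 10, 30, 60, 120, 180, 240], dtype=float)   # min
y = np.array([0.30, 0.55, 0.80, 1.05, 1.22, 1.33, 1.40, 1.45])  # cumulative yield, % (hypothetical)

def rise_to_max(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(rise_to_max, t, y, p0=[1.5, 0.05])
print(f"asymptotic yield a = {a:.2f} %, rate b = {b:.3f} 1/min")
```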

  19. Trade-off between positive and negative design of protein stability: from lattice models to real proteins.

    Directory of Open Access Journals (Sweden)

    Orly Noivirt-Brik

    2009-12-01

    Full Text Available Two different strategies for stabilizing proteins are (i) positive design, in which the native state is stabilized, and (ii) negative design, in which competing non-native conformations are destabilized. Here, the circumstances under which one strategy might be favored over the other are explored in the case of lattice models of proteins and then generalized and discussed with regard to real proteins. The balance between positive and negative design of proteins is found to be determined by their average "contact-frequency", a property that corresponds to the fraction of states in the conformational ensemble of the sequence in which a pair of residues is in contact. Lattice model proteins with a high average contact-frequency are found to use negative design more than model proteins with a low average contact-frequency. A mathematical derivation of this result indicates that it is general and likely to hold also for real proteins. Comparison of the results of correlated mutation analysis for real proteins with typical contact-frequencies to those of proteins likely to have high contact-frequencies (such as disordered proteins and proteins that are dependent on chaperonins for their folding) indicates that the latter tend to have stronger interactions between residues that are not in contact in their native conformation. Hence, our work indicates that negative design is employed when insufficient stabilization is achieved via positive design owing to high contact-frequencies.

  20. Feasibility Study of Sequentially Alternating EGFR-TKIs and Chemotherapy for Patients with Non-small Cell Lung Cancer.

    Science.gov (United States)

    Takemura, Yoshizumi; Chihara, Yusuke; Morimoto, Yoshie; Tanimura, Keiko; Imabayashi, Tatsuya; Seko, Yurie; Kaneko, Yoshiko; Date, Koji; Ueda, Mikio; Arimoto, Taichiro; Iwasaki, Yoshinobu; Takayama, Koichi

    2018-04-01

    The purpose of this trial was to evaluate the feasibility and efficacy of alternating platinum-based doublet chemotherapy with epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) in patients with EGFR-mutant non-small cell lung cancer (NSCLC). Chemotherapy-naive patients with advanced NSCLC harboring an EGFR mutation were enrolled. All patients underwent induction chemotherapy by sequentially alternating pemetrexed/cisplatin/bevacizumab and EGFR-TKIs followed by maintenance therapy with pemetrexed/bevacizumab and EGFR-TKIs. The primary outcome was the completion rate of the induction therapy. Eighteen eligible patients were enrolled between May 2011 and March 2016. The completion rate of induction therapy was 72.2% (13/18). Unfortunately, one patient developed grade 4 acute renal injury, but no other serious complications concerning this protocol were observed. Furthermore, diarrhea, rashes, and hematological adverse effects were mild. The completion rate of induction therapy was promising. Alternating chemotherapy and EGFR-TKIs should be further investigated regarding feasibility and efficacy. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  1. Distributed data access in the sequential access model at the D0 experiment at Fermilab

    International Nuclear Information System (INIS)

    Terekhov, Igor; White, Victoria

    2000-01-01

    The authors present the Sequential Access Model (SAM), which is the data handling system for D0, one of two primary High Energy Experiments at Fermilab. During the next several years, the D0 experiment will store a total of about 1 PByte of data, including raw detector data and data processed at various levels. The design of SAM is not specific to the D0 experiment and carries few assumptions about the underlying mass storage level; its ideas are applicable to any sequential data access. By definition, in the sequential access mode a user application needs to process a stream of data, by accessing each data unit exactly once, the order of data units in the stream being irrelevant. The units of data are laid out sequentially in files. The adopted model allows for significant optimizations of system performance, decrease of user file latency and increase of overall throughput. In particular, caching is done with the knowledge of all the files needed in the near future, defined as all the files of the already running or submitted jobs. The bulk of the data is stored in files on tape in the mass storage system (MSS) called Enstore[2] and also developed at Fermilab. (The tape drives are served by an ADIC AML/2 Automated Tape Library). At any given time, SAM has a small fraction of the data cached on disk for processing. In the present paper, the authors discuss how data is delivered onto disk and how it is accessed by user applications. They will concentrate on data retrieval (consumption) from the MSS; when SAM is used for storing of data, the mechanisms are rather symmetrical. All of the data managed by SAM is cataloged in great detail in a relational database (ORACLE). The database also serves as the persistency mechanism for the SAM servers described in this paper. Any client or server in the SAM system which needs to store or retrieve information from the database does so through the interfaces of a CORBA-based database server. The users (physicists) use the

  2. How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning.

    Science.gov (United States)

    Voorspoels, Wouter; Navarro, Daniel J; Perfors, Amy; Ransom, Keith; Storms, Gert

    2015-09-01

    A robust finding in category-based induction tasks is for positive observations to raise the willingness to generalize to other categories while negative observations lower the willingness to generalize. This pattern is referred to as monotonic generalization. Across three experiments we find systematic non-monotonicity effects, in which negative observations raise the willingness to generalize. Experiments 1 and 2 show that this effect emerges in hierarchically structured domains when a negative observation from a different category is added to a positive observation. They also demonstrate that this is related to a specific kind of shift in the reasoner's hypothesis space. Experiment 3 shows that the effect depends on the assumptions that the reasoner makes about how inductive arguments are constructed. Non-monotonic reasoning occurs when people believe the facts were put together by a helpful communicator, but monotonicity is restored when they believe the observations were sampled randomly from the environment. Copyright © 2015 Elsevier Inc. All rights reserved.
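
    The sampling-assumption effect described above rests on a standard Bayesian mechanism: under strong (helpful) sampling, examples are drawn from within the true category, so the likelihood of each example scales with the inverse of the category size, whereas under weak (random) sampling the likelihood is constant. A toy sketch with an invented two-hypothesis space and priors is shown below.

```python
# Toy illustration of how sampling assumptions change generalization.
# Two hypotheses about which items share a property: a narrow category
# (4 items) and a broad one (16 items). Under "strong" (helpful) sampling,
# positive examples are drawn from within the true category, so each
# example has likelihood 1/|h|; under "weak" (random) sampling the
# likelihood is constant. Sizes, priors and data are assumptions.
hypotheses = {"narrow": 4, "broad": 16}
prior = {"narrow": 0.5, "broad": 0.5}
n_positive_examples = 3  # observed members, all consistent with both hypotheses

def posterior(sampling):
    post = {}
    for h, size in hypotheses.items():
        like = (1.0 / size) ** n_positive_examples if sampling == "strong" else 1.0
        post[h] = prior[h] * like
    z = sum(post.values())
    return {h: round(p / z, 3) for h, p in post.items()}

print("strong sampling:", posterior("strong"))  # favors the narrow category
print("weak sampling:  ", posterior("weak"))    # priors essentially unchanged
```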

  3. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important to predict the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of polypyrrole. In this paper, a non-linear time-variant model of polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential-dependent ionic conductivity. A function of the ionic conductivity of polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical response suggests that the ionic conductivity of polypyrrole decreases significantly at negative potential vs. silver/silver chloride and leads to reduced current in the cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain-to-charge ratio. Further work is also needed to identify ionic and electronic conductivities as well as capacitance as a function of oxidation state so that a fully predictive model can be created.

  4. A Process Improvement Evaluation of Sequential Compression Device Compliance and Effects of Provider Intervention.

    Science.gov (United States)

    Beachler, Jason A; Krueger, Chad A; Johnson, Anthony E

    This process improvement study sought to evaluate the compliance in orthopaedic patients with sequential compression devices and to monitor any improvement in compliance following an educational intervention. All non-intensive care unit orthopaedic primary patients were evaluated at random times and their compliance with sequential compression devices was monitored and recorded. Following a 2-week period of data collection, an educational flyer was displayed in every patient's room and nursing staff held an in-service training event focusing on the importance of sequential compression device use in the surgical patient. Patients were then monitored, again at random, and compliance was recorded. With the addition of a simple flyer and a single in-service on the importance of mechanical compression in the surgical patient, a significant improvement in compliance was documented at the authors' institution from 28% to 59% (p < .0001).
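
    The reported jump from 28% to 59% compliance can be checked with a simple two-proportion (chi-square) test. The abstract does not give the number of observations in each period, so the counts below are assumed solely to illustrate the calculation.

```python
# Illustration of the kind of two-proportion comparison behind the reported
# improvement from 28% to 59% compliance. The abstract does not give the
# number of observations per period, so the counts below are assumed
# purely for demonstration.
from scipy.stats import chi2_contingency

pre  = (28, 72)   # compliant, non-compliant before the intervention (assumed n = 100)
post = (59, 41)   # compliant, non-compliant after the flyer and in-service (assumed n = 100)

chi2, p, dof, _ = chi2_contingency([pre, post])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```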

  5. Sequential Uniformly Reweighted Sum-Product Algorithm for Cooperative Localization in Wireless Networks

    OpenAIRE

    Li, Wei; Yang, Zhen; Hu, Haifeng

    2014-01-01

    Graphical models have been widely applied in solving distributed inference problems in wireless networks. In this paper, we formulate the cooperative localization problem in a mobile network as an inference problem on a factor graph. Using a sequential schedule of message updates, a sequential uniformly reweighted sum-product algorithm (SURW-SPA) is developed for mobile localization problems. The proposed algorithm combines the distributed nature of belief propagation (BP) with the improved p...

  6. Numerical modelling of negative discharges in air with experimental validation

    International Nuclear Information System (INIS)

    Tran, T N; Golosnoy, I O; Lewin, P L; Georghiou, G E

    2011-01-01

    Axisymmetric finite element models have been developed for the simulation of negative discharges in air without and with the presence of dielectrics. The models are based on the hydrodynamic drift-diffusion approximation. A set of continuity equations accounting for the movement, generation and loss of charge carriers (electrons, positive and negative ions) is coupled with Poisson's equation to take into account the effect of space and surface charges on the electric field. The model of a negative corona discharge (without dielectric barriers) in a needle-plane geometry is analysed first. The results obtained show good agreement with experimental observations for various Trichel pulse characteristics. With dielectric barriers introduced into the discharge system, the surface discharge exhibits some similarities and differences to the corona case. The model studies the dynamics of volume charge generation, electric field variations and charge accumulation over the dielectric surface. The predicted surface charge density is consistent with experimental results obtained from the Pockels experiment in terms of distribution form and magnitude.
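
    The hydrodynamic drift-diffusion approximation referred to above is commonly written as coupled continuity equations for electrons and positive and negative ions together with Poisson's equation. A generic textbook form is given below; the notation is assumed and need not match the authors' exact formulation.

```latex
% Generic drift-diffusion form of the discharge model described above
% (notation assumed): n_e, n_p, n_n are electron, positive-ion and
% negative-ion densities; alpha, eta are ionization and attachment
% coefficients; beta terms are recombination coefficients.
\begin{align}
\frac{\partial n_e}{\partial t} + \nabla\!\cdot\!\left(n_e \mathbf{v}_e - D_e \nabla n_e\right)
  &= (\alpha - \eta)\, n_e |\mathbf{v}_e| - \beta_{ep}\, n_e n_p, \\
\frac{\partial n_p}{\partial t} + \nabla\!\cdot\!\left(n_p \mathbf{v}_p\right)
  &= \alpha\, n_e |\mathbf{v}_e| - \beta_{ep}\, n_e n_p - \beta_{np}\, n_n n_p, \\
\frac{\partial n_n}{\partial t} + \nabla\!\cdot\!\left(n_n \mathbf{v}_n\right)
  &= \eta\, n_e |\mathbf{v}_e| - \beta_{np}\, n_n n_p, \\
\nabla^2 \varphi &= -\frac{e\,(n_p - n_e - n_n)}{\varepsilon_0},
  \qquad \mathbf{E} = -\nabla\varphi .
\end{align}
```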

  7. [WMN: a negative ERPs component related to working memory during non-target visual stimuli processing].

    Science.gov (United States)

    Zhao, Lun; Wei, Jin-he

    2003-10-01

    To study the processing of non-target stimuli in the brain, features of the event-related potentials (ERPs) elicited by non-target stimuli during a selective response (SR) task were compared with those during a visual selective discrimination (DR) task in 26 normal subjects. The stimuli consisted of two colored LED flashes (red and green) that appeared randomly in the left (LVF) or right (RVF) visual field with equal probability. ERPs were recorded at 9 electrode sites on the scalp under 2 task conditions: a) SR, making a switch response in one direction to target (T) stimuli from the LVF or RVF and making no response to non-target (NT) stimuli; b) DR, making switch responses to T stimuli differentially, i.e., to the left for T from the LVF and to the right for T from the RVF. The results showed that: 1) compared with the SR condition, non-target stimuli in the DR condition elicited smaller P2 and P3 components and a larger N2 component over the frontal brain areas; 2) a significant negative component, named the working memory negativity (WMN), appeared in the non-target ERPs during DR in the period 100 to 700 ms post-stimulus and was predominant over the frontal brain areas. Given the major difference between brain activities for non-target stimuli during SR and DR, the predominance of the WMN over the frontal brain areas demonstrates that non-target stimulus processing is an active process related to working memory, i.e., the temporary elimination and retrieval of the response mode stored in working memory.

  8. Kinetic Modeling of Synthetic Wastewater Treatment by the Moving-bed Sequential Continuous-inflow Reactor (MSCR

    Directory of Open Access Journals (Sweden)

    Mohammadreza Khani

    2016-11-01

    Full Text Available The objective of the present study was to conduct a kinetic modeling of a Moving-bed Sequential Continuous-inflow Reactor (MSCR) and to develop its best prediction model. For this purpose, an MSCR consisting of an aerobic-anoxic pilot 50 L in volume and an anaerobic pilot of 20 L was prepared. The MSCR was fed a variety of organic loads and operated at different hydraulic retention times (HRTs) using synthetic wastewater at input COD concentrations of 300 to 1000 mg/L with HRTs of 2 to 5 h. Based on the results and the best system operating conditions, the highest COD removal (98.6%) was obtained at COD = 500 mg/L. Three well-known models, the first-order, second-order, and Stover-Kincannon models, were utilized for the kinetic modeling of the reactor. Based on the kinetic analysis of organic removal, the Stover-Kincannon model was chosen for the kinetic modeling of the moving-bed biofilm. Given its advantageous properties in the satisfactory prediction of organic removal at different organic loads, this model is recommended for the design and operation of MSCR systems.
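
    For reference, the Stover-Kincannon model relates the substrate removal rate to the organic loading rate and is usually fitted in its linearized (double-reciprocal) form. The sketch below illustrates that fit; the reactor volume, flow rates and COD values are assumptions, not the study's measurements.

```python
# Sketch of fitting the linearized Stover-Kincannon model
#   V / (Q (S_i - S_e)) = (K_B / U_max) * V / (Q S_i) + 1 / U_max
# to influent/effluent COD data. Flow, volume and COD values are assumed
# for illustration; they are not the study's measurements.
import numpy as np

V = 50.0                                          # reactor volume, L
Q = np.array([25.0, 16.7, 12.5, 10.0])            # flow rate, L/h (HRT roughly 2-5 h)
S_in = np.array([300.0, 500.0, 800.0, 1000.0])    # influent COD, mg/L
S_out = np.array([15.0, 7.0, 40.0, 90.0])         # effluent COD, mg/L

x = V / (Q * S_in)                 # inverse organic loading rate
y = V / (Q * (S_in - S_out))       # inverse substrate removal rate
slope, intercept = np.polyfit(x, y, 1)

U_max = 1.0 / intercept            # maximum removal rate constant, mg/(L h)
K_B = slope * U_max                # saturation value constant, mg/(L h)
print(f"U_max = {U_max:.1f} mg/(L h), K_B = {K_B:.1f} mg/(L h)")
```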

  9. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    Full Text Available We define two sequential transforms on a function space Ca,b[0,T] induced by generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on Ca,b[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on Ca,b[0,T].

  10. Non-commutative standard model: model building

    CERN Document Server

    Chaichian, Masud; Presnajder, P

    2003-01-01

    A non-commutative version of the usual electro-weak theory is constructed. We discuss how to overcome the two major problems: (1) although we can have non-commutative U(n) (which we denote by U_*(n)) gauge theory, we cannot have non-commutative SU(n), and (2) the charges in non-commutative QED are quantized to just 0 and ±1. We show how the latter problem with charge quantization, as well as that with the gauge group, can be resolved by taking the U_*(3) x U_*(2) x U_*(1) gauge group and reducing the extra U(1) factors in an appropriate way. Then we proceed with building the non-commutative version of the standard model by specifying the proper representations for the entire particle content of the theory: the gauge bosons, the fermions and the Higgs. We also present the full action for the non-commutative standard model (NCSM). In addition, among several peculiar features of our model, we address the inherent CP violation and new neutrino interactions. (orig.)

  11. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme, decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  12. Predictive Modelling Risk Calculators and the Non Dialysis Pathway.

    Science.gov (United States)

    Robins, Jennifer; Katz, Ivor

    2013-04-16

    This guideline will review the current prediction models and survival/mortality scores available for decision making in patients with advanced kidney disease who are being considered for a non-dialysis treatment pathway. Risk prediction is gaining increasing attention with emerging literature suggesting improved patient outcomes through individualised risk prediction (1). Predictive models help inform the nephrologist and the renal palliative care specialists in their discussions with patients and families about suitability or otherwise of dialysis. Clinical decision making in the care of end stage kidney disease (ESKD) patients on a non-dialysis treatment pathway is currently governed by several observational trials (3). Despite the paucity of evidence based medicine in this field, it is becoming evident that the survival advantages associated with renal replacement therapy in these often elderly patients with multiple co-morbidities and limited functional status may be negated by loss of quality of life (7) (6), further functional decline (5, 8), increased complications and hospitalisations. This article is protected by copyright. All rights reserved.

  13. Sequential Administration of Carbon Nanotubes and Near Infrared Radiation for the Treatment of Gliomas

    Directory of Open Access Journals (Sweden)

    Tiago eSantos

    2014-07-01

    Full Text Available The objective was to use carbon nanotubes (CNTs) coupled with near infrared radiation (NIR) to induce hyperthermia, as a novel non-ionizing radiation treatment for the primary brain tumor glioblastoma multiforme (GBM). In this study we report the therapeutic potential of hyperthermia-induced thermal ablation using the sequential administration of carbon nanotubes and NIR. In vitro studies were performed using glioma tumor cell lines (U251, U87, LN229, T98G). Glioma cells were incubated with CNTs for 24 hours followed by exposure to NIR for 10 minutes. Glioma cells preferentially internalized CNTs, which, upon NIR exposure, generated heat, causing necrotic cell death. There were minimal effects on normal cells, which correlates with their minimal uptake of CNTs. Furthermore, this protocol caused cell death in glioma cancer stem cells, and in drug-resistant as well as drug-sensitive glioma cells. This sequential hyperthermia therapy was effective in vivo in the rodent tumor model, resulting in tumor shrinkage and no recurrence after only one treatment. In conclusion, this sequence of selective CNT administration followed by NIR activation provides a new approach to the treatment of glioma, particularly drug-resistant gliomas.

  14. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
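
    A minimal sketch of the kind of sequential test such controllers apply is shown below: a Wald sequential probability-ratio test on Poisson counts. The background and alarm-level rates, the error probabilities and the simulated count streams are assumptions for illustration only.

```python
# Minimal Wald sequential probability-ratio test (SPRT) on Poisson counts,
# the kind of transient test these controllers apply. The background and
# signal rates, error probabilities and simulated data are assumptions.
import math
import numpy as np

lam0, lam1 = 2.0, 6.0        # counts per interval: background vs. alarm level
alpha, beta = 0.001, 0.05    # tolerated false-alarm and missed-detection rates
A = math.log((1 - beta) / alpha)   # cross above -> declare alarm
B = math.log(beta / (1 - alpha))   # cross below -> declare background

def sprt(counts):
    llr = 0.0
    for i, k in enumerate(counts, start=1):
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)  # Poisson LLR increment
        if llr >= A:
            return "alarm", i
        if llr <= B:
            return "background", i
    return "undecided", len(counts)

rng = np.random.default_rng(1)
print(sprt(rng.poisson(lam1, size=50)))   # source present: should alarm quickly
print(sprt(rng.poisson(lam0, size=50)))   # background only: should clear
```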

  15. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    Science.gov (United States)

    2015-10-18

    model-based evidence. This work resolves cross-tag using three methods (a Z-test for dependent data, classical sequential analysis, and Brownian motion drift analysis). ... The two-facet model is used as the Inversion Model; it represents a three-axis stabilized satellite as two facets, namely a body ... In the sequential analysis, if ... is independent and has an approximately normal distribution, then Brownian motion drift analysis is used. ...

  16. A sequential vesicle pool model with a single release sensor and a Ca(2+)-dependent priming catalyst effectively explains Ca(2+)-dependent properties of neurosecretion

    DEFF Research Database (Denmark)

    Walter, Alexander M; da Silva Pinheiro, Paulo César; Verhage, Matthijs

    2013-01-01

    identified. We here propose a Sequential Pool Model (SPM), assuming a novel Ca(2+)-dependent action: a Ca(2+)-dependent catalyst that accelerates both forward and reverse priming reactions. While both models account for fast fusion from the Readily-Releasable Pool (RRP) under control of synaptotagmin-1...... the simultaneous changes in release rate and amplitude seen when mutating the SNARE-complex. Finally, it can account for the loss of fast- and the persistence of slow release in the synaptotagmin-1 knockout by assuming that the RRP is depleted, leading to slow and Ca(2+)-dependent fusion from the NRP. We conclude...... that the elusive 'alternative Ca(2+) sensor' for slow release might be the upstream priming catalyst, and that a sequential model effectively explains Ca(2+)-dependent properties of secretion without assuming parallel pools or sensors....

  17. Predictors of non-hookah smoking among high-school students based on the prototype/willingness model.

    Science.gov (United States)

    Abedini, Sedigheh; MorowatiSharifabad, MohammadAli; Chaleshgar Kordasiabi, Mosharafeh; Ghanbarnejad, Amin

    2014-01-01

    The aim of the study was to determine predictors of refraining from hookah smoking among high-school students in Bandar Abbas, southern Iran, based on the Prototype/Willingness model. This cross-sectional study with an analytic approach was performed on 240 high-school students selected by cluster random sampling. Data on demographics and the Prototype/Willingness Model constructs were acquired via a self-administered questionnaire. Data were analyzed using means, frequencies, correlations, and linear and logistic regression. Statistically significant determinants of the intention to refrain from hookah smoking were subjective norms, willingness, and attitude. The regression model indicated that the three items together explained 46.9% of the variance in the intention not to smoke hookah. Attitude and subjective norms predicted 36.0% of this variance. There was a significant relationship between the participants' negative prototype of hookah smokers and the willingness to refrain from hookah smoking (P=0.002). Willingness also predicted refraining from hookah smoking better than intention did (P<0.001). Designing interventions to increase negative prototypes of hookah smokers and to reduce situations and conditions that facilitate hookah smoking, such as easy access to tobacco products in cafés and on beaches, can be useful for hookah smoking prevention among adolescents.

  18. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    Science.gov (United States)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

    Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia up to 2015. The Tetanus Neonatorum data exhibit overdispersion and a large proportion of zero counts. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data exhibiting both overdispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, using NB and ZINB regression, and (2) to obtain the best model. The results of this study indicate that ZINB is better than NB regression, with a smaller AIC.
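
    A hedged sketch of the NB versus ZINB comparison is shown below, using statsmodels on simulated zero-inflated counts rather than the Tetanus Neonatorum data; the simulated covariate and the availability of ZeroInflatedNegativeBinomialP (statsmodels >= 0.9) are assumptions.

```python
# Sketch of the NB vs. ZINB comparison on simulated zero-inflated counts
# (the actual Tetanus Neonatorum data are not reproduced here). Assumes
# statsmodels >= 0.9 for ZeroInflatedNegativeBinomialP.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.3 + 0.5 * x)
counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # NB-distributed counts, mean mu
counts[rng.random(n) < 0.6] = 0                       # add structural (excess) zeros

nb = sm.NegativeBinomial(counts, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X).fit(disp=False, maxiter=200)

print("NB   AIC:", round(nb.aic, 1))
print("ZINB AIC:", round(zinb.aic, 1))   # the smaller AIC indicates the better model
```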

  19. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  20. ONC201 demonstrates anti-tumor effects in both triple negative and non-triple negative breast cancers through TRAIL-dependent and TRAIL-independent mechanisms

    OpenAIRE

    Ralff, Marie D.; Kline, Christina L.B.; Küçükkase, Ozan C; Wagner, Jessica; Lim, Bora; Dicker, David T.; Prabhu, Varun V.; Oster, Wolfgang; El-Deiry, Wafik S.

    2017-01-01

    Breast cancer is a major cause of cancer-related death. TRAIL has been of interest as a cancer therapeutic, but only a subset of triple negative breast cancers (TNBC) is sensitive to TRAIL. The small molecule ONC201 induces expression of TRAIL and its receptor DR5. ONC201 has entered clinical trials in advanced cancers. Here we show that ONC201 is efficacious against both TNBC and non-TNBC cells (n=13). A subset of TNBC and non-TNBC cells succumb to ONC201-induced cell death. In 2/8 TNBC cell...

  1. Separating conditional and unconditional cooperation in a sequential Prisoner’s Dilemma game

    Science.gov (United States)

    Mieth, Laura; Buchner, Axel

    2017-01-01

    Most theories of social exchange distinguish between two different types of cooperation, depending on whether or not cooperation occurs conditional upon the partner’s previous behaviors. Here, we used a multinomial processing tree model to distinguish between positive and negative reciprocity and cooperation bias in a sequential Prisoner’s Dilemma game. In Experiments 1 and 2, the facial expressions of the partners were varied to manipulate cooperation bias. In Experiment 3, an extinction instruction was used to manipulate reciprocity. The results confirm that people show a stronger cooperation bias when interacting with smiling compared to angry-looking partners, supporting the notion that a smiling facial expression in comparison to an angry facial expression helps to construe a situation as cooperative rather than competitive. Reciprocity was enhanced for appearance-incongruent behaviors, but only when participants were encouraged to form expectations about the partners’ future behaviors. Negative reciprocity was not stronger than positive reciprocity, regardless of whether expectations were manipulated or not. Experiment 3 suggests that people are able to ignore previous episodes of cheating as well as previous episodes of cooperation if these turn out to be irrelevant for predicting a partner’s future behavior. The results provide important insights into the mechanisms of social cooperation. PMID:29121671

  2. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    Science.gov (United States)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from
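
    The sketch below shows only the core non-negative matrix factorization step that INMF builds on, using scikit-learn on random placeholder data; the paper's physical constraints and informed (library-spectra-based) initialization are not reproduced, and the array sizes are arbitrary assumptions.

```python
# Bare-bones non-negative matrix factorization of a hyperspectral cube,
# illustrating only the core NMF step that INMF builds on (the paper's
# physical constraints and informed initialization are not reproduced).
# The "image" below is random placeholder data.
import numpy as np
from sklearn.decomposition import NMF

n_pixels, n_bands, n_sources = 1000, 87, 4
cube = np.random.default_rng(0).random((n_pixels, n_bands))   # stand-in for radiance spectra

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
abundances = model.fit_transform(cube)    # (n_pixels, n_sources), non-negative weights
endmembers = model.components_            # (n_sources, n_bands) source spectra

print(abundances.shape, endmembers.shape)
```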

  3. [Uncommon non-fermenting Gram-negative rods as pathogens of lower respiratory tract infection].

    Science.gov (United States)

    Juhász, Emese; Iván, Miklós; Pongrácz, Júlia; Kristóf, Katalin

    2018-01-01

    Glucose non-fermenting Gram-negative bacteria are ubiquitous environmental organisms. Most of them are identified as opportunistic, nosocomial pathogens in patients. Uncommon species are now identified accurately, mainly owing to the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) in clinical microbiology practice. Most of these uncommon non-fermenting rods are isolated from lower respiratory tract samples. Their significance in lower respiratory tract infections, as well as the rules for their testing, has not yet been clarified. The aim of this study was to review the clinical microbiological features of these bacteria, especially their role in lower respiratory tract infections and the antibiotic treatment options. Lower respiratory tract samples of 3589 patients collected over a four-year period (2013-2016) were analyzed retrospectively at Semmelweis University (Budapest, Hungary). Identification of bacteria was performed by MALDI-TOF MS, and antibiotic susceptibility was tested by the disk diffusion method. Stenotrophomonas maltophilia was found to be the second and Acinetobacter baumannii the third most common non-fermenting rod in lower respiratory tract samples, behind the most common Pseudomonas aeruginosa. The total number of uncommon non-fermenting Gram-negative isolates was 742. Twenty-three percent of the isolates were Achromobacter xylosoxidans. Besides Chryseobacterium, Rhizobium, Delftia, Elizabethkingia, Ralstonia and Ochrobactrum species, a few other uncommon species were identified among our isolates. Accurate identification of these species is essential, since most of them show intrinsic resistance to aminoglycosides. Resistance to ceftazidime, cefepime, piperacillin-tazobactam and carbapenems was also frequently observed. Ciprofloxacin, levofloxacin and trimethoprim-sulfamethoxazole were found to be the most effective antibiotic agents. Orv Hetil. 2018; 159(1): 23-30.

  4. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    Science.gov (United States)

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Prospectivity Modeling of Karstic Groundwater Using a Sequential Exploration Approach in Tepal Area, Iran

    Science.gov (United States)

    Sharifi, Fereydoun; Arab-Amiri, Ali Reza; Kamkar-Rouhani, Abolghasem; Yousefi, Mahyar; Davoodabadi-Farahani, Meysam

    2017-09-01

    The purpose of this study is water prospectivity modeling (WPM) to recognize karstic water-bearing zones through the analysis of geo-exploration data in the Kal-Qorno valley, located in the Tepal area, northern Iran. For this, a sequential exploration method was applied to geo-evidential data to delineate target areas for further exploration. In this regard, two major exploration phases, regional and local in scale, were performed. In the first phase, indicator geological features, structures and lithological units, were used to model groundwater prospectivity at a regional scale. In this phase, for karstic WPM, fuzzy lithological and structural evidence layers were generated and combined using fuzzy operators. After generating target areas using WPM, in the second phase geophysical surveys including gravimetry and geoelectrical resistivity were carried out on the recognized high-potential zones as a local-scale exploration. Finally, the results of the geophysical analyses in the second phase were used to select suitable drilling locations to access and extract karstic groundwater in the study area.
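
    As a hedged sketch of the fuzzy overlay step described above, the example below combines two fuzzy evidence layers with common fuzzy operators (AND, OR, algebraic product/sum and the gamma operator). The grids, the gamma value and the cutoff are illustrative assumptions, not the study's layers or thresholds.

```python
# Sketch of combining fuzzy lithological and structural evidence layers
# with common fuzzy overlay operators (AND, OR, and the gamma operator).
# The grids, the gamma value and the cutoff are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
litho = rng.random((100, 100))       # fuzzy membership: karst-prone lithology
struct = rng.random((100, 100))      # fuzzy membership: proximity to structures

fuzzy_and = np.minimum(litho, struct)
fuzzy_or = np.maximum(litho, struct)
product = litho * struct
algebraic_sum = 1.0 - (1.0 - litho) * (1.0 - struct)

gamma = 0.7
fuzzy_gamma = algebraic_sum**gamma * product**(1.0 - gamma)   # trade-off operator

prospective = fuzzy_gamma > 0.6      # candidate zones for local-scale surveys
print("high-potential cells:", int(prospective.sum()))
```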

  6. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
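
    A minimal simulation of irreversible RSA of dimers on a one-dimensional lattice is sketched below; the jamming coverage should approach the known value 1 - exp(-2) ≈ 0.8647. The lattice size and single-pass random-order scheme are implementation choices, not taken from the review.

```python
# Minimal random sequential adsorption (RSA) of dimers on a 1D lattice:
# bonds are attempted once each in a uniformly random order and filled
# irreversibly if both sites are empty, which is equivalent to RSA with
# random attempt times. The jamming coverage approaches 1 - exp(-2).
import numpy as np

def rsa_dimers(n_sites=100_000, seed=0):
    rng = np.random.default_rng(seed)
    occupied = np.zeros(n_sites, dtype=bool)
    for i in rng.permutation(n_sites - 1):      # i is the left site of a candidate dimer
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return occupied.mean()

print(f"jamming coverage ~ {rsa_dimers():.4f}")   # close to 0.8647
```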

  7. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  8. A locally conservative non-negative finite element formulation for anisotropic advective-diffusive-reactive systems

    Science.gov (United States)

    Mudunuru, M. K.; Shabouei, M.; Nakshatrala, K.

    2015-12-01

    Advection-diffusion-reaction (ADR) equations appear in various areas of life sciences, hydrogeological systems, and contaminant transport. Obtaining stable and accurate numerical solutions can be challenging as the underlying equations are coupled, nonlinear, and non-self-adjoint. Currently, there is neither a robust computational framework available nor a reliable commercial package known that can handle various complex situations. Herein, the objective of this poster presentation is to present a novel locally conservative non-negative finite element formulation that preserves the underlying physical and mathematical properties of a general linear transient anisotropic ADR equation. In the continuous setting, the governing equations for ADR systems possess various important properties. In general, these properties are not all inherited during finite difference, finite volume, and finite element discretizations. The objective of this poster presentation is twofold: First, we analyze whether existing numerical formulations (such as SUPG and GLS) and commercial packages provide physically meaningful values for the concentration of the chemical species for various realistic benchmark problems. Furthermore, we also quantify the errors incurred in satisfying the local and global species balance for two popular chemical kinetics schemes: CDIMA (chlorine dioxide-iodine-malonic acid) and BZ (Belousov-Zhabotinsky). Based on these numerical simulations, we show that SUPG and GLS produce unphysical values for the concentration of chemical species due to the violation of the non-negative constraint, contain spurious node-to-node oscillations, and have large errors in local and global species balance. Second, we propose a novel finite element formulation to overcome the above difficulties. The proposed locally conservative non-negative computational framework, based on low-order least-squares finite elements, is able to preserve these underlying physical and mathematical properties.

  9. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    Science.gov (United States)

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time point. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization indexR ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients, it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.

  10. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of mean 0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique, because instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response evaluated. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels

  11. Explaining mental health disparities for non-monosexual women: abuse history and risky sex, or the burdens of non-disclosure?

    Science.gov (United States)

    Persson, Tonje J; Pfaus, James G; Ryder, Andrew G

    2015-03-01

    Research has found that non-monosexual women report worse mental health than their heterosexual and lesbian counterparts. The reasons for these mental health discrepancies are unclear. This study investigated whether higher levels of child abuse and risky sexual behavior, and lower levels of sexual orientation disclosure, may help explain elevated symptoms of depression and anxiety among non-monosexual women. Participants included 388 women living in Canada (Mean age = 24.40, SD = 6.40, 188 heterosexual, 53 mostly heterosexual, 64 bisexual, 32 mostly lesbian, 51 lesbian) who filled out the Beck Depression and Anxiety Inventories as part of an online study running from April 2011 to February 2014. Participants were collapsed into non-monosexual versus monosexual categories. Non-monosexual women reported more child abuse, risky sexual behavior, less sexual orientation disclosure, and more symptoms of depression and anxiety than monosexual women. Statistical mediation analyses, using conditional process modeling, revealed that sexual orientation disclosure and risky sexual behavior uniquely, but not sequentially, mediated the relation between sexual orientation, depression and anxiety. Sexual orientation disclosure and risky sexual behavior were both associated with depression and anxiety. Childhood abuse did not moderate depression, anxiety, or risky sexual behavior. Findings indicate that elevated levels of risky sexual behavior and deflated levels of sexual orientation disclosure may in part explain mental health disparities among non-monosexual women. Results highlight potential targets for preventive interventions aimed at decreasing negative mental health outcomes for non-monosexual women, such as public health campaigns targeting bisexual stigma and the development of sex education programs for vulnerable sexual minority women, such as those defining themselves as bisexual, mostly heterosexual, or mostly lesbian. Copyright © 2014 Elsevier Ltd. All rights

  12. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
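
    The estimation engine that the sequential design wraps is the EnKF analysis step. A single stochastic (perturbed-observation) update on a toy linear problem is sketched below; the dimensions, observation operator and noise levels are assumptions, not the paper's unsaturated-flow setup.

```python
# One stochastic ensemble-Kalman-filter (EnKF) analysis step on a toy
# problem, i.e. the estimation engine that the sequential optimal design
# wraps. Dimensions, the linear observation operator H and noise levels
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs, n_ens = 10, 3, 100

ensemble = rng.normal(0.0, 1.0, size=(n_param, n_ens))   # prior parameter ensemble
H = rng.normal(size=(n_obs, n_param))                     # observation operator
R = 0.1 * np.eye(n_obs)                                   # observation error covariance
y = H @ rng.normal(0.0, 1.0, size=n_param)                # synthetic "true" observations

# ensemble anomalies and Kalman gain estimated from the ensemble
A = ensemble - ensemble.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# perturbed-observation update of every member
perturbed = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
analysis = ensemble + K @ (perturbed - H @ ensemble)

print("prior spread    ", ensemble.std(axis=1).mean().round(3))
print("posterior spread", analysis.std(axis=1).mean().round(3))
```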

  13. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
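
    A minimal value-iteration sketch for a tiny, invented MDP is given below to illustrate the sequential-decision machinery the tutorial describes; it is not the liver-transplantation model analyzed in the paper, and the states, actions, transition probabilities and rewards are arbitrary.

```python
# Value iteration on a tiny, made-up MDP (two states, two actions), just to
# illustrate sequential decision making under uncertainty; it is not the
# liver-transplantation model discussed in the abstract.
import numpy as np

# P[a][s, s'] = transition probability, R[a][s] = immediate reward
P = {
    "wait":  np.array([[0.9, 0.1], [0.0, 1.0]]),
    "treat": np.array([[0.0, 1.0], [0.0, 1.0]]),
}
R = {"wait": np.array([1.0, 0.0]), "treat": np.array([5.0, 0.0])}
gamma = 0.95  # discount factor

V = np.zeros(2)
for _ in range(1000):
    Q = {a: R[a] + gamma * P[a] @ V for a in P}          # action values
    V_new = np.maximum(Q["wait"], Q["treat"])            # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = ["wait" if Q["wait"][s] >= Q["treat"][s] else "treat" for s in range(2)]
print("optimal values:", V.round(2), "policy:", policy)
```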

  14. Quantum versus classical laws for sequential decay processes

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Omero, C.; Weber, T.

    1979-05-01

    The problem of the deviations of the quantum from the classical laws for the occupation numbers of the various levels in a sequential decay process is discussed in general. A factorization formula is obtained for the matrix elements of the complete Green function entering in the expression of the occupation numbers of the levels. Through this formula and using specific forms of the quantum non-decay probability for the single levels, explicit expressions for the occupation numbers of the levels are obtained and compared with the classical ones. (author)
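
    For reference, the classical law for the occupation numbers in a sequential decay chain is the set of rate equations below, with the familiar Bateman-type solution; the notation is generic and is only meant to indicate the classical benchmark against which the quantum expressions are compared.

```latex
% Classical rate equations for a sequential decay chain 1 -> 2 -> ... -> n
% (generic notation; N_k is the occupation number of level k and Gamma_k
% its decay rate), together with the Bateman-type solution for an initial
% population N_1(0) in the first level.
\begin{align}
\frac{dN_1}{dt} &= -\Gamma_1 N_1, \\
\frac{dN_k}{dt} &= \Gamma_{k-1} N_{k-1} - \Gamma_k N_k, \qquad k = 2,\dots,n, \\
N_k(t) &= N_1(0)\, \Gamma_1 \cdots \Gamma_{k-1}
  \sum_{j=1}^{k} \frac{e^{-\Gamma_j t}}
  {\prod_{\substack{i=1 \\ i \neq j}}^{k} (\Gamma_i - \Gamma_j)} .
\end{align}
```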

  15. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    Science.gov (United States)

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between

  16. Sequential feeding using whole wheat and a separate protein-mineral concentrate improved feed efficiency in laying hens.

    Science.gov (United States)

    Umar Faruk, M; Bouvarel, I; Même, N; Rideau, N; Roffidal, L; Tukur, H M; Bastianelli, D; Nys, Y; Lescoat, P

    2010-04-01

    The effect of feeding nutritionally different diets in sequential or loose-mix systems on the performance of laying hens was investigated from 16 to 46 wk of age. Equal proportions of whole wheat grain and protein-mineral concentrate (balancer diet) were fed either alternately (sequential) or together (loose-mix) to ISA Brown hens. The control was fed a complete layer diet conventionally. Each treatment was allocated 16 cages and each cage contained 5 birds. Light was provided 16 h daily (0400 to 2000 h). Feed offered was controlled (121 g/bird per d) and distributed twice (4 and 11 h after lights-on). In the sequential treatment, only wheat was fed at the first distribution, followed by balancer diet at the second distribution. In loose-mix, the 2 rations were mixed and fed together during the 2 distributions. Leftover feed was always removed before the next distribution. Sequential feeding reduced total feed intake when compared with loose-mix and control. It had lower wheat (-9 g/bird per d) but higher balancer (+1.7 g/bird per d) intakes than loose-mix. Egg production, egg mass, and egg weight were similar among treatments. This led to an improvement in efficiency of feed utilization in sequential compared with loose-mix and control (10 and 5%, respectively). Birds fed sequentially had lower calculated ME (kcal/bird per d) intake than those fed in loose-mix and control. Calculated CP (g/bird per d) intake was reduced in sequential compared with loose-mix and control. Sequentially fed hens were lighter in BW. However, they had heavier gizzard, pancreas, and liver. Similar liver lipid was observed among treatments. Liver glycogen was higher in loose-mix than the 2 other treatments. It was concluded that feeding whole wheat and balancer diet, sequentially or loosely mixed, had no negative effect on performance in laying hens. Thus, the 2 systems are alternatives to conventional feeding. The increased efficiency of feed utilization in sequential feeding is an added

  17. An exploratory path model of the relationships between positive and negative adaptation to cancer on quality of life among non-Hodgkin lymphoma survivors.

    Science.gov (United States)

    Bryant, Ashley Leak; Smith, Sophia K; Zimmer, Catherine; Crandell, Jamie; Jenerette, Coretta M; Bailey, Donald E; Zimmerman, Sheryl; Mayer, Deborah K

    2015-01-01

    Adaptation is an ongoing, cognitive process with continuous appraisal of the cancer experience by the survivor. This exploratory study tested a path model examining the personal (demographic, disease, and psychosocial) characteristics associated with quality of life (QOL) and whether or not adaptation to living with cancer may mediate these effects. This study employed path analysis to estimate adaptation to cancer. A cross-sectional sample of NHL survivors (N = 750) was used to test the model. Eligible participants were ≥ 18 years, at least 2 years post-diagnosis, and living with or without active disease. Sixty-eight percent of the variance was accounted for in QOL. The strongest effect (-0.596) was the direct effect of negative adaptation, approximately 3 times that of positive adaptation (0.193). The strongest demographic total effects on QOL were age and social support; each additional comorbidity was associated with a 0.309 standard deviation decline in QOL. There were no fully mediated effects through positive adaptation alone. Our exploratory findings support the coexistence of positive and negative adaptation perceptions as mediators of personal characteristics of the cancer experience. Negative adaptation can affect QOL in a positive way. Cancer survivorship is simultaneously shaped by both positive and negative adaptation, with future research and implications for practice aimed at improving QOL.

  18. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)]

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  19. Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER

    Energy Technology Data Exchange (ETDEWEB)

    Rauner, D.; Kurutz, U.; Fantz, U. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); AG Experimentelle Plasmaphysik, Universität Augsburg, 86135 Augsburg (Germany)]

    2015-04-08

    As the negative hydrogen ion density n(H⁻) is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, two different diagnostic methods can generally be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which induces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H⁻ is measured directly, however line-of-sight-averaged over the plasma length. At the ECR discharge HOMER, where H⁻ is produced in the plasma volume, laser photodetachment is applied as the standard method to measure n(H⁻). The additional application of CRDS provides the possibility to directly obtain absolute values of n(H⁻), thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled zero-dimensionally by balancing H⁻ production and destruction processes. The modelled densities are adapted to the absolute measurements of n(H⁻) via CRDS, allowing collisions of H⁻ with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H⁻ in the plasma volume at HOMER. Furthermore, the characteristic peak of n(H⁻) observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as n_e determines

  20. Sequential bargaining in a market with one seller and two different buyers

    DEFF Research Database (Denmark)

    Tranæs, Torben; Hendon, Ebbe

    1991-01-01

    A matching and bargaining model in a market with one seller and two buyers, differing only in their reservation price, is analyzed. No subgame perfect equilibrium exists for stationary strategies. We demonstrate the existence of inefficient equilibria in which the low buyer receives the good with large probability, even as friction becomes negligible. We investigate the relationship between the use of Nash and sequential bargaining. Nash bargaining seems applicable only when the sequential approach yields a unique stationary strategy subgame perfect equilibrium.

  1. Sequential bargaining in a market with one seller and two different buyers

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Tranæs, Torben

    1991-01-01

    A matching and bargaining model in a market with one seller and two buyers, differing only in their reservation price, is analyzed. No subgame perfect equilibrium exists for stationary strategies. We demonstrate the existence of inefficient equilibria in which the low buyer receives the good with large probability, even as friction becomes negligible. We investigate the relationship between the use of Nash and sequential bargaining. Nash bargaining seems applicable only when the sequential approach yields a unique stationary strategy subgame perfect equilibrium.

  2. Automatic associations with the sensory aspects of smoking: Positive in habitual smokers but negative in non-smokers

    OpenAIRE

    Huijding, Jorg; Jong, Peter

    2006-01-01

    To test whether pictorial stimuli that focus on the sensory aspects of smoking elicit different automatic affective associations in smokers than in non-smokers, 31 smoking and 33 non-smoking students completed a single target IAT. Explicit attitudes were assessed using a semantic differential. Automatic affective associations were positive in smokers but negative in non-smokers. Only automatic affective associations but not self-reported attitudes were significantly correlated wit...

  3. A technique to evaluate bone healing in non-human primates using sequential 99mTc-methylene diphosphonate scintigraphy

    International Nuclear Information System (INIS)

    Dormehl, I.C.

    1982-01-01

    The assessment of bone healing through sequential nuclear medical scintigraphy requires a method of consistent localization of the exact fracture area in each consecutive image as the study progresses. This is difficult when there is surrounding bone activity as in the early stages of trauma, and also if complications should set in. The image profile feature, available from most nuclear medical computer software, facilitates this procedure considerably, as is indicated in the present report on bone healing in baboons. Together with roentgenology and histology a 99mTc-MDP study was in this way successfully done on the healing of long bone fractures experimentally induced in non-human primates. Different surgical implants were used. The results indicated that 99mTc-MDP accurately reflects the physiological activity in bone. The time-activity curves obtained are presently being studied together with extensive histology, bearing possible clinical application in mind. (orig.)

  4. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  5. One-loop potential in the new string model with negative stiffness

    International Nuclear Information System (INIS)

    Kleinert, H.; Chervyakov, A.M.; Nesterenko, V.V.

    1996-01-01

    The color-electric flux tube between quarks has a finite thickness and therefore also a finite curvature stiffness. Contrary to the earlier rigid-string proposal of Polyakov and Kleinert, and motivated by the properties of a magnetic flux tube in a type-II superconductor, we put forward the hypothesis that the stiffness is negative. We set up and study the properties of an idealized string model with such negative stiffness. In contrast to the rigid string, the propagator in the new model has no unphysical pole. One-loop calculations show that the model generates an interquark potential which does not contain the square-root singularity even for moderate values of the negative stiffness. At large distances, the potential has the usual linearly rising term with the universal Lüscher correction.

  6. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    Science.gov (United States)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

    Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted by an alternative of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural balancing-based user-item clustering method with the sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.
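
    A minimal sketch of sequential co-cluster extraction from a user-item relation matrix is shown below. It uses a greedy rank-one (SVD-based) peeling step as a stand-in for the structural-balancing and fuzzy-cluster extraction used by the authors; the matrix and threshold are illustrative assumptions.

```python
import numpy as np

def extract_coclusters(R, n_clusters=2, thresh=0.3):
    """Greedy sequential extraction of user-item co-clusters from a
    rectangular relation matrix R (users x items, entries in {0,1}).
    Each pass takes the leading rank-one factor, thresholds it into a
    user set and an item set, then removes that block before the next pass."""
    R = R.astype(float).copy()
    clusters = []
    for _ in range(n_clusters):
        u, s, vt = np.linalg.svd(R, full_matrices=False)
        users = np.where(np.abs(u[:, 0]) > thresh)[0]
        items = np.where(np.abs(vt[0]) > thresh)[0]
        if len(users) == 0 or len(items) == 0:
            break
        clusters.append((users, items))
        R[np.ix_(users, items)] = 0.0   # peel off the extracted block
    return clusters

# Toy "liking" matrix with two obvious user-item blocks.
R = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 1, 1]])
for k, (users, items) in enumerate(extract_coclusters(R)):
    print(f"co-cluster {k}: users {users}, items {items}")
```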

  7. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  8. Plastic limit analysis with non-linear kinematic strain hardening for metalworking process applications

    International Nuclear Information System (INIS)

    Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan

    2011-01-01

    Limit analysis approaches are widely used in the analysis of metalworking processes; however, they have been applied only to perfectly plastic materials and, more recently, to isotropic hardening materials, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, the sequential limit analysis approach and the finite element method, our objective is to extend limit analysis to include linear and non-linear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as a framework for non-standard plasticity modeling. The sequential limit analysis procedure, which treats plastic behavior with non-linear kinematic strain hardening as a succession of perfectly plastic behaviors with yield surfaces updated after each sequence of limit analysis and geometry updating, is applied. A standard kinematic finite element method together with a regularization approach is used to perform two large-compression cases (cold forging) under plane-strain and axisymmetric conditions.

  9. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    Science.gov (United States)

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  10. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. There was also no significant difference in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  11. Impact of disguise on identification decision and confidence with simultaneous and sequential lineups

    OpenAIRE

    Mansour, Jamal K; Beaudry, J L; Bertrand, M I; Kalmet, N; Melsom, E; Lindsay, R C L

    2012-01-01

    Prior research indicates that disguise negatively affects lineup identifications, but the mechanisms by which disguise works have not been explored, and different disguises have not been compared. In two experiments (Ns = 87 and 91) we manipulated degree of coverage by two different types of disguise: a stocking mask or sunglasses and toque (i.e., knitted hat). Participants viewed mock-crime videos followed by simultaneous or sequential lineups. Disguise and lineup type did not interact. In s...

  12. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    Science.gov (United States)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal
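
    The contrast between a constant (non-stochastic) estimate and a process-noise filter can be illustrated with a scalar Kalman filter tracking a slowly drifting parameter. The drift rate, noise levels, and observation model below are hypothetical, not the LAGEOS dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical slowly varying geodynamic parameter observed with noise.
n = 200
truth = 1.0 + 0.002 * np.arange(n)           # slow temporal drift
obs = truth + rng.normal(0.0, 0.05, n)       # noisy observations

# Constant (non-stochastic) estimate over the whole batch: a single mean.
const_est = np.full(n, obs.mean())

# Scalar Kalman filter with process noise q, allowing the estimate to drift.
q, r = 1e-5, 0.05**2
x, p = obs[0], 1.0
kf_est = np.empty(n)
for k in range(n):
    p = p + q                                # predict: parameter may have drifted
    K = p / (p + r)                          # Kalman gain
    x = x + K * (obs[k] - x)                 # update with the new observation
    p = (1.0 - K) * p
    kf_est[k] = x

print("RMS error, constant estimate  :", np.sqrt(np.mean((const_est - truth) ** 2)).round(4))
print("RMS error, process-noise filter:", np.sqrt(np.mean((kf_est - truth) ** 2)).round(4))
```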

  13. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.

    Science.gov (United States)

    Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Research into self-control that is based on the sequential-task methodology is currently at an impasse. The sequential-task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects; in the fifth study the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  14. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm

    Directory of Open Access Journals (Sweden)

    Madeleine M. Arber

    2017-09-01

    Full Text Available Research into self-control that is based on the sequential-task methodology is currently at an impasse. The sequential-task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects; in the fifth study the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  15. A multiple relevance feedback strategy with positive and negative models.

    Directory of Open Access Journals (Sweden)

    Yunlong Ma

    Full Text Available A commonly used strategy to improve search accuracy is the use of feedback techniques. Most existing work on feedback relies on positive information and has been extensively studied in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from a few positive documents. Therefore, the positive feedback strategy is incapable of improving retrieval in this situation. In contrast, there is a relatively large number of negative documents near the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful way to adapt to this scenario. In this paper, we consider a scenario in which the search results are so poor that there are at most three relevant documents in the top twenty. We then conduct a novel study of multiple strategies for relevance feedback, using both positive and negative examples from the first-pass retrieval to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple-model feedback method is generally more effective than both the baseline method and methods using only a positive or a negative model.
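
    As an illustration of feedback that uses both positive and negative examples, the sketch below applies a classic Rocchio-style vector update rather than the language-model-based method proposed in the record; the term weights and coefficients are illustrative assumptions.

```python
import numpy as np

def rocchio_feedback(q, pos_docs, neg_docs, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style query update using both positive and negative feedback
    documents (all given as term-weight vectors of equal length)."""
    q_new = alpha * q
    if len(pos_docs):
        q_new += beta * np.mean(pos_docs, axis=0)
    if len(neg_docs):
        q_new -= gamma * np.mean(neg_docs, axis=0)
    return np.clip(q_new, 0.0, None)   # keep term weights non-negative

# Toy 5-term vocabulary; one relevant and two non-relevant top-ranked documents.
q = np.array([1.0, 0.0, 0.0, 0.5, 0.0])
pos = np.array([[0.9, 0.1, 0.0, 0.8, 0.0]])
neg = np.array([[0.1, 0.9, 0.7, 0.0, 0.0],
                [0.0, 0.8, 0.9, 0.1, 0.0]])
print(rocchio_feedback(q, pos, neg).round(3))
```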

  16. Modeling two-phase ferroelectric composites by sequential laminates

    International Nuclear Information System (INIS)

    Idiart, Martín I

    2014-01-01

    Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)

  17. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  18. Negative frequencies in wave propagation: A microscopic model

    Science.gov (United States)

    Horsley, S. A. R.; Bugler-Lamb, S.

    2016-06-01

    A change in the sign of the frequency of a wave between two inertial reference frames corresponds to a reversal of the phase velocity. Yet from the point of view of the relation E = ℏω, a positive quantum of energy apparently becomes a negative-energy one. This is physically distinct from a change in the sign of the wave vector and can be associated with various effects such as Cherenkov radiation, quantum friction, and the Hawking effect. In this work we provide a more detailed understanding of these negative-frequency modes based on a simple microscopic model of a dielectric medium as a lattice of scatterers. We calculate the classical and quantum mechanical radiation damping of an oscillator moving through such a lattice and find that the modes where the frequency has changed sign contribute negatively. In terms of the lattice of scatterers we find that this negative radiation damping arises due to the phase of the periodic force experienced by the oscillator due to the relative motion of the lattice.

  19. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  20. Simultaneous and sequential transfer of proton and alpha-particle in the elastic 11B+16O scattering

    International Nuclear Information System (INIS)

    Kamys, B.; Rudy, Z.; Kisiel, J.; Kwasniewicsz, E.; Wolter, H.H.

    1992-01-01

    We have developed a method to treat multi-nucleon transfer as the transfer of two, possibly different, subclusters, e.g. '5Li' = (α,p). As a consequence we take into account two reaction mechanisms, the one-step simultaneous and the two-step sequential transfer of the two clusters. We formulate the method of calculation of the simultaneous transfer form factor for two non-identical particles and also of the two-cluster spectroscopic amplitudes from shell-model wave functions. We apply the method to the elastic transfer reaction 11B(16O, 11B)16O together with the single α and p transfer reaction 11B(16O, 15N)12C for E_lab between 30 and 60 MeV. We obtain a consistently good description of all the data by reasonable adjustment of the spectroscopic amplitudes. In particular we find that the simultaneous (αp) transfer is considerably more important than the sequential transfer, indicating strong five-nucleon correlations in these light nuclei. (orig.)

  1. Motion sickness: a negative reinforcement model.

    Science.gov (United States)

    Bowins, Brad

    2010-01-15

    Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.

  2. [Optimization and Prognosis of Cell Radiosensitivity Enhancement in vitro and in vivo after Sequential Thermoradiactive Action].

    Science.gov (United States)

    Belkina, S V; Petin, V G

    2016-01-01

    A previously developed mathematical model of the simultaneous action of two inactivating agents has been adapted and tested to describe the results of sequential action. The possibility of applying the mathematical model to the interpretation and prognosis of the increase in radiosensitivity of tumor cells as well as of mammalian cells after the sequential action of two high temperatures, or of hyperthermia and ionizing radiation, is analyzed. The model predicts the value of the thermal enhancement ratio depending on the duration of thermal exposure, its greatest value, and the condition under which it is achieved.

  3. Indications of de Sitter spacetime from classical sequential growth dynamics of causal sets

    International Nuclear Information System (INIS)

    Ahmed, Maqbool; Rideout, David

    2010-01-01

    A large class of the dynamical laws for causal sets described by a classical process of sequential growth yields a cyclic universe, whose cycles of expansion and contraction are punctuated by single 'origin elements' of the causal set. We present evidence that the effective dynamics of the immediate future of one of these origin elements, within the context of the sequential growth dynamics, yields an initial period of de Sitter-like exponential expansion, and argue that the resulting picture has many attractive features as a model of the early universe, with the potential to solve some of the standard model puzzles without any fine-tuning.

  4. Label-Informed Non-negative Matrix Factorization with Manifold Regularization for Discriminative Subnetwork Detection.

    Science.gov (United States)

    Watanabe, Takanori; Tunc, Birkan; Parker, Drew; Kim, Junghoon; Verma, Ragini

    2016-10-01

    In this paper, we present a novel method for obtaining a low dimensional representation of a complex brain network that: (1) can be interpreted in a neurobiologically meaningful way, (2) emphasizes group differences by accounting for label information, and (3) captures the variation in disease subtypes/severity by respecting the intrinsic manifold structure underlying the data. Our method is a supervised variant of non-negative matrix factorization (NMF), and achieves dimensionality reduction by extracting an orthogonal set of subnetworks that are interpretable, reconstructive of the original data, and also discriminative at the group level. In addition, the method includes a manifold regularizer that encourages the low dimensional representations to be smooth with respect to the intrinsic geometry of the data, allowing subjects with similar disease-severity to share similar network representations. While the method is generalizable to other types of non-negative network data, in this work we have used structural connectomes (SCs) derived from diffusion data to identify the cortical/subcortical connections that have been disrupted in an abnormal neurological state. Experiments on a traumatic brain injury (TBI) dataset demonstrate that our method can identify subnetworks that can reliably classify TBI from controls and also reveal insightful connectivity patterns that may be indicative of a biomarker.
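
    A rough sketch of the manifold-regularization idea is given below using graph-regularized NMF multiplicative updates; it omits the label-informed (supervised) and orthogonality terms of the method in the record, and the data matrix and sample graph are toy assumptions.

```python
import numpy as np

def graph_regularized_nmf(X, A, k=2, lam=0.1, n_iter=300, eps=1e-9):
    """Multiplicative updates for graph-regularized NMF:
    minimize ||X - W H||_F^2 + lam * tr(H L H^T), with L = D - A the graph
    Laplacian over samples (columns of X). All factors stay non-negative."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Toy connectome-like data: 6 samples (columns), adjacency linking similar samples.
X = np.random.default_rng(1).random((8, 6))
A = np.eye(6, k=1) + np.eye(6, k=-1)          # chain graph over samples
W, H = graph_regularized_nmf(X, A, k=2)
print("reconstruction error:", np.linalg.norm(X - W @ H).round(3))
```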

  5. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real-time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm's performance is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels, in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.
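
    The early-stopping idea can be sketched with a simplified sequential test on sampled pixels. It replaces the Bayesian finite-population framework of the record with a Hoeffding confidence bound, and the patch sizes, threshold, and noise level are illustrative assumptions.

```python
import numpy as np

def sequential_match(patch, template, thresh=0.2, delta=0.05, max_samples=500):
    """Decide whether two equally sized patches are 'similar' (Hamming-distance
    fraction below `thresh`) by sampling pixels one at a time and stopping early
    once a Hoeffding confidence bound settles the question."""
    positions = np.random.default_rng(0).permutation(patch.size)[:max_samples]
    mismatches = 0
    for n, pos in enumerate(positions, start=1):
        mismatches += int(patch.flat[pos] != template.flat[pos])
        p_hat = mismatches / n
        margin = np.sqrt(np.log(2.0 / delta) / (2.0 * n))   # Hoeffding bound
        if p_hat + margin < thresh:
            return True, n          # confidently similar
        if p_hat - margin > thresh:
            return False, n         # confidently dissimilar
    return p_hat <= thresh, len(positions)

rng = np.random.default_rng(1)
template = rng.integers(0, 2, (48, 48))
noisy_copy = template.copy()
flip = rng.random(template.shape) < 0.05      # 5% of pixels corrupted
noisy_copy[flip] = 1 - noisy_copy[flip]
print(sequential_match(noisy_copy, template))                     # similar, few samples
print(sequential_match(rng.integers(0, 2, (48, 48)), template))   # dissimilar, even fewer
```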

  6. Sequential Coherence in Sentence Pairs Enhances Imagery during Comprehension: An Individual Differences Study.

    Directory of Open Access Journals (Sweden)

    Carol Madden-Lombardi

    Full Text Available The present study investigates how sequential coherence in sentence pairs (events in sequence vs. unrelated events) affects the perceived ability to form a mental image of the sentences for both auditory and visual presentations. In addition, we investigated how the ease of event imagery affected online comprehension (word reading times) in the case of sequentially coherent and incoherent sentence pairs. Two groups of comprehenders were identified based on their self-reported ability to form vivid mental images of described events. Imageability ratings were higher and faster for pairs of sentences that described events in coherent sequences rather than non-sequential events, especially for high imagers. Furthermore, reading times on individual words suggested different comprehension patterns with respect to sequence coherence for the two groups of imagers, with high imagers activating richer mental images earlier than low imagers. The present results offer a novel link between research on imagery and discourse coherence, with specific contributions to our understanding of comprehension patterns for high and low imagers.

  7. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick; Wendt, Fabian; Musial, Walter; Finucane, Z.; Hulliger, L.; Chilka, S.; Dolan, D.; Cushing, J.; O'Connell, D.; Falk, S.

    2017-06-19

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  8. Modeling the Relationship between Trauma and Psychological Distress among HIV-Positive and HIV-Negative Women.

    Science.gov (United States)

    Brumsey, Ayesha Delany; Joseph, Nataria T; Myers, Hector F; Ullman, Jodie B; Wyatt, Gail E

    2013-01-01

    This study investigated the association between cumulative exposure to multiple traumatic events and psychological distress, as mediated by problematic substance use and impaired psychosocial resources. A sample of HIV-positive and HIV-negative women was assessed for a history of childhood and adult sexual abuse and non-sexual trauma as predictors of psychological distress (i.e., depression, non-specific anxiety, and posttraumatic stress), as mediated by problematic alcohol and drug use and psychosocial resources (i.e., social support, self-esteem and optimism). Structural equation modeling confirmed that cumulative trauma exposure is positively associated with greater psychological distress, and that this association is partially mediated through impaired psychosocial resources. However, although cumulative trauma was associated with greater problematic substance use, substance use did not mediate the relationship between trauma and psychological distress.

  9. Assessing appetitive, aversive, and negative ethanol-mediated reinforcement through an immature rat model.

    Science.gov (United States)

    Pautassi, Ricardo M; Nizhnikov, Michael E; Spear, Norman E

    2009-06-01

    The motivational effects of drugs play a key role during the transition from casual use to abuse and dependence. Ethanol reinforcement has been successfully studied through Pavlovian and operant conditioning in adult rats and mice genetically selected for their ready acceptance of ethanol. Another model for studying ethanol reinforcement is the immature (preweanling) rat, which consumes ethanol and exhibits the capacity to process tactile, odor and taste cues and transfer information between different sensorial modalities. This review describes the motivational effects of ethanol in preweanling, heterogeneous non-selected rats. Preweanlings exhibit ethanol-mediated conditioned taste avoidance and conditioned place aversion. Ethanol's appetitive effects, however, are evident when using first- and second-order conditioning and operant procedures. Ethanol also devalues the motivational representation of aversive stimuli, suggesting early negative reinforcement. It seems that preweanlings are highly sensitive not only to the aversive motivational effects of ethanol but also to its positive and negative (anti-anxiety) reinforcement potential. The review underscores the advantages of using a developing rat to evaluate alcohol's motivational effects.

  10. Prosody and alignment: a sequential perspective

    Science.gov (United States)

    Szczepek Reed, Beatrice

    2010-12-01

    In their analysis of a corpus of classroom interactions in an inner-city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. A lack of prosodic matching, and other specific prosodic patterns, are interpreted as features of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of 'positive', or aligning, actions.

  11. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
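
    For reference, a baseline bootstrap particle filter for a one-dimensional state-space model is sketched below; the bridging scheme with intermediate weighting and resampling proposed in the record builds on this standard SMC loop. The model and noise levels are toy assumptions.

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=1000, q=0.1, r=0.05):
    """Baseline bootstrap particle filter for a 1-D random-walk state with
    Gaussian observation noise: propagate, weight by the likelihood, resample."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, q, n_particles)    # propagate
        logw = -0.5 * ((y - particles) / r) ** 2                    # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)             # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, 50))
obs = truth + rng.normal(0.0, 0.05, 50)
est = bootstrap_particle_filter(obs)
print("RMS filtering error:", np.sqrt(np.mean((est - truth) ** 2)).round(3))
```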

  12. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction, has received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is mining closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...

  13. Dissecting the regulatory microenvironment of a large animal model of non-Hodgkin lymphoma: evidence of a negative prognostic impact of FOXP3+ T cells in canine B cell lymphoma.

    Directory of Open Access Journals (Sweden)

    Dammy Pinheiro

    Full Text Available The cancer microenvironment plays a pivotal role in oncogenesis, containing a number of regulatory cells that attenuate the anti-neoplastic immune response. While the negative prognostic impact of regulatory T cells (Tregs) in the context of most solid tissue tumors is well established, their role in lymphoid malignancies remains unclear. T cells expressing FOXP3 and Helios were documented in the fine needle aspirates of affected lymph nodes of dogs with spontaneous multicentric B cell lymphoma (BCL), proposed to be a model for human non-Hodgkin lymphoma. Multivariable analysis revealed that the frequency of lymph node FOXP3(+) T cells was an independent negative prognostic factor, impacting both progression-free survival (hazard ratio 1.10; p = 0.01) and overall survival (hazard ratio 1.61; p = 0.01) when comparing dogs showing higher than the median FOXP3 expression with those showing the median value of FOXP3 expression or less. Taken together, these data suggest the existence of a population of Tregs operational in canine multicentric BCL that resembles thymic Tregs, which we speculate are co-opted by the tumor from the periphery. We suggest that canine multicentric BCL represents a robust large animal model of human diffuse large BCL, showing clinical, cytological and immunophenotypic similarities with the disease in man, allowing comparative studies of immunoregulatory mechanisms.

  14. Coding chaotic billiards: I-Non-Compact billiards on a negative curvature manifold

    International Nuclear Information System (INIS)

    Giannoni, M.J.; Ullmo, D.

    1989-03-01

    This paper presents a method for coding billiards. The main device is to use a proper surface of section, the bounce mapping, and to foliate the reduced phase space into regions associated with a given code. The alphabet is merely the ensemble of the labels of the sides of the billiard. The procedure is applied here to a non-compact polygonal billiard defined on a manifold of constant negative curvature, with all vertices at infinity. A simple grammar rule is necessary and sufficient to ensure the existence and uniqueness of the coding.

  15. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach that combines Bayesian inference with physics-model-based signal processing, based on the representation of a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
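
    The sequential likelihood ratio test at the core of such a detector can be sketched as a Wald SPRT on a stream of Poisson counts; the rates, error levels, and count model below are illustrative assumptions, not the patented EMS processing chain.

```python
import numpy as np

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on a stream of Poisson counts:
    H0 background rate lam0 vs. H1 source-present rate lam1. Returns the
    decision and the number of samples consumed."""
    A = np.log((1 - beta) / alpha)      # accept H1 above this log-likelihood ratio
    B = np.log(beta / (1 - alpha))      # accept H0 below this
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        llr += k * np.log(lam1 / lam0) - (lam1 - lam0)   # Poisson log-LR increment
        if llr >= A:
            return "source detected", n
        if llr <= B:
            return "background only", n
    return "undecided", len(counts)

rng = np.random.default_rng(0)
background = rng.poisson(2.0, 200)       # counts per time bin, background only
with_source = rng.poisson(3.5, 200)      # elevated rate when a source is present
print(sprt_poisson(background, lam0=2.0, lam1=3.5))
print(sprt_poisson(with_source, lam0=2.0, lam1=3.5))
```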

  16. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
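
    The speed-accuracy adaptation that a sequential sampling model captures can be sketched with a simple random-walk (diffusion) accumulator; the drift, noise, and thresholds below are hypothetical, not fitted to the conflict-detection data.

```python
import numpy as np

def relative_judgment_trial(drift, threshold, noise=1.0, dt=0.01, max_t=5.0, rng=None):
    """Single trial of a random-walk (diffusion) evidence accumulator: evidence
    drifts toward the correct response and a decision is made when either
    boundary (+threshold / -threshold) is crossed. Returns choice
    (1 = correct, 0 = error) and decision time."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return int(x >= threshold), t

rng = np.random.default_rng(0)
# Raising the threshold trades speed for accuracy, the adaptation the model infers.
for threshold in (0.5, 1.5):
    trials = [relative_judgment_trial(drift=0.8, threshold=threshold, rng=rng)
              for _ in range(2000)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"threshold {threshold}: accuracy {acc:.2f}, mean decision time {rt:.2f}s")
```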

  17. Instagram Unfiltered: Exploring Associations of Body Image Satisfaction, Instagram #Selfie Posting, and Negative Romantic Relationship Outcomes.

    Science.gov (United States)

    Ridgway, Jessica L; Clayton, Russell B

    2016-01-01

    The purpose of this study was to examine the predictors and consequences associated with Instagram selfie posting. Thus, this study explored whether body image satisfaction predicts Instagram selfie posting and whether Instagram selfie posting is then associated with Instagram-related conflict and negative romantic relationship outcomes. A total of 420 Instagram users aged 18 to 62 years (M = 29.3, SD = 8.12) completed an online survey questionnaire. Analysis of a serial multiple mediator model using bootstrapping methods indicated that body image satisfaction was sequentially associated with increased Instagram selfie posting and Instagram-related conflict, which related to increased negative romantic relationship outcomes. These findings suggest that when Instagram users promote their body image satisfaction in the form of Instagram selfie posts, risk of Instagram-related conflict and negative romantic relationship outcomes might ensue. Findings from the current study provide a baseline understanding to potential and timely trends regarding Instagram selfie posting.
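
    The serial (sequential) indirect effect estimated by such a model can be sketched with ordinary least squares and a bootstrap confidence interval; the simulated variables and effect sizes below are illustrative assumptions, not the survey data.

```python
import numpy as np

def serial_indirect_effect(x, m1, m2, y):
    """Estimate the serial indirect effect X -> M1 -> M2 -> Y as the product
    of the three path coefficients from OLS fits (covariates omitted for brevity)."""
    a = np.polyfit(x, m1, 1)[0]                                  # X -> M1
    # M1 -> M2 controlling for X
    d = np.linalg.lstsq(np.column_stack([x, m1, np.ones_like(x)]), m2, rcond=None)[0][1]
    # M2 -> Y controlling for X and M1
    b = np.linalg.lstsq(np.column_stack([x, m1, m2, np.ones_like(x)]), y, rcond=None)[0][2]
    return a * d * b

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                  # e.g. body image satisfaction (simulated)
m1 = 0.5 * x + rng.normal(size=n)       # e.g. selfie posting
m2 = 0.6 * m1 + rng.normal(size=n)      # e.g. Instagram-related conflict
y = 0.4 * m2 + rng.normal(size=n)       # e.g. negative relationship outcomes

boot = [serial_indirect_effect(*(arr[idx] for arr in (x, m1, m2, y)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```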

  18. ONC201 Demonstrates Antitumor Effects in Both Triple-Negative and Non-Triple-Negative Breast Cancers through TRAIL-Dependent and TRAIL-Independent Mechanisms.

    Science.gov (United States)

    Ralff, Marie D; Kline, Christina L B; Küçükkase, Ozan C; Wagner, Jessica; Lim, Bora; Dicker, David T; Prabhu, Varun V; Oster, Wolfgang; El-Deiry, Wafik S

    2017-07-01

    Breast cancer is a major cause of cancer-related death. TNF-related apoptosis-inducing ligand (TRAIL) has been of interest as a cancer therapeutic, but only a subset of triple-negative breast cancers (TNBC) is sensitive to TRAIL. The small-molecule ONC201 induces expression of TRAIL and its receptor DR5. ONC201 has entered clinical trials in advanced cancers. Here, we show that ONC201 is efficacious against both TNBC and non-TNBC cells (n = 13). A subset of TNBC and non-TNBC cells succumbs to ONC201-induced cell death. In 2 of 8 TNBC cell lines, ONC201 treatment induces caspase-8 cleavage and cell death that is blocked by TRAIL-neutralizing antibody RIK2. The proapoptotic effect of ONC201 translates to in vivo efficacy in the MDA-MB-468 xenograft model. In most TNBC lines tested (6/8), ONC201 has an antiproliferative effect but does not induce apoptosis. ONC201 decreases cyclin D1 expression and causes an accumulation of cells in the G1 phase of the cell cycle. pRb expression is associated with sensitivity to the antiproliferative effects of ONC201, and the compound synergizes with taxanes in less sensitive cells. All non-TNBC cells (n = 5) are growth inhibited following ONC201 treatment, and unlike what has been observed with TRAIL, a subset (n = 2) shows PARP cleavage. In these cells, cell death induced by ONC201 is TRAIL independent. Our data demonstrate that ONC201 has potent antiproliferative and proapoptotic effects in a broad range of breast cancer subtypes, through TRAIL-dependent and TRAIL-independent mechanisms. These findings develop a preclinical rationale for developing ONC201 as a single agent and/or in combination with approved therapies in breast cancer. Mol Cancer Ther; 16(7); 1290-8. ©2017 American Association for Cancer Research (AACR).

  19. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests show that our constraints improve the performance of both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves the recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
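
    A minimal sketch of one constrained variant, weighted NMF with an observation mask (the usual recommender setting with missing ratings), is given below; the rating matrix and rank are toy assumptions and the updates are the standard multiplicative rules rather than the authors' general constrained framework.

```python
import numpy as np

def weighted_nmf(X, M, k=2, n_iter=500, eps=1e-9):
    """Multiplicative updates for weighted NMF: only observed entries (M == 1)
    contribute to the Frobenius loss. Factors stay non-negative by construction
    of the multiplicative rule."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(n_iter):
        W *= ((M * X) @ H.T) / ((M * (W @ H)) @ H.T + eps)
        H *= (W.T @ (M * X)) / (W.T @ (M * (W @ H)) + eps)
    return W, H

# Toy ratings matrix with missing entries (weight 0 in the mask M).
X = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
M = (X > 0).astype(float)
W, H = weighted_nmf(X, M, k=2)
print("predicted ratings:\n", (W @ H).round(1))
```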

  20. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870+ Q- (P870+ is the oxidized, primary electron donor, a bacteriochlorophyll special pair, and Q- is the reduced, primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870+ I- (I- is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (1H or 2H) on P870, I and Q affects the ESP of the EPR spectrum of P870+ Q-, observed at two different microwave frequencies, in Fe2+-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870+ Q- radical pair interactions are the dominant source of ESP production in 2H bacterial reaction centers.

  1. Sequential nonadiabatic excitation of large molecules and ions driven by strong laser fields

    International Nuclear Information System (INIS)

    Markevitch, Alexei N.; Levis, Robert J.; Romanov, Dmitri A.; Smith, Stanley M.; Schlegel, H. Bernhard; Ivanov, Misha Yu.

    2004-01-01

    Electronic processes leading to dissociative ionization of polyatomic molecules in strong laser fields are investigated experimentally, theoretically, and numerically. Using time-of-flight ion mass spectroscopy, we study the dependence of fragmentation on laser intensity for a series of related molecules and report regular trends in this dependence on the size, symmetry, and electronic structure of a molecule. Based on these data, we develop a model of dissociative ionization of polyatomic molecules in intense laser fields. The model is built on three elements: (i) nonadiabatic population transfer from the ground electronic state to the excited-state manifold via a doorway (charge-transfer) transition; (ii) exponential enhancement of this transition by collective dynamic polarization of all electrons, and (iii) sequential energy deposition in both neutral molecules and resulting molecular ions. The sequential nonadiabatic excitation is accelerated by a counterintuitive increase of a large molecule's polarizability following its ionization. The generic theory of sequential nonadiabatic excitation forms a basis for quantitative description of various nonlinear processes in polyatomic molecules and ions in strong laser fields

  2. Economic analysis of price premiums in the presence of non-convexities. Evidence from German electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Paschmann, Martin

    2017-11-15

    Analyzing price data from sequential German electricity markets, namely the day-ahead and intraday auction, a puzzling but apparently systematic pattern of price premiums can be identified. The price premiums are highly correlated with the underlying demand profile. As there is evidence that widespread models for electricity forward premiums are not applicable to the market dynamics under analysis, a theoretical model is developed within this article which reveals that non-convexities in only a subset of sequential markets with differing product granularity may cause systematic price premiums at equilibrium. These price premiums may be bidirectional and reflect a value for additional short-term power supply system flexibility.

  3. Economic analysis of price premiums in the presence of non-convexities. Evidence from German electricity markets

    International Nuclear Information System (INIS)

    Paschmann, Martin

    2017-01-01

    Analyzing price data from sequential German electricity markets, namely the day-ahead and intraday auction, a puzzling but apparently systematic pattern of price premiums can be identified. The price premiums are highly correlated with the underlying demand profile. As there is evidence that widespread models for electricity forward premiums are not applicable to the market dynamics under analysis, a theoretical model is developed within this article which reveals that non-convexities in only a subset of sequential markets with differing product granularity may cause systematic price premiums at equilibrium. These price premiums may be bidirectional and reflect a value for additional short-term power supply system flexibility.

  4. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, including gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
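
    To illustrate how a sequential (chronological) Monte Carlo reliability simulation produces indices such as loss of load probability (LOLP) and expected energy unserved, the sketch below simulates a heavily simplified single-bus electricity system with hourly generator outages. It is not the combined GB gas and electricity model; the unit capacities, outage and repair rates, and demand profile are invented for illustration.

      import numpy as np

      def sequential_mc_reliability(n_years=200, hours=168, seed=0):
          """Toy sequential Monte Carlo: hourly simulation of a single-bus system.

          Generators are sampled up/down each hour via a two-state Markov chain;
          demand follows a fixed weekly profile. Returns LOLP and EENS (MWh/week).
          All capacities, rates, and the demand profile are illustrative only.
          """
          rng = np.random.default_rng(seed)
          cap = np.array([400.0, 300.0, 300.0, 200.0])   # unit capacities, MW
          fail = 0.01      # hourly failure probability when up
          repair = 0.10    # hourly repair probability when down
          demand = 700 + 200 * np.sin(np.arange(hours) * 2 * np.pi / 24)

          lol_hours = 0
          energy_not_served = 0.0
          for _ in range(n_years):
              up = np.ones(cap.size, dtype=bool)     # all units start available
              for h in range(hours):
                  # Sequential state transitions preserve chronology, the key feature
                  # that distinguishes sequential from non-sequential Monte Carlo.
                  r = rng.random(cap.size)
                  up = np.where(up, r > fail, r < repair)
                  shortfall = demand[h] - cap[up].sum()
                  if shortfall > 0:
                      lol_hours += 1
                      energy_not_served += shortfall
          lolp = lol_hours / (n_years * hours)
          eens = energy_not_served / n_years
          return lolp, eens

      print(sequential_mc_reliability())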

  5. Algorithm for recall of HIV reactive Indian blood donors by sequential immunoassays enables selective donor referral for counseling

    Directory of Open Access Journals (Sweden)

    Thakral B

    2006-01-01

    Full Text Available Background: The HIV/AIDS pandemic brought into focus the importance of a safe blood donor pool. Aims: To analyze the true seroprevalence of HIV infection in our blood donors and devise an algorithm for donor recall avoiding unnecessary referrals to the voluntary counseling and testing centre (VCTC). Materials and Methods: 39,784 blood units were screened for anti-HIV 1/2 using ELISA immunoassay (IA-1). Samples which were repeat reactive on IA-1 were further tested using two different immunoassays (IA-2 and IA-3) and Western blot (WB). Based on the results of these sequential IAs and WB, an algorithm for recall of true HIV seroreactive blood donors is suggested for countries like India where nucleic acid testing or p24 antigen assays are not mandatory and, given the limited resources, may not be feasible. Results: The anti-HIV seroreactivity by repeat IA-1, IA-2, IA-3 and WB was 0.16%, 0.11%, 0.098% and 0.07%, respectively. Of the 44 IA-1 reactive samples, 95.2% (20/21) of the seroreactive samples by both IA-2 and IA-3 were also WB positive, and 100% (6/6) of the non-reactive samples by these IAs were WB negative. The IA signal/cutoff ratio was significantly low in biological false reactive donors. WB indeterminate results were largely due to non-specific reactivity to the gag protein (p55). Conclusions: HIV seroreactivity by sequential immunoassays (IA-1, IA-2 and IA-3; comparable to WHO Strategy-III) prior to donor recall results in decreased referral to VCTC as compared to a single IA (WHO Strategy-I) being followed currently in India. Moreover, this strategy will repose donor confidence in our blood transfusion services and strengthen the voluntary blood donation program.

  6. Foreword to Special Issue on "The Difference between Concurrent and Sequential Computation'' of Mathematical Structures

    DEFF Research Database (Denmark)

    Aceto, Luca; Longo, Giuseppe; Victor, Björn

    2003-01-01

    tarpit’, and argued that some of the most crucial distinctions in computing methodology, such as sequential versus parallel, deterministic versus non-deterministic, local versus distributed disappear if all one sees in computation is pure symbol pushing. How can we express formally the difference between...

  7. Non-equilibrium dog-flea model

    Science.gov (United States)

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check of proposed non-equilibrium theories of statistical mechanics. The model is developed in detail. Then it is applied to four recent models for non-equilibrium statistical mechanics. Comparison of the dog-flea solution with these different models allows checking claims and giving a concrete example of the theoretical models.

  8. An Exploratory Path Model of the Relationships between Positive and Negative Adaptation to Cancer on Quality of Life among non-Hodgkin Lymphoma Survivors

    Science.gov (United States)

    Smith, Sophia K.; Zimmer, Catherine; Crandell, Jamie; Jenerette, Coretta M.; Bailey, Donald E.; Zimmerman, Sheryl; Mayer, Deborah K.

    2015-01-01

    Purpose Adaptation is an ongoing, cognitive process with continuous appraisal of the cancer experience by the survivor. This exploratory study tested a path model examining the personal (demographic, disease, and psychosocial) characteristics associated with quality of life (QOL) and whether or not adaptation to living with cancer may mediate these effects. Methods This study employed path analysis to estimate adaptation to cancer. A cross-sectional sample of NHL survivors (N=750) was used to test the model. Eligible participants were ≥18 years, at least two years post-diagnosis, and living with or without active disease. Results The model accounted for 68% of the variance in QOL. The strongest effect (−0.596) was direct by negative adaptation, approximately three times that of positive adaptation (0.193). The strongest demographic total effects on QOL were age and social support; adaptation compared to those ≥65. Of the disease characteristics, comorbidity score had the strongest direct effect on QOL; each additional comorbidity was associated with a 0.309 standard deviation decline in QOL. There were no fully mediated effects through positive adaptation alone. Our exploratory findings support the coexistence of positive and negative adaptation perceptions as mediators of personal characteristics of the cancer experience. Negative adaptation can affect QOL in a positive way. Cancer survivorship is simultaneously shaped by both positive and negative adaptation, with future research and implications for practice aimed at improving QOL. PMID:25751114

  9. A novel approach to severe acute pancreatitis in sequential liver-kidney transplantation: the first report on the application of VAC therapy.

    Science.gov (United States)

    Zanus, Giacomo; Boetto, Riccardo; D'Amico, Francesco; Gringeri, Enrico; Vitale, Alessandro; Carraro, Amedeo; Bassi, Domenico; Scopelliti, Michele; Bonsignore, Pasquale; Burra, Patrizia; Angeli, Paolo; Feltracco, Paolo; Cillo, Umberto

    2011-03-01

    This work is the first report of vacuum-assisted closure (VAC) therapy applied as a life-saving surgical treatment for severe acute pancreatitis occurring in a sequential liver- and kidney-transplanted patient who had percutaneous biliary drainage for obstructive "late-onset" jaundice. Surgical exploration with necrosectomy and sequential laparotomies was performed because of increasing intra-abdominal pressure with hemodynamic instability and intra-abdominal multidrug-resistant sepsis, with increasingly difficult abdominal closure. Repeated laparotomies with VAC therapy (applying a continuous negative abdominal pressure) enabled a progressive, successful abdominal decompression, with the clearance of infection and definitive abdominal wound closure. The application of a negative pressure is a novel approach to severe abdominal sepsis and laparostomy management with a view to preventing compartment syndrome and fatal sepsis, and it can lead to complete abdominal wound closure. © 2010 The Authors. Transplant International © 2010 European Society for Organ Transplantation.

  10. Promoting success or preventing failure: cultural differences in motivation by positive and negative role models.

    Science.gov (United States)

    Lockwood, Penelope; Marshall, Tara C; Sadler, Pamela

    2005-03-01

    In two studies, cross-cultural differences in reactions to positive and negative role models were examined. The authors predicted that individuals from collectivistic cultures, who have a stronger prevention orientation, would be most motivated by negative role models, who highlight a strategy of avoiding failure; individuals from individualistic cultures, who have a stronger promotion focus, would be most motivated by positive role models, who highlight a strategy of pursuing success. In Study 1, the authors examined participants' reported preferences for positive and negative role models. Asian Canadian participants reported finding negative models more motivating than did European Canadians; self-construals and regulatory focus mediated cultural differences in reactions to role models. In Study 2, the authors examined the impact of role models on the academic motivation of Asian Canadian and European Canadian participants. Asian Canadians were motivated only by a negative model, and European Canadians were motivated only by a positive model.

  11. Negative differential mobility for negative carriers as revealed by space charge measurements on crosslinked polyethylene insulated model cables

    International Nuclear Information System (INIS)

    Teyssedre, G.; Laurent, C.; Vu, T. T. N.

    2015-01-01

    Among the features observed in polyethylene materials under relatively high field, space charge packets, consisting of a pulse of net charge that remains in the form of a pulse as it crosses the insulation, are repeatedly observed, but without a complete theory explaining their formation and propagation. Positive charge packets are more often reported, and models based on negative differential mobility (NDM) for the transport of holes could account for some of the charge packet phenomenology. Conversely, NDM for electron transport has never been reported so far. The present contribution reports space charge measurements by the pulsed electroacoustic method on miniature cables that are models of HVDC cables. The measurements were realized at room temperature or with a temperature gradient of 10 °C through the insulation under DC fields of the order of 30–60 kV/mm. Space charge results reveal the systematic occurrence of a negative front of charges generated at the inner electrode that moves toward the outer electrode at the beginning of the polarization step. It is observed that the transit time of the front of negative charge increases, and therefore the mobility decreases, with the applied voltage. Further, the estimated mobility, in the range 10⁻¹⁴–10⁻¹³ m² V⁻¹ s⁻¹ for the present results, increases when the temperature increases for the same condition of applied voltage. These features substantiate the hypothesis of negative differential mobility used for modelling space charge packets.

  12. Negative differential mobility for negative carriers as revealed by space charge measurements on crosslinked polyethylene insulated model cables

    Science.gov (United States)

    Teyssedre, G.; Vu, T. T. N.; Laurent, C.

    2015-12-01

    Among the features observed in polyethylene materials under relatively high field, space charge packets, consisting of a pulse of net charge that remains in the form of a pulse as it crosses the insulation, are repeatedly observed, but without a complete theory explaining their formation and propagation. Positive charge packets are more often reported, and models based on negative differential mobility (NDM) for the transport of holes could account for some of the charge packet phenomenology. Conversely, NDM for electron transport has never been reported so far. The present contribution reports space charge measurements by the pulsed electroacoustic method on miniature cables that are models of HVDC cables. The measurements were realized at room temperature or with a temperature gradient of 10 °C through the insulation under DC fields of the order of 30-60 kV/mm. Space charge results reveal the systematic occurrence of a negative front of charges generated at the inner electrode that moves toward the outer electrode at the beginning of the polarization step. It is observed that the transit time of the front of negative charge increases, and therefore the mobility decreases, with the applied voltage. Further, the estimated mobility, in the range 10⁻¹⁴–10⁻¹³ m² V⁻¹ s⁻¹ for the present results, increases when the temperature increases for the same condition of applied voltage. These features substantiate the hypothesis of negative differential mobility used for modelling space charge packets.

  13. Modelling the sequential geographical exploitation and potential collapse of marine fisheries through economic globalization, climate change and management alternatives

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2011-07-01

    Full Text Available Global marine fisheries production has reached a maximum and may even be declining. Underlying this trend is a well-understood sequence of development, overexploitation, depletion and in some instances collapse of individual fish stocks, a pattern that can sequentially link geographically distant populations. Ineffective governance, economic considerations and climate impacts are often responsible for this sequence, although the relative contribution of each factor is contentious. In this paper we use a global bioeconomic model to explore the synergistic effects of climate variability, economic pressures and management measures in causing or avoiding this sequence. The model shows how a combination of climate-induced variability in the underlying fish population production, particular patterns of demand for fish products and inadequate management is capable of driving the world’s fisheries into development, overexploitation, collapse and recovery phases consistent with observations. Furthermore, it demonstrates how a sequential pattern of overexploitation can emerge as an endogenous property of the interaction between regional environmental fluctuations and a globalized trade system. This situation is avoidable through adaptive management measures that ensure the sustainability of regional production systems in the face of increasing global environmental change and markets. It is concluded that global management measures are needed to ensure that global food supply from marine products is optimized while protecting long-term ecosystem services across the world’s oceans.

  14. Losing a dime with a satisfied mind: positive affect predicts less search in sequential decision making.

    Science.gov (United States)

    von Helversen, Bettina; Mata, Rui

    2012-12-01

    We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. © 2013 APA, all rights reserved.
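
    A minimal simulation of the threshold account described above: options are drawn one at a time and the searcher accepts the first option exceeding a fixed threshold, which shows how lowering the threshold shortens search at the cost of poorer outcomes. The uniform value distribution, thresholds, and trial counts are assumptions, not parameters from the study.

      import numpy as np

      def simulate_search(threshold, n_trials=10000, max_options=20, seed=0):
          """Toy threshold-rule simulation of sequential search.

          Options are drawn i.i.d. from Uniform(0, 1); the searcher accepts the
          first option whose value exceeds `threshold` (or the last one seen).
          Returns mean number of options examined and mean accepted value.
          """
          rng = np.random.default_rng(seed)
          values = rng.random((n_trials, max_options))
          searched = np.full(n_trials, max_options)
          accepted = values[:, -1].copy()
          for t in range(n_trials):
              hits = np.nonzero(values[t] >= threshold)[0]
              if hits.size:
                  searched[t] = hits[0] + 1
                  accepted[t] = values[t, hits[0]]
          return searched.mean(), accepted.mean()

      # A lower acceptance threshold means less search but poorer outcomes.
      for thr in (0.5, 0.7, 0.9):
          print(thr, simulate_search(thr))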

  15. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    Science.gov (United States)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS realizations, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.

  16. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    Science.gov (United States)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal, along with spectral and spatial, features. Domains such as speech recognition or neural machine translation work with inherently temporal data and today achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved in our experiments state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.
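
    The sketch below shows the general shape of a recurrent encoder for per-pixel time-series classification, using a plain GRU rather than the convolutional recurrent layers applied to image patches in the paper. The band count, sequence length, hidden size, and number of classes are placeholder assumptions.

      import torch
      import torch.nn as nn

      class SequenceEncoder(nn.Module):
          """Minimal recurrent encoder for per-pixel time series classification.

          Each sample is a temporal sequence of spectral band values for one pixel;
          this is a simplification of the convolutional recurrent encoder over whole
          Sentinel 2 patches. Sizes and class counts are illustrative.
          """
          def __init__(self, n_bands=10, hidden=64, n_classes=17):
              super().__init__()
              self.rnn = nn.GRU(input_size=n_bands, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_classes)

          def forward(self, x):                  # x: (batch, time, bands)
              _, h = self.rnn(x)                 # h: (1, batch, hidden), last hidden state
              return self.head(h.squeeze(0))     # logits: (batch, n_classes)

      model = SequenceEncoder()
      dummy = torch.randn(8, 26, 10)             # 8 pixels, 26 acquisitions, 10 bands
      print(model(dummy).shape)                  # torch.Size([8, 17])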

  17. Modelling sequential Biosphere systems under Climate change for radioactive waste disposal. Project BIOCLIM

    International Nuclear Information System (INIS)

    Texier, D.; Degnan, P.; Loutre, M.F.; Lemaitre, G.; Paillard, D.; Thorne, M.

    2000-01-01

    The BIOCLIM project (Modelling Sequential Biosphere systems under Climate change for Radioactive Waste Disposal) is part of the EURATOM fifth European framework programme. The project was launched in October 2000 for a three-year period. It is coordinated by ANDRA, the French national radioactive waste management agency. The project brings together a number of European radioactive waste management organisations that have national responsibilities for the safe disposal of radioactive wastes, and several highly experienced climate research teams. Waste management organisations involved are: NIREX (UK), GRS (Germany), ENRESA (Spain), NRI (Czech Republic) and ANDRA (France). Climate research teams involved are: LSCE (CEA/CNRS, France), CIEMAT (Spain), UPMETSIMM (Spain), UCL/ASTR (Belgium) and CRU (UEA, UK). The Environmental Agency for England and Wales provides a regulatory perspective. The consulting company Enviros Consulting (UK) assists ANDRA by contributing to both the administrative and scientific aspects of the project. This paper describes the project and progress to date. (authors)

  18. Double tracer autoradiographic method for sequential evaluation of regional cerebral perfusion

    International Nuclear Information System (INIS)

    Matsuda, H.; Tsuji, S.; Oba, H.; Kinuya, K.; Terada, H.; Sumiya, H.; Shiba, K.; Mori, H.; Hisada, K.; Maeda, T.

    1989-01-01

    A new double tracer autoradiographic method for the sequential evaluation of altered regional cerebral perfusion in the same animal is presented. This method is based on the sequential injection of two tracers, 99mTc-hexamethylpropyleneamine oxime and N-isopropyl-(125I)p-iodoamphetamine. This method is validated in the assessment of brovincamine effects on regional cerebral perfusion in an experimental model of chronic brain ischemia in the rat. The drug enhanced perfusion recovery in low-flow areas, selectively in surrounding areas of infarction. The results suggest that this technique is of potential use in the study of neuropharmacological effects applied during the experiment.

  19. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  20. Sequential extraction applied to Peruibe black mud, SP, Brazil

    International Nuclear Information System (INIS)

    Torrecilha, Jefferson Koyaishi

    2014-01-01

    The Peruibe Black mud is used in therapeutic treatments such as psoriasis, peripheral dermatitis, acne and seborrhoea, as well as in the treatment of myalgia, arthritis, rheumatism and non-articular processes. Like other medicinal clays, it may not be free from adverse health effects, since hazardous minerals can lead to respiratory occurrences and toxic elements can cause other effects. Once used for therapeutic purposes, any given material should be fully characterized, and thus samples of Peruibe black mud were analyzed to determine physical and chemical properties: moisture content, organic matter and loss on ignition; pH, particle size, cation exchange capacity and swelling index. The elemental composition was determined by neutron activation analysis, graphite furnace atomic absorption and X-ray fluorescence; the mineralogical composition was determined by X-ray diffraction. Another tool widely used to evaluate the behavior of trace elements in various environmental matrices is sequential extraction. Thus, a sequential extraction procedure was applied to fractionate the mud into specific geochemical forms and verify how and how much of the elements may be contained in it. Considering the several sequential extraction procedures, the BCR-701 method (Community Bureau of Reference) was used since it is considered the most reproducible among them. A simple extraction with an artificial sweat was also applied in order to verify which components are potentially available for absorption by the patient's skin during topical treatment. The results indicated that the mud is basically composed of a silty-clay material, rich in organic matter and with good cation exchange capacity. There were no significant variations in mineralogy and elemental composition between the in natura and mature mud forms. The analysis by sequential extraction and by simple extraction indicated that the elements possibly available in larger

  1. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality between the mean and the variance, it is not appropriate to apply this model to overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyze count data, but count data often contain observations with a zero value, and a large proportion of zeros on the response variable is known as zero inflation, which Poisson regression cannot handle. An alternative model which is more suitable for overdispersed data and can address the excess of zero values on the response variable is the Zero Inflated Negative Binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The Maximum Likelihood Estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partially significant effect in the negative binomial part are the percentage of pregnant women's visits and the percentage of maternal health personnel assisted, while the predictor variable with a partially significant effect in the zero-inflation part is the percentage of neonatal visits.
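
    For readers who want to fit such a model in practice, the sketch below uses the ZeroInflatedNegativeBinomialP class that recent versions of statsmodels are assumed to provide, applied to simulated overdispersed counts with excess zeros; it does not use the Tetanus Neonatorum data or the EM implementation described in the paper, and all simulated coefficients are invented.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

      # Simulated overdispersed count data with structural zeros (illustrative only).
      rng = np.random.default_rng(0)
      n = 500
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      mu = np.exp(0.3 + 0.8 * x)                        # negative binomial mean
      counts = rng.negative_binomial(n=2, p=2 / (2 + mu))
      zero_mask = rng.random(n) < 0.3                   # structural zeros (inflation)
      y = np.where(zero_mask, 0, counts)

      # The count part and the zero-inflation part may use different predictors;
      # here both use the same design matrix X for simplicity.
      model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, inflation='logit')
      result = model.fit(maxiter=200, disp=False)
      print(result.summary())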

  2. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential: first the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation, and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  3. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Biersack, H J; Knopp, R; Dahlem, R; Winkler, C [Bonn Univ. (Germany, F.R.). Inst. fuer Klinische und Experimentelle Nuklearmedizin; Thelen, M [Bonn Univ. (Germany, F.R.). Radiologische Klinik; Schulz, D; Schmidt, R [Bonn Univ. (Germany, F.R.). Chirurgische Klinik und Poliklinik

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension.

  4. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    International Nuclear Information System (INIS)

    Biersack, H.J.; Knopp, R.; Dahlem, R.; Winkler, C.; Thelen, M.; Schulz, D.; Schmidt, R.

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension. (orig.)

  5. A Global Sampling Based Image Matting Using Non-Negative Matrix Factorization

    Directory of Open Access Journals (Sweden)

    NAVEED ALAM

    2017-10-01

    Full Text Available Image matting is a technique in which a foreground is separated from the background of a given image along with the pixel-wise opacity. This foreground can then be seamlessly composited into a different background to obtain a novel scene. This paper presents a global non-parametric sampling algorithm over image patches and utilizes a dimension reduction technique known as NMF (Non-Negative Matrix Factorization). Although some existing non-parametric approaches use large nearby foreground and background regions to sample patches, these approaches fail to use the whole image for patch sampling because of the high memory and computational requirements. The use of NMF in the proposed algorithm allows dimension reduction, which reduces the computational cost and memory requirement. The use of NMF also allows the proposed approach to use the whole foreground and background region of the image, reduces the patch complexity and helps in efficient patch sampling. The use of patches allows the incorporation not only of the pixel colour but also of the local image structure. The use of local structures in the image is important to estimate a high-quality alpha matte, especially in images which have regions containing high texture. The proposed algorithm is evaluated on the standard data set and the obtained results are comparable to the state-of-the-art matting techniques

  6. A global sampling based image matting using non-negative matrix factorization

    International Nuclear Information System (INIS)

    Alam, N.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image matting is a technique in which a foreground is separated from the background of a given image along with the pixel-wise opacity. This foreground can then be seamlessly composited into a different background to obtain a novel scene. This paper presents a global non-parametric sampling algorithm over image patches and utilizes a dimension reduction technique known as NMF (Non-Negative Matrix Factorization). Although some existing non-parametric approaches use large nearby foreground and background regions to sample patches, these approaches fail to use the whole image for patch sampling because of the high memory and computational requirements. The use of NMF in the proposed algorithm allows dimension reduction, which reduces the computational cost and memory requirement. The use of NMF also allows the proposed approach to use the whole foreground and background region of the image, reduces the patch complexity and helps in efficient patch sampling. The use of patches allows the incorporation not only of the pixel colour but also of the local image structure. The use of local structures in the image is important to estimate a high-quality alpha matte, especially in images which have regions containing high texture. The proposed algorithm is evaluated on the standard data set and the obtained results are comparable to the state-of-the-art matting techniques. (author)
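
    A rough sketch of the dimension-reduction step described above: image patches are projected onto a small non-negative basis with scikit-learn's NMF, and global sampling then becomes a nearest-neighbour search in the reduced code space. The patch data, patch size, component count, and neighbour count are placeholder assumptions rather than the paper's settings.

      import numpy as np
      from sklearn.decomposition import NMF
      from sklearn.neighbors import NearestNeighbors

      # Illustrative stand-ins for flattened RGB patches (values in [0, 1]); in the
      # matting setting these would come from known foreground/background regions
      # and from the unknown region of the trimap.
      rng = np.random.default_rng(0)
      known_patches = rng.random((5000, 7 * 7 * 3))     # candidate sample patches
      unknown_patches = rng.random((200, 7 * 7 * 3))    # patches to be matched

      # Non-negative dimension reduction: each patch is expressed as an additive
      # combination of a small number of non-negative basis patches.
      nmf = NMF(n_components=20, init='nndsvda', max_iter=400, random_state=0)
      known_codes = nmf.fit_transform(known_patches)
      unknown_codes = nmf.transform(unknown_patches)

      # Global sampling then reduces to a nearest-neighbour search in code space.
      nn = NearestNeighbors(n_neighbors=5).fit(known_codes)
      dist, idx = nn.kneighbors(unknown_codes)
      print(idx.shape)   # (200, 5) indices of best-matching known patches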

  7. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Directory of Open Access Journals (Sweden)

    Matteo Pappalardo

    Full Text Available The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptor (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high-throughput screening on the ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and

  8. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Science.gov (United States)

    Pappalardo, Matteo; Shachaf, Nir; Basile, Livia; Milardi, Danilo; Zeidan, Mouhammed; Raiyn, Jamal; Guccione, Salvatore; Rayan, Anwar

    2014-01-01

    The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptor (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high-throughput screening on the ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and increase the

  9. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize, by trading off, more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio test with an indifference zone. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
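
    SPRINT-Race builds on sequential probability ratio testing; the sketch below shows the classic binary Wald SPRT for two Bernoulli hypotheses, which conveys the idea of data-dependent stopping but is much simpler than the ternary-decision, dual SPRT with indifference zone used in SPRINT-Race. The hypothesised rates and error levels are arbitrary assumptions.

      import math
      import random

      def sprt_bernoulli(samples, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
          """Wald's sequential probability ratio test for H0: p=p0 vs H1: p=p1.

          `samples` is an iterable of 0/1 outcomes; returns ('H0'|'H1'|'undecided',
          number of observations used). This is the classic binary SPRT, not the
          ternary-decision dual SPRT used inside SPRINT-Race.
          """
          upper = math.log((1 - beta) / alpha)      # accept H1 when crossed
          lower = math.log(beta / (1 - alpha))      # accept H0 when crossed
          llr, n = 0.0, 0
          for x in samples:
              n += 1
              llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
              if llr >= upper:
                  return 'H1', n
              if llr <= lower:
                  return 'H0', n
          return 'undecided', n

      random.seed(0)
      data = (1 if random.random() < 0.65 else 0 for _ in range(1000))
      print(sprt_bernoulli(data))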

  10. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Science.gov (United States)

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth

  11. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Directory of Open Access Journals (Sweden)

    Jesse Whittington

    Full Text Available Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative

  12. A dynamical model of hierarchical selection and coordination in speech planning.

    Directory of Open Access Journals (Sweden)

    Sam Tilsen

    Full Text Available Studies of the control of complex sequential movements have dissociated two aspects of movement planning: control over the sequential selection of movement plans, and control over the precise timing of movement execution. This distinction is particularly relevant in the production of speech: utterances contain sequentially ordered words and syllables, but articulatory movements are often executed in a non-sequential, overlapping manner with precisely coordinated relative timing. This study presents a hybrid dynamical model in which competitive activation controls selection of movement plans and coupled oscillatory systems govern coordination. The model departs from previous approaches by ascribing an important role to competitive selection of articulatory plans within a syllable. Numerical simulations show that the model reproduces a variety of speech production phenomena, such as effects of preparation and utterance composition on reaction time, and asymmetries in patterns of articulatory timing associated with onsets and codas. The model furthermore provides a unified understanding of a diverse group of phonetic and phonological phenomena which have not previously been related.

  13. Sequential assimilation of multi-mission dynamical topography into a global finite-element ocean model

    Directory of Open Access Journals (Sweden)

    S. Skachko

    2008-12-01

    Full Text Available This study focuses on an accurate estimation of ocean circulation via assimilation of satellite measurements of ocean dynamical topography into the global finite-element ocean model (FEOM. The dynamical topography data are derived from a complex analysis of multi-mission altimetry data combined with a referenced earth geoid. The assimilation is split into two parts. First, the mean dynamic topography is adjusted. To this end an adiabatic pressure correction method is used which reduces model divergence from the real evolution. Second, a sequential assimilation technique is applied to improve the representation of thermodynamical processes by assimilating the time varying dynamic topography. A method is used according to which the temperature and salinity are updated following the vertical structure of the first baroclinic mode. It is shown that the method leads to a partially successful assimilation approach reducing the rms difference between the model and data from 16 cm to 2 cm. This improvement of the mean state is accompanied by significant improvement of temporal variability in our analysis. However, it remains suboptimal, showing a tendency in the forecast phase of returning toward a free run without data assimilation. Both the mean difference and standard deviation of the difference between the forecast and observation data are reduced as the result of assimilation.

  14. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
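
    The sketch below illustrates the flavour of an NNLS sparse-coding classifier: a test feature vector is coded over the training samples with non-negative weights (via scipy.optimize.nnls), and the class with the smallest class-restricted reconstruction residual wins. The feature dimensions, class counts, and data are made up; this is a simplified sketch, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import nnls

      def nnls_classify(train_X, train_y, test_x):
          """Classify one sample by non-negative least-squares coding.

          The test sample is coded over all training samples with non-negative
          weights; the predicted class is the one whose samples give the smallest
          reconstruction residual.
          """
          A = train_X.T                                  # columns are training samples
          coef, _ = nnls(A, test_x)
          residuals = {}
          for label in np.unique(train_y):
              mask = (train_y == label)
              recon = A[:, mask] @ coef[mask]            # class-restricted reconstruction
              residuals[label] = np.linalg.norm(test_x - recon)
          return min(residuals, key=residuals.get)

      rng = np.random.default_rng(0)
      train_X = np.abs(rng.normal(size=(60, 30)))        # e.g. LBP-style feature vectors
      train_y = np.repeat(np.arange(3), 20)              # three expression classes
      test_x = train_X[5] + 0.05 * np.abs(rng.normal(size=30))
      print(nnls_classify(train_X, train_y, test_x))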

  15. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements.
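
    The motivation for group-sequential boundaries can be seen in a small Monte Carlo experiment: repeatedly testing accumulating data at an unadjusted significance level inflates the type I error well above the nominal 5%, which is what group-sequential designs correct for. The look schedule, sample sizes, and simulation counts below are arbitrary assumptions, and the sketch does not reproduce the FDG-PET/CT analysis of the cited study.

      import numpy as np
      from scipy import stats

      def interim_rejection_rate(n_looks=3, n_per_look=20, n_sim=20000, alpha=0.05, seed=0):
          """Monte Carlo type I error of repeated unadjusted z-tests under H0.

          Data are standard normal differences with true mean 0; the null is
          rejected as soon as any interim z-test is significant at `alpha`.
          """
          rng = np.random.default_rng(seed)
          crit = stats.norm.ppf(1 - alpha / 2)
          rejected = 0
          for _ in range(n_sim):
              data = rng.normal(size=n_looks * n_per_look)
              for k in range(1, n_looks + 1):
                  x = data[: k * n_per_look]
                  z = x.mean() / (x.std(ddof=1) / np.sqrt(x.size))
                  if abs(z) > crit:
                      rejected += 1
                      break
          return rejected / n_sim

      # With three unadjusted looks the empirical error is well above 5%.
      print(interim_rejection_rate())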

  16. A Possible Minimum Toy Model with Negative Differential Capacitance for Self-sustained Current Oscillation

    International Nuclear Information System (INIS)

    Xiong Gang; Sun Zhouzhou; Wang Xiangrong

    2007-01-01

    We generalize a simple model for superlattices to include the effect of differential capacitance. It is shown that the model always has a stable steady-state solution (SSS) if all differential capacitances are positive. On the other hand, when negative differential capacitance is included, the model can have no stable SSS and be in a self-sustained current oscillation behavior. Therefore, we find a possible minimum toy model with both negative differential resistance and negative differential capacitance which can include the phenomena of both self-sustained current oscillation and I-V oscillation of stable SSSs.

  17. Impact of sequential disorder on the scaling behavior of airplane boarding time

    Science.gov (United States)

    Baek, Yongjoo; Ha, Meesoon; Jeong, Hawoong

    2013-05-01

    The airplane boarding process is an example where the disorder properties of the system are relevant to the emergence of universality classes. Based on a simple model, we present a systematic analysis of finite-size effects in boarding time, and propose a comprehensive view of the role of sequential disorder in the scaling behavior of boarding time against the plane size. Using numerical simulations and mathematical arguments, we find how the scaling behavior depends on the number of seat columns and the range of sequential disorder. Our results show that new scaling exponents can arise as disorder is localized to varying extents.

  18. Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems

    International Nuclear Information System (INIS)

    Couetoux, Adrien

    2013-01-01

    In this thesis, I studied sequential decision making problems, with a focus on the unit commitment problem. Traditionally solved by dynamic programming methods, this problem is still a challenge due to its high dimension and to the sacrifices made in the accuracy of the model in order to apply state-of-the-art methods. I investigated the applicability of Monte Carlo Tree Search methods to this problem, and to other single-player, stochastic, continuous sequential decision making problems. In doing so, I obtained a consistent and anytime algorithm that can easily be combined with existing strong heuristic solvers. (author)

  19. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra.

  20. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs

  1. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimation using scikit-learn's KDTree, modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
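
    The core ABC-SMC idea, iteratively re-sampling and perturbing a particle population while the tolerance on the data-model distance is tightened, can be sketched in a few lines. The sketch below is a deliberately simplified, generic outline on a toy one-parameter Gaussian model; it is not the astroABC API, and it omits importance weights, adaptive tolerances and the parallelization features listed above.

      import numpy as np

      def abc_smc(observed, simulate, prior_sample, n_particles=500,
                  tolerances=(3.0, 1.5, 0.8), rng=np.random.default_rng(0)):
          # initial population: plain rejection ABC at the loosest tolerance
          particles = []
          while len(particles) < n_particles:
              theta = prior_sample(rng)
              if abs(simulate(theta, rng) - observed) < tolerances[0]:
                  particles.append(theta)
          particles = np.array(particles)

          # subsequent populations: perturb accepted particles, tighten the tolerance
          for eps in tolerances[1:]:
              sigma = 2.0 * particles.std()              # perturbation kernel width
              new_particles = []
              while len(new_particles) < n_particles:
                  theta = rng.choice(particles) + rng.normal(0.0, sigma)
                  if abs(simulate(theta, rng) - observed) < eps:
                      new_particles.append(theta)
              particles = np.array(new_particles)
          return particles

      # toy forward model: the data summary is the mean of 50 draws from N(theta, 1)
      simulate = lambda theta, rng: rng.normal(theta, 1.0, size=50).mean()
      prior_sample = lambda rng: rng.uniform(-10.0, 10.0)
      posterior_sample = abc_smc(observed=1.2, simulate=simulate, prior_sample=prior_sample)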

  2. Negative differential mobility for negative carriers as revealed by space charge measurements on crosslinked polyethylene insulated model cables

    Energy Technology Data Exchange (ETDEWEB)

    Teyssedre, G., E-mail: gilbert.teyssedre@laplace.univ-tlse.fr; Laurent, C. [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d' Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); CNRS, LAPLACE, F-31062 Toulouse (France); Vu, T. T. N. [Université de Toulouse, UPS, INPT, LAPLACE (Laboratoire Plasma et Conversion d' Energie), 118 route de Narbonne, F-31062 Toulouse cedex 9 (France); Electric Power University, 235 Hoang Quoc Viet, 10000 Hanoi (Viet Nam)

    2015-12-21

    Among the features observed in polyethylene materials under relatively high fields, space charge packets, consisting of a pulse of net charge that remains in the form of a pulse as it crosses the insulation, are repeatedly observed but without a complete theory explaining their formation and propagation. Positive charge packets are more often reported, and models based on negative differential mobility (NDM) for the transport of holes could account for some of the charge packet phenomenology. Conversely, NDM for electron transport has never been reported so far. The present contribution reports space charge measurements by the pulsed electroacoustic method on miniature cables that are models of HVDC cables. The measurements were realized at room temperature or with a temperature gradient of 10 °C through the insulation, under DC fields on the order of 30–60 kV/mm. The space charge results reveal the systematic occurrence of a negative front of charge generated at the inner electrode that moves toward the outer electrode at the beginning of the polarization step. It is observed that the transit time of the negative charge front increases, and therefore the mobility decreases, with the applied voltage. Further, the estimated mobility, in the range 10⁻¹⁴–10⁻¹³ m² V⁻¹ s⁻¹ for the present results, increases when the temperature increases for the same applied voltage. These features substantiate the hypothesis of negative differential mobility used for modelling space charge packets.

  3. Using Priced Options to Solve the Exposure Problem in Sequential Auctions

    Science.gov (United States)

    Mous, Lonneke; Robu, Valentin; La Poutré, Han

    This paper studies the benefits of using priced options for solving the exposure problem that bidders with valuation synergies face when participating in multiple, sequential auctions. We consider a model in which complementary-valued items are auctioned sequentially by different sellers, who have the choice of either selling their good directly or through a priced option, after fixing its exercise price. We analyze this model from a decision-theoretic perspective and we show, for a setting where the competition is formed by local bidders, that using options can increase the expected profit for both buyers and sellers. Furthermore, we derive the equations that provide minimum and maximum bounds between which a synergy buyer's bids should fall in order for both sides to have an incentive to use the options mechanism. Next, we perform an experimental analysis of a market in which multiple synergy bidders are active simultaneously.

  4. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.

  5. Negative emotionality across diagnostic models: RDoC, DSM-5 Section III, and FFM.

    Science.gov (United States)

    Gore, Whitney L; Widiger, Thomas A

    2018-03-01

    The research domain criteria (RDoC) were established in an effort to explore underlying dimensions that cut across many existing disorders and to provide an alternative to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). One purpose of the present study was to suggest a potential alignment of RDoC negative valence with 2 other dimensional models of negative emotionality: five-factor model (FFM) neuroticism and the DSM-5 Section III negative affectivity. A second purpose of the study, though, was to compare their coverage of negative emotionality, more specifically with respect to affective instability. Participants were adult community residents (N = 90) currently in mental health treatment. Participants received self-report measures of RDoC negative valence, FFM neuroticism, and DSM-5 Section III negative affectivity, along with measures of affective instability, borderline personality disorder, and impairment. Findings suggested that RDoC negative valence is commensurate with FFM neuroticism and DSM-5 Section III negative affectivity, and it would be beneficial if it was expanded to include affective instability. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Revisiting non-Gaussianity from non-attractor inflation models

    Science.gov (United States)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition—such as in the case of smooth transition or some sharp transition scenarios—the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  7. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of structural...

  8. Using a Negative Binomial Regression Model for Early Warning at the Start of a Hand Foot Mouth Disease Epidemic in Dalian, Liaoning Province, China.

    Science.gov (United States)

    An, Qingyu; Wu, Jun; Fan, Xuesong; Pan, Liyang; Sun, Wei

    2016-01-01

    Hand, foot and mouth disease (HFMD) is a human syndrome caused by intestinal viruses such as coxsackievirus A16 and enterovirus 71, and it readily develops into outbreaks in kindergartens and schools. Scientific and accurate early detection of the start of an HFMD epidemic is a key principle in planning control measures and minimizing the impact of HFMD. The objective of this study was to establish a reliable early-detection model for the start of HFMD epidemics in Dalian and to evaluate the performance of the model by analyzing its detection sensitivity. A negative binomial regression model was used to estimate the weekly baseline case number of HFMD, and the optimal alerting threshold was identified among tested threshold values for epidemic and non-epidemic years. The circular distribution method was used to calculate the gold standard for the start of an HFMD epidemic. From 2009 to 2014, a total of 62,022 HFMD cases were reported (36,879 males and 25,143 females) in Dalian, Liaoning Province, China, including 15 fatal cases. The median age of the patients was 3 years. The incidence rate in epidemic years ranged from 137.54 to 231.44 per 100,000 population; the incidence rate in non-epidemic years was lower than 112 per 100,000 population. The negative binomial regression model with an AIC value of 147.28 was finally selected to construct the baseline level. Threshold values of 100 for epidemic years and 50 for non-epidemic years had the highest sensitivity (100%) in both retrospective and prospective early warning, and detection occurred 2 weeks before the actual start of the HFMD epidemic. The negative binomial regression model could provide early warning of the start of an HFMD epidemic with good sensitivity and appropriate detection time in Dalian.
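
    As a rough illustration of the modelling step, a negative binomial regression with seasonal terms can be fitted to weekly counts and its prediction used as the baseline against which an alerting threshold is applied. The snippet below uses statsmodels on synthetic weekly counts; the data, covariates and the threshold of 100 extra cases are purely illustrative, not the Dalian surveillance data or the authors' final model.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # synthetic weekly counts with a yearly seasonal pattern (illustrative only)
      weeks = np.arange(1, 52 * 5 + 1)
      counts = np.random.default_rng(1).poisson(40 + 30 * np.sin(2 * np.pi * weeks / 52) ** 2)

      X = sm.add_constant(pd.DataFrame({
          "sin52": np.sin(2 * np.pi * weeks / 52),
          "cos52": np.cos(2 * np.pi * weeks / 52),
      }))
      nb_model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

      baseline = nb_model.predict(X)          # estimated weekly baseline case numbers
      alert = counts > baseline + 100         # flag weeks exceeding the baseline by the tested threshold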

  9. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. The computational demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled in each cycle. This leads to quick improvement of the design from one cycle to the next and to an increase in computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  10. ONC201 demonstrates anti-tumor effects in both triple negative and non-triple negative breast cancers through TRAIL-dependent and TRAIL-independent mechanisms

    Science.gov (United States)

    Ralff, Marie D.; Kline, Christina L.B.; Küçükkase, Ozan C; Wagner, Jessica; Lim, Bora; Dicker, David T.; Prabhu, Varun V.; Oster, Wolfgang; El-Deiry, Wafik S.

    2017-01-01

    Breast cancer is a major cause of cancer-related death. TRAIL has been of interest as a cancer therapeutic, but only a subset of triple negative breast cancers (TNBC) is sensitive to TRAIL. The small molecule ONC201 induces expression of TRAIL and its receptor DR5. ONC201 has entered clinical trials in advanced cancers. Here we show that ONC201 is efficacious against both TNBC and non-TNBC cells (n=13). A subset of TNBC and non-TNBC cells succumb to ONC201-induced cell death. In 2/8 TNBC cell lines, ONC201 treatment induces caspase-8 cleavage and cell death that is blocked by TRAIL-neutralizing antibody RIK2. The pro-apoptotic effect of ONC201 translates to in vivo efficacy in the MDA-MB-468 xenograft model. In most TNBC lines tested (6/8) ONC201 has an anti-proliferative effect but does not induce apoptosis. ONC201 decreases cyclin D1 expression and causes an accumulation of cells in the G1 phase of the cell cycle. pRb expression is associated with sensitivity to the anti-proliferative effects of ONC201, and the compound synergizes with taxanes in less sensitive cells. All non-TNBC cells (n=5) are growth inhibited following ONC201 treatment, and unlike what has been observed with TRAIL, a subset (n=2) show PARP cleavage. In these cells, cell death induced by ONC201 is TRAIL-independent. Our data demonstrate that ONC201 has potent anti-proliferative and pro-apoptotic effects in a broad range of breast cancer subtypes, through TRAIL-dependent and TRAIL-independent mechanisms. These findings develop a pre-clinical rationale for developing ONC201 as a single agent and/or in combination with approved therapies in breast cancer. PMID:28424227

  11. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering.

    Directory of Open Access Journals (Sweden)

    Bin Ju

    Full Text Available Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.

  12. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering.

    Science.gov (United States)

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.
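
    The non-negative factorization at the heart of the model can be illustrated with the standard Lee-Seung multiplicative updates on a single co-occurrence matrix. The sketch below is only the basic NMF building block on toy data; it does not include the multi-task coupling across time slices or the linear dynamical system on the factor-user matrices described above.

      import numpy as np

      def nmf(V, rank, n_iter=200, eps=1e-9, rng=np.random.default_rng(0)):
          # factor a non-negative (m x n) matrix V into W (m x rank) @ H (rank x n)
          # using multiplicative updates for the Frobenius-norm objective
          W = rng.random((V.shape[0], rank))
          H = rng.random((rank, V.shape[1]))
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # toy item-user co-occurrence matrix: rows = items, columns = users
      V = np.random.default_rng(1).poisson(2.0, size=(100, 40)).astype(float)
      W, H = nmf(V, rank=5)    # W: basis item-factor matrix, H: factor-user loadings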

  13. Clinical Manifestations of Helicobacter pylori-Negative Gastritis.

    Science.gov (United States)

    Shiota, Seiji; Thrift, Aaron P; Green, Linda; Shah, Rajesh; Verstovsek, Gordana; Rugge, Massimo; Graham, David Y; El-Serag, Hashem B

    2017-07-01

    There are data to suggest the existence of non-Helicobacter pylori gastritis. However, the risk factors and clinical course for H pylori-negative gastritis remain unclear. We aimed to examine the prevalence and determinants of H pylori-negative gastritis in a large multiethnic clinical population. We conducted a cross-sectional study among patients scheduled for an elective esophagogastroduodenoscopy or attending selected primary care clinics and eligible for screening colonoscopy at a single Veterans Affairs medical center. We identified cases of H pylori-negative gastritis, H pylori-positive gastritis, and H pylori-negative nongastritis, where gastritis was defined by the presence of neutrophils and/or mononuclear cells. Risk factors for H pylori-negative gastritis were analyzed in logistic regression models. A total of 1240 patients had information from all biopsy sites, of whom 695 (56.0%) had gastritis. H pylori-negative gastritis was present in 123 patients (9.9% of all study subjects and 17.7% of all patients with gastritis). Among all patients with gastritis, African Americans were statistically significantly less likely than non-Hispanic whites to have H pylori-negative gastritis (odds ratio, 0.25; 95% confidence interval, 0.14-0.43). Conversely, PPI users were more likely to have H pylori-negative gastritis than H pylori-positive gastritis compared with nonusers (odds ratio, 2.02; 95% confidence interval, 1.17-3.49). The cumulative incidence of gastric erosions and ulcers was higher in patients with H pylori-negative gastritis than in those with H pylori-negative nongastritis. We found that H pylori-negative gastritis was present in approximately 18% of patients with gastritis. The potential for H pylori-negative gastritis to progress, and the risk of gastric cancer in those with gastric mucosal atrophy/intestinal metaplasia, remains unclear. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  14. Equivalence between contextuality and negativity of the Wigner function for qudits

    Science.gov (United States)

    Delfosse, Nicolas; Okay, Cihan; Bermejo-Vega, Juan; Browne, Dan E.; Raussendorf, Robert

    2017-12-01

    Understanding what distinguishes quantum mechanics from classical mechanics is crucial for quantum information processing applications. In this work, we consider two notions of non-classicality for quantum systems, negativity of the Wigner function and contextuality for Pauli measurements. We prove that these two notions are equivalent for multi-qudit systems with odd local dimension. For a single qudit, the equivalence breaks down. We show that there exist single qudit states that admit a non-contextual hidden variable model description and whose Wigner functions are negative.

  15. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
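
    The mechanics of such a model can be sketched as a biased random walk: evidence accumulates noisily until it crosses an upper or lower boundary, the starting point plays the role of the response criterion, and the boundary separation controls the speed-accuracy trade-off. The simulation below is a generic illustration with made-up parameter values, not the authors' fitted model.

      import numpy as np

      def simulate_trial(drift, threshold, start_bias, noise_sd=1.0,
                         max_steps=10_000, rng=np.random.default_rng(0)):
          # accumulate noisy evidence until one of two boundaries is crossed;
          # start_bias shifts the starting point (criterion), threshold sets boundary separation
          evidence = start_bias * threshold
          for step in range(1, max_steps + 1):
              evidence += drift + rng.normal(0.0, noise_sd)
              if evidence >= threshold:
                  return "conflict", step
              if evidence <= -threshold:
                  return "no_conflict", step
          return "no_response", max_steps

      # a liberal bias (start_bias > 0) produces more and faster "conflict" responses;
      # lowering the threshold trades accuracy for speed
      print(simulate_trial(drift=0.1, threshold=20.0, start_bias=0.3))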

  16. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (the null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability of this methodology and its performance are illustrated with a redundant sensor data set. (authors)
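
    A much-simplified version of this scheme can be sketched as follows: a kernel density estimate of normal operating data stands in for the null hypothesis, a shifted estimate stands in for one alternative hypothesis, and a CUSUM-style cumulative log-likelihood ratio raises an alarm when it exceeds a threshold. The sketch uses scipy's gaussian_kde on synthetic data and a single alternative, whereas the paper's procedure handles multiple alternative hypotheses and isolation as well.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      normal_data = rng.normal(0.0, 1.0, size=2000)      # empirical data for the null hypothesis
      kde_null = gaussian_kde(normal_data)                # instance-based density model
      kde_alt = gaussian_kde(normal_data + 2.0)           # one hypothesised abnormal regime

      def cusum_alarm(stream, threshold=20.0):
          # sequential change detection: cumulative log-likelihood ratio, reset at zero
          score = 0.0
          for t, x in enumerate(stream):
              llr = np.log(kde_alt.evaluate([x])[0] + 1e-300) \
                    - np.log(kde_null.evaluate([x])[0] + 1e-300)
              score = max(0.0, score + llr)
              if score > threshold:
                  return t                                # alarm (detection) time
          return None

      stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 100)])  # change at t = 300
      print(cusum_alarm(stream))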

  17. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    International Nuclear Information System (INIS)

    Chen, W.-Y.; Tsai, J.-W.; Ju, Y.-R.; Liao, C.-M.

    2010-01-01

    The purpose of this paper was to use quantitative systems-level approach employing biotic ligand model based threshold damage model to examine physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested present model and triggering mechanisms by carrying out a series of modeling experiments where we used periodic pulses and sine-wave as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for that of freshwater clam. We found that tilapia increase bioenergetic costs to maintain the acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns has also implications for optimizing species growing, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling the pulsed and fluctuating arsenic exposures.

  18. Systems-level modeling the effects of arsenic exposure with sequential pulsed and fluctuating patterns for tilapia and freshwater clam

    Energy Technology Data Exchange (ETDEWEB)

    Chen, W.-Y. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Tsai, J.-W. [Institute of Ecology and Evolutionary Ecology, China Medical University, Taichung 40402, Taiwan (China); Ju, Y.-R. [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Liao, C.-M., E-mail: cmliao@ntu.edu.t [Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China)

    2010-05-15

    The purpose of this paper was to use quantitative systems-level approach employing biotic ligand model based threshold damage model to examine physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested present model and triggering mechanisms by carrying out a series of modeling experiments where we used periodic pulses and sine-wave as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for that of freshwater clam. We found that tilapia increase bioenergetic costs to maintain the acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns has also implications for optimizing species growing, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling the pulsed and fluctuating arsenic exposures.

  19. Cross-Domain Statistical-Sequential Dependencies Are Difficult To Learn

    Directory of Open Access Journals (Sweden)

    Anne McClure Walk

    2016-02-01

    Full Text Available Recent studies have demonstrated participants' ability to learn cross-modal associations during statistical learning tasks. However, these studies are all similar in that the cross-modal associations to be learned occur simultaneously, rather than sequentially. In addition, the majority of these studies focused on learning across sensory modalities but not across perceptual categories. To test both cross-modal and cross-categorical learning of sequential dependencies, we used an artificial grammar learning task consisting of a serial stream of auditory and/or visual stimuli containing both within- and cross-domain dependencies. Experiment 1 examined within-modal and cross-modal learning across two sensory modalities (audition and vision). Experiment 2 investigated within-categorical and cross-categorical learning across two perceptual categories within the same sensory modality (e.g., shape and color; tones and non-words). Our results indicated that individuals demonstrated learning of the within-modal and within-categorical but not the cross-modal or cross-categorical dependencies. These results stand in contrast to previous demonstrations of cross-modal statistical learning, and highlight the presence of modality constraints that limit the effectiveness of learning in a multimodal environment.

  20. Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process

    Science.gov (United States)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed by a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of the conventional lathe is its mechanical contact, which brings undesirable tool wear, a heat-affected zone, and problems with finishing and dimensional accuracy, especially taper quality, when machining stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed, consisting of eight (8) sequential runs replicated three (3) times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model, which yields an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach is found to be suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
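
    The Mamdani inference chain, fuzzify the crisp inputs with membership functions, apply min for rule firing, aggregate the clipped output sets with max, and defuzzify by centroid, can be written out directly. The rules, membership function breakpoints and input ranges below are invented for illustration; they are not the rule base identified in the paper.

      import numpy as np

      def tri(x, a, b, c):
          # triangular membership function with support [a, c] and peak at b
          return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

      def mamdani_quality(cutting_speed, spinning_speed, depth_of_cut):
          y = np.linspace(0.0, 10.0, 501)                 # output universe: quality score 0-10

          # Rule 1: IF cutting speed is low AND depth of cut is small THEN quality is high
          w1 = min(tri(cutting_speed, 0, 5, 10), tri(depth_of_cut, 0.0, 0.2, 0.6))
          out1 = np.minimum(w1, tri(y, 6, 9, 10))         # min implication

          # Rule 2: IF spinning speed is medium THEN quality is medium
          w2 = tri(spinning_speed, 500, 1500, 2500)
          out2 = np.minimum(w2, tri(y, 3, 5, 7))

          aggregated = np.maximum(out1, out2)             # max aggregation of clipped consequents
          return float((y * aggregated).sum() / (aggregated.sum() + 1e-12))  # centroid defuzzification

      print(mamdani_quality(cutting_speed=4.0, spinning_speed=1400.0, depth_of_cut=0.25))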

  1. A dynamic general disequilibrium model of a sequential monetary production economy

    International Nuclear Information System (INIS)

    Raberto, Marco; Teglio, Andrea; Cincotti, Silvano

    2006-01-01

    A discrete, deterministic, economic model, based on the framework of non-Walrasian or disequilibrium economics, is presented. The main feature of this approach is the presence of non-clearing markets, where not all demands and supplies are satisfied and some agents may be rationed. The model is characterized by three agents (i.e., a representative firm, a representative consumer, and a central bank), three commodities (i.e., goods, labour and money, each homogeneous) and three markets for their exchange. The imbalance between demand and supply in each market determines the dynamics of the price, the nominal wage and the nominal interest rate. The central bank provides the money supply according to an operating target interest rate that is fixed according to Taylor's rule. The model has been studied by means of computer simulations. The results point out the presence of business cycles that can be controlled by proper policies of the central bank
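
    A toy version of the adjustment mechanism can be written as a short loop: each period, the imbalance on a market pushes its price (or wage) up or down, and the central bank's operating-target rate follows a textbook Taylor rule. The coefficients and the made-up excess-demand series below are purely illustrative and are not the calibration used in the paper.

      import numpy as np

      def taylor_rate(inflation, output_gap, r_star=0.02, pi_target=0.02):
          # textbook Taylor rule for the operating-target nominal interest rate
          return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

      rng = np.random.default_rng(0)
      price, wage = 1.0, 1.0
      for t in range(50):
          excess_goods_demand = rng.normal(0.0, 0.01)        # stand-in for the goods-market imbalance
          excess_labour_demand = 0.5 * excess_goods_demand   # stand-in for the labour-market imbalance
          old_price = price
          price *= 1.0 + 0.2 * excess_goods_demand           # excess demand raises the price
          wage *= 1.0 + 0.2 * excess_labour_demand           # excess labour demand raises the wage
          inflation = price / old_price - 1.0
          target_rate = taylor_rate(inflation, output_gap=excess_goods_demand)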

  2. Improving the robustness of Surface Enhanced Raman Spectroscopy based sensors by Bayesian Non-negative Matrix Factorization

    DEFF Research Database (Denmark)

    Alstrøm, Tommy Sonne; Frøhling, Kasper Bayer; Larsen, Jan

    2014-01-01

    a Bayesian Non-negative Matrix Factorization (NMF) approach to identify locations of target molecules. The proposed method is able to successfully analyze the spectra and extract the target spectrum. A visualization of the loadings of the basis vector is created and the results show a clear SNR enhancement...

  3. Immediately sequential bilateral cataract surgery: advantages and disadvantages.

    Science.gov (United States)

    Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace

    2017-01-01

    The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.

  4. Effects of a combined parent-student alcohol prevention program on intermediate factors and adolescents' drinking behavior: A sequential mediation model.

    Science.gov (United States)

    Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M

    2015-08-01

    Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: an increase in strict parental rule setting and in adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial included 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents randomized over 4 conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. The outcome measure was the amount of weekly drinking measured at age 12 to 15; baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent intervention on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is 1 of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need to involve parents in youth alcohol prevention programs, and the need to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved.

  5. Anatomy of a defective barrier: sequential glove leak detection in a surgical and dental environment.

    Science.gov (United States)

    Albin, M S; Bunegin, L; Duke, E S; Ritter, R R; Page, C P

    1992-02-01

    a) To determine the frequency of perforations in latex surgical gloves before, during, and after surgical and dental procedures; b) to evaluate the topographical distribution of perforations in latex surgical gloves after surgical and dental procedures; and c) to validate methods of testing for latex surgical glove patency. Multitrial tests under in vitro conditions and a prospective sequential patient study using consecutive testing. An outpatient dental clinic at a university dental school, the operating suite in a medical school affiliated with the Veteran's Hospital, and a biomechanics laboratory. Surgeons, scrub nurses, and dental technicians participating in 50 surgical and 50 dental procedures. We collected 679 latex surgical gloves after surgical procedures and tested them for patency by using a water pressure test. We also employed an electronic glove leak detector before donning, after sequential time intervals, and upon termination of 47 surgical (sequential surgical), 50 dental (sequential dental), and in three orthopedic cases where double gloving was used. The electronic glove leak detector was validated by using electronic point-by-point surface probing, fluorescein dye diffusion, as well as detecting glove punctures made with a 27-gauge needle. The random study indicated a leak rate of 33.0% (224 out of 679) in latex surgical gloves; the sequential surgical study demonstrated patency in 203 out of 347 gloves (58.5%); the sequential dental study showed 34 leaks in the 106 gloves used (32.1%); and with double gloving, the leak rate decreased to 25.0% (13 of 52 gloves tested). While the allowable FDA defect rate for unused latex surgical gloves is 1.5%, we noted defect rates in unused gloves of 5.5% in the sequential surgical, 1.9% in the sequential dental, and 4.0% in our electronic glove leak detector validating study. In the sequential surgical study, 52% of the leaks had occurred by 75 mins, and in the sequential dental study, 75% of the leaks

  6. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. The differences were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis.
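
    An alpha spending function specifies how much of the overall type I error may be used up at each interim look as a function of the information fraction. The sketch below computes the cumulative and incremental alpha at the interim analyses mentioned above (N = 11, 23, 34 out of 45) with a Lan-DeMets O'Brien-Fleming-type spending function; this is a common choice used here only for illustration, not necessarily the function employed in the study.

      from scipy.stats import norm

      def obf_spending(t, alpha=0.05):
          # Lan-DeMets O'Brien-Fleming-type alpha spending at information fraction t (0 < t <= 1)
          return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t ** 0.5))

      looks = [11, 23, 34, 45]                      # interim and final analyses out of N = 45
      fractions = [n / 45 for n in looks]
      cumulative = [obf_spending(t) for t in fractions]
      incremental = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
      for n, t, c, i in zip(looks, fractions, cumulative, incremental):
          print(f"N={n:2d}  t={t:.2f}  cumulative alpha={c:.4f}  spent at this look={i:.4f}")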

  7. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Effect of sequential fermentations and grape cultivars on volatile compounds and sensory profiles of Danish wines.

    Science.gov (United States)

    Liu, Jing; Arneborg, Nils; Toldam-Andersen, Torben B; Petersen, Mikael A; Bredie, Wender Lp

    2017-08-01

    There has been an increasing interest in the use of selected non-Saccharomyces yeasts in co-culture with Saccharomyces cerevisiae. In this work, three non-Saccharomyces yeast strains (Metschnikowia viticola, Metschnikowia fructicola and Hanseniaspora uvarum) indigenously isolated in Denmark were used in sequential fermentations with S. cerevisiae on three cool-climate grape cultivars, Bolero, Rondo and Regent. During the fermentations, the yeast growth was determined as well as key oenological parameters, volatile compounds and sensory properties of finished rosé wines. The different non-Saccharomyces strains and cool-climate grape cultivars produced wines with a distinctive aromatic profile. A total of 67 volatile compounds were identified, including 43 esters, 14 alcohols, five acids, two ketones, a C13-norisoprenoid, a lactone and a sulfur compound. The use of M. viticola in sequential fermentation with S. cerevisiae resulted in richer berry and fruity flavours in wines. The sensory plot showed a more clear separation among wine samples by grape cultivars compared with yeast strains. Knowledge on the influence of indigenous non-Saccharomyces strains and grape cultivars on the flavour generation contributed to producing diverse wines in cool-climate wine regions. © 2017 Society of Chemical Industry.

  9. Sequential allergen desensitization of basophils is non-specific and may involve p38 MAPK.

    Science.gov (United States)

    Witting Christensen, S K; Kortekaas Krohn, I; Thuraiaiyah, J; Skjold, T; Schmid, J M; Hoffmann, H J H

    2014-10-01

    Sequential allergen desensitization provides temporary tolerance for allergic patients. We adapted a clinical protocol to desensitize human blood basophils ex vivo and investigated the mechanism and allergen specificity. We included 28 adult, grass allergic subjects. The optimal, activating allergen concentration was determined by measuring activated CD63(+) CD193(+) SS(Low) basophils in a basophil activation test with 8 log-dilutions of grass allergen. Basophils in whole blood were desensitized by incubation with twofold to 2.5-fold increasing allergen doses in 10 steps starting at 1 : 1000 of the optimal dose. Involvement of p38 mitogen-activated protein kinase (MAPK) was assessed after 3 min of allergen stimulation (n = 7). Allergen specificity was investigated by desensitizing cells from multi-allergic subjects with grass allergen and challenging with optimal doses of grass, birch, recombinant house dust mite (rDer p2) allergen or anti-IgE (n = 10). Desensitization reduced the fraction of blood basophils responding to challenge with an optimal allergen dose from a median (IQR) 81.0% (66.3-88.8) to 35.4% (19.8-47.1, P desensitized with grass allergen. Challenge with grass allergen resulted in 39.6% activation (15.8-58.3). An unrelated challenge (birch, rDer p2 or anti-IgE) resulted in 53.4% activation (30.8-66.8, P = 0.16 compared with grass). Desensitization reduced p38 MAPK phosphorylation from a median 48.1% (15.6-92.8) to 26.1% (7.4-71.2, P = 0.047) and correlated with decrease in CD63 upregulation (n = 7, r > 0.79, P Desensitization attenuated basophil response rapidly and non-specifically at a stage before p38 MAPK phosphorylation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Cognitive processes associated with sequential tool use in New Caledonian crows.

    Directory of Open Access Journals (Sweden)

    Joanna H Wimpenny

    Full Text Available BACKGROUND: Using tools to act on non-food objects--for example, to make other tools--is considered to be a hallmark of human intelligence, and may have been a crucial step in our evolution. One form of this behaviour, 'sequential tool use', has been observed in a number of non-human primates and even in one bird, the New Caledonian crow (Corvus moneduloides). While sequential tool use has often been interpreted as evidence for advanced cognitive abilities, such as planning and analogical reasoning, the behaviour itself can be underpinned by a range of different cognitive mechanisms, which have never been explicitly examined. Here, we present experiments that not only demonstrate new tool-using capabilities in New Caledonian crows, but allow examination of the extent to which crows understand the physical interactions involved. METHODOLOGY/PRINCIPAL FINDINGS: In two experiments, we tested seven captive New Caledonian crows in six tasks requiring the use of up to three different tools in a sequence to retrieve food. Our study incorporated several novel features: (i) we tested crows on a three-tool problem (subjects were required to use a tool to retrieve a second tool, then use the second tool to retrieve a third one, and finally use the third one to reach for food); (ii) we presented tasks of different complexity in random rather than progressive order; (iii) we included a number of control conditions to test whether tool retrieval was goal-directed; and (iv) we manipulated the subjects' pre-testing experience. Five subjects successfully used tools in a sequence (four from their first trial), and four subjects repeatedly solved the three-tool condition. Sequential tool use did not require, but was enhanced by, pre-training on each element in the sequence ('chaining'), an explanation that could not be ruled out in earlier studies. By analyzing tool choice, tool swapping and improvement over time, we show that successful subjects did not use a random

  11. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  12. Non-negative matrix factorization techniques advances in theory and applications

    CERN Document Server

    2016-01-01

    This book collects new results, concepts and further developments of NMF. The open problems discussed include, e.g. in bioinformatics: NMF and its extensions applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining etc. The research results previously scattered in different scientific journals and conference proceedings are methodically collected and presented in a unified form. While readers can read the book chapters sequentially, each chapter is also self-contained. This book can be a good reference work for researchers and engineers interested in NMF, and can also be used as a handbook for students and professionals seeking to gain a better understanding of the latest applications of NMF.

  13. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential

  14. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing comprising hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processed materials.

  15. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  16. Negative velocity fluctuations and non-equilibrium fluctuation relation for a driven high critical current vortex state.

    Science.gov (United States)

    Bag, Biplab; Shaw, Gorky; Banerjee, S S; Majumdar, Sayantan; Sood, A K; Grover, A K

    2017-07-17

    Under the influence of a constant drive, the moving vortex state in the 2H-NbS2 superconductor exhibits a negative differential resistance (NDR) transition from a steady flow to an immobile state. This state possesses a high depinning current threshold ([Formula: see text]) with unconventional depinning characteristics. At currents well above [Formula: see text], the moving vortex state exhibits a multimodal velocity distribution which is characteristic of vortex flow instabilities in the NDR regime. However, at lower currents just above [Formula: see text], the velocity distribution is non-Gaussian, with a tail extending to significant negative velocity values. These unusual negative velocity events correspond to vortices drifting opposite to the driving force direction. We show that this distribution obeys the Gallavotti-Cohen Non-Equilibrium Fluctuation Relation (GC-NEFR). Just above [Formula: see text], we also find a high vortex density fluctuating driven state not obeying the conventional GC-NEFR. The GC-NEFR analysis provides a measure of an effective energy scale (E_eff) associated with the driven vortex state. The E_eff corresponds to the average energy dissipated by the fluctuating vortex state above [Formula: see text]. We propose that the high E_eff value corresponds to the onset of high-energy dynamic instabilities in this driven vortex state just above [Formula: see text].
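
    The Gallavotti-Cohen relation used in this analysis states that, for a fluctuating time-averaged dissipation A, log[P(+A)/P(-A)] grows linearly in A, and the inverse slope provides the effective energy scale E_eff. A schematic check of this relation on synthetic, Gaussian-distributed fluctuation data is sketched below; the real analysis is performed on measured vortex velocity and dissipation fluctuations, not on data like these.

      import numpy as np

      rng = np.random.default_rng(0)
      work = rng.normal(2.0, 2.0, size=200_000)       # synthetic time-averaged dissipation values

      # compare probabilities of +A and -A events in symmetric bins
      edges = np.linspace(0.2, 3.0, 15)
      xs, ys = [], []
      for lo, hi in zip(edges[:-1], edges[1:]):
          p_plus = np.mean((work >= lo) & (work < hi))
          p_minus = np.mean((work <= -lo) & (work > -hi))
          if p_plus > 0 and p_minus > 0:
              xs.append(0.5 * (lo + hi))
              ys.append(np.log(p_plus / p_minus))

      slope = np.polyfit(xs, ys, 1)[0]                # GC-NEFR: log[P(+A)/P(-A)] ~ A / E_eff
      print("effective energy scale E_eff ~", 1.0 / slope)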

  17. The effect of sequential dual-gas testing on laser-induced breakdown spectroscopy-based discrimination: Application to brass samples and bacterial strains

    International Nuclear Information System (INIS)

    Rehse, Steven J.; Mohaidat, Qassem I.

    2009-01-01

    Four Cu-Zn brass alloys with different stoichiometries and compositions have been analyzed by laser-induced breakdown spectroscopy (LIBS) using nanosecond laser pulses. The intensities of 15 emission lines of copper, zinc, lead, carbon, and aluminum (as well as the environmental contaminants sodium and calcium) were normalized and analyzed with a discriminant function analysis (DFA) to rapidly categorize the samples by alloy. The alloys were tested sequentially in two different noble gases (argon and helium) to enhance discrimination between them. When emission intensities from samples tested sequentially in both gases were combined to form a single 30-spectral line 'fingerprint' of the alloy, an overall 100% correct identification was achieved. This was a modest improvement over using emission intensities acquired in argon gas alone. A similar study was performed to demonstrate an enhanced discrimination between two strains of Escherichia coli (a Gram-negative bacterium) and a Gram-positive bacterium. When emission intensities from bacteria sequentially ablated in two different gas environments were combined, the DFA achieved a 100% categorization accuracy. This result showed the benefit of sequentially testing highly similar samples in two different ambient gases to enhance discrimination between the samples.

  18. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    Science.gov (United States)

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation. Secondly, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group responded in a less automated manner. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when acquired

  19. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    Science.gov (United States)

    DeVisscher, A.; Dewulf, J.; Van Durme, J.; Leys, C.; Morent, R.; Van Langenhove, H.

    2008-02-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results of allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme, and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data to less than 2 ppm standard deviation.

  20. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    International Nuclear Information System (INIS)

    Visscher, A de; Dewulf, J; Durme, J van; Leys, C; Morent, R; Langenhove, H Van

    2008-01-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results of allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme, and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data to less than 2 ppm standard deviation

  1. Network modeling reveals prevalent negative regulatory relationships between signaling sectors in Arabidopsis immune signaling.

    Directory of Open Access Journals (Sweden)

    Masanao Sato

    Full Text Available Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a "sector

  2. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    Full Text Available One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborated and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e. g. how 4.02 fits into the series of remarks surrounding it) and the global level (e. g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: The role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  3. The effect of sequential coupling on radial displacement accuracy in electromagnetic inside-bead forming: simulation and experimental analysis using Maxwell and ABAQUS software

    Energy Technology Data Exchange (ETDEWEB)

    Chaharmiri, Rasoul; Arezoodar, Alireza Fallahi [Amirkabir University, Tehran (Iran, Islamic Republic of)

    2016-05-15

    Electromagnetic forming (EMF) is a high strain rate forming technology which can effectively deform and shape highly electrically conductive materials at room temperature. In this study, the electromagnetic and mechanical parts of the process were simulated using Maxwell and ABAQUS software, respectively. To provide a link between the two programs, two approaches, 'loose' and 'sequential' coupling, were applied. This paper aims to investigate how sequential coupling affects radial displacement accuracy, as an indicator of the tube's final shape, at various discharge voltages. The results indicated good agreement for both approaches at lower discharge voltages, with more accurate results for sequential coupling; at high discharge voltages, however, there was a non-negligible overestimation of about 43% for loose coupling, which was reduced to only an 8.2% difference by applying sequential coupling in the case studied. Therefore, in order to reach more accurate predictions, applying sequential coupling, especially at higher discharge voltages, is strongly recommended.

  4. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  5. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may reduce quarter-hourly price volatility by a factor close to four.

  6. Sequential fragmentation of Pleistocene forests in an East Africa biodiversity hotspot: chameleons as a model to track forest history.

    Directory of Open Access Journals (Sweden)

    G John Measey

    Full Text Available The Eastern Arc Mountains (EAM) is an example of naturally fragmented tropical forests, which contain one of the highest known concentrations of endemic plants and vertebrates. Numerous paleo-climatic studies have not provided direct evidence for ancient presence of Pleistocene forests, particularly in the regions in which savannah presently occurs. Knowledge of the last period when forests connected EAM would provide a sound basis for hypothesis testing of vicariance and dispersal models of speciation. Dated phylogenies have revealed complex patterns throughout EAM, so we investigated divergence times of forest fauna on four montane isolates in close proximity to determine whether forest break-up was most likely to have been simultaneous or sequential, using population genetics of a forest-restricted arboreal chameleon, Kinyongia boehmei. We used mitochondrial and nuclear genetic sequence data and mutation rates from a fossil-calibrated phylogeny to estimate divergence times between montane isolates using a coalescent approach. We found that chameleons on all mountains are most likely to have diverged sequentially within the Pleistocene from 0.93-0.59 Ma (95% HPD 0.22-1.84 Ma). In addition, post-hoc tests on chameleons on the largest montane isolate suggest a population expansion ∼182 Ka. Sequential divergence is most likely to have occurred after the last of three wet periods within the arid Plio-Pleistocene era, but was not correlated with inter-montane distance. We speculate that forest connection persisted due to riparian corridors regardless of proximity, highlighting their importance in the region's historic dispersal events. The population expansion coincides with nearby volcanic activity, which may also explain the relative paucity of the Taita's endemic fauna. Our study shows that forest chameleons are an apposite group to track forest fragmentation, with the inference that forest extended between some EAM during the Pleistocene 1

  7. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    The strengths and weaknesses of X-ray spectrometers are evaluated for the sequential and multichannel categories. The multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are very variable. In this last case, the sequential X-ray spectrometer remains highly useful [fr]

  8. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    Science.gov (United States)

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and the default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This also holds for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
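
    For reference, the Cumulative Prospect Theory benchmark mentioned above can be computed in a few lines. The sketch below uses the Tversky and Kahneman (1992) functional forms with their published parameter estimates for gains; the example lottery is hypothetical and is not one of the study's stimuli.

```python
import numpy as np

# Tversky & Kahneman (1992) functional forms with their original parameter
# estimates for gains; the lottery at the bottom is a hypothetical example.
ALPHA, GAMMA = 0.88, 0.61   # value-function curvature, probability-weighting curvature

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value_gain(outcomes, probs):
    """CPT value of a gains-only lottery via rank-dependent (cumulative) weights."""
    order = np.argsort(outcomes)[::-1]                  # best outcome first
    x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
    cum = np.cumsum(p)
    w = weight(cum) - weight(np.concatenate(([0.0], cum[:-1])))
    return np.sum(w * x**ALPHA)

def certainty_equivalent(outcomes, probs):
    """Sure amount whose subjective value equals the lottery's CPT value."""
    return cpt_value_gain(outcomes, probs) ** (1 / ALPHA)

# Example: 40% chance of 100, otherwise 0
print(certainty_equivalent([100.0, 0.0], [0.4, 0.6]))
```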

  9. Topics in acoustics, non destructive testing, and thermo-mechanics of continua

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2001-03-01

    A small-scale physical model of a granular porous medium was studied. Osmosis, filtration and fracture were considered, both experimentally and mathematically. Longitudinal ultrasonic pulse velocity was measured in slender timber and concrete bodies in order to characterize the geometric dispersion effects. A mathematical model is developed to describe geometric dispersion in reinforced concrete. A sequential method for non-destructive testing of structures by mechanical vibrations is proposed and theoretically considered. Some simple examples are fully developed from a theoretical standpoint.

  10. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into

  11. Sequential double photodetachment of He- in elliptically polarized laser fields

    Science.gov (United States)

    Génévriez, Matthieu; Dunseath, Kevin M.; Terao-Dunseath, Mariko; Urbain, Xavier

    2018-02-01

    Four-photon double detachment of the helium negative ion is investigated experimentally and theoretically for photon energies where the transient helium atom is in the 1s2s ³S or 1s2p ³P° states, which subsequently ionize by absorption of three photons. Ionization is enhanced by intermediate resonances, giving rise to series of peaks in the He+ spectrum, which we study in detail. The He+ yield is measured in the wavelength ranges from 530 to 560 nm and from 685 to 730 nm and for various polarizations of the laser light. Double detachment is treated theoretically as a sequential process, within the framework of R-matrix theory for the first step and effective Hamiltonian theory for the second step. Experimental conditions are accurately modeled, and the measured and simulated yields are in good qualitative and, in some cases, quantitative agreement. Resonances in the double detachment spectra can be attributed to well-defined Rydberg states of the transient atom. The double detachment yield exhibits a strong dependence on the laser polarization which can be related to the magnetic quantum number of the intermediate atomic state. We also investigate the possibility of nonsequential double detachment with a two-color experiment but observe no evidence for it.

  12. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  13. Variability in results from negative binomial models for Lyme disease measured at different spatial scales.

    Science.gov (United States)

    Tran, Phoebe; Waller, Lance

    2015-01-01

    Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study seeks to explore how sensitive/consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (at the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models, indeed, were sensitive/inconsistent when used at different spatial scales. We discuss various plausible explanations for such behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important for not only future Lyme disease studies and Lyme disease risk assessment/management but any study that requires use of this model type in a spatial context. Copyright © 2014 Elsevier Inc. All rights reserved.
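
    A minimal sketch of the modelling step is shown below: a negative binomial regression fitted at the regional level and refitted on a sub-regional subset so that the coefficient estimates can be compared across scales. The covariate names and the synthetic data are placeholders, not the surveillance data or the predictors used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic stand-in for county-level data: incident counts, a landscape
# covariate, a climate covariate, and a sub-region flag.  The real study used
# observed Lyme incidence and previously identified predictors.
n = 400
df = pd.DataFrame({
    "forest_edge": rng.normal(0, 1, n),
    "temperature": rng.normal(0, 1, n),
    "new_england": rng.integers(0, 2, n),
})
mu = np.exp(0.5 + 0.6 * df.forest_edge - 0.3 * df.temperature)
df["cases"] = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

def fit_nb(data):
    X = sm.add_constant(data[["forest_edge", "temperature"]])
    return sm.NegativeBinomial(data["cases"], X).fit(disp=0)

# Regional model versus a sub-regional model: compare coefficient stability.
full = fit_nb(df)
sub = fit_nb(df[df.new_england == 1])
print(full.params)
print(sub.params)
```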

  14. The clinicopathologic characteristics and prognostic significance of triple-negativity in node-negative breast cancer

    International Nuclear Information System (INIS)

    Rhee, Jiyoung; Kim, Tae-You; Han, Sae-Won; Oh, Do-Youn; Kim, Jee Hyun; Im, Seock-Ah; Han, Wonshik; Ae Park, In; Noh, Dong-Young; Bang, Yung-Jue

    2008-01-01

    Triple-negative (TN) breast cancer, which is defined as being negative for the estrogen receptor (ER), the progesterone receptor (PR), and the human epidermal growth factor receptor 2 (HER-2), represents a subset of breast cancer with different biologic behaviour. We investigated the clinicopathologic characteristics and prognostic indicators of lymph node-negative TN breast cancer. Medical records were reviewed from patients with node-negative breast cancer who underwent curative surgery at Seoul National University Hospital between Jan. 2000 and Jun. 2003. Clinicopathologic variables and clinical outcomes were evaluated. Among 683 patients included, 136 had TN breast cancer and 529 had non-TN breast cancer. TN breast cancer correlated with younger age (< 35 y, p = 0.003), and higher histologic and nuclear grade (p < 0.001). It also correlated with a molecular profile associated with biological aggressiveness: negative for bcl-2 expression (p < 0.001), positive for the epidermal growth factor receptor (p = 0.003), and a high level of p53 (p < 0.001) and Ki67 expression (p < 0.00). The relapse rates during the follow-up period (median, 56.8 months) were 14.7% for TN breast cancer and 6.6% for non-TN breast cancer (p = 0.004). Relapse free survival (RFS) was significantly shorter among patients with TN breast cancer compared with those with non-TN breast cancer (4-year RFS rate 85.5% vs. 94.2%, respectively; p = 0.001). On multivariate analysis, young age, close resection margin, and triple-negativity were independent predictors of shorter RFS. TN breast cancer had higher relapse rate and more aggressive clinicopathologic characteristics than non-TN in node-negative breast cancer. Thus, TN breast cancer should be integrated into the risk factor analysis for node-negative breast cancer

  15. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  16. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  17. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  18. A Comparison of Ultimate Loads from Fully and Sequentially Coupled Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Damiani, Rick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-11-14

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  19. A Cyclical Approach to Continuum Modeling: A Conceptual Model of Diabetic Foot Care

    Directory of Open Access Journals (Sweden)

    Martha L. Carvour

    2017-12-01

    Full Text Available “Cascade” or “continuum” models have been developed for a number of diseases and conditions. These models define the desired, successive steps in care for that disease or condition and depict the proportion of the population that has completed each step. These models may be used to compare care across subgroups or populations and to identify and evaluate interventions intended to improve outcomes on the population level. Previous cascade or continuum models have been limited by several factors. These models are best suited to processes with stepwise outcomes—such as screening, diagnosis, and treatment—with a single defined outcome (e.g., treatment or cure) for each member of the population. However, continuum modeling is not well developed for complex processes with non-sequential or recurring steps or those without singular outcomes. As shown here using the example of diabetic foot care, the concept of continuum modeling may be re-envisioned with a cyclical approach. Cyclical continuum modeling may permit incorporation of non-sequential and recurring steps into a single continuum, while recognizing the presence of multiple desirable outcomes within the population. Cyclical models may simultaneously represent the distribution of clinical severity and clinical resource use across a population, thereby extending the benefits of traditional continuum models to complex processes for which population-based monitoring is desired. The models may also support communication with other stakeholders in the process of care, including health care providers and patients.

  20. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied; 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regard to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  1. Describing Growth Pattern of Bali Cows Using Non-linear Regression Models

    Directory of Open Access Journals (Sweden)

    Mohd. Hafiz A.W

    2016-12-01

    Full Text Available The objective of this study was to evaluate the best fit non-linear regression model to describe the growth pattern of Bali cows. Estimates of asymptotic mature weight, rate of maturing and constant of integration were derived from the Brody, von Bertalanffy, Gompertz and Logistic models, which were fitted to cross-sectional data of body weight taken from 74 Bali cows raised in MARDI Research Station Muadzam Shah Pahang. Coefficient of determination (R2) and residual mean squares (MSE) were used to determine the best fit model in describing the growth pattern of Bali cows. The von Bertalanffy model was the best of the four growth functions evaluated for determining the mature weight of Bali cattle, as shown by the highest R2 and lowest MSE values (0.973 and 601.9, respectively), followed by the Gompertz (0.972 and 621.2, respectively), Logistic (0.971 and 648.4, respectively) and Brody (0.932 and 660.5, respectively) models. The correlation between rate of maturing and mature weight was found to be negative in the range of -0.170 to -0.929 for all models, indicating that animals of heavier mature weight had a lower rate of maturing. The use of a non-linear model can summarize the weight-age relationship into several biologically interpretable parameters, compared to the entire lifespan of weight-age data points that are difficult and time-consuming to interpret.
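
    The von Bertalanffy fit can be reproduced in outline as follows. The age-weight pairs below are hypothetical stand-ins for the cross-sectional records; the functional form W(t) = A(1 - b e^(-kt))^3 and the goodness-of-fit measures (R2 and MSE) follow the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, A, b, k):
    """Von Bertalanffy growth curve: W(t) = A * (1 - b * exp(-k t))**3."""
    return A * (1.0 - b * np.exp(-k * t)) ** 3

# Hypothetical age (months) / body weight (kg) pairs standing in for the
# cross-sectional Bali cow records used in the study.
age = np.array([3, 6, 12, 18, 24, 36, 48, 60, 84], dtype=float)
weight = np.array([45, 70, 110, 150, 180, 220, 245, 255, 265], dtype=float)

popt, _ = curve_fit(von_bertalanffy, age, weight, p0=[280.0, 0.6, 0.05])
A, b, k = popt
pred = von_bertalanffy(age, *popt)
ss_res = np.sum((weight - pred) ** 2)
r2 = 1.0 - ss_res / np.sum((weight - weight.mean()) ** 2)
mse = ss_res / (len(weight) - len(popt))
print(f"mature weight A = {A:.1f} kg, rate of maturing k = {k:.3f}, "
      f"R2 = {r2:.3f}, MSE = {mse:.1f}")
```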

  2. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    Science.gov (United States)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, models the zero observations of the dependent variable, and the second part, the truncated negative binomial model, models the non-zero (positive integer) observations. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Parameter estimation uses the Maximum Likelihood Estimator (MLE). Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia; the data are counts that contain zero values in some observations and varying values in others. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
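
    As an illustration of the two-part likelihood (without the right-censoring extension that is the subject of the study), a hurdle negative binomial log-likelihood can be written as below; the design matrix and parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.special import expit

def hurdle_nb_loglik(params, X, y):
    """Log-likelihood of a (non-censored) hurdle negative binomial model.

    params = [gamma (logit part), beta (count part), log_alpha].
    The logit part models P(y > 0); the count part is a negative binomial
    truncated at zero.  Right censoring is not handled in this sketch.
    """
    k = X.shape[1]
    gamma, beta, alpha = params[:k], params[k:2 * k], np.exp(params[-1])

    p_pos = expit(X @ gamma)                 # probability of a non-zero count
    mu = np.exp(X @ beta)                    # mean of the (untruncated) NB part
    size = 1.0 / alpha                       # NB "number of successes" parameter
    prob = size / (size + mu)                # scipy's success-probability parameter

    zero = y == 0
    ll = np.sum(np.log(1.0 - p_pos[zero]))   # hurdle not crossed
    ll += np.sum(
        np.log(p_pos[~zero])
        + nbinom.logpmf(y[~zero], size, prob[~zero])
        - np.log1p(-nbinom.pmf(0, size, prob[~zero]))   # zero truncation
    )
    return ll

# Tiny illustration with an intercept-only design matrix.
X = np.ones((6, 1))
y = np.array([0, 0, 1, 3, 2, 5])
print(hurdle_nb_loglik(np.array([0.2, 0.5, np.log(0.8)]), X, y))
```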

  3. Cultivation of Chlorella vulgaris in a pilot-scale sequential-baffled column photobioreactor for biomass and biodiesel production

    International Nuclear Information System (INIS)

    Lam, Man Kee; Lee, Keat Teong

    2014-01-01

    Highlights: • A new sequential baffled photobioreactor was developed to cultivate microalgae. • Organic fertilizer was used as the main nutrient source. • A negative energy balance was observed in producing microalgae biodiesel. - Abstract: Pilot-scale cultivation of Chlorella vulgaris in a 100 L sequential baffled photobioreactor was carried out in the present study. The highest biomass yields attained under indoor and outdoor environments were 0.52 g/L and 0.28 g/L, respectively. Although a lower microalgae biomass yield was attained under outdoor cultivation, the overall life cycle energy efficiency ratio was 3.3 times higher than for indoor cultivation. In addition, a negative energy balance was observed in producing microalgae biodiesel under both indoor and outdoor cultivation. The minimum production cost of microalgae biodiesel was about RM 237/L (or USD 73.5/L), which was exceptionally high compared to the current petrol diesel price in Malaysia (RM 3.6/L or USD 1.1/L). On the other hand, the estimated production cost of dried microalgae biomass cultivated under the outdoor environment was RM 46/kg (or USD 14.3/kg), which was lower than that of cultivation using chemical fertilizer (RM 111/kg or USD 34.4/kg) and the current market price of Chlorella biomass (RM 145/kg or USD 45/kg)

  4. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  5. AUTOMATIC 3D BUILDING MODEL GENERATION FROM LIDAR AND IMAGE DATA USING SEQUENTIAL MINIMUM BOUNDING RECTANGLE

    Directory of Open Access Journals (Sweden)

    E. Kwak

    2012-07-01

    Full Text Available A digital building model is an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of using single source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources to obtain 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most commonly used data processing strategies are data-driven and model-driven ones. Theoretically, one can model any shape of buildings using data-driven approaches, but practically it leaves the question of how to impose constraints and set the rules during the generation process. Due to the complexity of the implementation of the data-driven approaches, model-based approaches draw the attention of the researchers. However, the major drawback of model-based approaches is that the establishment of representative models involves a manual process that requires human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and to sequentially adjust them to combine the advantages of image and LiDAR datasets.
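
    The Minimum Bounding Rectangle step can be sketched independently of the full pipeline: a standard convex-hull-based search over candidate edge orientations, shown below on hypothetical projected points rather than real LiDAR or image-derived boundaries.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minimum_bounding_rectangle(points):
    """Smallest-area rectangle enclosing 2D points.

    Uses the classical result that an optimal rectangle shares an orientation
    with one edge of the convex hull: rotate the hull to align each edge with
    the x-axis and keep the axis-aligned box with the smallest area.
    """
    hull = points[ConvexHull(points).vertices]
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    angles = np.arctan2(edges[:, 1], edges[:, 0])

    best = None
    for theta in angles:
        c, s = np.cos(-theta), np.sin(-theta)
        rot = np.array([[c, -s], [s, c]])
        r = hull @ rot.T
        (xmin, ymin), (xmax, ymax) = r.min(axis=0), r.max(axis=0)
        area = (xmax - xmin) * (ymax - ymin)
        if best is None or area < best[0]:
            corners = np.array([[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]])
            best = (area, corners @ rot)   # rotate corners back to the original frame
    return best

# Hypothetical building-footprint points (e.g. projected boundary returns).
pts = np.random.default_rng(3).normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 1.5]])
area, rect = minimum_bounding_rectangle(pts)
print(area, rect)
```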

  6. Sequential and joint hydrogeophysical inversion using a field-scale groundwater model with ERT and TDEM data

    Directory of Open Access Journals (Sweden)

    D. Herckenrath

    2013-10-01

    Full Text Available Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM and electrical resistivity tomography (ERT data. In a sequential hydrogeophysical inversion (SHI a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI. In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to a SHI these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden.

  7. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    Science.gov (United States)

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

  8. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  9. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    Science.gov (United States)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important, from the standpoint of saving energy, to design electrical machinery with high efficiency. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of freedom in terms of structure, it can derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, which has many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Secondly, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
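
    A generic sketch of sequential linear programming with an adaptive move limit is given below. It linearizes the objective at the current design, restricts the step by a move limit, and relaxes or tightens that limit depending on whether the step improved the objective; the toy quadratic objective merely stands in for the finite-element-based sensitivities used in density-based topology optimization.

```python
import numpy as np
from scipy.optimize import linprog

def slp_adaptive(f, grad, x0, box, move=0.2, n_iter=30):
    """Sequential linear programming with an adaptive move limit.

    Minimizes f(x) by repeatedly solving the linearized subproblem
    min grad(x_k)^T d subject to |d_i| <= move and the box bounds, expanding
    the move limit after an accepted step and contracting it after a rejected
    one.  This is a generic sketch, not the paper's TO formulation.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        g = grad(x)
        lo = np.maximum(box[:, 0] - x, -move)
        hi = np.minimum(box[:, 1] - x, move)
        res = linprog(g, bounds=list(zip(lo, hi)), method="highs")
        x_new = x + res.x
        f_new = f(x_new)
        if f_new < fx:                 # accept and relax the move limit
            x, fx, move = x_new, f_new, min(1.5 * move, 0.5)
        else:                          # reject and tighten the move limit
            move *= 0.5
            if move < 1e-4:
                break
    return x, fx

# Toy usage: minimize a smooth bowl over the box [0, 1]^2.
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
grad = lambda x: np.array([2 * (x[0] - 0.3), 2 * (x[1] - 0.7)])
print(slp_adaptive(f, grad, x0=[0.9, 0.1], box=np.array([[0.0, 1.0], [0.0, 1.0]])))
```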

  10. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli. Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency...... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes.......Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...

  11. NON-LINEAR GARCH (NGARCH) MODEL FOR ESTIMATING THE VALUE AT RISK (VaR) OF THE IHSG

    Directory of Open Access Journals (Sweden)

    I KOMANG TRY BAYU MAHENDRA

    2015-06-01

    Full Text Available In investment, risk measurement is important. One of the risk measures is Value at Risk (VaR). There are many methods that can be used to estimate risk within the VaR framework; one of them is the non-linear GARCH (NGARCH) model. In this research, VaR was determined using the NGARCH model. The NGARCH model allows for asymmetric behaviour in the volatility, treating “good news” (positive returns) differently from “bad news” (negative returns). Based on the VaR calculations, the higher the confidence level and the longer the investment period, the greater the risk. The VaR determined with the NGARCH model was smaller than that obtained with the GARCH model.
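
    A minimal sketch of the volatility recursion and the resulting VaR is shown below, assuming an NGARCH(1,1) specification with normal innovations; the parameter values and the simulated return series are placeholders rather than estimates for the IHSG.

```python
import numpy as np
from scipy.stats import norm

def ngarch_var(returns, omega, alpha, beta, theta, conf=0.99, horizon=1):
    """One-step-ahead Value at Risk from an NGARCH(1,1) volatility recursion.

    sigma2[t+1] = omega + alpha * (r[t] - theta * sigma[t])**2 + beta * sigma2[t]
    The leverage parameter theta makes negative returns ("bad news") raise the
    variance more than positive returns of the same size.  Parameters here are
    placeholders; in practice they are estimated by maximum likelihood.
    """
    sigma2 = np.var(returns)                     # initialize with the sample variance
    for r in returns:
        sigma2 = omega + alpha * (r - theta * np.sqrt(sigma2)) ** 2 + beta * sigma2
    sigma = np.sqrt(sigma2 * horizon)
    return -norm.ppf(1.0 - conf) * sigma         # loss quantile (reported as a positive number)

# Toy daily return series standing in for index log returns.
rng = np.random.default_rng(4)
rets = rng.normal(0.0, 0.01, size=500)
print(ngarch_var(rets, omega=1e-6, alpha=0.08, beta=0.85, theta=0.5, conf=0.99))
```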

  12. Multistrain models predict sequential multidrug treatment strategies to result in less antimicrobial resistance than combination treatment

    DEFF Research Database (Denmark)

    Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo

    2016-01-01

    Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...... the sensitive fraction of the commensal flora. Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were...... (how frequently antibiotics are alternated in a sequential treatment) of the two drugs was dependent upon the order in which the two drugs were used. Conclusion: Sequential treatment was more effective in preventing the growth of resistant strains when compared to the combination treatment. The cycling...
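
    The kind of competitive-growth simulation described above can be caricatured with a two-strain ODE model, shown below; the growth and kill parameters, the dosing pattern, the 12-hour cycling interval and the 48-hour horizon are illustrative assumptions, not the pharmacodynamic estimates used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-strain competition model: a drug-susceptible strain S and a
# (doubly) resistant strain R share a carrying capacity K; each antimicrobial
# lowers the net growth rate of S only, and resistance carries a fitness cost.
K, r_S, r_R, kill = 1e9, 0.7, 0.3, 1.0

def drug_on(t, drug, strategy, cycle=12.0):
    if strategy == "combination":
        return True                      # both drugs given together, all the time
    return int(t // cycle) % 2 == drug   # alternate drug 0 and drug 1 every `cycle` hours

def rhs(t, y, strategy):
    S, R = y
    crowd = 1.0 - (S + R) / K
    effect = kill * (drug_on(t, 0, strategy) + drug_on(t, 1, strategy)) / 2.0
    return [(r_S * crowd - effect) * S, r_R * crowd * R]

for strategy in ("sequential", "combination"):
    sol = solve_ivp(rhs, (0.0, 48.0), [1e8, 1e3], args=(strategy,), max_step=0.25)
    S_end, R_end = sol.y[:, -1]
    print(f"{strategy:12s} resistant fraction after 48 h: {R_end / (S_end + R_end):.2f}")
```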

  13. 128-slice Dual-source Computed Tomography Coronary Angiography in Patients with Atrial Fibrillation: Image Quality and Radiation Dose of Prospectively Electrocardiogram-triggered Sequential Scan Compared with Retrospectively Electrocardiogram-gated Spiral Scan.

    Science.gov (United States)

    Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming

    2013-01-01

    Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for a second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader1 and 197/245 (80%) vs. 174/214 (81%) by reader2, as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader1 and 6/245 (2%) vs. 4/214 (2%) by reader2. The overall averaged per-patient IQ was equally good in the sequential and spiral groups (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with retrospectively

  14. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
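
    A generic re-implementation sketch of such a query is straightforward: count how often a user-specified pattern of categorical codes occurs among consecutive event windows and report its rate. This is not the tool's actual query engine, it only matches strictly consecutive events, and the event codes below are hypothetical.

```python
from typing import Sequence

def count_pattern(events: Sequence[str], pattern: Sequence[str]) -> float:
    """Rate of occurrence of a categorical pattern among consecutive windows."""
    k = len(pattern)
    windows = [tuple(events[i:i + k]) for i in range(len(events) - k + 1)]
    hits = sum(1 for w in windows if w == tuple(pattern))
    return hits / len(windows) if windows else 0.0

# Example: how often is a gaze shift immediately preceded by a pause?
events = ["speech", "pause", "gaze", "speech", "pause", "gaze", "pause", "speech"]
print(count_pattern(events, ["pause", "gaze"]))   # 2 matches out of 7 windows
```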

  15. Confirmatory Analysis of Simultaneous, Sequential, and Achievement Factors on the K-ABC at 11 Age Levels Ranging from 2 1/2 to 12 1/2 years.

    Science.gov (United States)

    Willson, Victor L.; And Others

    1985-01-01

    Presents results of confirmatory factor analysis of the Kaufman Assessment Battery for children which is based on the underlying theoretical model of sequential, simultaneous, and achievement factors. Found support for the two-factor, simultaneous and sequential processing model. (MCF)

  16. Modelling non-hydrostatic processes in sill regions

    Science.gov (United States)

    Souza, A.; Xing, J.; Davies, A.; Berntsen, J.

    2007-12-01

    We use a non-hydrostatic model to compute tidally induced flow and mixing in the region of bottom topography representing the sill at the entrance to Loch Etive (Scotland). This site is chosen since detailed measurements were recently made there. With non-hydrostatic dynamics included, the model reproduced the observed flow characteristics, e.g., hydraulic transition, flow separation and internal waves. However, when calculations were performed with the model in its hydrostatic form, significant artificial convective mixing occurred, which influenced the computed temperature and flow fields. We discuss in detail the effects of non-hydrostatic dynamics on flow over the sill and, in particular, investigate the non-linear and non-hydrostatic contributions to the modelled internal waves and internal wave energy fluxes.

  17. Characterization of the Extended-Spectrum beta-Lactamase Producers among Non-Fermenting Gram-Negative Bacteria Isolated from Burnt Patients

    Directory of Open Access Journals (Sweden)

    Mojdeh Hakemi Vala

    2013-09-01

    Full Text Available Please cite this article as: Hakemi Vala M, Hallajzadeh M, Fallah F, Hashemi A, Goudarzi H. Characterization of the extended-spectrum beta-lactamase producers among non-fermenting gram-negative bacteria isolated from burnt patients. Arch Hyg Sci 2013;2(1:1-6. Background & Aims of the Study: Extended-spectrum beta-Lactamases (ESBLs represent a major group of beta-lactamases which are responsible for resistance to oxyimino-cephalosporins and aztreonam and currently being identified in large numbers throughout the world. The objective of this study was to characterize ESBL producers among non-fermenter gram-negative bacteria isolated from burnt patients. Materials & Methods: During April to July 2012, 75 non-fermenter gram-negative bacilli were isolated from 240 bacterial cultures collected from wounds of burnt patients admitted to the Burn Unit at Shahid Motahari Hospital (Tehran, Iran. Bacterial isolation and identification was done using standard methods. Antimicrobial susceptibility testing was performed by disk diffusion method for all strains against selected antibiotics and minimum inhibitory concentration was determined by microdilution test. The ability to produce ESBL was detected through double disk synergy test among candidate strains. Results: Of 75 non-fermenter isolates, 47 Pseudomonas aeruginosa and 28 Acinetobacter baumannii were identified. The resistance of P. aeruginosa isolates to tested antibiotics in antibiogram test were 100% to cefpodoxime, 82.98% to ceftriaxone, 78.73% to imipenem, 75% to meropenem, 72.72% to gentamicin, 69.23% to ciprofloxacin and aztreonam, 67.57% to cefepime, 65.95% to ceftazidime, and 61.53% to piperacillin. The results for Acinetobacter baumannii were 100% to ceftazidime, cefepime, ciprofloxacin, imipenem, meropenem, cefpodoxime, and cefotaxim, 96.85% to gentamicin, 89.65% to ceftriaxone, 65.51% to aztreonam, and 40% to piperacillin. Double disk synergy test showed that 21 (28% of non

  18. Measuring sustainability by Energy Efficiency Analysis for Korean Power Companies: A Sequential Slacks-Based Efficiency Measure

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2014-03-01

    Full Text Available Improving energy efficiency has been widely regarded as one of the most cost-effective ways to improve sustainability and mitigate climate change. This paper presents a sequential slack-based efficiency measure (SSBM) application to model total-factor energy efficiency with undesirable outputs. This approach simultaneously takes into account the sequential environmental technology, total input slacks, and undesirable outputs for energy efficiency analysis. We conduct an empirical analysis of energy efficiency incorporating greenhouse gas emissions of Korean power companies during 2007–2011. The results indicate that most of the power companies are not performing at high energy efficiency. Sequential technology has a significant effect on the energy efficiency measurements. Some policy suggestions based on the empirical results are also presented.

  19. Sequential analysis of biochemical markers of bone resorption and bone densitometry in multiple myeloma

    DEFF Research Database (Denmark)

    Abildgaard, Niels; Brixen, K; Eriksen, E.F

    2004-01-01

    BACKGROUND AND OBJECTIVES: Bone lesions often occur in multiple myeloma (MM), but no tests have proven useful in identifying patients with increased risk. Bone marker assays and bone densitometry are non-invasive methods that can be used repeatedly at low cost. This study was performed to evaluate...... 6 weeks, DEXA-scans performed every 3 months, and skeletal radiographs were done every 6 months as well as when indicated. RESULTS: Serum ICTP and urinary NTx were predictive of progressive bone events. Markers of bone formation, bone mineral density assessments, and M component measurements were...... changes, and our data do not support routine use of sequential DEXA-scans. However, lumbar DEXA-scans at diagnosis can identify patients with increased risk of early vertebral collapses. Sequential analyses of serum ICTP and urinary NTx are useful for monitoring bone damage....

  20. Examining Age-Related Movement Representations for Sequential (Fine-Motor) Finger Movements

    Science.gov (United States)

    Gabbard, Carl; Cacola, Priscila; Bobbio, Tatiana

    2011-01-01

    Theory suggests that imagined and executed movement planning relies on internal models for action. Using a chronometry paradigm to compare the movement duration of imagined and executed movements, we tested children aged 7-11 years and adults on their ability to perform sequential finger movements. Underscoring this tactic was our desire to gain a…