WorldWideScience

Sample records for rapid sequential computed

  1. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  2. Effects of sequential and discrete rapid naming on reading in Japanese children with reading difficulty.

    Science.gov (United States)

    Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi

    2011-06-01

    To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how strongly the discrete decoding process influences reading, we performed discrete naming tasks and discrete hiragana reading tasks as well as sequential naming tasks and sequential hiragana reading tasks with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming, but no correlation was found between reading tasks and discrete naming tasks. The influence of rapid naming ability for objects and colors upon reading seemed relatively small, and multi-item processing may be at work here. In contrast, in the digit naming task there was a moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was a moderate correlation between reading tasks and sequential digit naming tasks. Digit rapid naming ability has a more direct effect on reading, while its effect on RAN is relatively limited. The degree to which rapid naming ability influences RAN and reading seems to vary according to the kind of stimuli used. An assumption about the components of RAN that influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  3. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

  4. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  5. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming with two functional inputs.
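
    The sequential bifurcation idea this abstract builds on can be sketched in a few lines: treat all inputs as one group, measure the aggregate effect of switching the group on, and recursively split any group whose effect is non-negligible. The sketch below is illustrative, not the paper's functional-input extension: the test function `y` and its coefficients are invented, and pruning whole groups is only safe under the usual assumption that all effects are non-negative.

```python
# Sequential bifurcation screening on an invented additive test function.
# Assumption: all individual effects are non-negative, so a group with
# zero aggregate effect contains no influential input and can be pruned.

def y(x):
    # Hypothetical black-box code: 8 inputs, only x[2] and x[5] matter.
    beta = [0.0, 0.0, 4.0, 0.0, 0.0, 2.5, 0.0, 0.0]
    return sum(b * xi for b, xi in zip(beta, x))

def group_effect(lo, hi, n=8):
    """Effect of switching inputs lo..hi-1 from 0 to 1, all others held at 0."""
    x_on = [1.0 if lo <= i < hi else 0.0 for i in range(n)]
    return y(x_on) - y([0.0] * n)

def bifurcate(lo, hi, threshold=1e-9):
    """Recursively split index groups whose aggregate effect is non-negligible."""
    if group_effect(lo, hi) <= threshold:
        return []                      # whole group negligible: prune it
    if hi - lo == 1:
        return [lo]                    # single influential input isolated
    mid = (lo + hi) // 2
    return bifurcate(lo, mid, threshold) + bifurcate(mid, hi, threshold)

print(bifurcate(0, 8))  # -> [2, 5]
```

    Each call to `y` stands in for one run of the expensive computer code; the recursion needs only on the order of k·log(n) runs to isolate k influential inputs out of n, which is what makes the method economical.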

  6. Time sequential single photon emission computed tomography studies in brain tumour using thallium-201

    International Nuclear Information System (INIS)

    Ueda, Takashi; Kaji, Yasuhiro; Wakisaka, Shinichiro; Watanabe, Katsushi; Hoshi, Hiroaki; Jinnouchi, Seishi; Futami, Shigemi

    1993-01-01

    Time sequential single photon emission computed tomography (SPECT) studies using thallium-201 were performed in 25 patients with brain tumours to evaluate the kinetics of thallium in the tumour and the biological malignancy grade preoperatively. After acquisition and reconstruction of SPECT data from 1 min post injection to 48 h (1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 15-20 min, followed by 4-6, 24 and 48 h), the thallium uptake ratio in the tumour versus the homologous contralateral area of the brain was calculated and compared with findings of X-ray CT, magnetic resonance imaging, cerebral angiography and histological investigations. Early uptake of thallium in tumours was related to tumour vascularity and the disruption of the blood-brain barrier. High and rapid uptake and slow reduction of thallium indicated a hypervascular malignant tumour; however, high and rapid uptake but rapid reduction of thallium indicated a hypervascular benign tumour, such as meningioma. Hypovascular and benign tumours tended to show low uptake and slow reduction of thallium. Long-lasting retention or uptake of thallium indicates tumour malignancy. (orig.)
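
    As a toy illustration of the quantity tracked in this study, the sketch below computes a tumour-to-contralateral uptake ratio over acquisition times from made-up ROI counts and applies the qualitative pattern described in the abstract (high, rapid uptake with slow washout suggesting a hypervascular malignant tumour). All counts and the 1.5/0.8 cutoffs are invented for illustration and are not clinical values.

```python
# Thallium uptake ratio: mean counts in the tumour ROI divided by mean
# counts in the homologous contralateral ROI, tracked over time.
# (time in hours, tumour ROI counts, contralateral ROI counts) - invented data

series = [(0.05, 820, 400), (0.5, 900, 410), (6, 870, 405),
          (24, 830, 400), (48, 800, 395)]

ratios = [(t, tumour / contralateral) for t, tumour, contralateral in series]
early, late = ratios[0][1], ratios[-1][1]

# Heuristic reading of the pattern reported in the abstract (an assumption,
# not a validated clinical rule): high early uptake and slow reduction.
if early > 1.5 and late / early > 0.8:
    print("high, rapid uptake with slow reduction: pattern suggests malignancy")
```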

  7. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  8. Relative resilience to noise of standard and sequential approaches to measurement-based quantum computation

    Science.gov (United States)

    Gallagher, C. B.; Ferraro, A.

    2018-05-01

    A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.

  9. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    Science.gov (United States)

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  10. A Computer Program for Simplifying Incompletely Specified Sequential Machines Using the Paull and Unger Technique

    Science.gov (United States)

    Ebersole, M. M.; Lecoq, P. E.

    1968-01-01

    This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process together with a description of the computer algorithms used to develop the program mechanization are presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and program user operating procedures are included as appendixes.
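
    The first step of the Paull and Unger process, finding compatible state pairs, can be sketched as follows: two states are incompatible if their specified outputs clash, and a pair is struck out if it implies a next-state pair that has already been struck out. The tiny incompletely specified machine below is invented for illustration; this sketch stops at pair finding and does not compute maximal compatibles or the final cover.

```python
# Pairwise compatibility for an incompletely specified sequential machine,
# in the spirit of a Paull-Unger implication table. Machine is invented.
from itertools import combinations

# machine[state][input] = (next_state, output); None means unspecified
machine = {
    "A": {0: ("B", 0), 1: ("C", None)},
    "B": {0: ("B", 0), 1: (None, 1)},
    "C": {0: ("A", None), 1: ("C", 0)},
}

def outputs_clash(s, t):
    for i in (0, 1):
        o1, o2 = machine[s][i][1], machine[t][i][1]
        if o1 is not None and o2 is not None and o1 != o2:
            return True
    return False

def compatible_pairs():
    # start from all pairs whose specified outputs agree
    pairs = {frozenset(p) for p in combinations(machine, 2)
             if not outputs_clash(*p)}
    changed = True
    while changed:          # strike out pairs implying struck-out pairs
        changed = False
        for p in list(pairs):
            s, t = tuple(p)
            for i in (0, 1):
                n1, n2 = machine[s][i][0], machine[t][i][0]
                if n1 and n2 and n1 != n2 and frozenset((n1, n2)) not in pairs:
                    pairs.discard(p)
                    changed = True
                    break
    return pairs

print(compatible_pairs())  # {A,B} and {A,C}; {B,C} clashes on input 1
```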

  11. Foreword to Special Issue on "The Difference between Concurrent and Sequential Computation'' of Mathematical Structures

    DEFF Research Database (Denmark)

    Aceto, Luca; Longo, Giuseppe; Victor, Björn

    2003-01-01

    tarpit’, and argued that some of the most crucial distinctions in computing methodology, such as sequential versus parallel, deterministic versus non-deterministic, local versus distributed disappear if all one sees in computation is pure symbol pushing. How can we express formally the difference between...

  12. Sequential decision making in computational sustainability via adaptive submodularity

    Science.gov (United States)

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: first, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios; second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
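
    The "simple myopic policies" the abstract refers to follow the greedy rule: repeatedly take the action with the largest (expected) marginal benefit given what has been selected or observed so far. The sketch below shows the deterministic special case, greedy maximization of a submodular coverage function, on an invented conservation-planning instance; the adaptive version would recompute gains after each stochastic observation.

```python
# Greedy maximization of a submodular coverage objective: pick reserve
# sites to protect as many species as possible. Sites/species are invented.

sites = {   # candidate reserve site -> species it would protect
    "s1": {"crane", "plover"},
    "s2": {"plover", "tern"},
    "s3": {"crane"},
    "s4": {"tern", "rail", "plover"},
}

def greedy(budget):
    chosen, covered = [], set()
    for _ in range(budget):
        # marginal gain of each remaining site given current coverage
        gains = {s: len(sp - covered) for s, sp in sites.items()
                 if s not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break               # diminishing returns: nothing left to gain
        chosen.append(best)
        covered |= sites[best]
    return chosen, covered

picks, covered = greedy(budget=2)
print(picks, covered)  # picks "s4" first (3 new species), then "s1"
```

    Because coverage is submodular, this myopic policy is provably within a factor (1 - 1/e) of the optimal selection; adaptive submodularity extends the same guarantee to sequential, uncertain settings.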

  13. Efficacy of sequential or simultaneous interactive computer-tailored interventions for increasing physical activity and decreasing fat intake.

    Science.gov (United States)

    Vandelanotte, Corneel; De Bourdeaudhuij, Ilse; Sallis, James F; Spittaels, Heleen; Brug, Johannes

    2005-04-01

    Little evidence exists about the effectiveness of "interactive" computer-tailored interventions and about the combined effectiveness of tailored interventions on physical activity and diet. Furthermore, it is unknown whether they should be executed sequentially or simultaneously. The purpose of this study was to examine (a) the effectiveness of interactive computer-tailored interventions for increasing physical activity and decreasing fat intake and (b) which intervening mode, sequential or simultaneous, is most effective in behavior change. Participants (N = 771) were randomly assigned to receive (a) the physical activity and fat intake interventions simultaneously at baseline, (b) the physical activity intervention at baseline and the fat intake intervention 3 months later, (c) the fat intake intervention at baseline and the physical activity intervention 3 months later, or (d) a place in the control group. Six months postbaseline, the results showed that the tailored interventions produced significantly higher physical activity scores, F(2, 573) = 11.4. For the physical activity intervention, the simultaneous mode appeared to work better than the sequential mode.

  14. Development of computer-aided software engineering tool for sequential control of JT-60U

    International Nuclear Information System (INIS)

    Shimono, M.; Akasaka, H.; Kurihara, K.; Kimura, T.

    1995-01-01

    Discharge sequential control (DSC) is an essential control function for the intermittent and pulsed discharge operation of a tokamak device, so that many subsystems may work with each other in correct order and/or synchronously. In the development of the DSC program, block diagrams of logical operations for sequential control are first illustrated in its design. Then, the logical operators and I/Os involved in the block diagrams are compiled and converted to a particular form. Since the block diagrams of the sequential control amount to about 50 sheets in the case of the JT-60 upgrade tokamak (JT-60U) high power discharge, and the above development steps have so far been performed manually, a great effort has been required for program development. In order to remove inefficiency in such development processes, a computer-aided software engineering (CASE) tool has been developed on a UNIX workstation. This paper reports how the authors designed it for the development of the sequential control programs. The tool is composed of the following three tools: (1) an automatic drawing tool, (2) an editing tool, and (3) a trace tool. This CASE tool, an object-oriented programming tool having graphical formalism, can powerfully accelerate the development cycle for the sequential control function commonly associated with pulsed discharges in a tokamak fusion device

  15. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.

  16. Computer-assisted sequential quantitative analysis of gallium scans in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Rohatgi, P.K.; Bates, H.R.; Noss, R.W.

    1985-01-01

    Fifty-one sequential gallium citrate scans were performed in 22 patients with biopsy-proven sarcoidosis. A computer-assisted quantitative analysis of these scans was performed to obtain a gallium score. The changes in gallium score were correlated with changes in serum angiotensin converting enzyme (SACE) activity and objective changes in clinical status. There was good concordance between changes in gallium score, SACE activity and clinical assessment in patients with sarcoidosis, and the gallium index was slightly superior to the SACE index in assessing the activity of sarcoidosis. (author)

  17. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output.
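
    A classical Wald SPRT of the kind this abstract investigates can be sketched as follows, for two hypothesized values of a normally distributed mean output with known variance. The thresholds use Wald's approximations A = (1 - β)/α and B = β/(1 - α); the data and parameter values below are illustrative, not taken from the paper.

```python
import math

def sprt(observations, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT: H0 mean = mu0 vs H1 mean = mu1, normal data, known sigma."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood ratio increment for N(mu1, sigma) vs N(mu0, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(observations)

decision, n = sprt([10.2, 9.8, 10.5, 10.1, 10.3, 10.4],
                   mu0=8.0, mu1=10.0, sigma=1.0)
print(decision, "after", n, "observations")  # accepts H1 after 2 observations
```

    The appeal for sequential simulation output is that the test stops as soon as the evidence is decisive, here after only two observations, rather than after a fixed sample size.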

  18. Micro-computer based control system and its software for carrying out the sequential acceleration on SMCAMS

    International Nuclear Information System (INIS)

    Li Deming

    2001-01-01

    Micro-computer based control system and its software for carrying out the sequential acceleration on SMCAMS is described. Also, the establishment of the 14 C particle measuring device and the improvement of the original power supply system are described

  19. Precise Sequential DNA Ligation on A Solid Substrate: Solid-Based Rapid Sequential Ligation of Multiple DNA Molecules

    Science.gov (United States)

    Takita, Eiji; Kohda, Katsunori; Tomatsu, Hajime; Hanano, Shigeru; Moriya, Kanami; Hosouchi, Tsutomu; Sakurai, Nozomu; Suzuki, Hideyuki; Shinmyo, Atsuhiko; Shibata, Daisuke

    2013-01-01

    Ligation, the joining of DNA fragments, is a fundamental procedure in molecular cloning and is indispensable to the production of genetically modified organisms that can be used for basic research, the applied biosciences, or both. Given that many genes cooperate in various pathways, incorporating multiple gene cassettes in tandem in a transgenic DNA construct for the purpose of genetic modification is often necessary when generating organisms that produce multiple foreign gene products. Here, we describe a novel method, designated PRESSO (precise sequential DNA ligation on a solid substrate), for the tandem ligation of multiple DNA fragments. We amplified donor DNA fragments with non-palindromic ends, and ligated the fragment to acceptor DNA fragments on solid beads. After the final donor DNA fragments, which included vector sequences, were joined to the construct that contained the array of fragments, the ligation product (the construct) was thereby released from the beads via digestion with a rare-cut meganuclease; the freed linear construct was circularized via an intra-molecular ligation. PRESSO allowed us to rapidly and efficiently join multiple genes in an optimized order and orientation. This method can overcome many technical challenges in functional genomics during the post-sequencing generation. PMID:23897972

  20. Temporal characteristics of radiologists’ and novices’ lesion detection in viewing medical images presented rapidly and sequentially

    Directory of Open Access Journals (Sweden)

    Ryoichi Nakashima

    2016-10-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms for detecting lesions in medical images with little attention, mechanisms which novices lack. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.

  1. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  2. Rapid Sequential in Situ Multiplexing with DNA Exchange Imaging in Neuronal Cells and Tissues.

    Science.gov (United States)

    Wang, Yu; Woehrstein, Johannes B; Donoghue, Noah; Dai, Mingjie; Avendaño, Maier S; Schackmann, Ron C J; Zoeller, Jason J; Wang, Shan Shan H; Tillberg, Paul W; Park, Demian; Lapan, Sylvain W; Boyden, Edward S; Brugge, Joan S; Kaeser, Pascal S; Church, George M; Agasti, Sarit S; Jungmann, Ralf; Yin, Peng

    2017-10-11

    To decipher the molecular mechanisms of biological function, it is critical to map the molecular composition of individual cells or even more importantly tissue samples in the context of their biological environment in situ. Immunofluorescence (IF) provides specific labeling for molecular profiling. However, conventional IF methods have finite multiplexing capabilities due to spectral overlap of the fluorophores. Various sequential imaging methods have been developed to circumvent this spectral limit but are not widely adopted due to the common limitation of requiring multirounds of slow (typically over 2 h at room temperature to overnight at 4 °C in practice) immunostaining. We present here a practical and robust method, which we call DNA Exchange Imaging (DEI), for rapid in situ spectrally unlimited multiplexing. This technique overcomes speed restrictions by allowing for single-round immunostaining with DNA-barcoded antibodies, followed by rapid (less than 10 min) buffer exchange of fluorophore-bearing DNA imager strands. The programmability of DEI allows us to apply it to diverse microscopy platforms (with Exchange Confocal, Exchange-SIM, Exchange-STED, and Exchange-PAINT demonstrated here) at multiple desired resolution scales (from ∼300 nm down to sub-20 nm). We optimized and validated the use of DEI in complex biological samples, including primary neuron cultures and tissue sections. These results collectively suggest DNA exchange as a versatile, practical platform for rapid, highly multiplexed in situ imaging, potentially enabling new applications ranging from basic science, to drug discovery, and to clinical pathology.

  3. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
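
    The reduction the abstract describes can be made concrete with a sequential Jacobi solver on an invented 4-state toy chain: from state 1 we reach the goal with probability 0.5 or move to state 2; from state 2 we return to state 1 with probability 0.5 or fall into a sink. The reachability probabilities x of the transient states then satisfy (I - P)x = b, where P restricts the transition matrix to transient states and b holds one-step goal probabilities.

```python
# Sequential Jacobi iteration for Ax = b, applied to reachability
# probabilities of a toy Markov chain (states invented for illustration).

def jacobi(A, b, iterations=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        # each new component uses only the previous iterate (true Jacobi)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# (I - P) over transient states {1, 2}; b = one-step goal probabilities
A = [[1.0, -0.5],
     [-0.5, 1.0]]
b = [0.5, 0.0]

x = jacobi(A, b)
print(x)  # approx [2/3, 1/3]
```

    The iteration converges here because the spectral radius of the Jacobi iteration matrix is 0.5; a parallel (e.g., CUDA) version would compute the n components of each new iterate concurrently, which is exactly the comparison the paper makes.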

  4. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  5. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) approach was used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process may have a better opportunity to reach a near global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.

  6. More power: Accelerating sequential Computer Vision algorithms using commodity parallel hardware

    NARCIS (Netherlands)

    Jaap van de Loosdrecht; K. Dijkstra

    2014-01-01

    The last decade has seen an increasing demand from the industrial field of computerized visual inspection. Applications rapidly become more complex and often with more demanding real time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer

  7. Catalysis of Silver catfish Major Hepatic Glutathione Transferase proceeds via rapid equilibrium sequential random Mechanism

    Directory of Open Access Journals (Sweden)

    Ayodele O. Kolawole

    Fish hepatic glutathione transferases are involved in the elimination of intracellular pollutants and the detoxification of organic micro-pollutants in the aquatic ecosystem. The two-substrate steady-state kinetic mechanism of the major hepatic glutathione transferase of Silver catfish (Synodontis eupterus), purified to apparent homogeneity, was explored. The enzyme was dimeric, with a monomeric size of 25.6 kDa. Initial-velocity studies and product inhibition patterns by methyl glutathione and chloride with respect to GSH-CDNB, GSH-ρ-nitrophenylacetate and GSH-ethacrynic acid all conform to a rapid equilibrium sequential random Bi Bi kinetic mechanism rather than a steady-state sequential random Bi Bi mechanism; α was 2.96 ± 0.35 for the model. The pH profile of Vmax/KM (with saturating 1-chloro-2,4-dinitrobenzene and variable GSH concentrations) showed apparent pKa values of 6.88 and 9.86. Inhibition studies as a function of inhibitor concentration show that the enzyme is a homodimeric, near-neutral GST. The enzyme poorly conjugates 4-hydroxynonenal and cumene hydroperoxide and may therefore not be involved in protection against oxidative stress. The seGST is unique and overwhelmingly shows characteristics similar to those of homodimeric class Pi GSTs, as indicated by its kinetic mechanism, substrate specificity and inhibition studies. The rate-limiting step of the reaction, probably product release, is viscosity-dependent for both macro-viscosogens and micro-viscosogens. Keywords: Silver catfish, Glutathione transferase, Steady-state, Kinetic mechanism, Inhibition

  8. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme, decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  9. Computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226 Ra concentration in soil samples using a 6 x 9-in. NaI(Tl) crystal containing a 3.25-in. deep by 3.5-in. diameter well. This gamma detection system is controlled by a minicomputer with a dual floppy disk storage medium. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC is used for data processing.

  10. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  11. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra.

  12. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  13. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  14. Accelerated Combinatorial High Throughput Star Polymer Synthesis via a Rapid One-Pot Sequential Aqueous RAFT (rosa-RAFT) Polymerization Scheme.

    Science.gov (United States)

    Cosson, Steffen; Danial, Maarten; Saint-Amans, Julien Rosselgong; Cooper-White, Justin J

    2017-04-01

    Advanced polymerization methodologies, such as reversible addition-fragmentation transfer (RAFT), allow unprecedented control over star polymer composition, topology, and functionality. However, using RAFT to produce high throughput (HTP) combinatorial star polymer libraries remains, to date, impracticable due to several technical limitations. Herein, the methodology "rapid one-pot sequential aqueous RAFT" or "rosa-RAFT," in which well-defined homo-, copolymer, and mikto-arm star polymers can be prepared in very low to medium reaction volumes (50 µL to 2 mL) via an "arm-first" approach in air within minutes, is reported. Due to the high conversion of a variety of acrylamide/acrylate monomers achieved during each successive short reaction step (each taking 3 min), the requirement for intermediary purification is avoided, drastically facilitating and accelerating the star synthesis process. The presented methodology enables RAFT to be applied to HTP polymeric bio/nanomaterials discovery pipelines, in which hundreds of complex polymeric formulations can be rapidly produced, screened, and scaled up for assessment in a wide range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Programme for test generation for combinatorial and sequential systems

    International Nuclear Information System (INIS)

    Tran Huy Hoan

    1973-01-01

    This research thesis reports on the computer-assisted search for tests aimed at failure detection in combinatorial and sequential logic circuits. To deal with complex circuits containing many modules, such as those found in large-scale integrated circuits (LSI), the author used propagation paths. He reports the development of a method which is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. The method was developed on an IBM 360/91 computer in the PL/1 language. The memory space used is limited and adjustable with respect to circuit size. Computing time is short compared to that needed by other programmes. The solution is practical and efficient for failure testing and localisation.

  16. Computer programmes for the control and data manipulation of a sequential x-ray-fluorescence spectrometer

    International Nuclear Information System (INIS)

    Spimpolo, G.F.

    1984-01-01

    Two computer programmes have been written for use on a fully automated Siemens SRS200 sequential X-ray-fluorescence spectrometer. The first of these is used to control the spectrometer via an LC200 logic controller using a Data General Nova IV minicomputer; the second is used for the on-line evaluation of the intensity results and the printout of the analytical results. This system is an alternative to the systems offered by Siemens Ltd, which consist of a Process PR310 or Digital DEC PDP1103 computer and the Siemens Spectra 310 software package. The multibatch capabilities of the programmes, with the option of measuring one sample or a tray of samples before the results are calculated, give the new programmes a major advantage over the dedicated software and, together with the elimination of human error in calculation, have resulted in increased efficiency and quality in routine analyses. A description is given of the two programmes, as well as instructions and guidelines for the user.

  17. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system also lays a basis for the study of computer science in secondary school.
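    One example of such an operation (an illustrative choice of ours, not necessarily one from the paper) is the well-known shortcut for squaring a number ending in 5: for n = 10t + 5, n² = 100·t·(t+1) + 25. As code:

    ```python
    def square_ending_in_5(n: int) -> int:
        """Square a positive integer ending in 5 using the mental shortcut:
        for n = 10*t + 5, n**2 = t*(t+1)*100 + 25."""
        if n % 10 != 5:
            raise ValueError("shortcut applies only to numbers ending in 5")
        t = n // 10
        return t * (t + 1) * 100 + 25
    ```

    For 85, the shortcut gives 8·9 = 72 followed by 25, i.e. 7225 — exactly the kind of memorizable rule the paper treats as a small algorithm.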

  18. Computation of Stackelberg Equilibria of Finite Sequential Games

    DEFF Research Database (Denmark)

    Bosanski, Branislav; Branzei, Simina; Hansen, Kristoffer Arnsfelt

    2015-01-01

    The Stackelberg equilibrium is a solution concept that describes optimal strategies to commit to: Player 1 (the leader) first commits to a strategy that is publicly announced, then Player 2 (the follower) plays a best response to the leader's choice. We study Stackelberg equilibria in finite sequential (i.e., extensive-form) games and provide new exact algorithms, approximate algorithms, and hardness results for finding equilibria for several classes of such two-player games.

  19. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Hasnain, S.B.

    1992-01-01

    Rapid advances in computing resulting from the microchip revolution have increased its applications manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence points towards parallel processing. An overview of parallel processing, with emphasis on the transputer, is also provided. A fuzzy knowledge-based controller for amination drug delivery in muscle-relaxant anesthesia on a transputer is described. 4 figs. (author)

  20. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
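    The coupling idea can be illustrated with a toy multilevel estimator. The sketch below is our illustration, not the paper's algorithm: the SDE, payoff, level count and sample sizes are arbitrary choices. It estimates E[X_T²] for an Ornstein-Uhlenbeck process by telescoping Euler discretizations whose successive levels share the same Brownian increments:

    ```python
    import math
    import random

    def mlmc_estimate(levels=5, n_samples=10000, T=1.0, x0=1.0, seed=42):
        """Toy multilevel Monte-Carlo estimate of E[X_T^2] for the
        Ornstein-Uhlenbeck SDE dX = -X dt + dW.  Level l uses an Euler
        scheme with 2**l steps; successive levels are coupled by building
        each coarse Brownian increment from two fine ones, so the
        level-difference terms have small variance."""
        rng = random.Random(seed)

        def coupled_pair(level):
            # One sample of the coupled (fine, coarse) Euler paths.
            n_fine = 2 ** level
            h_fine = T / n_fine
            h_coarse = 2.0 * h_fine
            x_fine, x_coarse, dw_pair = x0, x0, 0.0
            for step in range(n_fine):
                dw = rng.gauss(0.0, math.sqrt(h_fine))
                x_fine += -x_fine * h_fine + dw
                if level > 0:
                    dw_pair += dw
                    if step % 2 == 1:  # coarse path moves every two fine steps
                        x_coarse += -x_coarse * h_coarse + dw_pair
                        dw_pair = 0.0
            return x_fine, x_coarse

        estimate = 0.0
        for level in range(levels + 1):
            acc = 0.0
            for _ in range(n_samples):
                x_fine, x_coarse = coupled_pair(level)
                if level == 0:
                    acc += x_fine ** 2                  # plain coarsest-level estimate
                else:
                    acc += x_fine ** 2 - x_coarse ** 2  # telescoping correction term
            estimate += acc / n_samples
        return estimate
    ```

    The telescoped sum matches a plain finest-level estimator in expectation (here E[X_T²] = x0²e^(−2T) + (1 − e^(−2T))/2 ≈ 0.568 for T = 1, x0 = 1), but because the coupled differences have low variance, most of the sampling effort can stay at the cheap coarse levels.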

  2. Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion

    Science.gov (United States)

    Erickson, W. D.; Prabhu, R. K.

    1986-01-01

    A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.
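    The reduction itself is specific to hydrocarbon combustion, but the final numerical step — Newton iteration on a single scalar equation — can be sketched generically. The equilibrium example below is hypothetical, not the paper's equation system:

    ```python
    def newton_scalar(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton iteration for a single scalar equation f(x) = 0."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x -= fx / fprime(x)
        raise RuntimeError("Newton iteration failed to converge")

    # Hypothetical example: extent of dissociation x for a reaction with
    # equilibrium constant K, where K = x**2 / (1 - x).
    K = 1.0
    f = lambda x: x * x / (1.0 - x) - K
    fp = lambda x: (2.0 * x * (1.0 - x) + x * x) / (1.0 - x) ** 2
    x_eq = newton_scalar(f, fp, 0.5)
    ```

    Because each iteration only evaluates one function and its derivative, collapsing ten governing equations into one before iterating is what makes the scheme so much faster than free-energy minimization.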

  3. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  4. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de
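    The recursive flavour of such an algorithm can be illustrated with a much simpler model than the paper's cubic splines: sequential Bayesian estimation of a straight-line fit, where each observation updates the posterior in closed form. This is a sketch of the general idea only; the basis, prior and noise model are our simplifying assumptions:

    ```python
    class SequentialBayesLine:
        """Sequential Bayesian estimation of y = a + b*x with a Gaussian
        prior and known noise variance: each observation updates the
        posterior in closed form, so the fit is recursive/online."""

        def __init__(self, prior_prec=1e-6, noise_var=1.0):
            # Posterior precision matrix (2x2) and information vector.
            self.L = [[prior_prec, 0.0], [0.0, prior_prec]]
            self.h = [0.0, 0.0]
            self.noise_var = noise_var

        def update(self, x, y):
            # Rank-one update with the basis vector phi = (1, x).
            phi = (1.0, x)
            w = 1.0 / self.noise_var
            for i in range(2):
                self.h[i] += w * phi[i] * y
                for j in range(2):
                    self.L[i][j] += w * phi[i] * phi[j]

        def mean(self):
            # Posterior mean = L^{-1} h, via the closed-form 2x2 inverse.
            (a, b), (c, d) = self.L
            det = a * d - b * c
            return ((d * self.h[0] - b * self.h[1]) / det,
                    (-c * self.h[0] + a * self.h[1]) / det)
    ```

    Feeding observations one at a time and reading off the posterior mean after each is exactly the usage pattern that makes such recursive estimators attractive for online experimentation.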

  5. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots.
FastSimBac is

  6. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimate using scikit-learn's KDTree, modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC

  7. Sequential nearest-neighbor effects on computed {sup 13}C{sup {alpha}} chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Vila, Jorge A. [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States); Serrano, Pedro; Wuethrich, Kurt [The Scripps Research Institute, Department of Molecular Biology (United States); Scheraga, Harold A., E-mail: has5@cornell.ed [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States)

    2010-09-15

    To evaluate sequential nearest-neighbor effects on quantum-chemical calculations of {sup 13}C{sup {alpha}} chemical shifts, we selected the structure of the nucleic acid binding (NAB) protein from the SARS coronavirus determined by NMR in solution (PDB id 2K87). NAB is a 116-residue {alpha}/{beta} protein, which contains 9 prolines and has 50% of its residues located in loops and turns. Overall, the results presented here show that sizeable nearest-neighbor effects are seen only for residues preceding proline, where Pro introduces an overestimation, on average, of 1.73 ppm in the computed {sup 13}C{sup {alpha}} chemical shifts. A new ensemble of 20 conformers representing the NMR structure of the NAB, which was calculated with an input containing backbone torsion angle constraints derived from the theoretical {sup 13}C{sup {alpha}} chemical shifts as supplementary data to the NOE distance constraints, exhibits very similar topology and comparable agreement with the NOE constraints as the published NMR structure. However, the two structures differ in the patterns of differences between observed and computed {sup 13}C{sup {alpha}} chemical shifts, {Delta}{sub ca,i}, for the individual residues along the sequence. This indicates that the {Delta}{sub ca,i} -values for the NAB protein are primarily a consequence of the limited sampling by the bundles of 20 conformers used, as in common practice, to represent the two NMR structures, rather than of local flaws in the structures.

  8. Fast and accurate non-sequential protein structure alignment using a new asymmetric linear sum assignment heuristic.

    Science.gov (United States)

    Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi

    2016-02-01

    The three dimensional tertiary structure of a protein at near atomic level resolution provides insight alluding to its function and evolution. As protein structure decides its functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classifications of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-squared distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
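    The flavour of a greedy heuristic for the linear sum assignment problem at the core of such alignment scoring can be sketched as follows. This is an illustrative simplification of ours; SPalignNS's actual asymmetric heuristic differs:

    ```python
    def greedy_assignment(score):
        """Greedy heuristic for the linear sum assignment problem:
        repeatedly pick the highest-scoring unassigned (row, col) pair.
        Runs in O(n*m*log(n*m)) and gives a fast, often near-optimal
        matching, which is the appeal of greedy assignment heuristics."""
        pairs = sorted(
            ((score[i][j], i, j)
             for i in range(len(score))
             for j in range(len(score[i]))),
            reverse=True)
        used_rows, used_cols, match = set(), set(), {}
        for s, i, j in pairs:
            if i not in used_rows and j not in used_cols:
                match[i] = j
                used_rows.add(i)
                used_cols.add(j)
        return match
    ```

    Unlike the exact Hungarian algorithm, a greedy pass can miss the optimum, but its speed is what makes heuristics of this family attractive when millions of residue-pair scorings must be matched.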

  9. Real sequential evaluation of materials balance data with the computer program PROSA

    International Nuclear Information System (INIS)

    Bicking, U.; Golly, W.; Seifert, R.

    1991-01-01

    Material accountancy is an important tool for international nuclear safeguards. The aim is to detect a possible loss of material in a timely manner and with high probability. In this context, a computer program called PROSA (Program for Sequential Analysis of NRTA data) was developed at the Karlsruhe Nuclear Research Center. PROSA is a statistical tool for deciding, on the basis of statistical considerations, whether or not a loss of material might have occurred in a given sequence of material balances. The evaluation of the material balance data (MUF values) is carried out with statistical test procedures. In the present PROSA version 4.0, three tests (Page's test, the CUMUF test and the GEMUF test) are applied at a time. These three test procedures are the result of several years of research and are considered the most promising with respect to the probability of detecting possible losses of material as well as to the timeliness of such detection. PROSA version 4.0 is a user-friendly, menu-driven computer program which is suitable for routine field application. Data input, that is, MUF values and the measurement model, can be performed either from diskette or from the keyboard. The output is an indication of whether or not an alarm is raised, displayed either numerically or graphically; a comfortable graphical output utility is attached to PROSA version 4.0. In this presentation the theoretical concepts implemented in PROSA are explained. Furthermore, the functioning of the program is presented and the performance of PROSA is demonstrated using balance data from a real reprocessing campaign. (J.P.N.)
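    Of the three tests, Page's test is the classic one-sided CUSUM procedure. A minimal sketch on a sequence of standardized MUF values, with illustrative (not PROSA-calibrated) reference value k and decision threshold h:

    ```python
    def page_test(muf_sequence, k=0.5, h=4.0):
        """One-sided Page (CUSUM) test: accumulate deviations of the
        standardized MUF values above the reference value k, reset the
        statistic at zero, and raise an alarm once it exceeds the
        decision threshold h.  Returns the balance period of the first
        alarm, or None if no alarm is raised."""
        s = 0.0
        for period, muf in enumerate(muf_sequence, start=1):
            s = max(0.0, s + muf - k)
            if s > h:
                return period  # alarm raised in this balance period
        return None            # no alarm over the whole sequence
    ```

    Because the statistic resets at zero, the test reacts quickly to a sustained shift in the MUF sequence while staying quiet under in-control fluctuations — the timeliness property the abstract emphasizes.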

  10. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  11. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of process dynamics. Some advantages and limitations of the proposed method are discussed.

  12. Cortical responses following simultaneous and sequential retinal neurostimulation with different return configurations.

    Science.gov (United States)

    Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J

    2016-08-01

    Researchers continue to develop visual prostheses towards safer and more efficacious systems. However, limitations still exist in the number of stimulating channels that can be integrated, so spatial and time multiplexing techniques are needed to improve the performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was implanted suprachoroidally in three normally sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. The hexapolar and sequential bipolar configurations activated fewer cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing; thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.

  13. Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity

    International Nuclear Information System (INIS)

    Schouman, Thomas; Courvoisier, Delphine S.; Imholz, Benoit; Van Issum, Christopher; Scolozzi, Paolo

    2012-01-01

    Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9% according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; and (2) this method has the potential to be routinely used to standardize the post-traumatic evaluation of orbital fractures

  14. The composite sequential clustering technique for analysis of multispectral scanner data

    Science.gov (United States)

    Su, M. Y.

    1972-01-01

    The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
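A minimal sketch of the two-part idea, assuming a Euclidean distance criterion and a fixed merge radius in place of the paper's sequential variance analysis (both are simplifying assumptions here):

```python
import math

def sequential_clusters(points, radius):
    """Part 1 (sketch): sequential clustering. A point joins the nearest
    existing cluster centre if it lies within `radius`, otherwise it
    starts a new cluster; centres are updated as running means."""
    centres, members = [], []
    for p in points:
        if centres:
            d, i = min((math.dist(p, c), i) for i, c in enumerate(centres))
            if d <= radius:
                members[i].append(p)
                n = len(members[i])
                centres[i] = tuple((c * (n - 1) + x) / n
                                   for c, x in zip(centres[i], p))
                continue
        centres.append(tuple(p))
        members.append([p])
    return centres

def kmeans_refine(points, centres, iters=10):
    """Part 2 (sketch): generalized K-means refinement of the initial
    clusters produced by the sequential pass."""
    for _ in range(iters):
        groups = [[] for _ in centres]
        for p in points:
            i = min(range(len(centres)), key=lambda i: math.dist(p, centres[i]))
            groups[i].append(p)
        centres = [tuple(sum(x) / len(g) for x in zip(*g)) if g else c
                   for g, c in zip(groups, centres)]
    return centres
```

The output of the first pass seeds the second, mirroring the composite structure described in the abstract.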

  15. Rapid Determination of Plutonium Isotopes in Environmental Samples Using Sequential Injection Extraction Chromatography and Detection by Inductively Coupled Plasma Mass Spectrometry

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2009-01-01

    This article presents an automated method for the rapid determination of 239Pu and 240Pu in various environmental samples. The analytical method involves the in-line separation of Pu isotopes using extraction chromatography (TEVA) implemented in a sequential injection (SI) network, followed by detection of the isolated analytes with inductively coupled plasma mass spectrometry (ICP-MS). The method has been devised for the determination of Pu isotopes at environmentally relevant concentrations, and it has been successfully applied to the analysis of large volumes/amounts of samples, for example, 100−200 g of soil and sediment, 20 g of seaweed, and 200 L of seawater following analyte preconcentration. The investigation of the separation capability of the assembled SI system revealed that up to 200 g of soil or sediment can be treated using a column containing about 0.70 g of TEVA resin...

  16. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. It may stem from variations in the geometry of the part, from material properties, or from a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and the reliability assessment are decoupled in each cycle, which leads to rapid improvement of the design from one cycle to the next and increases computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations
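A toy illustration of the single-loop SORA idea (not the authors' formulation; the load model, the constraint-shifting heuristic and all numbers are invented): each cycle solves a deterministic problem against a shifted constraint, then checks reliability by sampling, and the shift grows until the target is met.

```python
import random

def sora_sketch(load_mean=10.0, load_sd=1.0, target_reliability=0.99,
                cycles=5, n_samples=20000, seed=1):
    """Toy SORA loop: deterministic optimization (pick the smallest
    capacity x that meets the shifted constraint) decoupled from a
    Monte Carlo reliability assessment, cycle by cycle."""
    rng = random.Random(seed)
    shift = 0.0
    for _ in range(cycles):
        # Deterministic step: smallest feasible design for shifted load.
        x = load_mean + shift
        # Reliability step: estimate P(capacity covers the random load).
        fails = sum(rng.gauss(load_mean, load_sd) > x for _ in range(n_samples))
        reliability = 1 - fails / n_samples
        if reliability >= target_reliability:
            return x, reliability
        # Shift the constraint for the next cycle (simple heuristic).
        shift += load_sd
    return x, reliability
```

The decoupling is visible in the structure: the optimization never sees random samples, and the sampling never re-optimizes.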

  17. A computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities (RASA) Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226Ra concentration in soil samples using a 6 x 9 inch NaI(Tl) crystal containing a 3.25 inch deep by 3.5 inch diameter well. This gamma detection system is controlled by a minicomputer with a dual floppy disk storage medium, line printer, and optional X-Y plotter. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC is used for data processing. The computer system is a Commodore Business Machines (CBM) Model 8032 personal computer with CBM peripherals. Control and data signals are passed via the parallel user's port to the interface unit. The analog-to-digital converter (ADC) is controlled in machine language, bootstrapped to high memory, and addressed through the BASIC program. The BASIC program is designed to be ''user friendly'' and provides the operator with several modes of operation, such as background and analysis acquisition. Any number of energy regions of interest (ROI) may be analyzed, with automatic background subtraction. Also employed in the BASIC program are the 226Ra algorithms, which use linear and polynomial regression equations for data conversion and look-up tables for radon equilibration coefficients. The optional X-Y plotter may be used with two- or three-dimensional curve programs to enhance data analysis and presentation. A description of the system is presented and typical applications are discussed.
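The ROI analysis with automatic background subtraction described above reduces, in essence, to summing channels; a minimal sketch (the channel bounds and spectra here are invented, and real systems would also scale the background by live time):

```python
def roi_net_counts(spectrum, background, rois):
    """Net counts in each energy region of interest (ROI) after
    channel-by-channel background subtraction. ROI bounds are
    inclusive channel indices (lo, hi)."""
    net = []
    for lo, hi in rois:
        gross = sum(spectrum[lo:hi + 1])   # counts in the sample spectrum
        bkg = sum(background[lo:hi + 1])   # counts in the background spectrum
        net.append(gross - bkg)
    return net
```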

  18. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  19. Rapid prototyping of an EEG-based brain-computer interface (BCI).

    Science.gov (United States)

    Guger, C; Schlögl, A; Neuper, C; Walterspacher, D; Strein, T; Pfurtscheller, G

    2001-03-01

    The electroencephalogram (EEG) is modified by motor imagery and can be used by patients with severe motor impairments (e.g., late stage of amyotrophic lateral sclerosis) to communicate with their environment. Such a direct connection between the brain and the computer is known as an EEG-based brain-computer interface (BCI). This paper describes a new type of BCI system that uses rapid prototyping to enable a fast transition of various types of parameter estimation and classification algorithms to real-time implementation and testing. Rapid prototyping is possible by using Matlab, Simulink, and the Real-Time Workshop. It is shown how to automate real-time experiments and perform the interplay between on-line experiments and offline analysis. The system is able to process multiple EEG channels on-line and operates under Windows 95 in real-time on a standard PC without an additional digital signal processor (DSP) board. The BCI can be controlled over the Internet, LAN or modem. This BCI was tested on 3 subjects whose task it was to imagine either left or right hand movement. A classification accuracy between 70% and 95% could be achieved with two EEG channels after some sessions with feedback using an adaptive autoregressive (AAR) model and linear discriminant analysis (LDA).
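The classification stage can be sketched as a two-class Fisher LDA; the two features below stand in for the AAR coefficients estimated from the EEG channels (a hedged sketch, not the authors' implementation):

```python
def _mean(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def lda_train(X0, X1):
    """Fisher LDA for two classes of 2-D feature vectors: returns the
    weight vector w and bias b of the separating hyperplane."""
    m0, m1 = _mean(X0), _mean(X1)
    # Pooled 2x2 scatter matrix of both classes.
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [Sinv[0][0] * dm[0] + Sinv[0][1] * dm[1],
         Sinv[1][0] * dm[0] + Sinv[1][1] * dm[1]]
    # Threshold at the midpoint between the class means.
    b = -(w[0] * (m0[0] + m1[0]) / 2 + w[1] * (m0[1] + m1[1]) / 2)
    return w, b

def lda_classify(w, b, x):
    """Class 0 = 'left', class 1 = 'right' (labels are illustrative)."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

In the BCI described above, such a classifier would be retrained between sessions as the feedback adapts the subject's EEG patterns.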

  20. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidths increase and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. 
This application

  1. Rapid and simultaneous determination of neptunium and plutonium isotopes in environmental samples by extraction chromatography using sequential injection analysis and ICP-MS

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2010-01-01

    This paper reports an automated analytical method for rapid and simultaneous determination of plutonium isotopes (239Pu and 240Pu) and neptunium (237Np) in environmental samples. An extraction chromatographic column packed with TrisKem TEVA® resin was incorporated in a sequential injection (SI) network. The analytical results for plutonium and neptunium in three reference materials were in agreement with the recommended or literature values at the 0.05 significance level. The developed method is suitable for the analysis of up to 10 g of soil and 20 g of seaweed samples. The extraction chromatographic separation within the SI system for a single sample takes less than 1.5 h. As compared to batchwise procedures, the developed method significantly improves the analysis efficiency, reduces the labor intensity and expedites the simultaneous determination of plutonium and neptunium as demanded in emergency actions.

  2. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  3. Development of a rapid multi-line detector for industrial computed tomography

    International Nuclear Information System (INIS)

    Nachtrab, Frank; Firsching, Markus; Hofmann, Thomas; Uhlmann, Norman; Neubauer, Harald; Nowak, Arne

    2015-01-01

    In this paper we present the development of a rapid multi-row detector optimized for industrial computed tomography. With a high frame rate, high spatial resolution and the ability to operate at up to 450 kVp, it is particularly suitable for applications such as fast acquisition of large objects, inline CT or time-resolved 4D CT. (Contains PowerPoint slides.)

  4. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    Science.gov (United States)

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  5. Sequential pattern recognition by maximum conditional informativity

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2014-01-01

    Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords: Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

  6. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is in the order of the noise level

  7. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    We propose an optimal electric energy management scheme for a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make optimal 24-hour energy management of the multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operating conditions and are illustrated with a physical interpretation of how optimal energy management is achieved in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by ancillary internal trading between the microgrids, which reduces the extra cost of unnecessary external trading by adjusting the electric energy production of the combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  8. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u)197Au projectile with 197Au target

    International Nuclear Information System (INIS)

    Nasir, Tabassum; Khan, Ehsan Ullah; Baluch, Javaid Jahan; Shafi-Ur-Rehman; Matiullah; Rafique, Muhammad

    2009-01-01

    The heavy ion interaction of 11.67 MeV/u 197Au + 197Au has been investigated using mica as a passive detector. By employing the solid state nuclear track detection technique, data on the elastic scattering as well as the inelastic reaction channel were collected. The off-line analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on an event-by-event basis. The multi-pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY, based on the procedure of internal calibration, it was possible to derive quantities such as mass transfer, total kinetic energy loss and scattering angles. (author)

  9. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  12. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    Science.gov (United States)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

    Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural-balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.

  13. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    Science.gov (United States)

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

    Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, and especially in the section on Automatic Methods of Analysis provided by chemistry…

  14. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    Science.gov (United States)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. 
Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type

  15. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  16. Computed tomographic demonstration of rapid changes in fatty infiltration of the liver

    International Nuclear Information System (INIS)

    Bashist, B.; Hecht, H.L.; Harely, W.D.

    1982-01-01

    Two alcoholic patients in whom computed tomography (CT) demonstrated reversal of fatty infiltration of the liver are described. The rapid reversibility of fatty infiltration can be useful in monitoring alcoholics with fatty livers. Focal fatty infiltration can mimic focal hepatic lesions and repeat scans can be utilized to assess changes in CT attenuation values when this condition is suspected

  17. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can degrade service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
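    The hash tree structure that enables such parallel modes can be sketched briefly. This is an illustrative Merkle tree over fixed-size chunks, not the paper's Keccak-based prototype; the chunk size and the choice of SHA3-256 are assumptions for the example. Leaf hashes are mutually independent, which is exactly what the parallel modes exploit.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # tiny chunk size so the example stays readable

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(message: bytes) -> bytes:
    chunks = [message[i:i + CHUNK] for i in range(0, len(message), CHUNK)] or [b""]
    # Leaf hashes are independent, so they can be computed in parallel.
    with ThreadPoolExecutor() as pool:
        level = list(pool.map(h, chunks))
    # Combine pairwise up the tree; an odd trailing node is promoted unchanged.
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

root = merkle_root(b"grid storage payload")
print(root.hex())
```

    A hash tree also allows a stored chunk to be verified against the root without rehashing the whole object, which is one of the security advantages over a purely sequential digest.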

  18. Synthesizing genetic sequential logic circuit with clock pulse generator.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang

    2014-05-28

    Rhythmic clocks occur widely in biological systems, where they control several aspects of cell physiology; different cell types run at different rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the future development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an inverse integer multiple of that of the genetic oscillator. An analogue of an electronic waveform-shaping circuit is constructed from a series of genetic buffers that shape the logic high/low levels of an oscillation input within a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal with a frequency consistent with that of the genetic oscillator is synthesized. A synchronous genetic counter circuit, based on the topology of digital sequential logic circuits, is triggered by this clock pulse to synthesize a clock signal whose frequency is an inverse integer multiple of the genetic oscillator's. It acts like the frequency divider in electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. By analogy with digital sequential logic circuits, genetic sequential logic circuits can be constructed with the proposed approach to generate various clock signals from an oscillation signal.
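    The two stages described above, a threshold buffer that squares up the oscillation and a counter that divides its frequency, have a simple software analogue. The sketch below is illustrative only (the threshold value and division factor are arbitrary choices, not the paper's parameters).

```python
import math

def buffer_stage(signal, threshold=0.5):
    # PWM-like buffer output: logic high whenever the oscillation exceeds
    # the threshold; the threshold sets the duty cycle.
    return [1 if s > threshold else 0 for s in signal]

def divide_by_n(clock, n=2):
    # Toggle the output on every n-th rising edge of the input clock,
    # so the output frequency is the input frequency divided by 2n.
    out, state, edges, prev = [], 0, 0, 0
    for bit in clock:
        if bit and not prev:          # rising edge detected
            edges += 1
            if edges % n == 0:
                state ^= 1
        out.append(state)
        prev = bit
    return out

t = [i / 100 for i in range(400)]             # four full periods of a 1 Hz tone
osc = [math.sin(2 * math.pi * x) for x in t]
clk = buffer_stage(osc, threshold=0.5)        # 4 clock pulses
slow = divide_by_n(clk, n=2)                  # one full slow cycle
print(sum(1 for a, b in zip(clk, clk[1:]) if b and not a), "pulses in, 1 slow cycle out")
```

    Raising the buffer threshold narrows each pulse (smaller duty cycle) without changing the pulse count, mirroring the duty-cycle control described for the genetic buffer.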

  19. Utilizing a sequential injection system furnished with an extraction microcolumn as a novel approach for executing sequential extractions of metal species in solid samples

    DEFF Research Database (Denmark)

    Chomchoei, R.; Hansen, Elo Harald; Shiowatana, J.

    2007-01-01

    This communication presents a novel approach to perform sequential extraction of elements in solid samples by using a sequential injection (SI) system incorporating a specially designed extraction microcolumn. Based on the operation of the syringe pump, different modes of extraction are potentially...... that the system entails many advantages such as being fully automated, and besides being characterised by rapidity, ease of operation and robustness, it is less prone to risks of contamination and personal errors as encountered in traditional batch systems. Moreover, improvement of the precision and accuracy...... of the chemical fractionation of metals in solids as compared with previous reports are obtained. The system ensures that extraction is performed at designated pH values. Variations of the sample weight to column volume ratio do not affect the amounts of extractable metals, nor do extraction flow rates ranging from 50

  20. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation to find an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
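    The finite-horizon policy builds on the classical age-replacement cost model. As a hedged illustration of that underlying model (not the paper's approximation), the sketch below grid-searches the long-run cost rate C(T) = (cp·R(T) + cf·F(T)) / ∫₀ᵀ R(t) dt for a Weibull lifetime; the costs and Weibull parameters are assumed values.

```python
import math

cp, cf = 1.0, 5.0            # preventive vs. failure replacement cost (assumed)
shape, scale = 2.0, 1.0      # Weibull parameters; shape > 1 models wear-out

def R(t):                    # Weibull survival function
    return math.exp(-(t / scale) ** shape)

def cost_rate(T, steps=2000):
    # Long-run expected cost per unit time under age-replacement at age T.
    dt = T / steps
    integral = sum(R(i * dt) * dt for i in range(steps))  # left Riemann sum
    return (cp * R(T) + cf * (1.0 - R(T))) / integral

Ts = [0.05 * k for k in range(1, 80)]
T_opt = min(Ts, key=cost_rate)
print(round(T_opt, 2), round(cost_rate(T_opt), 3))
```

    Replacing too early wastes preventive cost; replacing too late pays the higher failure cost, so the cost rate has an interior minimum whenever the failure rate is increasing.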

  1. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report is Volume 2 of the three volume documentation of the Seismic Module of CARES and represents the User's Manual. 14 refs

  2. [Co-composting high moisture vegetable waste and flower waste in a sequential fed operation].

    Science.gov (United States)

    Zhang, Xiangfeng; Wang, Hongtao; Nie, Yongfeng

    2003-11-01

    Co-composting of high moisture vegetable wastes (celery and cabbage) and flower wastes (carnation) was studied in a sequentially fed bed. The preliminary composting materials were celery and carnation wastes. The sequentially fed material was cabbage waste, fed every 4 days. The moisture content of the mixed materials was between 60% and 70%. Composting was done in an aerobic static bed with temperature feedback and control via aeration rate regulation. Aeration was ended when the temperature of the pile was about 40 degrees C. Changes in temperature, aeration rate, water content, organic matter, ash, pH, volume, NH4(+)-N, and NO3(-)-N during composting were studied. Results show that co-composting of high moisture vegetable wastes and flower wastes, in a sequentially fed aerobic static bed with temperature feedback and control via aeration rate regulation, can stabilize organic matter and remove water rapidly. The sequential fed operation is effective in overcoming the difficulty that traditional composting cannot be applied successfully where high moisture vegetable wastes greatly exceed flower wastes, as along the Dianchi coast.

  3. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u){sup 197}Au projectile with {sup 197}Au target

    Energy Technology Data Exchange (ETDEWEB)

    Nasir, Tabassum [Gomal University, Dera Ismail Khan (Pakistan). Dept. of Physics; Khan, Ehsan Ullah [COMSATS Institute of Information Technology (CIIT), Islamabad (Pakistan). Dept. of Physics; Baluch, Javaid Jahan [COMSATS Institute of Information Technology (CIIT), Abbottabad, (Pakistan). Dept. of Environmental Sciences; Shafi-Ur-Rehman, [PAEC, Dera Ghazi Khan (Pakistan). ISL Project; Matiullah, [PINSTECH, Nilore, Islamabad (Pakistan). Physics Div.; Rafique, Muhammad [University of Azad Jammu and Kashmir, Muzaffarabad (Pakistan). Dept. of Physics

    2009-09-15

    The heavy ion interaction of 11.67 MeV/u {sup 197}Au+ {sup 197}Au has been investigated using mica as a passive detector. By employing the Solid State Nuclear Track Detection Technique, data on elastic scattering as well as the inelastic reaction channel were collected. The off-line analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on an event-by-event basis. Multi-pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY based on the procedure of internal calibration, it was possible to derive quantities such as mass transfer, total kinetic energy loss and scattering angles. (author)

  4. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.
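    The sequential selection loop described above can be sketched abstractly: surfaces produced by perceptual grouping carry top-down relevance scores, and attention repeatedly hands the most relevant uninspected surface to a recognition module. The surfaces, scores, and the trivial match test below are illustrative stand-ins, not the system's actual data structures.

```python
import heapq

surfaces = [
    {"id": "table", "relevance": 0.3, "label": "table"},
    {"id": "mug",   "relevance": 0.9, "label": "mug"},
    {"id": "wall",  "relevance": 0.1, "label": "wall"},
]

def attention_search(surfaces, target):
    # Max-heap via negated relevance: inspect surfaces in relevance order,
    # stopping as soon as the recognizer confirms the target.
    heap = [(-s["relevance"], s["id"], s) for s in surfaces]
    heapq.heapify(heap)
    inspected = []
    while heap:
        _, _, s = heapq.heappop(heap)
        inspected.append(s["id"])
        if s["label"] == target:      # stand-in for the recognition module
            return s["id"], inspected
    return None, inspected

found, order = attention_search(surfaces, "mug")
print(found, order)
```

    The efficiency gain comes from the ordering: a well-scored target is inspected first, so the expensive recognizer runs on far fewer candidates than exhaustive search would require.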

  5. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
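    Two of the ideas the abstract highlights, grouping records to avoid all-pairs comparison and taking connected components of a similarity graph, can be sketched briefly. This is a hedged illustration, not the authors' algorithm: the blocking key (first character), the edit-distance comparator, and the threshold are all illustrative choices.

```python
from itertools import combinations

def edit_distance(a, b):
    # Standard Levenshtein distance with a rolling one-row table.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def find(parent, x):
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

records = ["jon smith", "john smith", "jane doe", "jon smyth", "j. doe"]
parent = list(range(len(records)))

# Block on the first character so only plausible pairs are compared.
blocks = {}
for idx, r in enumerate(records):
    blocks.setdefault(r[0], []).append(idx)

# Link similar records; connected components become the linked clusters.
for block in blocks.values():
    for i, j in combinations(block, 2):
        if edit_distance(records[i], records[j]) <= 2:
            parent[find(parent, i)] = find(parent, j)

clusters = {}
for idx in range(len(records)):
    clusters.setdefault(find(parent, idx), []).append(records[idx])
print(list(clusters.values()))
```

    The connected-components step is what lets transitively similar records ("jon smith" ~ "john smith" ~ "jon smyth") fall into one cluster even when some pair exceeds the threshold.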

  6. Sequential Triangle Strip Generator based on Hopfield Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Lněnička, Radim

    2009-01-01

    Roč. 21, č. 2 (2009), s. 583-617 ISSN 0899-7667 R&D Projects: GA MŠk(CZ) 1M0545; GA AV ČR 1ET100300517; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : sequential triangle strip * combinatorial optimization * Hopfield network * minimum energy * simulated annealing Subject RIV: IN - Informatics, Computer Science Impact factor: 2.175, year: 2009

  7. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  8. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  9. On the sequentiality of the multiple Coulomb-excitation process

    International Nuclear Information System (INIS)

    Dannhaeuser, G.; Boer, J. de

    1978-01-01

    This paper describes the results of 'computer experiments' illustrating the meaning of a new concept called 'sequentiality'. This concept applies to processes in which the excitation of a given state is mainly accomplished in a large number of steps, and it deals with the question of the extent to which a transition close to the ground state occurs before one between the highest excited states. (orig.) [de

  10. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
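    The key idea, combining nonlinear equilibrium relations algebraically with linear mass-balance and charge-neutrality constraints so no iterative minimization is needed, can be shown in a one-reaction miniature. This is not the 11-species method above: it treats only O2 ⇌ 2O at a fixed total pressure, with an assumed equilibrium constant Kp, and solves the resulting quadratic directly.

```python
import math

def equilibrium_pressures(Kp, P):
    # Equilibrium relation (nonlinear): p_O**2 / p_O2 = Kp
    # Mass balance (linear):            p_O2 + p_O = P
    # Substituting gives the quadratic  p_O**2 + Kp*p_O - Kp*P = 0,
    # whose positive root is taken (partial pressures must be positive).
    p_O = (-Kp + math.sqrt(Kp * Kp + 4.0 * Kp * P)) / 2.0
    p_O2 = P - p_O
    return p_O, p_O2

p_O, p_O2 = equilibrium_pressures(Kp=0.1, P=1.0)
print(round(p_O, 4), round(p_O2, 4))
```

    With more species the substitution leaves a small nonlinear system rather than a single quadratic, but the same elimination strategy is what makes the approach much faster than free-energy minimization.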

  11. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 3 of the three volume documentation of the Seismic Module of CARES. It presents three sample problems typically encountered in Soil-Structure Interaction analyses. 14 refs., 36 figs., 2 tabs

  12. Rapid development of scalable scientific software using a process oriented approach

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Vinter, Brian

    2011-01-01

    Scientific applications are often not written with multiprocessing, cluster computing or grid computing in mind. This paper suggests using Python and PyCSP to structure scientific software through Communicating Sequential Processes. Three scientific applications are used to demonstrate the features...... of PyCSP and how networks of processes may easily be mapped into a visual representation for better understanding of the process workflow. We show that for many sequential solutions, the difficulty in implementing a parallel application is removed. The use of standard multi-threading mechanisms...
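    PyCSP itself is not shown here; as a stdlib-only analogue of the Communicating Sequential Processes style the paper advocates, the sketch below builds a small pipeline of independent sequential processes that interact only by message passing over channels (queues). Because each stage owns no shared state, the network parallelizes naturally.

```python
import threading
import queue

def producer(out_ch, items):
    for x in items:
        out_ch.put(x)
    out_ch.put(None)                  # poison pill terminates the pipeline

def square(in_ch, out_ch):
    # A worker stage: read from one channel, write results to the next.
    while (x := in_ch.get()) is not None:
        out_ch.put(x * x)
    out_ch.put(None)

def collector(in_ch, results):
    while (x := in_ch.get()) is not None:
        results.append(x)

a, b = queue.Queue(), queue.Queue()   # channels between the three processes
results = []
procs = [threading.Thread(target=producer, args=(a, range(5))),
         threading.Thread(target=square, args=(a, b)),
         threading.Thread(target=collector, args=(b, results))]
for p in procs:
    p.start()
for p in procs:
    p.join()
print(results)  # [0, 1, 4, 9, 16]
```

    In PyCSP the same topology is written with process decorators and channel objects, and the explicit network of stages doubles as the visual process-workflow representation the abstract mentions.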

  13. Double-blind photo lineups using actual eyewitnesses: an experimental test of a sequential versus simultaneous lineup procedure.

    Science.gov (United States)

    Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E

    2015-02-01

    Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.

  14. Deciphering Intrinsic Inter-subunit Couplings that Lead to Sequential Hydrolysis of F1-ATPase Ring

    Science.gov (United States)

    Dai, Liqiang; Flechsig, Holger; Yu, Jin

    2017-10-01

    The rotary sequential hydrolysis of the metabolic machine F1-ATPase is a prominent feature revealing high coordination among the multiple chemical sites on the stator F1 ring, which also contributes to tight coupling between the chemical reaction and rotation of the central γ-shaft. High-speed AFM experiments discovered that sequential hydrolysis is maintained on the F1 ring even in the absence of the γ rotor. To explore how this intrinsic sequential performance arises, we computationally investigated essential inter-subunit couplings on the hexameric ring of mitochondrial and bacterial F1. We first reproduced the sequential hydrolysis schemes as experimentally detected, by simulating tri-site ATP hydrolysis cycles on the F1 ring upon kinetically imposing inter-subunit couplings to substantially promote release of the hydrolysis products. We found that it is key for certain ATP binding and hydrolysis events to facilitate the neighbor-site ADP and Pi release to support the sequential hydrolysis. The kinetically feasible couplings were then scrutinized through atomistic molecular dynamics simulations as well as coarse-grained simulations, in which we enforced targeted conformational changes for the ATP binding or hydrolysis. Notably, we detected the asymmetrical neighbor-site opening that would facilitate the ADP release upon the enforced ATP binding, and computationally captured the complete Pi release through charge hopping upon the enforced neighbor-site ATP hydrolysis. The ATP-hydrolysis-triggered Pi release revealed in the current TMD simulations confirms a recent prediction, made from statistical analyses of single-molecule experimental data, regarding the role ATP hydrolysis plays. Our studies therefore elucidate both the concerted chemical kinetics and the underlying structural dynamics of the inter-subunit couplings that lead to the rotary sequential hydrolysis of the F1 ring.

  15. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
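    The Bayes-risk machinery above generalizes the classic single-agent sequential probability ratio test (SPRT), which is worth seeing as the base case. The sketch below is that textbook test for Bernoulli observations; the hypothesized rates and the error targets alpha = beta = 0.05 are assumed values for illustration.

```python
import math

def sprt(samples, p0=0.3, p1=0.7, alpha=0.05, beta=0.05):
    # Wald's thresholds on the cumulative log-likelihood ratio.
    upper = math.log((1 - beta) / alpha)    # declare H1 above this
    lower = math.log(beta / (1 - alpha))    # declare H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Bernoulli log-likelihood ratio of observation x under H1 vs. H0.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt([True] * 10))   # a run of successes settles on H1 quickly
print(sprt([False] * 10))  # a run of failures settles on H0 quickly
```

    The multi-agent setting adds measurement costs and coordination penalties on top of this: each agent's stopping thresholds then come from minimizing a Bayes risk rather than from Wald's error-rate bounds alone.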

  16. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our view, this is mainly attributable to the required computation of first order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
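    The sequential technique contrasted here, the Ensemble Kalman Filter, reduces to a few lines in the simplest case. The sketch below is a minimal perturbed-observation analysis step for one directly observed scalar state; the ensemble size, prior spread, and observation error are illustrative assumptions, not values from the study.

```python
import random
import statistics

random.seed(0)
N = 500                                           # ensemble size
ensemble = [random.gauss(2.0, 1.0) for _ in range(N)]  # forecast ensemble
obs, obs_var = 3.0, 0.25                          # observation and its variance

# Kalman gain from ensemble statistics (observation operator H = identity).
P = statistics.variance(ensemble)                 # forecast error variance
K = P / (P + obs_var)

# Perturbed-observation update: each member assimilates a noisy copy of obs,
# which keeps the analysis ensemble spread statistically consistent.
perturbed = [obs + random.gauss(0.0, obs_var ** 0.5) for _ in range(N)]
analysis = [x + K * (y - x) for x, y in zip(ensemble, perturbed)]

print(round(statistics.mean(ensemble), 2), "->", round(statistics.mean(analysis), 2))
```

    Nothing here requires model derivatives, which is the practical advantage over variational methods: the forecast model only has to propagate the ensemble between analysis times, as a black box.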

  17. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with 14.9-MeV neutrons were obtained and compared with estimated ones. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; large discrepancies were found between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. The present study has clarified that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  18. Sequential Detection of Fission Processes for Harbor Defense

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Walston, S E; Chambers, D H

    2015-02-12

    With the large increase in terrorist activities throughout the world, the timely and accurate detection of special nuclear material (SNM) has become an extremely high priority for many countries concerned with national security. The detection of radionuclide contraband based on γ-ray emissions has been attacked vigorously with some interesting and feasible results; however, the fission process of SNM has not received as much attention due to its inherent complexity and required predictive nature. In this paper, on-line sequential Bayesian detection and parameter estimation techniques are developed to rapidly and reliably detect unknown fissioning sources with high statistical confidence.
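    The flavor of on-line sequential Bayesian detection can be conveyed with a deliberately simplified sketch: after each counting interval, update the posterior probability that a source is present given Poisson counts, and declare detection once the posterior crosses a confidence threshold. The background and source rates, prior, and threshold below are illustrative assumptions, not the paper's model.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sequential_detect(counts, bg=2.0, src=5.0, prior=0.5, threshold=0.95):
    # Recursive Bayesian update of P(source present) after each interval.
    p = prior
    for n, k in enumerate(counts, 1):
        like_present = poisson_pmf(k, bg + src)   # background plus source
        like_absent = poisson_pmf(k, bg)          # background only
        p = p * like_present / (p * like_present + (1 - p) * like_absent)
        if p >= threshold:
            return True, n, p
    return False, len(counts), p

# Counts drawn notionally from a source-present scenario (mean 7 per interval).
decision, n, p = sequential_detect([6, 8, 7, 9, 5])
print(decision, n, round(p, 4))
```

    The sequential aspect is what delivers the "timely" requirement: a strong source is declared after very few intervals, while ambiguous data simply keeps the test running instead of forcing a premature decision.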

  19. Automatic quantitative analysis of liver functions by a computer system

    International Nuclear Information System (INIS)

    Shinpo, Takako

    1984-01-01

    In the previous paper, we confirmed the clinical usefulness of hepatic clearance (hepatic blood flow), i.e., the hepatic uptake and blood disappearance rate coefficients. These were obtained from the initial slope index of each minute during a period of five frames of a hepatogram after injecting sup(99m)Tc-Sn-colloid 37 MBq. To analyze the information simply, rapidly and accurately, we developed an automatic quantitative analysis of liver functions. Information was obtained every quarter minute during a period of 60 frames of the sequential image. The sequential counts were measured for the heart, the whole liver, and the left and right lobes using a computer connected to a scintillation camera. We measured the effective hepatic blood flow as the disappearance rate multiplied by the percentage of hepatic uptake, i.e., (liver counts)/(total counts of the field). Our method automatically recorded the reappearance graph of the disappearance curve and the uptake curve on the basis of the heart and the whole liver, respectively, computed in the BASIC language. This method makes it possible to obtain the image of the initial uptake of sup(99m)Tc-Sn-colloid into the liver with a small dose. (author)

  20. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  1. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  2. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 1 of the three volume documentation of the Seismic Module of CARES. It concentrates on the theoretical basis of the system and presents modeling assumptions and limitations as well as solution schemes and algorithms of CARES. 31 refs., 6 figs

  4. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been carried out only for the minimum Euclidean distance decoder, commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low-complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity that sequential decoding promises makes it an alternative to the lattice decoder. In this work, we characterize the tradeoff between performance and complexity, measured by the error exponent and the decoding complexity, respectively, as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio required to achieve good error performance with low decoding complexity. © 2013 IEEE.
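
To illustrate how a bias term steers a sequential (stack) search, here is a toy stack decoder for the integer lattice Z^n. This is not the paper's decoder; the path metric and the candidate window per dimension are simplifying assumptions. The point it demonstrates is the tradeoff: a larger bias favors extending paths, cutting the number of node expansions at the cost of a less exhaustive search:

```python
import heapq

def lattice_stack_decode(y, bias, span=2):
    """Stack sequential decoder for Z^n (illustrative sketch).

    Partial-path metric: sum_i -(y_i - x_i)^2 + bias * len(path).
    A larger bias makes longer paths look better, so the search runs
    depth-first with fewer node expansions; a smaller bias explores
    more siblings before committing. Returns (point, expansions).
    """
    n = len(y)
    heap = [(0.0, ())]            # entries: (-metric, partial path)
    expansions = 0
    while heap:
        neg_m, path = heapq.heappop(heap)
        expansions += 1
        k = len(path)
        if k == n:                # first completed path wins
            return list(path), expansions
        center = round(y[k])      # candidate integers near y[k]
        for x in range(center - span, center + span + 1):
            m = -neg_m - (y[k] - x) ** 2 + bias
            heapq.heappush(heap, (-m, path + (x,)))
    raise RuntimeError("search exhausted")

pt, work = lattice_stack_decode([0.3, 1.6, -0.2, 2.1], bias=1.0)
```

On ambiguous inputs (coordinates near 0.5), lowering the bias visibly increases the expansion count while the decoded point's quality is unchanged here, which mirrors the complexity-versus-reliability role of the bias term described above.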

  5. Moving Synergistically Acting Drug Combinations to the Clinic by Comparing Sequential versus Simultaneous Drug Administrations.

    Science.gov (United States)

    Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P

    2018-03-01

    Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino- N -[(1 S )-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7 H -pyrrolo[2,3- d ]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1 H -pyrazolo[3,4- d ]pyrimidin-3(2 H )-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. 
Copyright © 2018 by The American

  6. epSICAR: An Emerging Patterns based Approach to Sequential, Interleaved and Concurrent Activity Recognition

    DEFF Research Database (Denmark)

    Gu, Tao; Wu, Zhanqing; Tao, Xianping

    2009-01-01

    Recognizing human activity from sensor readings has recently attracted much research interest in pervasive computing. Human activity recognition is particularly challenging because activities are often performed in not only simple (i.e., sequential), but also complex (i.e., interleaved...

  7. Constructing Multiply Substituted Arenes Using Sequential Pd(II)-Catalyzed C–H Olefination**

    Science.gov (United States)

    Engle, Keary M.; Wang, Dong-Hui; Yu, Jin-Quan

    2011-01-01

    Complementary catalytic systems have been developed in which the reactivity/selectivity balance in Pd(II)-catalyzed ortho-C–H olefination can be modulated through ligand control. This allows for sequential C–H functionalization for the rapid preparation of 1,2,3-trisubstituted arenes. Additionally, a rare example of iterative C–H activation, in which a newly installed functional group directs subsequent C–H activation, has been demonstrated. PMID:20632344

  8. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    Full Text Available The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
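
The sequential firing-rate dynamics described above can be sketched numerically. This is a generic winnerless-competition setup with made-up parameters, not the authors' fitted model; it only shows how asymmetric inhibition in generalized Lotka-Volterra equations makes units become active one after another:

```python
import numpy as np

def glv_sequence(sigma, rho, x0, dt=0.01, steps=5000):
    """Euler-integrate generalized Lotka-Volterra firing rates:
       dx_i/dt = x_i * (sigma_i - sum_j rho[i, j] * x_j).
    Asymmetric (non-reciprocal) inhibition rho yields winnerless
    competition: activity passes sequentially between units."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * x * (sigma - rho @ x)
        x = np.maximum(x, 1e-9)   # firing rates stay nonnegative
        traj[t] = x
    return traj

# Three units; each is inhibited strongly by its predecessor in the cycle.
sigma = np.array([1.0, 1.0, 1.0])
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])
traj = glv_sequence(sigma, rho, x0=[1.0, 0.02, 0.01])
peak_order = np.argsort(np.argmax(traj, axis=0))  # order in which units peak
```

In the full model above, a Bayesian online-inference readout of which sequence is active would sit on top of dynamics of this kind; here only the rate dynamics are sketched.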

  9. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are examined in particular. Finally, cautions regarding sequential designs are considered, especially in relation to utilitarianism

  10. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  11. Computer-facilitated rapid HIV testing in emergency care settings: provider and patient usability and acceptability.

    Science.gov (United States)

    Spielberg, Freya; Kurth, Ann E; Severynen, Anneleen; Hsieh, Yu-Hsiang; Moring-Parris, Daniel; Mackenzie, Sara; Rothman, Richard

    2011-06-01

    Providers in emergency care settings (ECSs) often face barriers to expanded HIV testing. We undertook formative research to understand the potential utility of a computer tool, "CARE," to facilitate rapid HIV testing in ECSs. Usability and acceptability of the computer tool were assessed among 35 adult patients, and provider focus groups were held, at two ECSs in Washington State and Maryland. The computer tool was usable by patients of varying computer literacy. Patients appreciated the tool's privacy and lack of judgment and their ability to reflect on HIV risks and create risk reduction plans. Staff voiced concerns regarding ECS-based HIV testing generally, including resources for follow-up of newly diagnosed people. Computer-delivered HIV testing support was acceptable and usable among low-literacy populations in two ECSs. Such tools may help circumvent some practical barriers associated with routine HIV testing in busy settings, though linkages to care will still be needed.

  12. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  13. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    Directory of Open Access Journals (Sweden)

    Kanghoon Jung

    2014-08-01

    Full Text Available A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.
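
A minimal simulation in the spirit of the mechanisms proposed above can make the two ingredients concrete. All distributions and parameters here are illustrative assumptions, not the authors' fitted models: a two-state timing process (short within-burst gaps, occasional heavy-tailed rest gaps) plus a preferential-attachment choice rule:

```python
import random

def forage(n_events=2000, p_rest=0.05, seed=0):
    """Toy dual-state forager with preferential attachment.

    Timing: the active state emits rapid choices (short exponential
    gaps); with probability p_rest a long inactive gap is drawn from
    a heavy-tailed (Pareto) distribution, producing bursts.
    Choice: each of 4 foods is picked with probability proportional
    to (1 + times already chosen), so early preferences amplify."""
    rng = random.Random(seed)
    counts = [1, 1, 1, 1]            # pseudocounts over the 4 foods
    times, t = [], 0.0
    for _ in range(n_events):
        if rng.random() < p_rest:
            t += rng.paretovariate(1.5)  # long rest gap, heavy tail
        else:
            t += rng.expovariate(10.0)   # short within-burst gap
        food = rng.choices(range(4), weights=counts)[0]
        counts[food] += 1
        times.append(t)
    return times, counts

times, counts = forage()
```

The inter-event gap distribution from such a process has a few gaps far above the mean (the heavy tail), and the final choice counts are typically skewed toward whichever food got an early lead.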

  14. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    Science.gov (United States)

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498

  15. Program completion of a web-based tailored lifestyle intervention for adults: differences between a sequential and a simultaneous approach.

    Science.gov (United States)

    Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J

    2012-03-08

    Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more non-completers in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; simultaneous condition: OR = 1.04) and by a second predictor (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout

  16. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although the leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean-0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean-0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error; thus it can be used to determine a stopping criterion for sequential sampling of metamodels
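
The mean and variance that such a validation criterion inspects are the standard kriging predictive moments. Below is a minimal sketch with an assumed RBF covariance and a zero prior mean (not the paper's implementation): the predictive variance collapses at sampled points and grows away from them, which is exactly the quantity a sequential sampling loop would monitor before stopping:

```python
import numpy as np

def kriging(X, y, Xs, length=1.0, noise=1e-8):
    """Posterior mean and variance of a simple zero-mean kriging /
    Gaussian-process model with an RBF kernel (1-D inputs).
    mean = k(Xs,X) K^-1 y,  var = 1 - diag(k(Xs,X) K^-1 k(X,Xs))."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))     # regularized Gram matrix
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)        # clip tiny negative round-off

X = np.array([0.0, 1.0, 2.0])                # sampled design points
y = np.sin(X)
mean, var = kriging(X, y, np.array([0.0, 0.5, 3.0]))
```

At the sampled point 0.0 the variance is essentially zero; it rises between samples and is largest at the extrapolation point 3.0, so a variance-based stopping rule would request more samples there.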

  17. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  18. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  19. An Undergraduate Survey Course on Asynchronous Sequential Logic, Ladder Logic, and Fuzzy Logic

    Science.gov (United States)

    Foster, D. L.

    2012-01-01

    For a basic foundation in computer engineering, universities traditionally teach synchronous sequential circuit design, using discrete gates or field programmable gate arrays, and a microcomputers course that includes basic I/O processing. These courses, though critical, expose students to only a small subset of tools. At co-op schools like…

  20. 128-slice Dual-source Computed Tomography Coronary Angiography in Patients with Atrial Fibrillation: Image Quality and Radiation Dose of Prospectively Electrocardiogram-triggered Sequential Scan Compared with Retrospectively Electrocardiogram-gated Spiral Scan.

    Science.gov (United States)

    Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming

    2013-01-01

    Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for a second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader 1 and 197/245 (80%) vs. 174/214 (81%) by reader 2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader 1 and 6/245 (2%) vs. 4/214 (2%) by reader 2. Overall averaged IQ per patient was equally good in the sequential and spiral groups (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with retrospectively

  1. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later choice phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and the SCM uses latencies from the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated with this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
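
The DCM's prediction of when R should be chosen can be sketched with a simplified sequential-encounter version of Charnov's zero-one rule (the real model works with encounter rates; the reward/delay numbers here are hypothetical, loosely echoing an A-faster-than-B design):

```python
def diet_choice(options, search_time):
    """Charnov-style zero-one inclusion rule (illustrative sketch).

    options: (gain, handling_time) pairs, assumed encountered equally
    often; search_time is the expected time to find the next item.
    An option is kept iff its profitability gain/handling exceeds the
    intake rate achievable from the more profitable items alone;
    anything less profitable should be rejected (the 'R' response).
    """
    ranked = sorted(options, key=lambda o: o[0] / o[1], reverse=True)
    included, gain, time = [], 0.0, float(search_time)
    for g, h in ranked:
        if g / h > gain / time:   # profitability vs. current overall rate
            included.append((g, h))
            gain += g
            time += h
        else:
            break                 # all remaining items are less profitable
    return included

# Option A: reward 1 after 2 s; option B: reward 1 after 8 s; 5 s search.
# B's profitability (0.125/s) is below the rate from A alone (1/7 per s),
# so the rate-maximizing bird should pick R over B.
chosen = diet_choice([(1.0, 2.0), (1.0, 8.0)], search_time=5.0)
```

With a longer search time (scarcer prey), the same rule flips to including B as well, which is the classic DCM prediction that rejection depends on background rate, not on B's value alone.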

  2. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects.

    Science.gov (United States)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-11-21

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  3. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects

    International Nuclear Information System (INIS)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-01-01

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  4. Sequential determination of fat- and water-soluble vitamins in Rhodiola imbricata root from trans-Himalaya with rapid resolution liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Tayade, Amol B; Dhar, Priyanka; Kumar, Jatinder; Sharma, Manu; Chaurasia, Om P; Srivastava, Ravi B

    2013-07-30

    A rapid method was developed to determine both types of vitamins in Rhodiola imbricata root for the accurate quantification of free vitamin forms. Rapid resolution liquid chromatography/tandem mass spectrometry (RRLC-MS/MS) with an electrospray ionization (ESI) source operating in multiple reaction monitoring (MRM) mode was optimized for the sequential analysis of nine water-soluble vitamins (B1, B2, two B3 vitamins, B5, B6, B7, B9, and B12) and six fat-soluble vitamins (A, E, D2, D3, K1, and K2). Both types of vitamins were separated by ion-suppression reversed-phase liquid chromatography with gradient elution within 30 min and detected in positive ion mode. Deviations in the intra- and inter-day precision were always below 0.6% and 0.3% for recoveries and retention time. Intra- and inter-day relative standard deviation (RSD) values of retention time for water- and fat-soluble vitamins ranged between 0.02-0.20% and 0.01-0.15%, respectively. The mean recoveries ranged between 88.95% and 107.07%. The sensitivity and specificity of the method gave limits of detection (LOD) and limits of quantitation (LOQ) for the analytes at ppb levels. Linearity was achieved at 100-1000 ppb for fat-soluble and 10-100 ppb for water-soluble vitamins. Vitamin B-complex and vitamin E were detected as the principal vitamins in the root of this adaptogen, which would be of great interest for developing novel foods from the Indian trans-Himalaya. Copyright © 2013 Elsevier B.V. All rights reserved.
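
The LOD/LOQ figures of merit referenced above are conventionally derived from a calibration line. Here is a sketch using the common 3.3s/slope and 10s/slope definitions; the calibration points are hypothetical (a made-up 10-100 ppb series), not the paper's data:

```python
import numpy as np

def calibration_stats(conc, signal):
    """Least-squares calibration line and detection limits:
       LOD = 3.3 * s / slope,  LOQ = 10 * s / slope,
    where s is the standard error of the residuals about the line."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # n-2 dof
    return slope, 3.3 * s / slope, 10 * s / slope

# Hypothetical 10-100 ppb calibration for a water-soluble vitamin
conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
signal = np.array([105.0, 248.0, 502.0, 751.0, 998.0])
slope, lod, loq = calibration_stats(conc, signal)
```

With a near-linear response like this, both limits land well below the lowest calibration level, consistent with the ppb-level sensitivity claimed above.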

  5. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.

  6. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    Science.gov (United States)

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the feasibility of anastomosing the right coronary system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and two post-operative models. Flow rates in the coronary artery and grafts were obtained. The hemodynamic parameters, including wall shear stress (WSS) and oscillatory shear index (OSI), were also examined. The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows favorable hemodynamic modifications, which may enhance the long-term patency of grafts. The anterior segments of the sequential graft have better long-term patency than the posterior segments. Given a rational spatial position of the heart vessels, the last anastomosis of the sequential graft should be connected to the main branch.
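
The oscillatory shear index used above has a standard definition, OSI = 0.5 * (1 - |time-averaged WSS vector| / time-average of |WSS|). A minimal sketch on synthetic shear vectors (not the simulated CCTA model) shows the two extremes:

```python
import numpy as np

def oscillatory_shear_index(tau):
    """OSI from a time series of wall-shear-stress vectors, shape (T, 3):
       OSI = 0.5 * (1 - |mean(tau)| / mean(|tau|)).
    0 = purely unidirectional shear; 0.5 = fully reversing shear."""
    mag_of_mean = np.linalg.norm(tau.mean(axis=0))
    mean_of_mag = np.linalg.norm(tau, axis=1).mean()
    return 0.5 * (1.0 - mag_of_mean / mean_of_mag)

steady = np.tile([1.0, 0.0, 0.0], (100, 1))              # unidirectional
reversing = np.array([[1.0, 0, 0], [-1.0, 0, 0]] * 50)   # flips each step
```

Steady shear gives OSI = 0 and fully reversing shear gives OSI = 0.5; regions combining low WSS with high OSI are the ones flagged above as unfavorable for graft patency.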

  7. Sequential MR images of uterus after Gd-DTPA injection

    International Nuclear Information System (INIS)

    Okada, Susumu; Kato, Tomoyasu; Yamada, Keiko; Sawano, Seishi; Yamashita, Takashi; Hirai, Yasuo; Hasumi, Katsuhiko

    1993-01-01

    To investigate the sequential changes in signal intensity (SI) of normal and abnormal uteri, T1-weighted images were taken repeatedly after the injection of gadolinium diethylenetriaminepentaacetic acid (Gd-DTPA). Six volunteers and 19 patients with known uterine body malignancy (18 carcinomas, one carcinosarcoma) were examined. The results in volunteers were as follows. In the secretory phase, SI of the endometrium was stronger in the late images than in the early ones, whereas in the proliferative phase, SI was stronger in the early images. SI of the myometrium decreased rapidly and there were no differences in SI between menstrual phases. In 17 of 18 endometrial carcinomas, the tumors showed hypointensity relative to the myometrium, and the contrast between the tumor and the myometrium was better in the early images. In the remaining two cases, the tumor showed hyperintensity and the contrast was better in the late images. After the injection of Gd-DTPA, the endometrium appeared differently according to the menstrual cycle in normal volunteers, and the appearance of uterine structures and endometrial malignant tumors changed sequentially. These findings must be kept in mind when evaluating uterine diseases by Gd-DTPA enhanced MRI. (author)

  8. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    Science.gov (United States)

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain-computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented with sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  9. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  10. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
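
    For readers outside the field, the sequential product studied in this literature is the standard one on quantum effects (operators A with 0 ≤ A ≤ I):

```latex
A \circ B \;=\; A^{1/2}\, B\, A^{1/2}
```

    The result A ∘ B is again an effect, and the product is generally non-commutative, reflecting the order in which the two measurements are performed.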

  11. Efficient Mining and Detection of Sequential Intrusion Patterns for Network Intrusion Detection Systems

    Science.gov (United States)

    Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli

    In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we place more reliance on these computing infrastructures, we also face threats of network intrusion and new forms of undesirable IT-based activity. Hence, network security has become an extremely important issue, closely connected with homeland security, business transactions, and people's daily lives. Accurate and efficient intrusion detection technologies are required to safeguard network systems and the critical information transmitted through them. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and a traffic data set generated by a private LAN testbed show promising results, with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusions.

  12. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. 
The coverage of the initial and boost target volumes in the simultaneously optimized ...
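
    As an illustration of the coupling that simultaneous optimization introduces, here is a minimal sketch in Python (toy dose-deposition coefficients and prescriptions, not the authors' TPS data): two beamlet-weight vectors, one per plan, are updated by gradient descent on quadratic target-dose terms, while the critical-structure tolerance is penalized on the sum of the two plans rather than split between them.

```python
# Toy simultaneous optimization of an initial plan and a boost plan.
# All numbers are hypothetical; the real method uses TPS-computed DDCs.

DDC = [[1.0, 0.2],   # dose-deposition coefficients: target voxel 1
       [0.2, 1.0],   # target voxel 2
       [0.4, 0.4]]   # critical-structure voxel
D_INIT, D_BOOST, TOL = 50.0, 20.0, 40.0  # prescriptions and summed tolerance (Gy)

def dose(w):
    """Dose per voxel: DDC matrix times beamlet weights."""
    return [sum(c * x for c, x in zip(row, w)) for row in DDC]

def step(wi, wb, lr=0.002, lam=10.0):
    """One simultaneous gradient step on both plans' quadratic cost."""
    di, db = dose(wi), dose(wb)
    over = max(0.0, di[2] + db[2] - TOL)     # overdose of the SUMMED plans
    gi, gb = [0.0, 0.0], [0.0, 0.0]
    for j in range(2):
        for v in range(2):                   # quadratic target-dose terms
            gi[j] += 2.0 * (di[v] - D_INIT) * DDC[v][j]
            gb[j] += 2.0 * (db[v] - D_BOOST) * DDC[v][j]
        # the shared tolerance term couples the two plans' gradients
        gi[j] += 2.0 * lam * over * DDC[2][j]
        gb[j] += 2.0 * lam * over * DDC[2][j]
    wi = [max(0.0, w - lr * g) for w, g in zip(wi, gi)]
    wb = [max(0.0, w - lr * g) for w, g in zip(wb, gb)]
    return wi, wb

wi, wb = [1.0, 1.0], [1.0, 1.0]              # initial and boost beamlet weights
for _ in range(5000):
    wi, wb = step(wi, wb)
```

    In this toy problem the shared penalty pulls both plans slightly below prescription so that the summed critical-structure dose settles near the tolerance, a trade-off that independently optimized plans with proportionally split tolerances must find by manual iteration, as the abstract describes.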

  13. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    Science.gov (United States)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color-sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high-performance professional applications such as computer monitors and electronic cinema. A fast-responding light-valve is clearly essential for a well-performing system. The response speed of transmissive LC light-valves has so far been marginal for good color rendition. Recently, the Sevchenko Institute has made some very fast reflective LC cells, which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector-emulation testbed. In our presentation we describe our highly efficient color-sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  14. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    Science.gov (United States)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
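
    The time-dependent process itself is simple to simulate; the following Python sketch (a naive finite-attempt version for disks, not the paper's exact-saturation algorithm for polygons) shows the basic accept/reject loop:

```python
import math
import random

def rsa_disks(box=10.0, radius=0.5, attempts=20000, seed=1):
    """Naive finite-time RSA of equal hard disks in a square box.

    Each trial drops a disk center uniformly at random and keeps it only if
    the disk overlaps no previously accepted disk.  This is the time-dependent
    process whose infinite-time limit is the saturation state; reaching that
    limit exactly requires the kind of algorithm described in the paper.
    """
    rng = random.Random(seed)
    placed = []
    min_d2 = (2.0 * radius) ** 2          # squared center distance at contact
    for _ in range(attempts):
        x = rng.uniform(radius, box - radius)   # keep disks fully inside
        y = rng.uniform(radius, box - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_d2 for px, py in placed):
            placed.append((x, y))
    return placed

disks = rsa_disks()
coverage = len(disks) * math.pi * 0.5 ** 2 / 10.0 ** 2
```

    After 20,000 attempts in a 10×10 box the coverage is still measurably below the known disk saturation density of about 0.547, which is why saturation densities were traditionally estimated by extrapolation and why an algorithm that reaches the infinite-time limit in finite computational time is valuable.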

  15. Sinonasal carcinoma presenting as chronic sinusitis and sequential bilateral visual loss

    Directory of Open Access Journals (Sweden)

    Wei-Yu Chiang

    2015-01-01

    Full Text Available Sinonasal undifferentiated carcinoma-related rhinogenic optic neuropathy is rare and may lead to visual loss. To the best of our knowledge, this is the first report of bilateral sequential visual loss induced by this etiology. It is important to differentiate between chronic sinusitis and malignancy on the basis of specific findings on magnetic resonance images. Surgical decompression with multidisciplinary therapy, including steroids, chemotherapy, and radiotherapy, is indicated. However, no visual improvement was noted in this case, emphasizing the rapid disease progression and importance of early diagnosis and treatment.

  16. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  17. Mining Sequential Update Summarization with Hierarchical Text Analysis

    Directory of Open Access Journals (Sweden)

    Chunyun Zhang

    2016-01-01

    Full Text Available The outbreak of unexpected news events such as major accidents or natural disasters creates an information access problem where traditional approaches fail. News of these events is typically sparse early on and redundant later. Hence, it is very important to provide individuals with timely updates of important information during an incident's development, especially in wireless and mobile Internet of Things (IoT) applications. In this paper, we define the problem of sequential update summarization extraction and present a new hierarchical update mining system which can broadcast useful, new, and timely sentence-length updates about a developing event. The new system proposes a novel method which incorporates techniques from topic-level and sentence-level summarization. To evaluate the performance of the proposed system, we apply it to the sequential update summarization task of the temporal summarization (TS) track at the Text Retrieval Conference (TREC) 2013 and compute four measurements of the update mining system: expected gain, expected latency gain, comprehensiveness, and latency comprehensiveness. Experimental results show that the proposed method performs well.

  18. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
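
    The telescoping identity underlying MLMC, written for a quantity of interest f at discretization levels h_0 > h_1 > ⋯ > h_L, is:

```latex
\mathbb{E}_{h_L}[f] \;=\; \mathbb{E}_{h_0}[f] \;+\; \sum_{l=1}^{L} \bigl( \mathbb{E}_{h_l}[f] - \mathbb{E}_{h_{l-1}}[f] \bigr)
```

    The variance of each correction term shrinks as the levels refine, so fewer samples are needed at the expensive fine discretizations; the SMC construction described here estimates these terms when i.i.d. sampling at each level is unavailable.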

  19. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  20. Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian

    2014-01-01

    This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer-level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with ... per second on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.

  1. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Biersack, H J; Knopp, R; Dahlem, R; Winkler, C [Bonn Univ. (Germany, F.R.). Inst. fuer Klinische und Experimentelle Nuklearmedizin; Thelen, M [Bonn Univ. (Germany, F.R.). Radiologische Klinik; Schulz, D; Schmidt, R [Bonn Univ. (Germany, F.R.). Chirurgische Klinik und Poliklinik

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension.

  2. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    International Nuclear Information System (INIS)

    Biersack, H.J.; Knopp, R.; Dahlem, R.; Winkler, C.; Thelen, M.; Schulz, D.; Schmidt, R.

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension. (orig.) [de

  3. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    Science.gov (United States)

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, a neural network, part of the larger field of soft computing, is exploited for modelling of the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the designed schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
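
    The global-search-plus-local-refinement pattern described above can be sketched generically. In the toy below, everything is illustrative: plain random search stands in for the genetic algorithm, a shrinking compass (pattern) search stands in for pattern search/SQP, and the Rastrigin function stands in for the trained network's residual error. It shows why the hybrid helps: the stochastic phase escapes local minima, and the deterministic phase converges rapidly once in the right basin.

```python
import math
import random

def objective(x):
    """Rastrigin function: a multimodal stand-in for a residual-error surface."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def global_phase(f, dim, trials, rng):
    """Coarse stochastic exploration, standing in for the genetic algorithm."""
    best, fbest = None, float("inf")
    for _ in range(trials):
        x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best

def local_phase(f, x, h=0.5, shrink=0.5, tol=1e-6):
    """Shrinking compass search, standing in for the local SQP refinement."""
    fx = f(x)
    while h > tol:
        improved = False
        for i in range(len(x)):          # try a +/- step along each coordinate
            for d in (h, -h):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:                  # no descent at this scale: refine it
            h *= shrink
    return x, fx

rng = random.Random(0)
x0 = global_phase(objective, 2, 2000, rng)
x_star, f_star = local_phase(objective, list(x0))
```

    The local phase only ever decreases the objective, so the hybrid is at least as good as the stochastic phase alone while converging much faster near a minimum.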

  4. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    Full Text Available We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  5. Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization

    Science.gov (United States)

    Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.

    2008-01-01

    The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% of entrapment efficiency. These novel paclitaxel nanoparticles were stable at 4°C over three months and in PBS at 37°C over 102 hours as measured by physical stability. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have similar anticancer activities compared to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder comprised only of PX BTM NPs in water could be rapidly rehydrated with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential Simplex Optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929

  6. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
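
    A minimal sketch of the decision logic in such a controller, assuming made-up count rates rather than the monitors' actual parameters, is Wald's sequential probability-ratio test applied to a stream of Poisson counting intervals:

```python
import math
import random

def sprt_poisson(counts, dt, bkg_rate, sig_rate, alpha=0.001, beta=0.001):
    """Wald SPRT on Poisson counts: background rate vs. elevated (alarm) rate."""
    upper = math.log((1.0 - beta) / alpha)   # cross: declare "alarm"
    lower = math.log(beta / (1.0 - alpha))   # cross: declare "background"
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Poisson log-likelihood ratio contributed by one counting interval
        llr += k * math.log(sig_rate / bkg_rate) - (sig_rate - bkg_rate) * dt
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)

def poisson(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
bkg, sig, dt = 20.0, 40.0, 0.1   # counts/s background, alarm rate, interval (s)
quiet = [poisson(bkg * dt, rng) for _ in range(300)]
hot = [poisson(sig * dt, rng) for _ in range(300)]
```

    Because the log-likelihood ratio typically crosses a threshold after only a handful of 0.1 s intervals, the expected decision time is far below the fixed-interval worst case, which is the effect behind the 50 s to 18 s reduction cited in the abstract.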

  7. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  8. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  9. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved
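
    Diagnosticity here is the standard ratio from the eyewitness literature (standard usage, not restated in the abstract):

```latex
d \;=\; \frac{P(\text{suspect identified} \mid \text{culprit present})}{P(\text{suspect identified} \mid \text{culprit absent})}
```

    A higher d means that an identification of the suspect is stronger evidence of guilt, which is why the reported increase in diagnosticity for later lineup positions matters for policy.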

  10. Sequential reduction of external networks for the security- and short circuit monitor in power system control centers

    Energy Technology Data Exchange (ETDEWEB)

    Dietze, P [Siemens A.G., Erlangen (Germany, F.R.). Abt. ESTE

    1978-01-01

    For the evaluation of the effects of switching operations or the simulation of line, transformer, and generator outages, the influence of interconnected neighboring networks is modelled by network equivalents in the process computer. The basic passive conductivity model is produced by sequential reduction and adapted to fit the active network behavior. The reduction routine uses the admittance matrix, sparse techniques, and optimal ordering; it is suitable for process-computer applications.

  11. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-06-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles.

  12. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    International Nuclear Information System (INIS)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-01-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles

  13. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.

  14. Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control

    Directory of Open Access Journals (Sweden)

    Wodziński Marek

    2017-06-01

    Full Text Available This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and finding the shortest path in a directed acyclic graph, using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of sequential data and a quadrocopter was the controlled object. The study includes creation of a conceptual model and practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of the chosen gestures and a comparison of computation time between the CPU- and GPU-based solutions.

  15. The relative timing between eye and hand rapid sequential pointing is affected by time pressure, but not by advance knowledge

    NARCIS (Netherlands)

    Deconinck, F.; van Polanen, V.; Savelsbergh, G.J.P.; Bennett, S.

    2011-01-01

    The present study examined the effect of timing constraints and advance knowledge on eye-hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were ...

  16. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  17. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.

  18. Addressing unmet need for HIV testing in emergency care settings: a role for computer-facilitated rapid HIV testing?

    Science.gov (United States)

    Kurth, Ann E; Severynen, Anneleen; Spielberg, Freya

    2013-08-01

    HIV testing in emergency departments (EDs) remains underutilized. The authors evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Nonacute adult ED patients were randomly assigned to a computer tool (CARE) and rapid HIV testing before a standard visit (n = 258) or to a standard visit (n = 259) with chart access. The authors assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-White and 58% male; median age was 37 years. In the CARE arm, nearly all (251/258) of the patients completed the session and received HIV results; four declined to consent to the test. HIV risks were reported by 54% of users; one participant was confirmed HIV-positive, and two were confirmed false-positive (seroprevalence 0.4%, 95% CI [0.01, 2.2]). Half (55%) of the patients preferred computerized rather than face-to-face counseling for future HIV testing. In the standard arm, one HIV test and two referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches.

  19. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  20. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  1. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of the target in 15 eyes (39%) in the simultaneous surgery group and in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. Nor was a significant difference found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  2. In-situ sequential laser transfer and laser reduction of graphene oxide films

    Science.gov (United States)

    Papazoglou, S.; Petridis, C.; Kymakis, E.; Kennou, S.; Raptis, Y. S.; Chatzandroulis, S.; Zergioti, I.

    2018-04-01

    Achieving high quality transfer of graphene on selected substrates is a priority in device fabrication, especially where drop-on-demand applications are involved. In this work, we report an in-situ, fast, simple, and one step process that resulted in the reduction, transfer, and fabrication of reduced graphene oxide-based humidity sensors, using picosecond laser pulses. By tuning the laser illumination parameters, we managed to implement the sequential printing and reduction of graphene oxide flakes. The overall process lasted only a few seconds compared to a few hours that our group has previously published. DC current measurements, X-Ray Photoelectron Spectroscopy, X-Ray Diffraction, and Raman Spectroscopy were employed in order to assess the efficiency of our approach. To demonstrate the applicability and the potential of the technique, laser printed reduced graphene oxide humidity sensors with a limit of detection of 1700 ppm are presented. The results demonstrated in this work provide a selective, rapid, and low-cost approach for sequential transfer and photochemical reduction of graphene oxide micro-patterns onto various substrates for flexible electronics and sensor applications.

  3. Accuracy of using computer-aided rapid prototyping templates for mandible reconstruction with an iliac crest graft

    Science.gov (United States)

    2014-01-01

    Background This study aimed to evaluate the accuracy of surgical outcomes in free iliac crest mandibular reconstructions that were carried out with virtual surgical plans and rapid prototyping templates. Methods This study evaluated eight patients who underwent mandibular osteotomy and reconstruction with free iliac crest grafts using virtual surgical planning and designed guiding templates. Operations were performed using the prefabricated guiding templates. Postoperative three-dimensional computer models were overlaid and compared with the preoperatively designed models in the same coordinate system. Results Compared to the virtual osteotomy, the mean distance error of the actual mandibular osteotomy was 2.06 ± 0.86 mm. Compared to the virtual harvested grafts, the mean volume error of the actual harvested grafts was 1412.22 ± 439.24 mm3 (9.12% ± 2.84%). The mean volume error between the actual harvested grafts and the shaped grafts was 2094.35 ± 929.12 mm3 (12.40% ± 5.50%). Conclusions The use of computer-aided rapid prototyping templates for virtual surgical planning appears to positively influence the accuracy of mandibular reconstruction. PMID:24957053

  4. Decision-making in research tasks with sequential testing.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    Full Text Available BACKGROUND: In a recent controversial essay, published by JPA Ioannidis in PLoS Medicine, it has been argued that in some research fields, most of the published findings are false. Based on theoretical reasoning it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, where single tests are used to evaluate independent hypotheses. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios. In these scenarios, research tasks are solved sequentially, i.e., subsequent tests can be chosen depending on previous results. We investigate simple sequential testing and scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that for the tasks analyzed in this study, the fraction of false among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations. CONCLUSIONS/SIGNIFICANCE: For the research tasks studied here, findings tend to become more reliable over time. We also find that performance was surprisingly inefficient in those experimental settings where not all performed tests could be published. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
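    The core mechanism — repeated independent tests shrinking the fraction of false positives among surviving hypotheses — can be sketched with a small Monte Carlo simulation. This is an illustrative toy model, not the authors' simulation code; the prior, sensitivity, and specificity values are arbitrary assumptions.

    ```python
    import random

    def false_positive_fraction(rounds, prior=0.1, sens=0.8, spec=0.8,
                                trials=20000, seed=1):
        """Fraction of false findings among hypotheses that test positive
        in every one of `rounds` sequential, independent tests."""
        rng = random.Random(seed)
        true_pos = false_pos = 0
        for _ in range(trials):
            is_true = rng.random() < prior          # does the hypothesis hold?
            p_pos = sens if is_true else 1 - spec   # per-test positive rate
            if all(rng.random() < p_pos for _ in range(rounds)):
                if is_true:
                    true_pos += 1
                else:
                    false_pos += 1
        return false_pos / (true_pos + false_pos)

    # repeated testing drives the false-positive fraction down
    print(false_positive_fraction(1), false_positive_fraction(3))
    ```

    With these parameters the single-test false-positive fraction is high (the Bayes value is about 0.69), and three rounds of testing cut it sharply (about 0.12), matching the qualitative claim in the abstract.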

  5. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  6. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  7. Flow Injection/Sequential Injection Analysis Systems: Potential Use as Tools for Rapid Liver Diseases Biomarker Study

    Directory of Open Access Journals (Sweden)

    Supaporn Kradtap Hartwell

    2012-01-01

    Full Text Available Flow injection/sequential injection analysis (FIA/SIA) systems are suitable for carrying out automatic wet chemical/biochemical reactions with reduced volume and time consumption. Various parts of the system such as the pump, valve, and reactor may be built or adapted from available materials. The systems can therefore be built at lower cost than other instrumentation-based analysis systems. Their applications for determination of biomarkers for liver diseases have been demonstrated in various formats of operation, but only a few, limited types of biomarkers have been used as model analytes. This paper summarizes these applications for different types of reactions as a guide for using flow-based systems in more biomarker and/or multibiomarker studies.

  8. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
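    The gap between synchronous parallel updates and interleaved sequential updates can be seen in a few lines. The sketch below is an illustration, not the paper's formalism: it applies one totalistic rule to a binary ring under both update disciplines and produces different successor configurations.

    ```python
    def step_parallel(cells, rule):
        """Classical CA step: every cell reads the old configuration."""
        n = len(cells)
        return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
                for i in range(n)]

    def step_sequential(cells, rule):
        """Sequential CA step: cells update left to right, in place, so
        later cells already see their neighbours' new states."""
        n = len(cells)
        out = cells[:]
        for i in range(n):
            out[i] = rule(out[(i - 1) % n], out[i], out[(i + 1) % n])
        return out

    # a totalistic node update rule: a cell becomes 1 iff exactly one
    # cell in its three-cell neighbourhood is 1
    rule = lambda left, centre, right: 1 if left + centre + right == 1 else 0

    start = [0, 1, 0, 0, 1, 0]
    print(step_parallel(start, rule))
    print(step_sequential(start, rule))
    ```

    The two successors differ, so no left-to-right interleaving of this rule reproduces the synchronous step — the flavour of the inexpressibility result the abstract states.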

  9. Basic and clinical studies of dynamic sequential computed tomography during arterial portography in the diagnosis of hepatic cancers

    International Nuclear Information System (INIS)

    Matsui, Osamu

    1986-01-01

    The author has developed dynamic sequential computed tomography with table incrementation during arterial portography (DSCTI-AP) for the precise diagnosis of liver cancers. 1. Basic Studies on DSCTI-AP; 1) The phantom experiment simulating DSCTI-AP revealed that it is possible to resolve a cylindrical object 5 mm in diameter with more than 50 Hounsfield Unit (HU) difference in the attenuation value compared to the surrounding area by third generation CT scanner. 2) All macroscopically visible nodules of hepatocellular carcinoma (HCC) chemically induced in rat liver and VX2 tumor transplanted via the portal vein in rabbit liver were visualized as portal perfusion defects on portal microangiograms and nodular low density areas in CT during portography. 2. Analysis of the clinical usefulness of DSCTI-AP; 1) The smallest nodules of HCC and metastatic liver cancer detected by DSCTI-AP were 5 mm in diameter. 2) DSCTI-AP was superior to radionuclide liver scanning, ultrasound (US), computed tomography (CT), selective celiac arteriography (SCA) and infusion hepatic angiography (IHA) in the detection of small HCC. But IHA was superior to DSCTI-AP in visualizing extremely hypervascular HCC nodules, and all of the small HCCs were demonstrated by the combination of DSCTI-AP and IHA. 3) DSCTI-AP was superior to the all other imaging modalities including CT following intraarterial injection of iodized oil (Lipiodol CT) in detecting metastatic liver cancers especially less than 1 cm in diameter. 4) Lipiodol CT was superior to DSCTI-AP in visualizing small hypervascular HCC nodules. 5) DSCTI-AP was the most sensitive method in diagnosing peripheral intrahepatic portal tumor thrombus. 6) DSCTI-AP had the advantage in visualizing the location of hepatic tumors and their relation to major vessels objectively. (J.P.N.)

  10. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction has received considerable attention among the researchers with broad applications. The sequential pattern algorithms generally face problems when mining long sequential patterns or while using very low support threshold. One possible solution of such problems is by mining the closed sequential patterns, which is a condensed representation of sequential patterns. Recently, several researchers have utilized the sequential pattern discovery for d...

  11. Quantitative analysis of bowel gas by plain abdominal radiograph combined with computer image processing

    International Nuclear Information System (INIS)

    Gao Yan; Peng Kewen; Zhang Houde; Shen Bixian; Xiao Hanxin; Cai Juan

    2003-01-01

    Objective: To establish a method for quantitative analysis of bowel gas by plain abdominal radiograph and computer graphics. Methods: Plain abdominal radiographs in supine position from 25 patients with irritable bowel syndrome (IBS) and 20 healthy controls were studied. A gastroenterologist and a radiologist independently conducted the following procedure on each radiograph. After the outline of bowel gas was traced by axe pen, the radiograph was digitized by a digital camera and transmitted to the computer with Histogram software. The total gas area was determined as the pixel value on images. The ratio of the bowel gas quantity to the pixel value in the region surrounded by a horizontal line tangential to the superior pubic symphysis margin, a horizontal line tangential to the tenth dorsal vertebra inferior margin, and the lateral lines tangential to the right and left anterosuperior iliac crests, was defined as the gas volume score (GVS). To examine the sequential reproducibility, a second plain abdominal radiograph was performed in 5 normal controls 1 week later, and the GVSs were compared. Results: Bowel gas was easily identified on the plain abdominal radiograph. Both the large and small intestine were located in the selected region. Both observers could finish one radiographic measurement in less than 10 min. The correlation coefficient between the two observers was 0.986. There was no statistical difference in GVS between the two sequential radiographs in the 5 healthy controls. Conclusion: Quantification of bowel gas based on plain abdominal radiography and computer image processing is simple, rapid, and reliable
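    The GVS reduces to a ratio of pixel counts. A minimal sketch of that arithmetic, with made-up binary masks standing in for the traced gas outline and the bounded anatomical region (the real procedure works on digitized radiographs):

    ```python
    def gas_volume_score(gas_mask, region_mask):
        """GVS sketch: bowel-gas pixels divided by the pixels of the
        bounded anatomical reference region (both binary 2-D masks)."""
        gas = sum(g and r for gas_row, region_row in zip(gas_mask, region_mask)
                  for g, r in zip(gas_row, region_row))
        region = sum(r for row in region_mask for r in row)
        return gas / region

    region = [[1, 1, 1, 1]] * 4                    # 16-pixel reference region
    gas = [[1, 1, 0, 0]] * 2 + [[0, 0, 0, 0]] * 2  # 4 gas pixels inside it
    print(gas_volume_score(gas, region))
    ```

    Here 4 of the 16 region pixels are gas, so the score is 0.25; only gas pixels falling inside the region are counted.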

  12. Quantum steady computation

    International Nuclear Information System (INIS)

    Castagnoli, G.

    1991-01-01

    This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition

  13. Quantum steady computation

    Energy Technology Data Exchange (ETDEWEB)

    Castagnoli, G. (Dipt. di Informatica, Sistemistica, Telematica, Univ. di Genova, Viale Causa 13, 16145 Genova (IT))

    1991-08-10

    This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

  14. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  15. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    Science.gov (United States)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
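    The probability ratio test at the heart of this record can be illustrated with Wald's classic SPRT for Bernoulli observations. This is the textbook test with Wald's standard threshold approximations, not the compound collision-probability version derived in the paper.

    ```python
    import math

    def sprt(samples, p0, p1, alpha, beta):
        """Wald's sequential probability ratio test for Bernoulli data.

        H0: success probability p0; H1: success probability p1 (p1 > p0).
        alpha = tolerated false-alarm rate; beta = tolerated missed-detection
        rate. Returns (decision, log-likelihood ratio at stopping).
        """
        upper = math.log((1 - beta) / alpha)   # cross above: accept H1
        lower = math.log(beta / (1 - alpha))   # cross below: accept H0
        llr = 0.0
        for x in samples:
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "accept H1", llr
            if llr <= lower:
                return "accept H0", llr
        return "continue", llr                 # thresholds never crossed

    print(sprt([1, 1, 1], 0.1, 0.5, 0.05, 0.05)[0])
    print(sprt([0] * 10, 0.1, 0.5, 0.05, 0.05)[0])
    ```

    The test stops as soon as the accumulated evidence crosses either threshold, which is what makes it attractive under an operational decision timeline: it needs, on average, fewer observations than any fixed-sample test with the same error rates.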

  16. Dancing Twins: Stellar Hierarchies That Formed Sequentially?

    Science.gov (United States)

    Tokovinin, Andrei

    2018-04-01

    This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).

  17. Computer-Based Career Interventions.

    Science.gov (United States)

    Mau, Wei-Cheng

    The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…

  18. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    Science.gov (United States)

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This also holds for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with a low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
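    The Cumulative Prospect Theory benchmark can be made concrete for the simplest case: a single-gain prospect evaluated with the original Tversky-Kahneman (1992) functional forms and their published median parameter estimates (alpha = 0.88, gamma = 0.61). This is a textbook illustration, not the authors' fitted model.

    ```python
    def cpt_certainty_equivalent(x, p, alpha=0.88, gamma=0.61):
        """Certainty equivalent of a simple gain prospect (win x with
        probability p, else 0) under Cumulative Prospect Theory."""
        # Tversky-Kahneman (1992) probability weighting function
        w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
        value = w * x ** alpha        # subjective value of the prospect
        return value ** (1 / alpha)   # invert the value function v(x) = x**alpha

    # small probabilities are overweighted: the certainty equivalent of a
    # 5% shot at 100 exceeds the expected value of 5
    print(round(cpt_certainty_equivalent(100, 0.05), 1))
    ```

    A certainty equivalent is the sure amount judged equally attractive to the gamble; the experiments above compare such reports across sequential and simultaneous presentation.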

  19. Quantitative evaluation of the disintegration of orally rapid disintegrating tablets by X-ray computed tomography.

    Science.gov (United States)

    Otsuka, Makoto; Yamanaka, Azusa; Uchino, Tomohiro; Otsuka, Kuniko; Sadamoto, Kiyomi; Ohshima, Hiroyuki

    2012-01-01

    To measure the rapid disintegration of orally disintegrating tablets (ODT), a new test (XCT) was developed using X-ray computed tomography (X-ray CT). Placebo ODT, rapid disintegration candy (RDC) and Gaster®-D-Tablets (GAS) were used as model samples. All these ODTs were used to measure oral disintegration time (DT) in distilled water at 37±2°C by XCT. DTs were affected by the width of the mesh screens and by the degree to which the tablet holder vibrated from air bubbles. An in-vivo tablet disintegration test was performed for RDC using 11 volunteers. DT by the in-vivo method was significantly longer than that using the conventional tester. The experimental conditions for XCT, such as the width of the mesh screen and the degree of vibration, were adjusted to be consistent with human DT values. Since DTs by the XCT method were almost the same as the human data, this method was able to quantitatively evaluate the rapid disintegration of ODT under the same conditions as inside the oral cavity. The DTs of four commercially available ODTs were comparatively evaluated by the XCT method, the conventional tablet disintegration test and the in-vivo method.

  20. Designing User-Computer Dialogues: Basic Principles and Guidelines.

    Science.gov (United States)

    Harrell, Thomas H.

    This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…

  1. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
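    The MDP machinery the tutorial describes can be sketched with standard value iteration. The toy two-state sequential-treatment problem below is a made-up example for illustration, not the liver-transplantation model solved in the paper.

    ```python
    def value_iteration(P, gamma=0.9, tol=1e-9):
        """P[s][a] = list of (prob, next_state, reward) transitions.
        Returns optimal state values and a greedy policy."""
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s in P:
                best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                           for a in P[s])
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                break
        policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                                 for p, s2, r in P[s][a]))
                  for s in P}
        return V, policy

    # hypothetical MDP: in state "sick", "treat" usually cures while
    # "wait" usually stays sick; "healthy" is absorbing, reward 1 per step
    P = {
        "sick": {
            "treat": [(0.9, "healthy", 0.0), (0.1, "sick", 0.0)],
            "wait":  [(0.2, "healthy", 0.0), (0.8, "sick", 0.0)],
        },
        "healthy": {"stay": [(1.0, "healthy", 1.0)]},
    }
    V, policy = value_iteration(P)
    print(policy["sick"], round(V["healthy"], 2))
    ```

    Because the decision is embedded in the model, solving it yields a policy (here, "treat" in the sick state) rather than the single fixed strategy a standard Markov simulation model would have to be handed up front.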

  2. PyCSP - Communicating Sequential Processes for Python

    DEFF Research Database (Denmark)

    Vinter, Brian; Bjørndalen, John Markus; Anshus, Otto Johan

    The Python programming language is effective for rapidly specifying programs and experimenting with them. It is increasingly being used in computational sciences, and in teaching computer science. CSP is effective for describing concurrency. It has become especially relevant with the emergence of commodity multi-core architectures. We are interested in exploring how a combination of Python and CSP can benefit both the computational sciences and the hands-on teaching of distributed and parallel computing in computer science. To make this possible, we have developed PyCSP, a CSP library for Python. PyCSP presently supports the core CSP abstractions. We introduce the PyCSP library, its implementation, a few performance benchmarks, and show example code using PyCSP. An early prototype of PyCSP has been used in this year's Extreme Multiprogramming Class at the CS department, University of Copenhagen.
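    The communicating-processes style such a library supports can be suggested with plain standard-library threading. The channel below is a hand-rolled stand-in (PyCSP's actual API differs): two processes share no state and interact only by passing values over a channel.

    ```python
    import threading
    import queue

    class Channel:
        """Minimal CSP-style channel: a capacity-1 queue approximates the
        blocking rendezvous of a real CSP channel (illustrative sketch,
        not PyCSP's API)."""
        def __init__(self):
            self._q = queue.Queue(maxsize=1)
        def write(self, value):
            self._q.put(value)
        def read(self):
            return self._q.get()

    def producer(chan, n):
        for i in range(n):
            chan.write(i)
        chan.write(None)            # sentinel: end of stream

    def consumer(chan, results):
        while (v := chan.read()) is not None:
            results.append(v * v)

    chan, results = Channel(), []
    procs = [threading.Thread(target=producer, args=(chan, 5)),
             threading.Thread(target=consumer, args=(chan, results))]
    for t in procs:
        t.start()
    for t in procs:
        t.join()
    print(results)
    ```

    Because all communication is funneled through the channel, the result is deterministic despite the two threads running concurrently — the property that makes CSP attractive for teaching parallel programming.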

  3. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  4. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  5. Sequential Injection Method for Rapid and Simultaneous Determination of 236U, 237Np, and Pu Isotopes in Seawater

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Steier, Peter

    2013-01-01

    An automated analytical method implemented in a novel dual-column tandem sequential injection (SI) system was developed for simultaneous determination of 236U, 237Np, 239Pu, and 240Pu in seawater samples. A combination of TEVA and UTEVA extraction chromatography was exploited to separate and purify the target analytes, whereupon plutonium and neptunium were simultaneously isolated and purified on TEVA, while uranium was collected on UTEVA. The separation behavior of U, Np, and Pu on TEVA–UTEVA columns was investigated in detail in order to achieve high chemical yields and complete purification

  6. The effect of sequential dual-gas testing on laser-induced breakdown spectroscopy-based discrimination: Application to brass samples and bacterial strains

    International Nuclear Information System (INIS)

    Rehse, Steven J.; Mohaidat, Qassem I.

    2009-01-01

    Four Cu-Zn brass alloys with different stoichiometries and compositions have been analyzed by laser-induced breakdown spectroscopy (LIBS) using nanosecond laser pulses. The intensities of 15 emission lines of copper, zinc, lead, carbon, and aluminum (as well as the environmental contaminants sodium and calcium) were normalized and analyzed with a discriminant function analysis (DFA) to rapidly categorize the samples by alloy. The alloys were tested sequentially in two different noble gases (argon and helium) to enhance discrimination between them. When emission intensities from samples tested sequentially in both gases were combined to form a single 30-spectral line 'fingerprint' of the alloy, an overall 100% correct identification was achieved. This was a modest improvement over using emission intensities acquired in argon gas alone. A similar study was performed to demonstrate an enhanced discrimination between two strains of Escherichia coli (a Gram-negative bacterium) and a Gram-positive bacterium. When emission intensities from bacteria sequentially ablated in two different gas environments were combined, the DFA achieved a 100% categorization accuracy. This result showed the benefit of sequentially testing highly similar samples in two different ambient gases to enhance discrimination between the samples.

  7. Enhanced efficacy of sequential administration of Albendazole for the clearance of Wuchereria bancrofti infection: Double blind RCT.

    Science.gov (United States)

    De Britto, R L; Vanamail, P; Sankari, T; Vijayalakshmi, G; Das, L K; Pani, S P

    2015-06-01

    To date, there is no effective treatment protocol for the complete clearance of Wuchereria bancrofti (W. b) infection, which causes secondary lymphoedema. In a double-blind randomized controlled trial (RCT), 146 asymptomatic W. b-infected individuals were randomly assigned to one of four regimens for 12 days: DEC 300 mg + Doxycycline 100 mg co-administration, DEC 300 mg + Albendazole 400 mg co-administration, DEC 300 mg + Albendazole 400 mg sequential administration, or the control regimen of DEC 300 mg; they were followed up at 13, 26, and 52 weeks post-treatment for clearance of infection. At intake, there was no significant variation in mf counts (F(3,137)=0.044; P=0.988) or antigen levels (F(3,137)=1.433; P=0.236) between the regimens. Primary outcome analysis showed that DEC + Albendazole sequential administration had enhanced efficacy over DEC + Albendazole co-administration (80.6 vs 64.7%), and this regimen differed significantly from DEC + Doxycycline co-administration and control (P ...). DEC + Albendazole sequential administration appears to be a better option for rapid clearance of W. b microfilariae within 13 weeks (ClinicalTrials.gov identifier: NCT02005653).

  8. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic trials, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements ...

  9. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    Science.gov (United States)

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereo lithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  10. Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.

    Science.gov (United States)

    Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai

    2017-08-02

    Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluid mosaic membrane of living cells. DNA toeholds of different lengths could tunably alter the release kinetics of cells, showing rapid release in minutes with the use of a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, including multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for self-organization of biological systems.

  11. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
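
The constant-path idea can be sketched in one dimension: the kriging systems for a single fixed random path are solved once, and the resulting weights are re-used across all realizations (a minimal pure-Python sketch assuming a unit-sill exponential covariance and full neighbourhoods; all function names are illustrative):

```python
import math
import random

def cov(h, rng=3.0):
    """Unit-sill exponential covariance."""
    return math.exp(-abs(h) / rng)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def sgs_constant_path(grid_size, n_real, seed=0):
    rnd = random.Random(seed)
    path = list(range(grid_size))
    rnd.shuffle(path)                  # ONE path shared by all realizations
    # Pass 1: kriging weights and variances computed once per path step.
    weights, variances = [], []
    for k, cell in enumerate(path):
        known = path[:k]
        if not known:
            weights.append([])
            variances.append(1.0)
            continue
        A = [[cov(p - q) for q in known] for p in known]
        b = [cov(p - cell) for p in known]
        w = solve(A, b)                # simple-kriging weights
        weights.append(w)
        variances.append(1.0 - sum(wi * bi for wi, bi in zip(w, b)))
    # Pass 2: re-use the precomputed weights in every realization.
    reals = []
    for _ in range(n_real):
        vals = [0.0] * grid_size
        for k, cell in enumerate(path):
            mean = sum(w * vals[c] for w, c in zip(weights[k], path[:k]))
            vals[cell] = rnd.gauss(mean, math.sqrt(max(variances[k], 0.0)))
        reals.append(vals)
    return reals

reals = sgs_constant_path(8, 4)
```

Because the neighbourhood configuration is identical in every realization, the expensive linear solves happen once; a production implementation would restrict the neighbourhood and use a multi-grid path, as the abstract recommends.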

  12. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially ra tional strategy profile. Consistent beliefs are limits of Bayes ratio nal beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationaliz ed by some single conjecture concerning opponents' strategies. Consis tent beliefs are not necessarily structurally consistent, notwithstan ding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  13. Computer language evaluation for MFTF SCDS

    International Nuclear Information System (INIS)

    Anderson, R.E.; McGoldrick, P.R.; Wyman, R.H.

    1979-01-01

    The computer languages available for systems and application implementation on the Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) were surveyed and evaluated. Four language processors, CAL (Common Assembly Language), Extended FORTRAN, CORAL 66, and Sequential Pascal (SPASCAL, a subset of Concurrent Pascal [CPASCAL]), are commercially available for the Interdata 7/32 and 8/32 computers that constitute the SCDS. Of these, the Sequential Pascal available from Kansas State University appears best for the job in terms of minimizing implementation time, debugging time, and maintenance time. This improvement in programming productivity is due to the availability of a high-level, block-structured language that includes many compile-time and run-time checks to detect errors. In addition, the advanced data types in the language allow easy description of the program variables. 1 table

  14. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a good performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variation of complexity with SNR shows an interesting trend that leaves room for considerable improvement. Our work is compared against linear decoders (LDs) aided with element-based lattice reduction (ELR) and complex Lenstra-Lenstra-Lovász (CLLL) reduction. © 2014 IFIP.
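
The Fano/stack principle and the role of the bias parameter can be illustrated on a toy convolutional code rather than a MIMO lattice (an assumption made here for brevity): each branch contributes log2 P(y|x) plus a bias to the path metric, and the best-metric path in a priority queue is extended first. All names and parameters are illustrative.

```python
import heapq
import math

G = [(1, 1, 1), (1, 0, 1)]  # rate-1/2 convolutional code, generators 7,5 (octal)

def encode(bits):
    state = (0, 0)
    out = []
    for b in bits:
        window = (b,) + state
        out.extend(sum(w * g for w, g in zip(window, gen)) % 2 for gen in G)
        state = (b, state[0])
    return out

def stack_decode(received, n_bits, p=0.05, bias=0.5):
    """Stack sequential decoder with a Fano-style per-symbol metric
    log2 P(y|x) + bias; the best-metric path is extended first."""
    good, bad = math.log2(1 - p), math.log2(p)
    heap = [(0.0, (), (0, 0))]            # (-metric, decoded bits, state)
    visits = 0
    while heap:
        negm, bits, state = heapq.heappop(heap)
        visits += 1
        if len(bits) == n_bits:
            return list(bits), visits
        for b in (0, 1):
            window = (b,) + state
            m = -negm
            for j, gen in enumerate(G):
                s = sum(w * g for w, g in zip(window, gen)) % 2
                y = received[2 * len(bits) + j]
                m += (good if s == y else bad) + bias
            heapq.heappush(heap, (-m, bits + (b,), (b, state[0])))
    return None, visits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
decoded, visits = stack_decode(encode(msg), len(msg))
```

A larger bias makes longer (deeper) paths look more attractive, lowering the number of node visits at the risk of committing to a wrong path; a smaller bias explores more nodes, trading complexity for reliability, which mirrors the trade-off described in the abstract.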

  15. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration, and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  16. Transparent thin-film transistor exploratory development via sequential layer deposition and thermal annealing

    International Nuclear Information System (INIS)

    Hong, David; Chiang, Hai Q.; Presley, Rick E.; Dehuff, Nicole L.; Bender, Jeffrey P.; Park, Cheol-Hee; Wager, John F.; Keszler, Douglas A.

    2006-01-01

    A novel deposition methodology is employed for exploratory development of a class of high-performance transparent thin-film transistor (TTFT) channel materials involving oxides composed of heavy-metal cations with (n-1)d^10 ns^0 (n ≥ 4) electronic configurations. The method involves sequential radio-frequency sputter deposition of thin, single-cation oxide layers and subsequent post-deposition annealing in order to obtain a multi-component oxide thin film. The viability of this rapid materials development methodology is demonstrated through the realization of high-performance TTFTs with channel layers composed of zinc oxide/tin oxide, and tin oxide/indium oxide.

  17. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the sequential decoder using the Fano algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend occur at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the sequential decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, sequential decoding with two preprocessing schemes is proposed: 1) minimum mean square error generalized decision feedback equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing followed by lattice reduction and greedy ordering. Our work is compared against previous work which employs sphere decoding preprocessed using MMSE-GDFE, lattice reduction, and greedy ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes ...

  18. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^{1/2} B A^{1/2} was proposed as a model for sequential quantum measurements. A nice investigation of the properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied.
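
The sequential product A∘B = A^{1/2} B A^{1/2} can be checked numerically for 2x2 effects (a minimal sketch; the closed-form 2x2 matrix square root assumes positive semidefiniteness, and all function names are illustrative):

```python
def mat_sqrt2(M):
    """Principal square root of a 2x2 positive semidefinite matrix."""
    (a, b), (c, d) = M
    s = (a * d - b * c) ** 0.5            # sqrt(det M)
    t = (a + d + 2 * s) ** 0.5
    return [[(a + s) / t, b / t], [c / t, (d + s) / t]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def seq_product(A, B):
    """Sequential product A o B = A^{1/2} B A^{1/2} of two effects."""
    r = mat_sqrt2(A)
    return matmul2(matmul2(r, B), r)

def eigvals2(M):
    """Eigenvalues of a symmetric 2x2 matrix via trace and determinant."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = max(tr * tr - 4 * det, 0.0) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

A = [[0.5, 0.0], [0.0, 0.25]]             # effects: eigenvalues in [0, 1]
B = [[0.5, 0.25], [0.25, 0.5]]
lo, hi = eigvals2(seq_product(A, B))      # A o B should again be an effect
```

The check that both eigenvalues of A∘B lie in [0, 1] confirms numerically that the sequential product of two effects is again an effect.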

  19. Rapid and Automated Determination of Plutonium and Neptunium in Environmental Samples

    DEFF Research Database (Denmark)

    Qiao, Jixin

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development ... and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples ... for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the methods developed in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations, including ...

  20. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
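
The SPRT accumulation rule described above can be sketched as follows (Wald's thresholds from the standard (alpha, beta) approximations; the Gaussian example is illustrative, not the paper's EEG features):

```python
import math

def sprt(log_lrs, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate log-likelihood ratios until a boundary is hit.

    Returns (decision, n_used); decision is 'H1', 'H0', or 'undecided'.
    """
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    s = 0.0
    for n, llr in enumerate(log_lrs, start=1):
        s += llr
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", len(log_lrs)

# Example: H0 x ~ N(0,1) vs H1 x ~ N(1,1); per-sample log-LR is x - 0.5.
xs = [1.2, 0.9, 1.1, 1.4, 0.8, 1.3]
decision, n = sprt([x - 0.5 for x in xs])
```

The explicit thresholds make the stopping-time/error trade-off visible: tightening alpha and beta widens the boundaries and delays the decision, exactly the balance the abstract describes.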

  1. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally limited small platforms is proposed for parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm applied to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm and the Kalman filter. Moreover, at low Doppler rates the proposed architecture reduces processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively. Likewise, at high Doppler rates, the proposed architecture reduces processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
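
The RLS recursion that such platforms run can be sketched in its standard sequential form (a minimal single-platform sketch with forgetting factor lambda; function names and the toy identification task are illustrative):

```python
import random

def rls_identify(samples, dim, lam=0.99, delta=100.0):
    """Recursive least squares: sequentially update the weight estimate w
    and the inverse correlation matrix P for each (x, d) pair."""
    w = [0.0] * dim
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, d in samples:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                      # gain vector
        e = d - sum(w[i] * x[i] for i in range(dim))     # a priori error
        w = [w[i] + k[i] * e for i in range(dim)]
        # P <- (P - k * (x'P)) / lam ; P is symmetric, so x'P equals Px'.
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return w

# Identify w_true = [2, -1] from noiseless samples (illustrative).
random.seed(0)
data = []
for _ in range(60):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    data.append((x, 2.0 * x[0] - 1.0 * x[1]))
w = rls_identify(data, 2)
```

The O(dim^2) matrix update per sample is the expensive step that a PDASP-style architecture would distribute across platforms.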

  2. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs come up against their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In the 16 studies reporting sufficient data, the sequential design yielded a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674), with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come up against their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design yielded a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).

  3. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic trials, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. ... The measurement differences were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis ...
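
Why interim looks need adjusted critical values can be illustrated with a small Monte Carlo sketch (a hypothetical two-look design with looks at 23 and 45 observations and known unit variance; the 2.178 value is the standard two-look Pocock constant for two-sided alpha = 0.05, not a number from this study):

```python
import random
from statistics import NormalDist

def two_look_trial(n1, n2, crit, rnd):
    """One trial under H0: z-tests on accumulating N(0,1) data at two looks."""
    data = [rnd.gauss(0.0, 1.0) for _ in range(n2)]
    for n in (n1, n2):
        z = sum(data[:n]) / n ** 0.5      # z statistic, known variance 1
        if abs(z) > crit:
            return True                   # null rejected at this look
    return False

def type1_error(crit, reps=4000, seed=7):
    """Monte Carlo estimate of the overall type-I error of the design."""
    rnd = random.Random(seed)
    return sum(two_look_trial(23, 45, crit, rnd) for _ in range(reps)) / reps

naive = type1_error(NormalDist().inv_cdf(0.975))  # 1.96 at both looks
pocock = type1_error(2.178)                       # two-look Pocock boundary
```

With the unadjusted 1.96 boundary at both looks, the overall type-I error inflates well above 5%; the Pocock-adjusted boundary restores it to roughly the nominal level, which is the role the alpha spending function plays in the study above.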

  4. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
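
The tempering idea, estimating the likelihood with a bootstrap particle filter while an artificial measurement-noise variance is annealed toward zero, can be sketched for a linear-Gaussian toy model (illustrative; not the Wiener-Hammerstein benchmark, and the fixed annealing schedule stands in for the paper's adaptive one):

```python
import math
import random

def pf_loglik(ys, a, q, r, n_part=200, seed=1):
    """Bootstrap particle filter log-likelihood estimate for the model
    x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    rnd = random.Random(seed)
    parts = [rnd.gauss(0.0, 1.0) for _ in range(n_part)]
    ll = 0.0
    for y in ys:
        parts = [a * x + rnd.gauss(0.0, math.sqrt(q)) for x in parts]
        wts = [math.exp(-(y - x) ** 2 / (2 * r)) / math.sqrt(2 * math.pi * r)
               for x in parts]
        if sum(wts) <= 0.0:
            wts = [1.0] * n_part          # degenerate step: uniform fallback
        ll += math.log(sum(wts) / n_part + 1e-300)
        parts = rnd.choices(parts, weights=wts, k=n_part)  # resample
    return ll

def tempered_logliks(ys, a, q, r_true, schedule=(1.0, 0.5, 0.1, 0.01)):
    """Evaluate the PF likelihood while the artificial measurement-noise
    variance is annealed toward zero, recovering the original problem."""
    return [pf_loglik(ys, a, q, r_true + r_art) for r_art in schedule]

# Simulate 30 nearly noise-free observations and anneal (illustrative).
rnd = random.Random(0)
x, ys = 0.0, []
for _ in range(30):
    x = 0.9 * x + rnd.gauss(0.0, math.sqrt(0.5))
    ys.append(x)
lls = tempered_logliks(ys, 0.9, 0.5, 1e-4)
```

With the full artificial noise the weights are well behaved; as the schedule approaches the true (nearly zero) noise level, the likelihood surface sharpens toward the original, hard-to-estimate problem.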

  5. Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator

    Science.gov (United States)

    Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.

    2018-04-01

    The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.

  6. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of the 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) the simultaneous group. Three months postoperatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P ...) ... in the sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference in ablation center angles between the first and second eyes was 43.2 ... in the sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  7. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are thereby constructed by this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
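
The sequential sampling loop described above can be sketched with a 1D Gaussian RBF metamodel (a simplified sketch: each round adds the metamodel's extremum and the point of lowest sampling density; the kernel width, grid, and function names are illustrative, not the paper's formulation):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=2.0):
    """Interpolation coefficients for a Gaussian RBF metamodel."""
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, coef, x, eps=2.0):
    return sum(c * math.exp(-(eps * (x - xi)) ** 2) for c, xi in zip(coef, xs))

def sequential_sample(f, xs0, n_rounds, grid):
    """Each round adds (up to) two samples: the metamodel's extremum
    (exploitation) and the point farthest from existing samples
    (lowest sampling density)."""
    xs, ys = list(xs0), [f(x) for x in xs0]
    for _ in range(n_rounds):
        coef = rbf_fit(xs, ys)
        ext = max(grid, key=lambda x: abs(rbf_eval(xs, coef, x)))
        far = max(grid, key=lambda x: min(abs(x - xi) for xi in xs))
        for x in {ext, far}:
            if min(abs(x - xi) for xi in xs) > 1e-9:   # skip duplicates
                xs.append(x)
                ys.append(f(x))
    return xs, ys

grid = [i * 0.05 for i in range(63)]      # candidate points on [0, 3.1]
xs, ys = sequential_sample(math.sin, [0.0, 1.5, 3.0], 3, grid)
coef = rbf_fit(xs, ys)
```

Alternating an exploitation point with a space-filling point is one simple way to realize the abstract's idea of refining the metamodel where it matters while keeping the design spread out.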

  8. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making with multiple objectives ...

  9. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  10. BCR-701: A review of 10-years of sequential extraction analyses

    International Nuclear Information System (INIS)

    Sutherland, Ross A.

    2010-01-01

    A detailed quantitative analysis was performed on data presented in the literature that focused on the sequential extraction of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni), lead (Pb) and zinc (Zn) from the certified reference material BCR-701 (lake sediment) using the three-step harmonized BCR procedure. The accuracy of data reported in the literature, including precision and different measures of trueness, was assessed relative to the certified values for BCR-701. Forty data sets were accepted following extreme outlier removal, and statistically summarized with measures of central tendency, dispersion, and distribution form. In general, literature data were similar in their measurement precision to the expert laboratories used to certify the trace element contents in BCR-701. The overall median precision for literature reported data was 10% (range 6-19%), compared to certifying laboratories of 9% (range 4-33%). One measure of literature data trueness was assessed via a confirmatory approach using a robust bootstrap method. Only 22% of the comparisons indicated significantly different (all were lower) concentrations reported in the literature compared to certified values. The question of whether the differences are practically significant for environmental studies is raised. Bias was computed as a measure of trueness, and literature data were more frequently negatively biased, indicating lower concentrations reported in the literature for the six trace elements for the three-step sequential procedure compared to the certified values. However, 95% confidence intervals about the average bias for the 18 comparisons indicated only four instances when a mean bias of 0 (i.e., measured = certified) was not incorporated, suggesting statistical difference. Finally, Z-scores incorporating a Horwitz-type function were used to assess the general trueness of laboratory data. Of the 468 laboratory Z-score values computed, 92% were considered to be satisfactory, 5% were

  11. Sequential x-ray diffraction topography at 1-BM x-ray optics testing beamline at the advanced photon source

    Energy Technology Data Exchange (ETDEWEB)

    Stoupin, Stanislav, E-mail: sstoupin@aps.anl.gov; Shvyd’ko, Yuri; Trakhtenberg, Emil; Liu, Zunping; Lang, Keenan; Huang, Xianrong; Wieczorek, Michael; Kasman, Elina; Hammonds, John; Macrander, Albert; Assoufid, Lahsen [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2016-07-27

    We report progress on implementation and commissioning of sequential X-ray diffraction topography at 1-BM Optics Testing Beamline of the Advanced Photon Source to accommodate growing needs of strain characterization in diffractive crystal optics and other semiconductor single crystals. The setup enables evaluation of strain in single crystals in the nearly-nondispersive double-crystal geometry. Si asymmetric collimator crystals of different crystallographic orientations were designed, fabricated and characterized using in-house capabilities. Imaging the exit beam using digital area detectors permits rapid sequential acquisition of X-ray topographs at different angular positions on the rocking curve of a crystal under investigation. Results on sensitivity and spatial resolution are reported based on experiments with high-quality Si and diamond crystals. The new setup complements laboratory-based X-ray topography capabilities of the Optics group at the Advanced Photon Source.

  12. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, vector computation is 15 times faster than scalar computation. Comparing the OES and ISS methods, the following was found: 1) there is little difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of computer memory compared with the OES method. It is therefore concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method that reduces the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse-mesh rebalance method and the Aitken acceleration method are effective accelerations for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)
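    The table-look-up device mentioned in this abstract (precomputing the exponential attenuation factor and interpolating) can be sketched in a few lines. The grid size and cutoff below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Precompute exp(-x) on a uniform grid over [0, XMAX]; arguments beyond
# the cutoff are clamped, a common approximation in transport sweeps.
N, XMAX = 4096, 10.0
grid = np.linspace(0.0, XMAX, N)
table = np.exp(-grid)
step = XMAX / (N - 1)

def exp_neg(x):
    """Linear interpolation in the precomputed table for exp(-x), x >= 0."""
    x = np.minimum(np.asarray(x, dtype=float), XMAX)
    i = np.minimum((x / step).astype(int), N - 2)
    frac = x / step - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# Accuracy check against the exact function
xs = np.linspace(0.0, 9.9, 1000)
err = np.max(np.abs(exp_neg(xs) - np.exp(-xs)))
```

    With a table of this size the interpolation error stays well below single-precision noise, which is why the paper can report that only the remaining 20% of runtime was recovered this way.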

  13. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    Full Text Available One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g., how 4.02 fits into the series of remarks surrounding it) and the global level (e.g., the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  14. Rapid isolation of plutonium in environmental solid samples using sequential injection anion exchange chromatography followed by detection with inductively coupled plasma mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Qiao Jixin, E-mail: jixin.qiao@risoe.d [Radiation Research Division, Riso National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Hou Xiaolin; Roos, Per [Radiation Research Division, Riso National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Miro, Manuel [Department of Chemistry, Faculty of Sciences, University of the Balearic Islands, Carretera de Valldemossa km. 7.5, E-07122 Palma de Mallorca, Illes Balears (Spain)

    2011-01-31

    This paper reports an automated analytical method for rapid determination of plutonium isotopes ({sup 239}Pu and {sup 240}Pu) in environmental solid extracts. Anion exchange chromatographic columns were incorporated in a sequential injection (SI) system to undertake the automated separation of plutonium from matrix and interfering elements. The analytical results most distinctly demonstrated that the crosslinkage of the anion exchanger is a key parameter controlling the separation efficiency. AG 1-x4 type resin was selected as the most suitable sorbent material for analyte separation. Investigation of column size effect upon the separation efficiency revealed that small-sized (2 mL) columns sufficed to handle up to 50 g of environmental soil samples. Under the optimum conditions, chemical yields of plutonium exceeded 90% and the decontamination factors for uranium, thorium and lead ranged from 10{sup 3} to 10{sup 4}. The determination of plutonium isotopes in three standard/certified reference materials (IAEA-375 soil, IAEA-135 sediment and NIST-4359 seaweed) and two reference samples (Irish Sea sediment and Danish soil) revealed a good agreement with reference/certified values. The SI column-separation method is straightforward and less labor intensive as compared with batch-wise anion exchange chromatographic procedures. Besides, the automated method features low consumption of ion-exchanger and reagents for column washing and elution, with the consequent decrease in the generation of acidic waste, thus bearing green chemical credentials.

  15. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory
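    The kind of sequential mastery rule described above can be illustrated with Wald's classical sequential probability ratio test, a simpler relative of the minimax rules derived in the paper. The mastery/nonmastery success probabilities and error rates below are assumed for illustration:

```python
import math

def sprt_decision(correct, answered, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Wald SPRT for classifying a test taker as master or nonmaster.

    p0: assumed item-success probability under nonmastery,
    p1: under mastery. Returns 'master', 'nonmaster', or 'continue'
    (i.e., administer another item).
    """
    wrong = answered - correct
    # Log-likelihood ratio of the observed responses under mastery vs nonmastery
    llr = correct * math.log(p1 / p0) + wrong * math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # accept-mastery boundary
    lower = math.log(beta / (1 - alpha))   # accept-nonmastery boundary
    if llr >= upper:
        return "master"
    if llr <= lower:
        return "nonmaster"
    return "continue"
```

    For example, 9 correct out of 10 crosses the upper boundary, 2 out of 10 crosses the lower one, and 6 out of 10 keeps sampling; a minimax rule would choose the boundaries to minimize maximum expected loss instead of fixing error rates.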

  16. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    X-ray spectrometer qualities and defects are evaluated for the sequential and multichannel categories. A multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are highly variable. In this last case, a sequential X-ray spectrometer regains great usefulness [fr

  17. The NEA computer program library: a possible GDMS application

    International Nuclear Information System (INIS)

    Schuler, W.

    1978-01-01

    The NEA Computer Program Library maintains a series of eleven sequential computer files, used in linked applications for managing its stock of computer codes for nuclear reactor calculations, storing index and program abstract information, and administering its service to requesters. The high data redundancy between the files suggests that a data base approach would be valid, and this paper proposes a possible 'schema' for a CODASYL GDMS

  18. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
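    The computational burden quoted above follows directly from the simulation parameters. A quick sketch of the arithmetic (the per-power-flow solve times are illustrative assumptions chosen to bracket the 10-120 hour figure, not numbers from the report):

```python
# Number of power flow solutions in a yearlong QSTS run at 1-second resolution
seconds_per_year = 365 * 24 * 3600   # 31,536,000 time steps
flows = seconds_per_year             # one sequential power flow per step

# Illustrative (assumed) per-flow solve times for an unbalanced feeder
for ms_per_flow in (1.2, 13.7):      # assumed bounds, not from the report
    hours = flows * ms_per_flow / 1000 / 3600
    print(f"{ms_per_flow} ms/flow -> {hours:.0f} h total")
```

    Even at about a millisecond per solve, the sheer count of sequential power flows puts a yearlong run in the tens of hours, which is why reducing the number of flows to solve heads the list of challenges.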

  20. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  1. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  2. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  3. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied: 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow-up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regard to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  4. Minimal computational-space implementation of multiround quantum protocols

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-01-01

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  5. Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.

    Science.gov (United States)

    Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W

    2016-12-01

    We have developed a rapid magnetic microparticle-based detection strategy for malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII, as well as detection antibodies with distinct enzymes for each biomarker, are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Simultaneous or Early Sequential Rupture of Multiple Intracranial Aneurysms: A Rare and Insufficiently Understood Entity.

    Science.gov (United States)

    Hou, Kun; Zhao, Jinchuan; Zhang, Yang; Zhu, Xiaobo; Zhao, Yan; Li, Guichen

    2016-05-01

    Simultaneous or early sequential rupture of multiple intracranial aneurysms (MIAs) is encountered rarely, with no more than 10 cases having been reported. As a result of its rarity, many questions concerning this entity remain to be answered. A 67-year-old woman was admitted to the First Hospital of Jilin University (Eastern Division) from a local hospital after a sudden onset of severe headache, nausea, and vomiting. Head computed tomography (CT) at the local hospital revealed diffuse subarachnoid hemorrhage (SAH) concentrated predominantly in the suprasellar cistern and interhemispheric fissure. During her transfer to our hospital, she experienced another episode of sudden headache. CT on admission to our hospital revealed that the SAH had increased, with 2 isolated hematomas in the interhemispheric fissure and the left paramedian frontal lobe. Further CT angiography and intraoperative findings were in favor of early sequential rupture of 2 intracranial aneurysms. To further elucidate the characteristics, mechanism, management, and prognosis of this specific entity, we conducted a comprehensive review of the literature. The mechanism of simultaneous or early sequential rupture of MIAs is still obscure. Transient elevation of blood pressure might play a role in the process, and preventing sudden elevation of blood pressure might be beneficial for patients with aneurysmal SAH and MIAs. The management of simultaneously or early sequentially ruptured aneurysms is more complex because of the difficulty of determining the responsible aneurysm, the urgency of treatment, the demands of intraoperative manipulation, and the poor prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Sequential enzymatic synthesis and separation of 13N-L-glutamic acid and 13N-L-alanine

    International Nuclear Information System (INIS)

    Cohen, M.B.; Spolter, L.; MacDonald, M.; Chang, C.C.; Takahashi, J.

    1975-01-01

    The sequential enzymatic synthesis and separation of 13 N-L-glutamic acid and 13 N-L-alanine are described. Basically, this involves the synthesis of 13 N-L-glutamic acid by one enzyme, the transamination of the labeled glutamic acid to form 13 N-L-alanine by a second enzyme, and the separation of the two amino acids by rapid column chromatography. The 13 N-L-alanine was evaluated in animals by imaging and tissue distribution studies and showed good potential as a pancreatic imaging agent

  8. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case, are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter… that show that the method produces accurate, robust and rapid segmentations in a cross sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.

  9. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) was introduced in the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator

  10. RAPID TRANSFER ALIGNMENT USING FEDERATED KALMAN FILTER

    Institute of Scientific and Technical Information of China (English)

    GU Dong-qing; QIN Yong-yuan; PENG Rong; LI Xin

    2005-01-01

    The dimension number of the centralized Kalman filter (CKF) for rapid transfer alignment (TA) is as high as 21 if the aircraft wing flexure motion is considered in the rapid TA. The 21-dimensional CKF places a heavy calculation burden on the computer and makes it difficult to meet the high filtering update rate desired for rapid TA. A federated Kalman filter (FKF) for the rapid TA is proposed to solve this dilemma. The structure and the algorithm of the FKF, which can perform parallel computation and has a lower calculation burden, are designed. The wing flexure motion is modeled, and then the 12-order velocity-matching local filter and the 15-order attitude-matching local filter are devised. Simulation results show that the proposed FKF for the rapid TA has almost the same performance as the CKF. Thus the calculation burden of the proposed FKF for the rapid TA is markedly decreased.
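    The core of a federated Kalman filter is the master-filter step that fuses local filter outputs by information (inverse-covariance) weighting. A minimal sketch with toy two-state local filters (not the 12- and 15-order velocity- and attitude-matching filters of the paper):

```python
import numpy as np

def fuse(estimates, covariances):
    """Fuse local filter outputs by inverse-covariance (information)
    weighting, the master-filter step of a federated Kalman filter."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_g = np.linalg.inv(sum(infos))                      # global covariance
    x_g = P_g @ sum(I @ x for I, x in zip(infos, estimates))
    return x_g, P_g

# Two local filters estimating the same 2-state vector (toy values)
x1, P1 = np.array([1.0, 0.1]), np.diag([0.04, 0.09])
x2, P2 = np.array([1.2, 0.0]), np.diag([0.09, 0.04])
xg, Pg = fuse([x1, x2], [P1, P2])
```

    The fused covariance is smaller than either local one, and because each local filter runs independently between fusions, the work parallelizes, which is the calculation-burden advantage the abstract describes.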

  11. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

    Directory of Open Access Journals (Sweden)

    Christian Konrad

    2010-04-01

    Full Text Available Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM consisting of three paralogous cyst wall proteins (CWP1-3 via organelles termed encystation-specific vesicles (ESVs. Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

  12. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  13. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    Science.gov (United States)

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

    Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days within the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.

  14. A study on computer-aided diagnosis based on temporal subtraction of sequential chest radiographs (in Japanese)

    International Nuclear Information System (INIS)

    Kano, Akiko

    2001-01-01

    An automated digital image subtraction technique for use with pairs of temporally sequential chest radiographs has been developed to aid radiologists in the detection of interval changes. Automated image registration based on nonlinear geometric warping is performed prior to subtraction in order to deal with complicated radiographic misregistration. Processing includes global matching, to achieve rough registration between the entire lung fields in the two images, and local matching, based on a cross-correlation method, to determine local shift values for a number of small regions. A proper warping of x,y-coordinates is determined by fitting two-dimensional polynomials to the distributions of the shift values. One image is warped and then subtracted from the other. The resultant subtraction images were able to enhance the conspicuity of various types of interval changes. Improved global matching based on a weighted template matching method achieved robust registration even with photofluorographs taken in chest mass screening programs, which had previously presented us with a relatively large number of poor-registration images. The new method was applied to 129 pairs of chest mass screening images, and offered registration accuracy as good as manual global matching. An observer test using 114 cases including 57 lung cancer cases presented better sensitivity and specificity on average compared to conventional comparison readings. In addition, newly developed image processing that eliminates the rib edge artifacts in subtraction images was applied to 26 images having pathological interval changes; results showed the potential for application to automated schemes for the detection of interval change patterns. With its capacity to improve the diagnostic accuracy of chest radiographs, the chest temporal subtraction technique promises to become an important element of computer-aided diagnosis (CAD) systems
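    The local matching step described above, estimating each small region's shift between the two radiographs by cross-correlation, can be sketched with NumPy on synthetic data. Template and search-window sizes are assumed for illustration:

```python
import numpy as np

def local_shift(prev, curr, y, x, tmpl=8, search=4):
    """Find the (dy, dx) that best aligns a small template from `prev`
    within a search window of `curr`, maximizing mean-removed correlation."""
    t = prev[y:y + tmpl, x:x + tmpl].astype(float)
    t = t - t.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = curr[y + dy:y + dy + tmpl, x + dx:x + dx + tmpl].astype(float)
            w = w - w.mean()
            score = float((t * w).sum())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: `curr` is `prev` shifted down 2 rows and left 1 column
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(np.roll(prev, 2, axis=0), -1, axis=1)
```

    In the described scheme, such shift values are computed for many small regions and a two-dimensional polynomial is then fitted to them to define the warp applied before subtraction.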

  15. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel, we constructed a matrix containing 5 models (equations) for the prediction of oxygen consumption, together with all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that accompany exhaustive manual calculation. (author)
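The replicate-estimate idea is easy to reproduce: apply several predicted oxygen-consumption values to the same Fick computation and report the resulting range. A minimal sketch, in which the VO2 values and oxygen contents are invented and `fick_flow` is a hypothetical helper, not the authors' Excel matrix:

```python
def fick_flow(vo2_ml_min, ca_o2_ml_dl, cv_o2_ml_dl):
    """Blood flow (L/min) by the Fick principle: Q = VO2 / (CaO2 - CvO2).

    Oxygen contents are given in mL O2 per dL blood, so the arteriovenous
    difference is multiplied by 10 to convert to mL O2 per liter.
    """
    return vo2_ml_min / ((ca_o2_ml_dl - cv_o2_ml_dl) * 10.0)

# Hypothetical VO2 predictions (mL/min) from several models for one
# patient; the real matrix used five published prediction equations.
vo2_predictions = [110.0, 122.0, 131.0, 118.0, 140.0]

ca_o2, cv_o2 = 17.0, 13.0  # arterial and mixed-venous O2 content, mL/dL

flows = [fick_flow(v, ca_o2, cv_o2) for v in vo2_predictions]
low, high = min(flows), max(flows)
print(f"Flow range: {low:.2f}-{high:.2f} L/min")  # Flow range: 2.75-3.50 L/min
```

Reporting the low-high interval, rather than a single point estimate, is exactly the "likely range" the abstract describes.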

  16. Acute hydrocarbon pneumonia after white spirit aspiration: sequential HRCT findings

    Energy Technology Data Exchange (ETDEWEB)

    Facon, David; Coumbaras, Jean; Bigot, Emmanuelle; Bahlouli, Fouad; Bellin, Marie-France [Universite Paris 11, Department of Radiology, Hopital Paul-Brousse, AP-HP, Villejuif Cedex (France); Boissonnas, Alain [Universite Paris 11, Department of Internal Medicine, Hopital Paul-Brousse, AP-HP, Villejuif Cedex (France)

    2005-01-01

    Hydrocarbon pneumonia is a very uncommon condition resulting from aspiration of mineral oil into the lung. We report the first description of early and sequential high-resolution computed tomographic (HRCT) findings of hydrocarbon pneumonia following attempted suicide by white spirit aspiration. Initial HRCT showed patchy opacities of coalescing masses with well-defined walls. They were visible in the middle lobe, lingula and lower lobes. Follow-up CT showed regression of the alveolar opacities, the presence of pneumatoceles and right asymptomatic pneumothorax. After 23 months of follow-up, the patient remained asymptomatic, and the follow-up CT scan was considered normal. The radiological features and a review of the relevant literature are briefly discussed. (orig.)

  17. "Ultra-rapid" sequential treatment in cholecystocholedocholithiasis: alternative same-day approach to laparoendoscopic rendezvous.

    Science.gov (United States)

    Borreca, Dario; Bona, Alberto; Bellomo, Maria Paola; Borasi, Andrea; De Paolis, Paolo

    2015-12-01

There is still no consensus about the timing of laparoscopic cholecystectomy after endoscopic retrograde cholangiopancreatography in the treatment of cholecystocholedocholithiasis. The aim of our retrospective study is to analyze the optimal timing of surgical treatment in patients presenting with concurrent choledocholithiasis, choosing to perform a sequential endoscopic plus surgical approach and introducing a same-day two-stage alternative. All cases of cholecystocholedocholithiasis that occurred between January 2007 and December 2014 in "Gradenigo" Hospital (Turin, Italy) were reviewed. Patients were divided into three groups, based on the timing of cholecystectomy after endoscopic retrograde cholangiopancreatography, and compared. Out of 2233 cholecystectomies performed in the mentioned time interval, 93 patients fulfilled the selection criteria. 36 patients were treated with a same-day approach, 29 within the first 72 h, and 28 with delayed surgery. The overall length of stay was significantly lower in patients treated with a same-day approach (4.7 days) compared with the other groups (p = 0.001), while no significant differences were found in terms of length of surgical intervention, intraoperative complications and conversions to open procedure, postoperative stay, morbidity, and mortality. Patients treated with delayed surgery had an 18 % recurrence rate of biliary events, with an odds ratio of 14.13 (p = 0.018). The same-day two-stage approach should be performed in suitable patients at the index admission, reducing overall risks, improving the patients' quality of life, preventing recurrence, and leading to a significant cost abatement; furthermore, this approach achieves the same outcomes as the laparoendoscopic rendezvous while avoiding technical and organizational troubles.

  18. Modeling two-phase ferroelectric composites by sequential laminates

    International Nuclear Information System (INIS)

    Idiart, Martín I

    2014-01-01

    Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)

  19. Cone-beam computed tomography evaluation of dentoskeletal changes after asymmetric rapid maxillary expansion.

    Science.gov (United States)

    Baka, Zeliha Muge; Akin, Mehmet; Ucar, Faruk Izzet; Ileri, Zehra

    2015-01-01

The aims of this study were to quantitatively evaluate the changes in arch widths and buccolingual inclinations of the posterior teeth after asymmetric rapid maxillary expansion (ARME) and to compare the measurements between the crossbite and the noncrossbite sides with cone-beam computed tomography (CBCT). From our clinic archives, we selected the CBCT records of 30 patients with unilateral skeletal crossbite (13 boys, 14.2 ± 1.3 years old; 17 girls, 13.8 ± 1.3 years old) who underwent ARME treatment. A modified acrylic bonded rapid maxillary expansion appliance including an occlusal locking mechanism was used in all patients. CBCT records had been taken before ARME treatment and after a 3-month retention period. Fourteen angular and 80 linear measurements were taken for the maxilla and the mandible. Frontally clipped CBCT images were used for the evaluation. Paired sample and independent sample t tests were used for statistical comparisons. Comparisons of the before-treatment and after-retention measurements showed that the arch widths and buccolingual inclinations of the posterior teeth increased significantly on the crossbite side of the maxilla and on the noncrossbite side of the mandible (P < .05). After ARME treatment, the crossbite side of the maxilla and the noncrossbite side of the mandible were more affected than were the opposite sides. Copyright © 2015. Published by Elsevier Inc.

  20. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.

  1. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  2. Further Developments on Optimum Structural Design Using MSC/Nastran and Sequential Quadratic Programming

    DEFF Research Database (Denmark)

    Holzleitner, Ludwig

    1996-01-01

This work is closely connected to the paper: K.G. MAHMOUD, H.W. ENGL and HOLZLEITNER: "OPTIMUM STRUCTURAL DESIGN USING MSC/NASTRAN AND SEQUENTIAL QUADRATIC PROGRAMMING", Computers & Structures, Vol. 52, No. 3, pp. 437-447, (1994). In contrast to that paper, where thickness optimization is described, here the shape of two-dimensional parts with different thickness areas will be optimized. As in the previous paper, a methodology for structural optimization using the commercial finite element package MSC/NASTRAN for structural analysis is described. Three different methods for design sensitivity...

  3. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  4. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in this computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.
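The upper/lower-stage split described above can be caricatured with a much simpler example: a lower stage that simulates the model and an upper stage that searches the parameter against several data profiles at once. The closed-form decay model and the grid search are invented stand-ins for the collocation-based NLP solver of the paper:

```python
import numpy as np

def simulate(k, t):
    """Lower stage: evaluate the model states (a first-order decay,
    solved here in closed form instead of by numerical integration)."""
    return np.exp(-k * t)

def estimate_k(profiles, t, k_grid):
    """Upper stage: pick the parameter minimizing the summed squared
    residual over all data profiles (a grid search stands in for the
    NLP solver used in the paper)."""
    costs = [sum(((simulate(k, t) - y) ** 2).sum() for y in profiles)
             for k in k_grid]
    return k_grid[int(np.argmin(costs))]

t = np.linspace(0.0, 2.0, 21)
true_k = 0.8
# Two "experimental" profiles with symmetric offsets standing in for noise.
profiles = [simulate(true_k, t) + 0.01, simulate(true_k, t) - 0.01]

k_hat = estimate_k(profiles, t, np.linspace(0.1, 2.0, 191))
print(round(k_hat, 2))  # recovers the true rate constant: 0.8
```

The key structural point survives the simplification: each profile contributes its own residual term, but a single shared parameter is adjusted in the outer loop.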

  5. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in this computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  6. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, the data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of "gates." The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is

  7. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  8. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  9. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to that of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  10. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet the PJ method reduces the computational time to some extent for large grid sizes.
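The contrast between the two sequential solvers is easy to demonstrate on a 1-D analogue of the Poisson problem. The 1-D setup, grid size, and sweep count below are choices made for the sketch, not the paper's 3-D micropump configuration:

```python
import numpy as np

def jacobi_step(u, b, h):
    """One Jacobi sweep: every update reads only the previous iterate,
    which is what makes the method embarrassingly data-parallel."""
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * b[1:-1])
    return u_new

def gauss_seidel_step(u, b, h):
    """One Gauss-Seidel sweep: new values are used as soon as they are
    computed, so the loop is inherently sequential but needs fewer sweeps."""
    u = u.copy()
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * b[i])
    return u

# 1-D Poisson problem -u'' = b with u(0) = u(1) = 0, standing in for the
# 3-D electrostatic problem of the paper.
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
b = np.pi ** 2 * np.sin(np.pi * x)

# Reference: the exact solution of the *discrete* system.
A = 2.0 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)
u_star = np.zeros(n)
u_star[1:-1] = np.linalg.solve(A / (h * h), b[1:-1])

errs = {}
for name, step in [("Jacobi", jacobi_step), ("Gauss-Seidel", gauss_seidel_step)]:
    u = np.zeros(n)
    for _ in range(200):
        u = step(u, b, h)
    errs[name] = np.abs(u - u_star).max()
    print(f"{name:12s} error after 200 sweeps: {errs[name]:.3f}")
```

After the same number of sweeps, Gauss-Seidel is markedly closer to the discrete solution, which mirrors the paper's observation that SGS beats SJ until the Jacobi sweeps are parallelized.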

  11. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in the model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems, because doing so would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy, considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
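The probabilistic flavor of the approach can be conveyed with a deliberately tiny example: sample the uncertain parameter, re-solve the decision problem at each draw, and report how often the base-case policy remains optimal. Everything here (the one-shot two-action problem, the payoff numbers, the uniform prior) is an invented stand-in for a full MDP:

```python
import random

def optimal_action(p_success, reward_treat=8.0, reward_wait=5.0):
    """Pick the better of two actions in a one-shot decision; a toy
    stand-in for solving the full MDP at one parameter draw."""
    return "treat" if p_success * reward_treat > reward_wait else "wait"

# Base-case parameter value and the policy it recommends.
base_policy = optimal_action(0.7)

# Probabilistic sensitivity analysis: draw the uncertain parameter many
# times and record how often the base-case policy stays optimal.
random.seed(0)
agreement = sum(
    optimal_action(random.uniform(0.5, 0.9)) == base_policy
    for _ in range(10_000)
) / 10_000
print(f"confidence in base-case policy: {agreement:.1%}")
```

With `reward_treat = 8` and `reward_wait = 5`, "treat" is optimal whenever `p_success > 0.625`, so the agreement fraction directly measures how much of the prior's mass supports the base-case recommendation; tracing this fraction against a decision threshold is the idea behind the policy acceptability curve.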

  12. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, which verify the correctness and flexibility of our proposed algorithms. PMID:27508502
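The matrix machinery in the abstract (row-normalizing mined transition counts and raising the result to the power n) is compact in NumPy; the counts below are invented purely to make the sketch runnable:

```python
import numpy as np

# Toy transition counts between four anonymized locations, as if mined
# from single-step sequential rules (the counts are made up).
counts = np.array([[0, 8, 2, 0],
                   [1, 0, 6, 3],
                   [4, 0, 0, 6],
                   [2, 3, 5, 0]], dtype=float)

# Row-normalize to get the one-step transition probability matrix.
P = counts / counts.sum(axis=1, keepdims=True)

# Treating the requester's mobility as a stationary Markov process,
# the n-step transition matrix is simply the n-th matrix power.
n = 3
P_n = np.linalg.matrix_power(P, n)

# "Rough prediction": most likely location n steps after location 0.
print(int(np.argmax(P_n[0])))  # location 3
```

The "accurate prediction" variant of the paper goes further, combining simple-path probabilities with (n-1)-step detailed paths, but the same normalized matrices are its starting point.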

  13. [COMPUTER ASSISTED DESIGN AND ELECTRON BEAM MELTING RAPID PROTOTYPING METAL THREE-DIMENSIONAL PRINTING TECHNOLOGY FOR PREPARATION OF INDIVIDUALIZED FEMORAL PROSTHESIS].

    Science.gov (United States)

    Liu, Hongwei; Weng, Yiping; Zhang, Yunkun; Xu, Nanwei; Tong, Jing; Wang, Caimei

    2015-09-01

To study the feasibility of preparing an individualized femoral prosthesis through computer assisted design and electron beam melting rapid prototyping (EBM-RP) metal three-dimensional (3D) printing technology. One adult male left femur specimen was scanned with 64-slice spiral CT; the tomographic image data were imported into Mimics 15.0 software to reconstruct a 3D femoral model, and the 3D model of the individualized femoral prosthesis was then designed with UG 8.0 software. Finally, the 3D model data were imported into an EBM-RP metal 3D printer to print the individualized sleeve. According to the 3D model of the individualized prosthesis, a customized sleeve was successfully prepared through EBM-RP metal 3D printing and assembled with the standard handle component of the SR modular femoral prosthesis to make the individualized femoral prosthesis. A customized femoral prosthesis accurately matching the metaphyseal cavity can be designed using thin-slice CT scanning and computer assisted design technology. A titanium alloy personalized prosthesis with a complex 3D shape, porous surface, and good metaphyseal fit can be manufactured by EBM-RP metal 3D printing; the technology is convenient, rapid, and accurate.

  14. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features of information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

The applicability of the classical sequential probability ratio test (SPRT) to early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this delay for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied to both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied to mean values. (author).
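For reference, the classical SPRT that the paper parametrizes fits in a dozen lines; the boundaries follow Wald's approximations, and the drifted readings are made-up values, not data from the paper:

```python
import math

def sprt_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Classical SPRT for the mean of a Gaussian with known sigma.

    Accumulates the log-likelihood ratio sample by sample and stops as
    soon as it crosses a Wald boundary.  This is the standard stopping
    rule, not the one-sided (OSST) variant proposed in the paper.
    """
    a = math.log(beta / (1.0 - alpha))   # cross below: accept H0 (mean mu0)
    b = math.log((1.0 - beta) / alpha)   # cross above: accept H1 (mean mu1)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", len(samples)

# Made-up sensor readings whose mean has drifted from 0 toward 1,
# mimicking the failure signal a detector would be watching for.
readings = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0, 1.4, 0.6, 1.2, 1.0]
decision, n_used = sprt_mean(readings, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n_used)  # H1 9
```

The gap between a failure's onset and the boundary crossing (here, nine samples) is precisely the detection delay that the one-sided parametrization aims to minimize.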

  16. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate...

  17. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    Science.gov (United States)

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  18. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar+1 (BP+1) stimulation, i.e., relatively broad and focused stimulation modes...

  19. Sequential Combination of Electro-Fenton and Electrochemical Chlorination Processes for the Treatment of Anaerobically-Digested Food Wastewater.

    Science.gov (United States)

    Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang

    2017-09-19

A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH4+-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH4+ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH4+. Together with the minimal production of nitrate, this confirmed that the conversion of NH4+ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H2O2 feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH4+ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH4+-N in the real ADFW without external supply of NaCl.

  20. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...
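A quick way to see such sequential dependencies in any response series is the lag-1 autocorrelation; the function and the toy sequences below are illustrative, not the time-series models used in the study:

```python
def lag1_autocorr(xs):
    """Lag-1 autocorrelation: the simplest summary of the trial-to-trial
    dependencies that the study's time-series analysis quantifies."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

trend = [1, 2, 3, 4, 5, 6, 7, 8]        # strongly dependent sequence
alternating = [1, 5, 1, 5, 1, 5, 1, 5]  # negatively dependent sequence
print(lag1_autocorr(trend), lag1_autocorr(alternating))  # 0.625 -0.875
```

A value near zero would indicate that each response is set independently of the previous one, which is the condition the authors suggest suitable knob properties could approximate.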

  1. Visual short-term memory for sequential arrays.

    Science.gov (United States)

    Kumar, Arjun; Jiang, Yuhong

    2005-04-01

    The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

  2. Group sequential and confirmatory adaptive designs in clinical trials

    CERN Document Server

    Wassmer, Gernot

    2016-01-01

    This book provides an up-to-date review of the general principles of and techniques for confirmatory adaptive designs. Confirmatory adaptive designs are a generalization of group sequential designs. With these designs, interim analyses are performed in order to stop the trial prematurely under control of the Type I error rate. In adaptive designs, it is also permissible to perform a data-driven change of relevant aspects of the study design at interim stages. This includes, for example, a sample-size reassessment, a treatment-arm selection or a selection of a pre-specified sub-population. Essentially, this adaptive methodology was introduced in the 1990s. Since then, it has become popular and the object of intense discussion and still represents a rapidly growing field of statistical research. This book describes adaptive design methodology at an elementary level, while also considering designing and planning issues as well as methods for analyzing an adaptively planned trial. This includes estimation methods...

  3. The target-to-foils shift in simultaneous and sequential lineups.

    Science.gov (United States)

    Clark, Steven E; Davey, Sherrie L

    2005-04-01

    A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.

  4. SU-F-J-102: Lower Esophagus Margin Implications Based On Rapid Computational Algorithm for SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, M; Mazur, T; Li, H; Mutic, S; Bradley, J; Tsien, C; Green, O [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: To quantify inter-fraction esophagus-variation. Methods: Computed tomography and daily on-treatment 0.3-T MRI data sets for 7 patients were analyzed using a novel Matlab-based (Mathworks, Natick, MA) rapid computational method. Rigid registration was performed from the cricoid to the gastro-esophageal junction. CT and MR-based contours were compared at slice intervals of 3 mm. Variation was quantified by “expansion,” defined as additional length in any radial direction from CT contour to MR contour. Expansion computations were performed with 360° of freedom in each axial slice. We partitioned expansions into left anterior, right anterior, right posterior, and left posterior quadrants (LA, RA, RP, and LP, respectively). Sample means were compared by analysis of variance (ANOVA) and Fisher’s Protected Least Significant Difference test. Results: Fifteen fractions and 1121 axial slices from 7 patients undergoing SBRT for primary lung cancer (3) and metastatic lung disease (4) were analyzed, generating 41,970 measurements. Mean LA, RA, RP, and LP expansions were 4.30±0.05 mm, 3.71±0.05 mm, 3.17±0.07 mm, and 3.98±0.06 mm, respectively. 50.13% of all axial slices showed variation > 5 mm in one or more directions. Variation was greatest in the lower esophagus, with mean LA, RA, RP, and LP expansions of 5.98±0.09 mm, 4.59±0.09 mm, 4.04±0.16 mm, and 5.41±0.16 mm, respectively. The difference was significant compared to the mid and upper esophagus (p<.0001). The 95th percentiles of expansion for LA, RA, RP, and LP were 13.36 mm, 9.97 mm, 11.29 mm, and 12.19 mm, respectively. Conclusion: Analysis of on-treatment MR imaging of the lower esophagus during thoracic SBRT suggests margin expansions of 13.36 mm LA, 9.97 mm RA, 11.29 mm RP, 12.19 mm LP would account for 95% of measurements. Our novel algorithm for rapid assessment of margin expansion for critical structures with 360° of freedom in each axial slice enables continuously adaptive patient-specific margins which may
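
    The per-slice expansion computation with 360° of freedom can be sketched as follows, assuming both contours are resampled onto one-degree polar rays about a shared centroid; the quadrant-to-angle mapping, function names, and data layout are illustrative assumptions, not the authors' Matlab code:

```python
def quadrant_expansions(ct_radii, mr_radii):
    """Per-angle expansion (MR radius minus CT radius, in mm) binned into
    four quadrants LA/RA/RP/LP, one value per one-degree ray (0..359)."""
    quads = {"LA": [], "RA": [], "RP": [], "LP": []}
    for deg in range(360):
        expansion = mr_radii[deg] - ct_radii[deg]
        if deg < 90:
            q = "LA"
        elif deg < 180:
            q = "RA"
        elif deg < 270:
            q = "RP"
        else:
            q = "LP"
        quads[q].append(expansion)
    return {q: sum(v) / len(v) for q, v in quads.items()}

ct = [10.0] * 360                                        # circular CT contour, r = 10 mm
mr = [10.0 + (4.0 if d < 90 else 2.0) for d in range(360)]  # bulge in the LA quadrant
print(quadrant_expansions(ct, mr))
```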

  5. Computerbasiert prüfen [Computer-based Assessment]

    Directory of Open Access Journals (Sweden)

    Frey, Peter

    2006-08-01

    Full Text Available [english] Computer-based testing in medical education offers new perspectives. Advantages are sequential or adaptive testing, integration of movies or sound, rapid feedback to candidates, and management of web-based question banks. Computer-based testing can also be implemented in an OSCE examination. In e-learning environments, formative self-assessments are often implemented and give helpful feedback to learners. Disadvantages in high-stakes exams are the high requirements both for the quality of testing (e.g., standard setting) and for the information technology, especially for security. [german, translated] Computer-based examinations in medical education open up new possibilities. Advantages of such examinations lie in sequential or adaptive testing, the integration of moving images or sound, rapid scoring, and central administration of the exam questions via the Internet. One area of application with reasonable effort is multi-station examinations such as the OSCE. Computer-based formative self-tests are frequently offered in e-learning; they help learners assess their level of knowledge and compare their performance with others. Limits appear in summative examinations with respect to the testing location, since cheating is possible at home. Higher-level clinical competencies such as examination technique or communication are hardly suitable for computer-based testing.

  6. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns
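
    Winnerless competition is typically realized with generalized Lotka-Volterra dynamics under asymmetric inhibition, which is known to produce a cyclic handover of activity among units rather than a single winner. A minimal sketch (the rate matrix, step size, and initial state are assumptions for illustration, not the paper's two-layer network):

```python
def wlc_step(a, rho, dt=0.01):
    """One Euler step of generalized Lotka-Volterra dynamics,
    da_i/dt = a_i * (1 - sum_j rho[i][j] * a_j),
    the standard setting in which winnerless competition arises."""
    n = len(a)
    return [max(a[i] + dt * a[i] * (1 - sum(rho[i][j] * a[j] for j in range(n))),
                1e-12)
            for i in range(n)]

# Asymmetric inhibition (rho[i][j] != rho[j][i]) keeps all units alive
# while activity passes sequentially between them.
rho = [[1.0, 1.5, 0.5],
       [0.5, 1.0, 1.5],
       [1.5, 0.5, 1.0]]
a = [0.6, 0.3, 0.1]
for _ in range(2000):
    a = wlc_step(a, rho)
print(a)  # all activities remain positive and bounded
```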

  7. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Boolean operators and swept surfaces can create objects capable of reflecting light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
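
    For reference, the standard equal-power recursion gives the k-th splitter reflectivity R_k = 1/(N − k + 1), with the last element a full mirror; a sketch under that assumption (the paper's exact equations may differ):

```python
def equal_power_reflectivities(n):
    """Reflectivity of the k-th splitter (1-indexed) in a chain of n
    that taps off n equal-power reflected beams: R_k = 1/(n - k + 1)."""
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def tapped_powers(reflectivities, p0=1.0):
    """Propagate power through the chain, collecting each reflected tap."""
    taps, p = [], p0
    for r in reflectivities:
        taps.append(p * r)
        p *= (1.0 - r)
    return taps

# For n = 4: R = [1/4, 1/3, 1/2, 1], and each tap carries ~0.25 of the input.
print(tapped_powers(equal_power_reflectivities(4)))
```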

  8. Basal ganglia and cortical networks for sequential ordering and rhythm of complex movements

    Directory of Open Access Journals (Sweden)

    Jeffery G. Bednark

    2015-07-01

    Full Text Available Voluntary actions require the concurrent engagement and coordinated control of complex temporal (e.g., rhythm) and ordinal motor processes. Using high-resolution functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA), we sought to determine the degree to which these complex motor processes are dissociable in basal ganglia and cortical networks. We employed three different finger-tapping tasks that differed in the demand on the sequential temporal rhythm or sequential ordering of submovements. Our results demonstrate that sequential rhythm and sequential order tasks were partially dissociable based on activation differences. The sequential rhythm task activated a widespread network centered around the SMA and basal-ganglia regions including the dorsomedial putamen and caudate nucleus, while the sequential order task preferentially activated a fronto-parietal network. There was also extensive overlap between sequential rhythm and sequential order tasks, with both tasks commonly activating bilateral premotor, supplementary motor, and superior/inferior parietal cortical regions, as well as regions of the caudate/putamen of the basal ganglia and the ventro-lateral thalamus. Importantly, within the cortical regions that were active for both complex movements, MVPA could accurately classify different patterns of activation for the sequential rhythm and sequential order tasks. In the basal ganglia, however, overlapping activation for the sequential rhythm and sequential order tasks, which was found in classic motor circuits of the putamen and ventro-lateral thalamus, could not be accurately differentiated by MVPA. Overall, our results highlight the convergent architecture of the motor system, where complex motor information that is spatially distributed in the cortex converges into a more compact representation in the basal ganglia.

  9. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

  10. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  11. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    textabstractWe define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  12. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  13. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing within a time span by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. Accordingly, and combining them with parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets, applying the frequent concept and search-space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and several designed information structures, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
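
    The support-counting core that any such miner parallelizes can be sketched in a few lines; this is the generic frequent-subsequence test, not the SPP algorithm itself, and the toy database is invented for illustration:

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` as an order-preserving
    subsequence (the containment test behind sequential pattern support)."""
    it = iter(sequence)
    return all(item in it for item in pattern)  # `in` advances the iterator

def support(pattern, database):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

db = [list("abcde"), list("acdeb"), list("abdec"), list("badce")]
print(support(["a", "b", "c"], db))  # contained in 2 of 4 sequences -> 0.5
```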

  14. OVERVIEW OF DEVELOPMENT OF P-CARES: PROBABILISTIC COMPUTER ANALYSIS FOR RAPID EVALUATION OF STRUCTURES

    International Nuclear Information System (INIS)

    NIE, J.; XU, J.; COSTANTINO, C.; THOMAS, V.

    2007-01-01

    Brookhaven National Laboratory (BNL) undertook an effort to revise the CARES (Computer Analysis for Rapid Evaluation of Structures) program under the auspices of the US Nuclear Regulatory Commission (NRC). The CARES program provided the NRC staff a capability to quickly check the validity and/or accuracy of the soil-structure interaction (SSI) models and associated data received from various applicants. The aim of the current revision was to implement various probabilistic simulation algorithms in CARES (referred hereinafter as P-CARES [1]) for performing the probabilistic site response and soil-structure interaction (SSI) analyses. This paper provides an overview of the development process of P-CARES, including the various probabilistic simulation techniques used to incorporate the effect of site soil uncertainties into the seismic site response and SSI analyses and an improved graphical user interface (GUI)

  15. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  16. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent laws of the mathematical and physical model of photon transport in light of the architecture of parallel computers, and by using a divide-and-conquer strategy, adjusting the algorithmic structure of the program, dissolving data dependences, finding parallelizable components, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP), and the YH-3 (MPP), and very good parallel speedup was obtained.

  17. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    Directory of Open Access Journals (Sweden)

    Closas Pau

    2012-10-01

    be applied to other data sets to quickly detect influenza outbreaks. The sequential structure of the test makes it suitable for implementation in many platforms at a low computational cost without requiring to store large data sets.
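
    A stripped-down version of such a detector, with a sliding window of recent counts compared against an endemic baseline via the two-sample Kolmogorov-Smirnov statistic, could look as follows. The window length, threshold, and counts are invented for illustration; the paper's sequential test additionally controls error rates over time:

```python
def ks_statistic(sample, reference):
    """Two-sample KS statistic: maximum gap between empirical CDFs."""
    xs = sorted(set(sample) | set(reference))
    def ecdf(data, x):
        return sum(v <= x for v in data) / len(data)
    return max(abs(ecdf(sample, x) - ecdf(reference, x)) for x in xs)

def sequential_detect(stream, baseline, window=5, threshold=0.5):
    """Slide a window over incoming counts; return the index at which the
    window's KS distance from the baseline first exceeds the threshold."""
    for t in range(window, len(stream) + 1):
        if ks_statistic(stream[t - window:t], baseline) > threshold:
            return t - 1
    return None

baseline = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]            # endemic weekly counts
stream = [3, 2, 4, 3, 2, 3, 2, 18, 25, 31, 40, 38]   # outbreak begins at index 7
print(sequential_detect(stream, baseline))  # alarm index, shortly after onset
```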

  18. Rapid genetic algorithm optimization of a mouse computational model: Benefits for anthropomorphization of neonatal mouse cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Corina Teodora Bot

    2012-11-01

    Full Text Available While the mouse presents an invaluable experimental model organism in biology, its usefulness in cardiac arrhythmia research is limited in some aspects due to major electrophysiological differences between murine and human action potentials (APs). As previously described, these species-specific traits can be partly overcome by application of a cell-type transforming clamp (CTC) to anthropomorphize the murine cardiac AP. CTC is a hybrid experimental-computational dynamic clamp technique, in which a computationally calculated time-dependent current is inserted into a cell in real time, to compensate for the differences between sarcolemmal currents of that cell (e.g., murine) and the desired species (e.g., human). For effective CTC performance, mismatch between the measured cell and a mathematical model used to mimic the measured AP must be minimal. We have developed a genetic algorithm (GA) approach that rapidly tunes a mathematical model to reproduce the AP of the murine cardiac myocyte under study. Compared to a prior implementation that used a template-based model selection approach, we show that GA optimization to a cell-specific model results in a much better recapitulation of the desired AP morphology with CTC. This improvement was more pronounced when anthropomorphizing neonatal mouse cardiomyocytes to human-like APs than to guinea pig APs. CTC may be useful for a wide range of applications, from screening effects of pharmaceutical compounds on ion channel activity, to exploring variations in the mouse or human genome. Rapid GA optimization of a cell-specific mathematical model improves CTC performance and may therefore expand the applicability and usage of the CTC technique.
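
    The GA tuning loop itself is generic; the sketch below fits a toy two-parameter "model" rather than a cardiac AP model, and the selection, crossover, and mutation choices are illustrative assumptions, not the authors' implementation:

```python
import random

def genetic_minimize(loss, bounds, pop_size=30, generations=60, seed=1):
    """Tiny real-coded GA: keep the best half each generation, breed
    children by blend crossover plus Gaussian mutation, clip to bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=loss)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [min(max((x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=loss)

# Recover the parameters (2.0, -1.0) of a toy model from its loss surface.
loss = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best = genetic_minimize(loss, [(-5.0, 5.0), (-5.0, 5.0)])
print(best)  # close to [2.0, -1.0]
```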

  19. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  20. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  1. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  2. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
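
    The core sequential-update idea, processing one scalar observation at a time when the measurement noise covariance R is diagonal, can be sketched outside the attitude setting. This plain linear-Kalman sketch is an illustration only; the paper's SMEKF additionally handles quaternion attitude and defers the covariance update until all observations are processed:

```python
import numpy as np

def sequential_update(x, P, H, z, r_diag):
    """Kalman measurement update applied one scalar row at a time
    (equivalent to the batch update when R is diagonal)."""
    x, P = x.copy(), P.copy()
    for i in range(len(z)):
        h = H[i]                      # 1 x n measurement row
        s = h @ P @ h + r_diag[i]     # scalar innovation variance
        k = P @ h / s                 # Kalman gain (n-vector)
        x = x + k * (z[i] - h @ x)
        P = P - np.outer(k, h @ P)    # (I - k h) P
    return x, P

# Two scalar observations of a 2-state system with unit prior covariance.
x0, P0 = np.zeros(2), np.eye(2)
H = np.array([[1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0])
x1, P1 = sequential_update(x0, P0, H, z, [0.5, 0.5])
print(x1, P1)  # x1 = [2/3, 4/3], P1 = diag(1/3, 1/3)
```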

  3. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Directory of Open Access Journals (Sweden)

    Fangjun Qin

    2018-05-01

    Full Text Available In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  4. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments for logical modeling of the memory of digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  5. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel system to increase system performance. Research conducted in development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, tasks definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
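
    The critical-path analysis on which the allocation is based reduces to a longest-path computation over the task DAG. A sketch with hypothetical task names and durations (the actual guidance-and-control task graph is not given in the abstract):

```python
def critical_path_length(tasks, deps):
    """Length of the longest (critical) path through a task DAG.
    tasks: {name: duration}; deps: {name: [predecessor names]}."""
    memo = {}
    def finish(t):
        if t not in memo:
            memo[t] = tasks[t] + max((finish(p) for p in deps.get(t, [])),
                                     default=0)
        return memo[t]
    return max(finish(t) for t in tasks)

# Illustrative guidance-loop tasks: sense feeds two parallel stages,
# both of which must finish before the guide stage.
tasks = {"sense": 2, "estimate": 4, "filter": 1, "guide": 3}
deps = {"estimate": ["sense"], "filter": ["sense"], "guide": ["estimate", "filter"]}
print(critical_path_length(tasks, deps))  # 2 + 4 + 3 = 9 via sense -> estimate -> guide
```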

  6. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  7. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  8. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

    In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints without the second derivatives of the functions of the discussed problems; such a subproblem can be formulated as a second-order cone program which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported

  9. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  10. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.

  11. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  12. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  13. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Full Text Available Abstract Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
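
    The notion of a diversity-adjusted required information size can be made concrete. The sketch below uses the standard two-sample sample-size formula for a continuous outcome, inflated by 1/(1 − D²); it illustrates the idea only and is not the Trial Sequential Analysis software.

```python
from statistics import NormalDist

def required_information_size(delta, sd, alpha=0.05, beta=0.10, diversity=0.0):
    """Total number of participants a meta-analysis needs to detect a mean
    difference `delta` with standard deviation `sd`: the classic two-sample
    formula, inflated by 1/(1 - D^2) to account for between-trial diversity."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(1 - beta)
    fixed_effect_n = 4 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return fixed_effect_n / (1 - diversity)

# A moderate effect (delta = 0.5 SD) at 5% significance and 90% power:
ris = required_information_size(0.5, 1.0)                        # about 168 participants
ris_adjusted = required_information_size(0.5, 1.0, diversity=0.25)
```

    Even modest diversity (D² = 0.25 here) inflates the required information size substantially, which is why naïve significance thresholds mislead before that size is reached.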

  14. A Sequential, Implicit, Wavelet-Based Solver for Multi-Scale Time-Dependent Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Donald A. McLaren

    2013-04-01

    Full Text Available This paper describes and tests a wavelet-based implicit numerical method for solving partial differential equations. Intended for problems with localized small-scale interactions, the method exploits the form of the wavelet decomposition to divide the implicit system created by the time-discretization into multiple smaller systems that can be solved sequentially. Included is a test on a basic non-linear problem, with both the results of the test, and the time required to calculate them, compared with control results based on a single system with fine resolution. The method is then tested on a non-trivial problem, its computational time and accuracy checked against control results. In both tests, it was found that the method requires less computational expense than the control. Furthermore, the method showed convergence towards the fine resolution control results.

  15. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    Science.gov (United States)

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than the ones obtained with models previously published. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment, if data are collected on the treated neonate.
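
    The particle-based sequential updating idea can be sketched with a toy one-compartment model. The model form, parameter ranges, noise level, and particle counts below are illustrative assumptions, not the published neonatal caffeine model.

```python
import numpy as np

def sequential_update(particles, weights, t_obs, c_obs, dose, sigma=0.5, seed=1):
    """One sequential Bayesian update for a one-compartment IV-bolus model,
    C(t) = dose/V * exp(-(CL/V) * t): reweight (CL, V) parameter particles by
    the likelihood of a newly observed concentration, then resample."""
    cl, v = particles[:, 0], particles[:, 1]
    pred = dose / v * np.exp(-(cl / v) * t_obs)           # predicted concentration
    likelihood = np.exp(-0.5 * ((c_obs - pred) / sigma) ** 2)
    w = weights * likelihood
    w = w / w.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(w), size=len(w), p=w)            # multinomial resampling
    return particles[idx], np.full(len(w), 1.0 / len(w))

# Vague prior over clearance (L/h) and volume (L); one concentration sample.
rng = np.random.default_rng(0)
prior = np.column_stack([rng.uniform(0.05, 0.5, 2000), rng.uniform(1.0, 4.0, 2000)])
posterior, w1 = sequential_update(prior, np.full(2000, 5e-4), 5.0, 3.0, dose=10.0)
```

    Each new blood sample triggers one such reweight-and-resample step, so predictions tighten around the patient without refitting the whole population model.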

  16. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  17. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    Science.gov (United States)

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost effective compared to sequential surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken, looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  18. Exploring data with RapidMiner

    CERN Document Server

    Chisholm, Andrew

    2013-01-01

    A step-by-step tutorial that uses examples so that users of different levels will benefit from the facilities offered by RapidMiner. If you are a computer scientist or an engineer who has real data from which you want to extract value, this book is ideal for you. You will need at least a basic awareness of data mining techniques and some exposure to RapidMiner.

  19. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  20. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.

  1. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.

  2. Introduction of Sequential Inactivated Polio Vaccine–Oral Polio Vaccine Schedule for Routine Infant Immunization in Brazil’s National Immunization Program

    Science.gov (United States)

    Domingues, Carla Magda Allan S.; de Fátima Pereira, Sirlene; Marreiros, Ana Carolina Cunha; Menezes, Nair; Flannery, Brendan

    2015-01-01

    In August 2012, the Brazilian Ministry of Health introduced inactivated polio vaccine (IPV) as part of a sequential polio vaccination schedule for all infants beginning their primary vaccination series. The revised childhood immunization schedule included 2 doses of IPV at 2 and 4 months of age, followed by 2 doses of oral polio vaccine (OPV) at 6 and 15 months of age. One annual national polio immunization day was maintained to provide OPV to all children aged 6 to 59 months. The decision to introduce IPV was based on preventing rare cases of vaccine-associated paralytic polio, financially sustaining IPV introduction, ensuring equitable access to IPV, and preparing for future OPV cessation following global eradication. Introducing IPV during a national multivaccination campaign led to rapid uptake, despite challenges with local vaccine supply due to high wastage rates. Continuous monitoring is required to achieve high coverage with the sequential polio vaccine schedule. PMID:25316829

  3. Computationally designed libraries for rapid enzyme stabilization

    NARCIS (Netherlands)

    Wijma, Hein J.; Floor, Robert J.; Jekel, Peter A.; Baker, David; Marrink, Siewert J.; Janssen, Dick B.

    The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants

  4. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
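
    A classic instance of a sequential trials method is Wald's sequential probability ratio test, sketched below for a Bernoulli outcome; the hypothesised success rates and error levels are illustrative.

```python
from math import log

def sprt(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli success rate:
    accumulate the log-likelihood ratio and stop at the first boundary crossing."""
    upper = log((1 - beta) / alpha)        # crossing this accepts H1 (p = p1)
    lower = log(beta / (1 - alpha))        # crossing this accepts H0 (p = p0)
    llr = 0.0
    for n, success in enumerate(observations, start=1):
        llr += log(p1 / p0) if success else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)
```

    At these settings, a run of 9 straight successes already stops the trial in favour of H1, and 6 straight failures stops it in favour of H0 — typically far fewer observations than a comparable fixed-sample design requires.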

  5. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  6. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    Science.gov (United States)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.

  7. The impact of eyewitness identifications from simultaneous and sequential lineups.

    Science.gov (United States)

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  8. Properties of simultaneous and sequential two-nucleon transfer

    International Nuclear Information System (INIS)

    Pinkston, W.T.; Satchler, G.R.

    1982-01-01

    Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

  9. Sequential contrast-enhanced MR imaging of the penis.

    Science.gov (United States)

    Kaneko, K; De Mouy, E H; Lee, B E

    1994-04-01

    To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.

  10. Sequential weak continuity of null Lagrangians at the boundary

    Czech Academy of Sciences Publication Activity Database

    Kalamajska, A.; Kraemer, S.; Kružík, Martin

    2014-01-01

    Roč. 49, 3/4 (2014), s. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords : null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf

  11. One-way quantum computing in superconducting circuits

    Science.gov (United States)

    Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.

    2018-03-01

    We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
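
    The measurement-based principle can be checked in miniature with plain linear algebra: measuring one qubit of a two-qubit cluster state in the X basis leaves the other qubit in H|+⟩, i.e. the measurement itself enacts a gate. This is a textbook illustration of one-way computing, not the superconducting-circuit proposal itself.

```python
import numpy as np

# Prepare the two-qubit cluster state CZ(|+>|+>), measure qubit 1 in the
# X basis (outcome +), and verify that qubit 2 is left in H|+> = |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
cz = np.diag([1.0, 1.0, 1.0, -1.0])
cluster = cz @ np.kron(plus, plus)

project_plus = np.kron(plus, np.eye(2))   # <+| on qubit 1, identity on qubit 2
out = project_plus @ cluster
out = out / np.linalg.norm(out)           # post-measurement state of qubit 2

hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
expected = hadamard @ plus                # H|+> = |0>
assert np.allclose(out, expected)
```

    In a full one-way computation the same mechanism is iterated across a larger cluster, with measurement angles and feedforward corrections encoding the algorithm.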

  12. Droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling for simpler and faster PCR assay using wire-guided manipulations.

    Science.gov (United States)

    You, David J; Yoon, Jeong-Yeol

    2012-09-04

    A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using a "wire-guided" method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 revolutions per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to attract the extracted DNA from the droplet residing on a superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and vibrations was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets), and with a high level of

  13. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  14. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  15. Sequential Extraction Versus Comprehensive Characterization of Heavy Metal Species in Brownfield Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.

    2002-06-01

    The applicability of sequential extraction as a means to determine species of heavy metals was examined by a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical- and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.

  16. Imitation of the sequential structure of actions by chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Whiten, A

    1998-09-01

    Imitation was studied experimentally by allowing chimpanzees (Pan troglodytes) to observe alternative patterns of actions for opening a specially designed "artificial fruit." Like problematic foods primates deal with naturally, with the test fruit several defenses had to be removed to gain access to an edible core, but the sequential order and method of defense removal could be systematically varied. Each subject repeatedly observed 1 of 2 alternative techniques for removing each defense and 1 of 2 alternative sequential patterns of defense removal. Imitation of sequential organization emerged after repeated cycles of demonstration and attempts at opening the fruit. Imitation in chimpanzees may thus have some power to produce cultural convergence, counter to the supposition that individual learning processes corrupt copied actions. Imitation of sequential organization was accompanied by imitation of some aspects of the techniques that made up the sequence.

  17. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Full Text Available Our daily interaction with the world is plagued with situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
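
    The hierarchical-strategy idea rests on standard hidden Markov machinery. A minimal Viterbi decoder is sketched below, with hypothetical "explore"/"exploit" strategies and made-up probabilities standing in for the paper's stereotyped strategies.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden strategy sequence for an observed choice sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            # Best predecessor: maximize prior probability * transition * emission.
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], V[-1][prev][1])
                for prev in states
            )
            row[s] = (prob, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

# Hypothetical two-strategy model: learners start out exploring, then settle
# into exploiting a known action sequence (all probabilities are made up).
states = ["explore", "exploit"]
start_p = {"explore": 0.6, "exploit": 0.4}
trans_p = {"explore": {"explore": 0.7, "exploit": 0.3},
           "exploit": {"explore": 0.2, "exploit": 0.8}}
emit_p = {"explore": {"novel": 0.8, "repeat": 0.2},
          "exploit": {"novel": 0.1, "repeat": 0.9}}
best = viterbi(["novel", "novel", "repeat", "repeat"], states, start_p, trans_p, emit_p)
```

    Decoding the observed choices recovers the expertise transition — two exploratory trials followed by two exploitative ones — which is the kind of latent trajectory the paper's model tracks across the learning curve.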

  18. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to the attestation of sequential separation of these proposed radionuclides from radioactive waste samples, a unique sequential procedure for 90Sr, 239Pu and 241Am separation from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four various sorbents were used in series for separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After the positive results of this sequential procedure, experiments followed on 126Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporate concentrates and sludge showed high separation efficiency, while activities of 126Sn were under the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on a HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides: decontamination factors (Df) were higher than 1400 for 60Co and 47000 for 137Cs. Based on these experiments and the results of the separation procedures, a complex method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the sorbents TEVA (R) Resin and Anion Exchange Resin to real samples of radioactive wastes provided satisfactory results and an economical, time-sparing, efficient method. (author)

  19. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer that reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing, after which static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized, registers are allocated with the linear-scan allocation algorithm, and the final result is distributed assembly code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: 1.99 on two cores and 13.88 on 16 cores.
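The core scheduling step described above (assigning data-flow-graph instructions to cores) can be illustrated with a greedy list scheduler. This is a hedged sketch, not the paper's actual algorithm: the `schedule` helper, the unit instruction latency, and the toy instruction list are all assumptions.

```python
from collections import defaultdict

def schedule(instrs, num_cores):
    """Greedy list scheduling of a dependency DAG onto cores.

    instrs: list of (name, deps), topologically ordered; deps are names of
    earlier instructions. Each instruction is placed on the core where it can
    start earliest once all of its dependencies have completed (unit latency).
    Returns {core: [names]}.
    """
    finish = {}                      # name -> cycle its result is ready
    core_free = [0] * num_cores      # next free cycle per core
    placement = defaultdict(list)
    for name, deps in instrs:
        ready = max((finish[d] for d in deps), default=0)
        core = min(range(num_cores), key=lambda c: max(core_free[c], ready))
        start = max(core_free[core], ready)
        finish[name] = start + 1
        core_free[core] = start + 1
        placement[core].append(name)
    return dict(placement)

# Two independent dependency chains: a->b and c->d.
placement = schedule([("a", []), ("b", ["a"]), ("c", []), ("d", ["c"])], 2)
```

For this toy input the two independent chains land on separate cores, giving the ideal speedup of 2, in the spirit of the near-linear speedups reported in the paper.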

  20. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data

    OpenAIRE

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2010-01-01

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, of...
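The "reweigh" idea the abstract alludes to — reusing posterior realizations from one data batch as an importance sample for the posterior updated with new data — can be sketched for a toy Beta-Bernoulli model. Everything here (the coin model, the Beta(6,6) starting posterior, the `reweigh` helper) is an illustrative assumption, not the authors' method.

```python
import random, math

def reweigh(samples, new_data):
    """Reuse posterior draws for old data as an importance sample for the
    updated posterior: weight each draw by the likelihood of the new batch."""
    logw = [sum(math.log(p if x else 1 - p) for x in new_data) for p in samples]
    m = max(logw)                               # stabilize before exponentiating
    w = [math.exp(lw - m) for lw in logw]
    total = sum(w)
    return [wi / total for wi in w]

random.seed(0)
# Posterior draws for a coin's heads-probability after an initial balanced batch:
samples = [random.betavariate(6, 6) for _ in range(5000)]   # Beta(6,6) posterior
weights = reweigh(samples, [1] * 8 + [0] * 2)               # new batch: 8 heads, 2 tails
post_mean = sum(w * p for w, p in zip(weights, samples))
```

The exact updated posterior is Beta(14, 8) with mean 14/22 ≈ 0.64, so the reweighted estimate should land close to that without ever refitting on the combined data.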

  1. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

  2. Construction of computational program of aging in insulating materials for searching reversed sequential test conditions to give damage equivalent to simultaneous exposure of heat and radiation

    International Nuclear Information System (INIS)

    Fuse, Norikazu; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

Two consecutive numerical calculations on the degradation of polymeric insulation under thermal and radiation environments are carried out to simulate the so-called reversed sequential acceleration test. The aim of the calculation is to search for testing conditions which produce material damage equivalent to the case of simultaneous exposure to heat and radiation. At least the following four parameters need to be considered in the sequential method: dose rate and exposure time for the radiation step, as well as temperature and aging time for the heating step. The present paper discusses the handling of these parameters and shows some trial calculation results. (author)
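The four-parameter search described above can be sketched with a deliberately simplified additive damage model: an Arrhenius thermal term plus a dose-rate power law, with a bisection over the heating time. The model, its constants, and the `equivalent_heating_time` helper are illustrative assumptions; real insulation aging involves dose-rate effects and heat/radiation synergy that this sketch ignores.

```python
import math

# Hypothetical additive damage model (illustration only):
A, Ea, k = 1e6, 0.8, 8.617e-5      # Arrhenius prefactor, activation energy [eV], Boltzmann [eV/K]
c, p = 0.05, 0.9                   # radiation damage coefficient and dose-rate exponent

def damage_simultaneous(T, r, t):
    """Damage after simultaneous exposure: temperature T [K], dose rate r, time t."""
    return (A * math.exp(-Ea / (k * T)) + c * r**p) * t

def damage_sequential(T, t_heat, r, t_rad):
    """Damage after a heating step of t_heat followed by irradiation for t_rad."""
    return A * math.exp(-Ea / (k * T)) * t_heat + c * r**p * t_rad

def equivalent_heating_time(T, r, t_sim, t_rad):
    """Bisect for the heating time whose sequential damage matches a
    simultaneous exposure at the same T and r lasting t_sim."""
    target = damage_simultaneous(T, r, t_sim)
    lo, hi = 0.0, 10 * t_sim
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if damage_sequential(T, mid, r, t_rad) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity case: with equal radiation durations, the heating time must equal t_sim.
t_eq = equivalent_heating_time(400.0, 1.0, 1000.0, 1000.0)
```

In an additive model the exposure order is irrelevant, which is exactly why the paper's calibrated model (where order matters) needs a numerical search like this rather than a closed form.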

  3. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincare cycle) t_n, the delay time t_nμ and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of the resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincare cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 - 10^-16 sec for medium and heavy nuclei and about 10^-20 sec for the intermediate resonances. (auth.)

  4. One-stage sequential bilateral thoracic expansion for asphyxiating thoracic dystrophy (Jeune syndrome).

    Science.gov (United States)

    Muthialu, Nagarajan; Mussa, Shafi; Owens, Catherine M; Bulstrode, Neil; Elliott, Martin J

    2014-10-01

Jeune syndrome (asphyxiating thoracic dystrophy) is a rare disorder characterized by skeletal dysplasia, reduced diameter of the thoracic cage and extrathoracic organ involvement. Fatal, early respiratory insufficiency may occur. Two-stage lateral thoracic expansion has been reported, addressing each side sequentially over 3-12 months. While staged repair theoretically provides less invasive surgery in a small child with respiratory distress, we utilized a single-stage, bilateral procedure aiming to rapidly maximize lung development. Combined bilateral surgery also offered the chance of rapid recovery and reduced hospital stay. We present our early experience of this modification of existing surgical treatment for an extremely rare condition, thought to be generally fatal in early childhood. Nine children (6 males, 3 females; median age 30 months [range, 3.5-75]) underwent thoracic expansion for Jeune syndrome in our centre. All patients required preoperative respiratory support (5 with tracheostomy, 8 requiring positive pressure ventilation regularly within each day/night cycle). Two children underwent sequential unilateral expansion (2-month interval between stages) and 7 children bilateral thoracic expansion, by means of staggered osteotomies of the third to eighth ribs and plate fixation of the fourth to fifth and sixth to seventh ribs, leaving the remaining ribs floating. There was no operative mortality. There were 2 deaths within 3 months of surgery, due to pulmonary hypertension (1 following two-stage and 1 following single-stage thoracic expansion). At the median follow-up of 11 months (range, 1-15), 3 children have been discharged home from their referring unit and 2 have significantly reduced respiratory support. One child remains on non-invasive ventilation and another is still ventilated with a high oxygen requirement. Jeune syndrome is a difficult condition to manage, but bilateral thoracic expansion offers an effective reduction in ventilator requirements in these children.

  5. A node linkage approach for sequential pattern mining.

    Directory of Open Access Journals (Sweden)

    Osvaldo Navarro

Full Text Available Sequential pattern mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, current approaches prove inefficient when dealing with large input datasets, a large number of distinct symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms.
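The pseudo-projection idea described above — representing each projected database as (sequence id, start position) pairs instead of copied postfixes — can be sketched with a minimal PrefixSpan-style miner. This is a generic pattern-growth sketch for single-item sequences, not the NLDFT algorithm itself; the `prefixspan` name and the toy database are assumptions.

```python
def prefixspan(db, minsup):
    """Depth-first pattern growth over pseudo-projections.

    db: list of sequences (lists of hashable items).
    Returns a list of (pattern, support) pairs with support >= minsup.
    """
    results = []

    def grow(prefix, proj):
        # Count, per item, in how many projected postfixes it occurs.
        support = {}
        for sid, start in proj:
            seen = set()
            for item in db[sid][start:]:
                if item not in seen:
                    seen.add(item)
                    support[item] = support.get(item, 0) + 1
        for item, sup in sorted(support.items()):
            if sup < minsup:
                continue
            pattern = prefix + [item]
            results.append((tuple(pattern), sup))
            # Pseudo-projection: record the position just past the first
            # occurrence of `item` in each postfix -- no data is copied.
            new_proj = []
            for sid, start in proj:
                seq = db[sid]
                for pos in range(start, len(seq)):
                    if seq[pos] == item:
                        new_proj.append((sid, pos + 1))
                        break
            grow(pattern, new_proj)

    grow([], [(sid, 0) for sid in range(len(db))])
    return results

db = [list("abc"), list("abc"), list("ab"), list("acb")]
found = dict(prefixspan(db, 3))
```

With minimum support 3, the frequent sequential patterns in this toy database are a, b, c, a→b and a→c; only the branch currently being explored ever holds a projection in memory.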

  6. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

Full Text Available Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
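The procedure sketched in the abstract — a sequential likelihood ratio with a non-anticipating estimator maintained by an online algorithm — can be illustrated for a Gaussian mean shift, where online gradient descent on the negative log-likelihood reduces to an exponential moving average. The threshold, step size, and `detect` helper are illustrative assumptions, not the paper's tuned procedure.

```python
import random

def detect(xs, threshold=10.0, eta=0.1):
    """CUSUM-style detection of a mean shift from N(0,1) to N(theta,1).

    The post-change mean estimate `theta` is non-anticipating: the statistic
    at time t uses only the estimate built from observations before t.
    Returns the alarm index, or None if no alarm is raised.
    """
    theta, s = 0.0, 0.0
    for t, x in enumerate(xs):
        # Log-likelihood ratio of N(theta,1) vs N(0,1), with yesterday's theta.
        s = max(0.0, s + theta * x - 0.5 * theta * theta)
        if s > threshold:
            return t
        theta += eta * (x - theta)   # online update toward the running mean
    return None

random.seed(1)
pre = [random.gauss(0, 1) for _ in range(200)]    # in-control segment
post = [random.gauss(2, 1) for _ in range(100)]   # mean shifts to 2 at t = 200
alarm = detect(pre + post)
```

Before the change the statistic has negative drift and stays near zero; after the change the estimate ramps toward the true post-change mean and the statistic crosses the threshold within a few dozen samples.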

  7. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    Energy Technology Data Exchange (ETDEWEB)

    Schmalz, Mark S

    2011-07-24

Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient...

  8. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  9. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess and compare parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations lead us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support of all three claims stated in part I. (paper)

  10. Operational reliability evaluation of restructured power systems with wind power penetration utilizing reliability network equivalent and time-sequential simulation approaches

    DEFF Research Database (Denmark)

    Ding, Yi; Cheng, Lin; Zhang, Yonghong

    2014-01-01

In the last two decades, wind power generation has been rapidly and widely developed in many regions and countries for tackling the problems of environmental pollution and sustainability of energy supply. However, the high share of intermittent and fluctuating wind power production has also ... and reserve providers, fast reserve providers and the transmission network in restructured power systems. A contingency management schema for real-time operation considering its coupling with the day-ahead market is proposed. Time-sequential Monte Carlo simulation is used to model the chronological...

  11. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to keep updated. Computer people do not...

  12. Comment on: "Cell Therapy for Heart Disease: Trial Sequential Analyses of Two Cochrane Reviews"

    DEFF Research Database (Denmark)

    Castellini, Greta; Nielsen, Emil Eik; Gluud, Christian

    2017-01-01

    Trial Sequential Analysis is a frequentist method to help researchers control the risks of random errors in meta-analyses (1). Fisher and colleagues used Trial Sequential Analysis on cell therapy for heart diseases (2). The present article discusses the usefulness of Trial Sequential Analysis and...

  13. Computer Assisted Instructional Design for Computer-Based Instruction. Final Report. Working Papers.

    Science.gov (United States)

    Russell, Daniel M.; Pirolli, Peter

    Recent advances in artificial intelligence and the cognitive sciences have made it possible to develop successful intelligent computer-aided instructional systems for technical and scientific training. In addition, computer-aided design (CAD) environments that support the rapid development of such computer-based instruction have also been recently…

  14. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

Of all the hardware and software developed for industrial control purposes, the majority is devoted to sequential, or binary-valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support for a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model to generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough.
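The flavor of generating a maximally parallel plan from action models can be sketched with a forward-chaining toy planner over precondition/effect sets: each level fires every applicable action, which yields a partial order like the one the thesis translates into GRAFCET. The `parallel_plan` helper and the tank-startup actions are hypothetical illustrations, not the thesis's algorithms.

```python
def parallel_plan(actions, initial, goal):
    """Forward-chain a maximally parallel plan.

    actions: {name: (preconditions, effects)} with set values.
    Each level fires every not-yet-fired action whose preconditions hold.
    Returns the list of levels, or None if the goal is unreachable.
    """
    state, done, levels = set(initial), set(), []
    while not goal <= state:
        level = [n for n, (pre, eff) in actions.items()
                 if n not in done and pre <= state]
        if not level:
            return None                  # no applicable action: goal unreachable
        for n in level:
            state |= actions[n][1]       # apply effects of the whole level
        done.update(level)
        levels.append(sorted(level))
    return levels

# Hypothetical startup sequence for a small mixing tank:
actions = {
    "open_valve": (set(), {"valve_open"}),
    "start_pump": ({"valve_open"}, {"pump_on"}),
    "heat":       ({"valve_open"}, {"hot"}),
    "mix":        ({"pump_on", "hot"}, {"mixed"}),
}
levels = parallel_plan(actions, set(), {"mixed"})
```

Heating and pumping depend only on the valve, so they share a level and can run in parallel, while mixing is sequenced after both — a plan that is maximally parallel for this action set.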

  15. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

The theory and application of a sequential test procedure for fault detection and isolation are presented. The test procedure is suited for the development of intelligent instrumentation in strategic processes such as aircraft and nuclear plants, where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure, which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers, and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references

  16. Sequential method for the assessment of innovations in computer assisted industrial processes; Metodo secuencial para evaluacion de innovaciones en procesos industriales asistido por computadora

    Energy Technology Data Exchange (ETDEWEB)

    Suarez Antola, R [Universidad Catolica del Uruguay, Montevideo (Uruguay); Artucio, G [Ministerio de Industria Energia y Mineria. Direccion Nacional de Tecnologia Nuclear, Montevideo (Uruguay)

    1995-08-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs.

  17. General-purpose parallel simulator for quantum computing

    International Nuclear Information System (INIS)

    Niwa, Jumpei; Matsumoto, Keiji; Imai, Hiroshi

    2002-01-01

With current technologies, it seems to be very difficult to implement quantum computers with many qubits. It is therefore of importance to simulate quantum algorithms and circuits on the existing computers. However, for a large-size problem, the simulation often requires more computational power than is available from sequential processing. Therefore, simulation methods for parallel processors are required. We have developed a general-purpose simulator for quantum algorithms/circuits on the parallel computer (Sun Enterprise4500). It can simulate algorithms/circuits with up to 30 qubits. In order to test the efficiency of our proposed methods, we have simulated Shor's factorization algorithm and Grover's database search, and we have analyzed the robustness of the corresponding quantum circuits in the presence of both decoherence and operational errors. The corresponding results, statistics, and analyses are presented in this paper
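The core of such a simulator is applying gate matrices to a state vector of 2^n amplitudes, which is what makes memory and compute grow exponentially and motivates parallelization. A minimal sequential sketch (the `apply_1q`/`apply_cnot` helpers and the Bell-state demo are illustrative, not the authors' simulator):

```python
import math

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` (0 = least significant) of an
    n-qubit state vector stored as a list of 2**n amplitudes."""
    out = state[:]
    step = 1 << target
    for i in range(1 << n):
        if not i & step:          # i has target bit 0; j is its partner
            j = i | step
            a, b = state[i], state[j]
            out[i] = gate[0][0] * a + gate[0][1] * b
            out[j] = gate[1][0] * a + gate[1][1] * b
    return out

def apply_cnot(state, control, target, n):
    """Swap amplitude pairs whose control bit is 1."""
    out = state[:]
    for i in range(1 << n):
        if i >> control & 1:
            out[i] = state[i ^ (1 << target)]
    return out

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Bell pair from |00>: Hadamard on qubit 0, then CNOT(0 -> 1).
state = [1.0, 0.0, 0.0, 0.0]
state = apply_1q(state, H, 0, 2)
state = apply_cnot(state, 0, 1, 2)
```

The loop over all 2^n basis indices is embarrassingly parallel per gate, which is precisely the structure a parallel simulator distributes across processors.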

  18. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer's point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  19. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, which describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
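The Lotka-Volterra rate model mentioned above can be simulated directly to see the sequential switching between units. The specific asymmetric inhibition matrix below is a standard May-Leonard-style choice known to produce cyclic dominance, used here as an illustrative assumption rather than the authors' parameters; a small floor keeps rates positive in place of the noise discussed in the paper.

```python
def simulate(rho, sigma, x0, dt=0.01, steps=20000):
    """Euler-integrate dx_i/dt = x_i * (sigma_i - sum_j rho_ij * x_j)
    and record which unit has the largest firing rate at each step."""
    x, n, winners = list(x0), len(x0), []
    for _ in range(steps):
        tot = [sum(rho[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [max(xi + dt * xi * (sigma[i] - tot[i]), 1e-9)
             for i, xi in enumerate(x)]
        winners.append(max(range(n), key=lambda i: x[i]))
    return x, winners

# Asymmetric inhibition producing cyclic dominance 0 -> 1 -> 2 -> 0:
rho = [[1.0, 1.5, 0.5],
       [0.5, 1.0, 1.5],
       [1.5, 0.5, 1.0]]
sigma = [1.0, 1.0, 1.0]
x, winners = simulate(rho, sigma, [0.9, 0.05, 0.02])
```

Each unit transiently dominates and is then suppressed by its successor, so over the run every unit takes a turn as the "winner" — the reproducible sequence the SHS concept formalizes.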

  20. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm in order to improve the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms. The flipping function used in WBF is borrowed to determine the scheduling priority. Simulation results show that the approach can provide a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
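The bit-flipping idea underlying WBF can be illustrated with the minimal unweighted (Gallager-style) variant: repeatedly flip the bit involved in the most unsatisfied parity checks. The paper's WBF additionally weights checks by channel reliability, and its contribution is using the flipping function to schedule BP updates; neither refinement is reproduced in this sketch.

```python
def bit_flip_decode(H, y, max_iters=20):
    """Hard-decision bit-flipping decoding of a binary code.

    H: parity-check matrix as a list of rows of 0/1 ints.
    y: received hard-decision word (list of 0/1 ints).
    """
    m, n = len(H), len(H[0])
    y = y[:]
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] & y[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return y                         # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst offender.
        counts = [sum(H[i][j] for i in range(m) if syndrome[i]) for j in range(n)]
        y[max(range(n), key=lambda j: counts[j])] ^= 1
    return y

# (7,4) Hamming parity-check matrix; correct one flipped bit of the all-zeros codeword.
H = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
decoded = bit_flip_decode(H, [0, 0, 0, 0, 0, 0, 1])
```

A single-bit error leaves its bit participating in every unsatisfied check, so one flip restores the codeword; the same counts are what a WBF-driven scheduler uses to prioritize which node to update first.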

  1. Time-Sequential Working Wavelength-Selective Filter for Flat Autostereoscopic Displays

    Directory of Open Access Journals (Sweden)

    René de la Barré

    2017-02-01

Full Text Available A time-sequential, spatially multiplexed autostereoscopic 3D display design consisting of a fast switchable RGB color filter array and a fast color display is presented. The newly introduced 3D display design is usable as a multi-user display as well as a single-user system. The wavelength-selective filter barrier emits light from a larger aperture than common autostereoscopic barrier displays with similar barrier pitch and ascent. Measurements on a demonstrator with commercial display components, simulations and computational evaluations have been carried out to characterize the proposed wavelength-selective display design in static states and to show the weak spots of display filters in commercial displays. Optical modelling of wavelength-selective barriers has been used, for instance, to calculate the light-ray distribution properties of the arrangement. In the time-sequential implementation, it is important to ensure that quick eye or eyelid movement does not lead to visible color artifacts. Therefore, color filter cells switching faster than conventional LC display cells must distribute directed light from the different primaries at the same time to create a 3D presentation. For that purpose, electrically tunable liquid crystal Fabry-Pérot color filters are presented. They switch the colors red, green and blue on and off in the millisecond regime. Their active areas consist of a sub-micrometer-thick nematic layer sandwiched between dielectric mirrors and indium tin oxide (ITO) electrodes. These cells are intended to switch narrowband red, green or blue light. A barrier filter array for a high-resolution, glasses-free 3D display has to be equipped with several thousand switchable filter elements having different color apertures.

  2. Marker-controlled watershed for lymphoma segmentation in sequential CT images

    International Nuclear Information System (INIS)

    Yan Jiayong; Zhao Binsheng; Wang, Liang; Zelenetz, Andrew; Schwartz, Lawrence H.

    2006-01-01

Segmentation of lymphoma-containing lymph nodes is a difficult task because of multiple variables associated with the tumor's location, intensity distribution, and contrast to its surrounding tissues. In this paper, we present a reliable and practical marker-controlled watershed algorithm for semi-automated segmentation of lymphoma in sequential CT images. Robust determination of internal and external markers is the key to successful use of the marker-controlled watershed transform in the segmentation of lymphoma and is the focus of this work. The external marker in our algorithm is the circle enclosing the lymphoma in a single slice. The internal marker, however, is determined automatically by combining techniques including Canny edge detection, thresholding, morphological operations, and distance map estimation. To obtain tumor volume, the segmented lymphoma in the current slice is propagated to the adjacent slice to help determine the external and internal markers for delineation of the lymphoma in that slice. The algorithm was applied to 29 lymphomas (size range, 9-53 mm in diameter; mean, 23 mm) in nine patients. A blinded radiologist manually delineated all lymphomas on all slices; the manual result served as the "gold standard" for comparison. Several quantitative methods were applied to objectively evaluate the performance of the segmentation algorithm. The algorithm achieved mean overlap, overestimation, and underestimation ratios of 83.2%, 13.5%, and 5.5%, respectively. The mean average boundary distance and Hausdorff boundary distance were 0.7 and 3.7 mm. Preliminary results have shown the potential of this computer algorithm to allow reliable segmentation and quantification of lymphomas on sequential CT images.
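The marker-controlled watershed transform itself can be sketched as priority flooding: each labelled marker region grows outward, always expanding through the lowest-valued unlabelled pixel first, so region fronts meet at ridges of the image. The toy image, marker positions, and `marker_watershed` helper are assumptions; the paper's automatic internal/external marker detection (Canny, thresholding, distance maps) is not reproduced here.

```python
import heapq

def marker_watershed(image, markers):
    """Marker-controlled watershed by priority flooding on a 2D grid.

    image: 2D list of values (e.g., gradient magnitude).
    markers: {(row, col): label} seed pixels with positive integer labels.
    Returns a 2D list of labels covering the whole image.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap, counter = [], 0
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], counter, r, c, lab))
        counter += 1
    while heap:
        _, _, r, c, lab = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = lab         # claim pixel for this basin
                heapq.heappush(heap, (image[nr][nc], counter, nr, nc, lab))
                counter += 1
    return labels

# Two basins separated by a high-valued ridge column; one marker per basin.
image = [[1, 1, 9, 1, 1],
         [1, 1, 9, 1, 1],
         [1, 1, 9, 1, 1]]
labels = marker_watershed(image, {(1, 0): 1, (1, 4): 2})
```

Low-valued pixels on each side are claimed by their own marker before either front climbs the ridge, so the ridge column becomes the boundary between the two labels (assigned to one side by the deterministic tie-break).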

  3. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high-level vision problems. We focus on the problem of tracking a variable number of moving objects.

  4. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we...

  5. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

Full Text Available Abstract Background There are several well-established scores for assessing the prognosis of major trauma patients, which all have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages, based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, an abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P+A+B1+B2 = 0.90). The explanatory power (a tool for assessing the relative impact of each segment on mortality) is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma...

  6. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ...

  7. Development of a rapid method for the sequential extraction and subsequent quantification of fatty acids and sugars from avocado mesocarp tissue.

    Science.gov (United States)

    Meyer, Marjolaine D; Terry, Leon A

    2008-08-27

    Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.

  8. Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data

    Digital Repository Service at National Institute of Oceanography (India)

    Prasad, J.S.; Rajawat, A.S.; Pradhan, Y.; Chauhan, O.S.; Nayak, S.R.

    A method for the retrieval of sea surface velocities has been developed. The method is based on matching suspended sediment dispersion patterns in two sequential time-lapsed images. The pattern matching is performed on an atmospherically corrected and geo-referenced sequential pair of images by Maximum...

  9. Rapid and automated determination of plutonium and neptunium in environmental samples

    International Nuclear Information System (INIS)

    Qiao, J.

    2011-03-01

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in this work consists of 5 subjects stated as follows: 1) Development and optimization of an SI-anion exchange chromatographic method for rapid determination of plutonium in environmental samples in combination of inductively coupled plasma mass spectrometry detection (Paper II); (2) Methodology development and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples (Paper IV); (4) Investigation of the suitability and applicability of 242Pu as a tracer for rapid neptunium determination using anion exchange chromatography in an SI-network coupled with inductively coupled plasma mass spectrometry (Paper V); (5) Exploration of macro-porous anion exchange chromatography for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas. (Author)

  10. Rapid and automated determination of plutonium and neptunium in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, J.

    2011-03-15

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in this work consists of 5 subjects stated as follows: 1) Development and optimization of an SI-anion exchange chromatographic method for rapid determination of plutonium in environmental samples in combination of inductively coupled plasma mass spectrometry detection (Paper II); (2) Methodology development and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples (Paper IV); (4) Investigation of the suitability and applicability of 242Pu as a tracer for rapid neptunium determination using anion exchange chromatography in an SI-network coupled with inductively coupled plasma mass spectrometry (Paper V); (5) Exploration of macro-porous anion exchange chromatography for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas. (Author)

  11. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
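
The recursive heart of OS-ELM can be sketched in a few lines. The following is a minimal illustration rather than the authors' implementation: it assumes sigmoid additive hidden nodes, an invented one-dimensional regression task, a small ridge term for numerical stability, and the standard recursive least-squares update of the output weights as each chunk arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Sigmoid additive hidden layer; weights are random and never retrained."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy 1-D regression task (all data here is synthetic)
X = rng.uniform(-1, 1, size=(600, 1))
y = np.sin(3 * X) + 0.05 * rng.normal(size=(600, 1))

L = 25                               # number of hidden nodes
W = 4 * rng.normal(size=(1, L))      # random input weights
b = rng.normal(size=(1, L))          # random biases
lam = 1e-4                           # small ridge term for numerical stability

# Initialization phase: (regularized) least squares on the first chunk
H0 = hidden(X[:100], W, b)
P = np.linalg.inv(H0.T @ H0 + lam * np.eye(L))
beta = P @ H0.T @ y[:100]

# Sequential phase: recursive least-squares update, chunk by chunk
for s in range(100, 600, 50):
    Hk, yk = hidden(X[s:s+50], W, b), y[s:s+50]
    P -= P @ Hk.T @ np.linalg.inv(np.eye(len(yk)) + Hk @ P @ Hk.T) @ Hk @ P
    beta += P @ Hk.T @ (yk - Hk @ beta)

# The recursion reproduces the batch (ridge) solution on all 600 points
H_all = hidden(X, W, b)
beta_batch = np.linalg.solve(H_all.T @ H_all + lam * np.eye(L), H_all.T @ y)
print("max prediction gap, sequential vs batch:",
      np.abs(H_all @ beta - H_all @ beta_batch).max())
```

The key property the sketch demonstrates is that the chunk-by-chunk recursion arrives at the same output weights as solving the whole batch at once, which is what makes the algorithm usable for one-by-one online learning.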

  12. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  13. Rapid Prototyping Enters Mainstream Manufacturing.

    Science.gov (United States)

    Winek, Gary

    1996-01-01

    Explains rapid prototyping, a process that uses computer-assisted design files to create a three-dimensional object automatically, speeding the industrial design process. Five commercially available systems and two emerging types--the 3-D printing process and repetitive masking and depositing--are described. (SK)

  14. Polarization control of direct (non-sequential) two-photon double ionization of He

    International Nuclear Information System (INIS)

    Pronin, E A; Manakov, N L; Marmo, S I; Starace, Anthony F

    2007-01-01

    An ab initio parametrization of the doubly-differential cross section (DDCS) for two-photon double ionization (TPDI) from an s² subshell of an atom in a ¹S₀ state is presented. Analysis of the elliptic dichroism (ED) effect in the DDCS for TPDI of He and its comparison with the same effect in the concurrent process of sequential double ionization shows their qualitative and quantitative differences, thus providing a means to control and to distinguish sequential and non-sequential processes by measuring the relative ED parameter.

  15. A Bayesian sequential design using alpha spending function to control type I error.

    Science.gov (United States)

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
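
Alpha spending functions can be evaluated directly. The sketch below is a generic illustration (not code from the paper) of the standard O'Brien-Fleming-type and Pocock-type functional forms, with a hypothetical overall α = 0.05 evaluated at four equally spaced information fractions; it makes visible why O'Brien-Fleming is the more conservative choice early in a trial.

```python
from math import e, log, sqrt
from statistics import NormalDist

norm = NormalDist()

def obrien_fleming(t, alpha=0.05):
    """O'Brien-Fleming-type spending: almost no alpha is spent at early looks."""
    return 2 * (1 - norm.cdf(norm.inv_cdf(1 - alpha / 2) / sqrt(t)))

def pocock(t, alpha=0.05):
    """Pocock-type spending: alpha is spent much more evenly across looks."""
    return alpha * log(1 + (e - 1) * t)

# Cumulative alpha spent at information fractions t = 0.25 .. 1.0
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  OBF spent={obrien_fleming(t):.5f}  Pocock spent={pocock(t):.5f}")
```

Both functions spend exactly the full α = 0.05 at t = 1; the difference is entirely in how early the budget is released.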

  16. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
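
The greedy design-then-assimilate loop described above can be sketched compactly. This is a toy illustration only: a two-parameter linear forward model stands in for the unsaturated-flow simulator, and a simple first-order expected-variance-reduction score stands in for the paper's SD/DFS/RE metrics; every location, rate, and constant below is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: observation at location s is y(s) = theta0 + theta1 * s
locations = np.linspace(0.0, 1.0, 11)       # candidate measurement locations
theta_true = np.array([1.0, -2.0])          # parameters we pretend not to know
R = 0.05 ** 2                               # measurement-noise variance

def forward(theta, s):
    return theta[..., 0] + theta[..., 1] * s

# Prior parameter ensemble
N = 500
ens = rng.normal([0.0, 0.0], [2.0, 2.0], size=(N, 2))

chosen = []
for _ in range(3):                          # design and assimilate 3 measurements
    preds = forward(ens[:, None, :], locations[None, :])          # (N, 11)
    c_xy = np.einsum('nk,nl->lk', ens - ens.mean(0),
                     preds - preds.mean(0)) / (N - 1)              # (11, 2)
    c_yy = preds.var(0, ddof=1)                                    # (11,)
    # Greedy design step: pick the location with the largest expected
    # posterior-variance reduction (a crude first-order information metric)
    gain = (c_xy ** 2).sum(1) / (c_yy + R)
    i = int(np.argmax(gain))
    chosen.append(float(locations[i]))
    # Collect the (simulated) measurement there and do the EnKF update
    y_obs = forward(theta_true, locations[i]) + rng.normal(0, np.sqrt(R))
    K = c_xy[i] / (c_yy[i] + R)                                    # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(R), size=N)
    ens = ens + K * (perturbed - preds[:, i])[:, None]

print("chosen locations:", chosen)
print("posterior mean:", ens.mean(0).round(2), "vs true", theta_true)
```

For this linear model the greedy criterion favors the most informative extreme locations, and the ensemble mean converges toward the true parameters after a few well-placed measurements.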

  17. The pursuit of balance in sequential randomized trials

    Directory of Open Access Journals (Sweden)

    Raymond P. Guiteras

    2016-06-01

    Full Text Available In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.

  18. Event-shape analysis: Sequential versus simultaneous multifragment emission

    International Nuclear Information System (INIS)

    Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

    1990-01-01

    The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions.

  19. Sequential approach to Colombeau's theory of generalized functions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1987-07-01

    J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense, the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory, in which a solution was recently found to the problem of multiplication of Schwartz distributions. (author)

  20. Configural and component processing in simultaneous and sequential lineup procedures

    OpenAIRE

    Flowe, HD; Smith, HMJ; Karoğlu, N; Onwuegbusi, TO; Rai, L

    2015-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences...

  1. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    Science.gov (United States)

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  2. Simulation of skill acquisition in sequential learning of a computer game

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Nielsen, Finn Ravnsbjerg; Rasmussen, Jens

    1995-01-01

    The paper presents some theoretical assumptions about the cognitive control mechanisms of subjects learning to play a computer game. A simulation model has been developed to investigate these assumptions. The model is an automaton, reacting to instruction-like cue-action rules. The prototypical performances of 23 experimental subjects at succeeding levels of training are compared to the performance of the model. The findings are interpreted in terms of a general taxonomy for cognitive task analysis.

  3. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing, and analysis of bite-wing radiographs taken sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner called XCP® was modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment cone relative to the XCP® ring, and its reorientation, during the sequential X-ray process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 registries. Two X-rays of the same block of teeth were taken sequentially from each subject, with a minimum interval of 30 minutes between them, before placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements on the X and Y axes were performed for both X-rays and then compared. The intraclass correlation index (ICI) showed that the proposed method is statistically related to the measurements (mm) obtained on the X and Y axes for both sequential series of X-rays (p=0.01). Measures of central tendency and dispersion showed that the usual occurrence is indifferent between the two measurements (mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es

  4. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  5. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  6. Sensing of chlorpheniramine in pharmaceutical applications by sequential injector coupled with potentiometer

    Directory of Open Access Journals (Sweden)

    Tawfik A. Saleh

    2011-11-01

    Full Text Available This paper reports on the development of a system consisting of a portable sequential injector coupled with a potentiometric unit for sensing of chlorpheniramine (CPA), based on the reaction of CPA with potassium permanganate in acidic media. Various experimental conditions affecting the potential intensity were studied and incorporated into the procedure. Under the optimum conditions, a linear relationship between CPA concentration and peak area was obtained for the concentration range of 0.1–50 ppm. The method shows good recovery, with a relative standard deviation (RSD) < 3%. The detection limit was 0.05 ppm. The developed method was successfully applied to the determination of CPA in pure form and in pharmaceutical dosage forms. The results obtained using the method are in accord with the results of the British Pharmacopoeia method. In addition to its accuracy and precision, the method has the advantages of being simple, inexpensive and rapid. Keywords: Sensing, Flow injection, Chlorpheniramine, Potentiometry

  7. Alternatives to the sequential lineup: the importance of controlling the pictures.

    Science.gov (United States)

    Lindsay, R C; Bellinger, K

    1999-06-01

    Because sequential lineups reduce false-positive choices, their use has been recommended (R. C. L. Lindsay, 1999; R. C. L. Lindsay & G. L. Wells, 1985). Blind testing is included in the recommended procedures. Police, concerned about blind testing, devised alternative procedures, including self-administered sequential lineups, to reduce use of relative judgments (G. L. Wells, 1984) while permitting the investigating officer to conduct the procedure. Identification data from undergraduates exposed to a staged crime (N = 165) demonstrated that 4 alternative identification procedures tested were less effective than the original sequential lineup. Allowing witnesses to control the photographs resulted in higher rates of false-positive identification. Self-reports of using relative judgments were shown to be postdictive of decision accuracy.

  8. Managerial adjustment and its limits: sequential fault in comparative perspective

    Directory of Open Access Journals (Sweden)

    Flávio da Cunha Rezende

    2008-01-01

    Full Text Available This article focuses on explanations for sequential faults in administrative reform. It deals with the limits of managerial adjustment in an approach that attempts to connect theory and empirical data, articulating three levels of analysis. The first level presents comparative evidence of sequential fault within reforms in national governments through a set of indicators geared toward understanding changes in the role of the state. In light of analyses of a representative set of comparative studies on reform implementation, the second analytical level proceeds to identify four typical mechanisms that are present in explanations of managerial adjustment faults. In this way, we seek to configure an explanatory matrix for theories on sequential fault. Next we discuss the experience of management reform in the Brazilian context, conferring special attention on one of the mechanisms that creates fault: the control dilemma. The major hypotheses that guide our article are that reforms lead to sequential fault and that there are at least four causal mechanisms that produce it: a) transaction costs involved in producing reforms; b) performance legacy; c) predominance of fiscal adjustment; and d) the control dilemma. These mechanisms act separately or in concert to decrease the chances of a transformation of State managerial patterns. The major evidence analyzed in this article lends consistency to the general argument that reforms have failed in their attempts to reduce public expenses, alter patterns of resource allocation, reduce the labor force and change the role of the State. Our major conclusion is that reforms fail sequentially and that managerial adjustment displays considerable limitations, particularly those of a political nature.

  9. SIMAC - A phosphoproteomic strategy for the rapid separation of mono-phosphorylated from multiply phosphorylated peptides

    DEFF Research Database (Denmark)

    Thingholm, Tine E; Jensen, Ole N; Robinson, Phillip J

    2008-01-01

    Using current enrichment methods prior to mass spectrometric analysis, such as immobilized metal affinity chromatography or titanium dioxide, the coverage of the phosphoproteome of a given sample is limited. Here we report a simple and rapid strategy - SIMAC - for sequential separation of mono-phosphorylated peptides and multiply phosphorylated peptides from … and an optimized titanium dioxide chromatographic method. More than double the total number of identified phosphorylation sites was obtained with SIMAC, primarily from a three-fold increase in recovery of multiply phosphorylated peptides.

  10. Monte Carlo simulation of the sequential probability ratio test for radiation monitoring

    International Nuclear Information System (INIS)

    Coop, K.L.

    1984-01-01

    A computer program simulates the Sequential Probability Ratio Test (SPRT) using Monte Carlo techniques. The program, SEQTEST, performs random-number sampling of either a Poisson or normal distribution to simulate radiation monitoring data. The results are in terms of the detection probabilities and the average time required for a trial. The computed SPRT results can be compared with tabulated single interval test (SIT) values to determine the better statistical test for particular monitoring applications. Use of the SPRT in a hand-and-foot alpha monitor shows that the SPRT provides better detection probabilities while generally requiring less counting time. Calculations are also performed for a monitor where the SPRT is not permitted to take longer than the single interval test. Although the performance of the SPRT is degraded by this restriction, the detection probabilities are still similar to the SIT values, and average counting times are always less than 75% of the SIT time. Some optimal conditions for use of the SPRT are described. The SPRT should be the test of choice in many radiation monitoring situations. 6 references, 8 figures, 1 table
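
The simulation the abstract describes - accumulate a log-likelihood ratio per counting interval and stop the moment it crosses a Wald boundary - can be reproduced in a few lines. This is a generic illustration rather than the SEQTEST program itself; the Poisson count rates, error levels, and unit-time intervals are hypothetical.

```python
import math
import random

def poisson(mean, rng):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sprt_trial(true_rate, lam0, lam1, alpha=0.05, beta=0.05,
               rng=random, max_steps=10_000):
    """One SPRT on simulated unit-time Poisson counts: (decision, intervals used)."""
    upper = math.log((1 - beta) / alpha)   # crossing it -> decide H1 (source present)
    lower = math.log(beta / (1 - alpha))   # crossing it -> decide H0 (background only)
    llr = 0.0
    for n in range(1, max_steps + 1):
        k = poisson(true_rate, rng)
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)   # Poisson log-LR increment
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", max_steps

random.seed(1)
lam0, lam1 = 2.0, 5.0        # hypothetical background vs. alarm count rates
results = [sprt_trial(lam1, lam0, lam1) for _ in range(2000)]   # source is present
detect_prob = sum(d == "H1" for d, _ in results) / len(results)
avg_intervals = sum(n for _, n in results) / len(results)
print(f"detection probability ~ {detect_prob:.3f}, mean intervals ~ {avg_intervals:.1f}")
```

Running the same Monte Carlo with `true_rate=lam0` would estimate the false-alarm rate, which is how detection probability and average counting time would be compared against a fixed single-interval test.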

  11. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
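
A minimal sketch of the sequential idea, using a conjugate Gamma-Poisson model: each arriving count updates the posterior over the unknown source rate, and an alarm is raised as soon as the posterior probability that the rate exceeds background is high enough, rather than after a fixed counting interval. This illustrates the approach only, not the authors' processor (which also models pulse arrival times); the rates, prior, and threshold are invented.

```python
import math
import random

random.seed(7)

def poisson(mean, rng=random):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def prob_rate_above(a, b, lam0, draws=20_000):
    """Monte Carlo estimate of P(rate > lam0) under a Gamma(a, rate=b) posterior."""
    return sum(random.gammavariate(a, 1.0 / b) > lam0 for _ in range(draws)) / draws

lam0, true_rate = 2.0, 5.0     # hypothetical background and source-present count rates
a, b = 1.0, 0.5                # vague Gamma prior on the unknown count rate
alarm_at = None
for n in range(1, 200):
    k = poisson(true_rate)     # one more detector count arrives
    a, b = a + k, b + 1.0      # conjugate Bayesian update of the posterior
    if prob_rate_above(a, b, lam0) >= 0.99:
        alarm_at = n           # a detection is produced as soon as justified
        break

print(f"alarm after {alarm_at} intervals; posterior mean rate = {a / b:.2f}")
```

Because the decision function is evaluated after every datum, a strong source triggers an alarm within a handful of intervals, while a marginal one simply accumulates evidence for longer.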

  12. Sequential lineup presentation promotes less-biased criterion setting but does not improve discriminability.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil

    2012-06-01

    When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
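
The distinction the abstract draws between discriminability and response bias can be made concrete with the equal-variance signal-detection indices d′ and c, computed from hit and false-alarm rates. The rates below are hypothetical, chosen only to mimic the qualitative pattern reported: roughly equal d′, but a more conservative criterion under sequential presentation.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse standard-normal CDF

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: sequential lineups cut false alarms more than hits
simultaneous = sdt_indices(0.70, 0.30)
sequential = sdt_indices(0.55, 0.19)

print(f"simultaneous: d'={simultaneous[0]:.2f}, c={simultaneous[1]:.2f}")
print(f"sequential:   d'={sequential[0]:.2f}, c={sequential[1]:.2f}")
```

With these illustrative rates both procedures yield d′ of about 1.0, while c rises from roughly 0 to about 0.38 for the sequential lineup, i.e. the same sensitivity but a conservative shift in bias.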

  13. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
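
The flavor of the estimation problem can be reproduced with a small simulation: outcomes arrive in groups, an efficacy look is taken at each of the pre-specified sample sizes {n1, n2, n3}, and the ordinary sample average is recorded at whatever size the trial stops. All numbers (true mean, look sizes, stopping boundary) are invented; the point is simply to make visible that the sample average is biased conditionally on when the trial stopped, even though it is the estimator the paper justifies overall.

```python
import random
import statistics

random.seed(42)
mu, sigma = 0.3, 1.0            # hypothetical true effect and outcome SD
looks = [20, 40, 60]            # pre-specified possible sample sizes {n1, n2, n3}
z_boundary = 1.0                # toy efficacy boundary on the running z-statistic

by_stop = {n: [] for n in looks}
for _ in range(20_000):
    data = []
    for n in looks:
        while len(data) < n:
            data.append(random.gauss(mu, sigma))
        if statistics.fmean(data) * n ** 0.5 / sigma > z_boundary:
            break               # stop early for efficacy
    by_stop[len(data)].append(statistics.fmean(data))

overall = statistics.fmean(avg for avgs in by_stop.values() for avg in avgs)
for n in looks:
    print(f"stopped at N={n}: {len(by_stop[n]):5d} trials, "
          f"mean of sample averages = {statistics.fmean(by_stop[n]):.3f}")
print(f"marginal mean of the sample average = {overall:.3f} (true mu = {mu})")
```

Trials that stop at the first look report inflated averages and trials that run to the end report deflated ones; the marginal mean sits near the truth, consistent with the asymptotic-unbiasedness result (the residual gap at these small look sizes is the finite-sample bias the paper analyzes).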

  14. Optimal Sequential Resource Sharing and Exchange in Multi-Agent Systems

    OpenAIRE

    Xiao, Yuanzhang

    2014-01-01

    Central to the design of many engineering systems and social networks is solving the underlying resource sharing and exchange problems, in which multiple decentralized agents make sequential decisions over time to optimize some long-term performance metrics. It is challenging for the decentralized agents to make optimal sequential decisions because of the complicated coupling among the agents and across time. In this dissertation, we mainly focus on three important classes of multi-agent seq...

  15. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

    Science.gov (United States)

    CICIARELLI, V; LEONARD, JOSEPH

    A SEQUENTIAL MATHEMATICS PROGRAM BEGINNING WITH THE BASIC FUNDAMENTALS ON THE FOURTH GRADE LEVEL IS PRESENTED. INCLUDED ARE AN UNDERSTANDING OF OUR NUMBER SYSTEM, AND THE BASIC OPERATIONS OF WORKING WITH WHOLE NUMBERS--ADDITION, SUBTRACTION, MULTIPLICATION, AND DIVISION. COMMON FRACTIONS ARE TAUGHT IN THE FIFTH, SIXTH, AND SEVENTH GRADES. A…

  16. Comparative use of the computer-aided angiography and rapid prototyping technology versus conventional imaging in the management of the Tile C pelvic fractures.

    Science.gov (United States)

    Li, Baofeng; Chen, Bei; Zhang, Ying; Wang, Xinyu; Wang, Fei; Xia, Hong; Yin, Qingshui

    2016-01-01

    Computed tomography (CT) scan with three-dimensional (3D) reconstruction has been used to evaluate complex fractures in pre-operative planning. In this study, rapid prototyping of a life-size model based on 3D reconstructions including bone and vessel was applied to evaluate the feasibility and prospect of these new technologies in surgical therapy of Tile C pelvic fractures by observing intra- and perioperative outcomes. The authors conducted a retrospective study on a group of 157 consecutive patients with Tile C pelvic fractures. Seventy-six patients were treated with conventional pre-operative preparation (group A) and 81 patients were treated with the help of computer-aided angiography and rapid prototyping technology (group B). Assessment of the two groups considered the following perioperative parameters: length of surgical procedure, intra-operative complications, intra- and postoperative blood loss, postoperative pain, postoperative nausea and vomiting (PONV), length of stay, and type of discharge. The two groups were homogeneous when compared in relation to mean age, sex, body weight, injury severity score, associated injuries and pelvic fracture severity score. Surgery in group B was completed in less time (105 ± 19 minutes vs. 122 ± 23 minutes) and with less blood loss (31.0 ± 8.2 g/L vs. 36.2 ± 7.4 g/L) compared with group A. Patients in group B experienced less pain (2.5 ± 2.3 NRS score vs. 2.8 ± 2.0 NRS score), and PONV affected only 8 % versus 10 % of cases. Times to discharge were shorter (7.8 ± 2.0 days vs. 10.2 ± 3.1 days) in group B, and most patients were discharged to home. In our study, patients with Tile C pelvic fractures treated with computer-aided angiography and rapid prototyping technology had a better perioperative outcome than patients treated with conventional pre-operative preparation. Further studies are necessary to investigate the advantages in terms of clinical results in the short and long run.

  17. Introduction of sequential inactivated polio vaccine-oral polio vaccine schedule for routine infant immunization in Brazil's National Immunization Program.

    Science.gov (United States)

    Domingues, Carla Magda Allan S; de Fátima Pereira, Sirlene; Cunha Marreiros, Ana Carolina; Menezes, Nair; Flannery, Brendan

    2014-11-01

    In August 2012, the Brazilian Ministry of Health introduced inactivated polio vaccine (IPV) as part of a sequential polio vaccination schedule for all infants beginning their primary vaccination series. The revised childhood immunization schedule included 2 doses of IPV at 2 and 4 months of age followed by 2 doses of oral polio vaccine (OPV) at 6 and 15 months of age. One annual national polio immunization day was maintained to provide OPV to all children aged 6 to 59 months. The decision to introduce IPV was based on preventing rare cases of vaccine-associated paralytic polio, financially sustaining IPV introduction, ensuring equitable access to IPV, and preparing for future OPV cessation following global eradication. Introducing IPV during a national multivaccination campaign led to rapid uptake, despite challenges with local vaccine supply due to high wastage rates. Continuous monitoring is required to achieve high coverage with the sequential polio vaccine schedule. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  18. Computer-related standards for the petroleum industry

    International Nuclear Information System (INIS)

    Winczewski, L.M.

    1992-01-01

    Rapid application of the computer to all areas of the petroleum industry is straining the capabilities of corporations and vendors to efficiently integrate computer tools into the work environment. Barriers to this integration arose from decades of competitive development of proprietary application formats, along with compilation of databases in isolation. Rapidly emerging industry-wide standards relating to computer applications and data management are poised to topple these barriers. This paper identifies the most active players within a rapidly evolving group of cooperative standardization activities sponsored by the petroleum industry. Summarized are their objectives, achievements, current activities and relationships to each other. The trends of these activities are assessed and projected.

  19. Sequential change in MRI in two cases with small brainstem infarctions

    International Nuclear Information System (INIS)

    Masuda, Ryoichi; Fukuda, Osamu; Endoh, Shunro; Takaku, Akira; Suzuki, Takashi; Satoh, Shuji

    1987-01-01

    Magnetic resonance imaging (MRI) has been found to be very useful for the diagnosis of a small brainstem infarction. However, most reported cases have shown the changes only at the chronic stage. In this report, sequential changes in the MRI in two cases with small brainstem infarctions are presented. In Case 1, a 67-year-old man with a pure sensory stroke on the right side, a small infarcted area was observed at the left medial side of the pontomedullary junction on MRI. In Case 2, a 62-year-old man with a pure motor hemiparesis of the left side, MRI revealed a small infarcted area on the ventral right side of the middle pons. The initial changes were confirmed 5 days (Case 1) and 18 hours (Case 2) after the onset of the completed stroke. No abnormal findings could be found on computed tomography in either case. (author)

  20. Site competition on metal surfaces: an electron spectroscopic study of sequential adsorption on W(110)

    International Nuclear Information System (INIS)

    Steinkilberg, M.; Menzel, D.

    1977-01-01

    Using UPS and XPS, the sequential adsorption of hydrogen + carbon monoxide, and of hydrogen + oxygen, on W(110) has been studied at room temperature. Adsorption of CO on a H-covered surface is rapid and leads to total displacement of hydrogen. The resulting CO layer, however, is different from that formed on the clean surface under identical conditions, in that it consists of a higher percentage of virgin CO, while considerably more β-CO forms on the clean surface. Oxygen does not adsorb on a H-covered surface, nor does it displace hydrogen. It is concluded that hydrogen most probably occupies the same sites utilized by dissociative adsorption of CO and oxygen, while virgin CO can also occupy different sites; its adsorption can thus lead to interactional weakening of the H-surface bond. (Auth.)

  1. Forensic Computing (Dagstuhl Seminar 13482)

    OpenAIRE

    Freiling, Felix C.; Hornung, Gerrit; Polcák, Radim

    2014-01-01

    Forensic computing (sometimes also called digital forensics, computer forensics or IT forensics) is a branch of forensic science pertaining to digital evidence, i.e., any legal evidence that is processed by digital computer systems or stored on digital storage media. Forensic computing is a new discipline evolving within the intersection of several established research areas such as computer science, computer engineering and law. Forensic computing is rapidly gaining importance since the...

  2. Sequential sputtered Co-HfO{sub 2} granular films

    Energy Technology Data Exchange (ETDEWEB)

    Chadha, M.; Ng, V.

    2017-03-15

    A systematic study of magnetic, magneto-transport and micro-structural properties of Co-HfO{sub 2} granular films fabricated by sequential sputtering is presented. We demonstrate reduction in ferromagnetic-oxide formation by using HfO{sub 2} as the insulating matrix. Microstructure evaluation of the films showed that the film structure consisted of discrete hcp-Co grains embedded in a HfO{sub 2} matrix. Films with varying compositions were prepared and their macroscopic properties were studied. We correlate the variation in these properties to the variation in film microstructure. Our study shows that Co-HfO{sub 2} films with reduced cobalt oxide and varying properties can be prepared using the sequential sputtering technique. - Highlights: • Co-HfO{sub 2} granular films were prepared using sequential sputtering. • A reduction in ferromagnetic-oxide formation is observed. • Co-HfO{sub 2} films display superparamagnetism and tunnelling magneto-resistance. • Varying macroscopic properties were achieved by changing film composition. • Applications can be found in moderate MR sensors and high-frequency RF devices.

  3. Involvement of Working Memory in College Students' Sequential Pattern Learning and Performance

    Science.gov (United States)

    Kundey, Shannon M. A.; De Los Reyes, Andres; Rowan, James D.; Lee, Bern; Delise, Justin; Molina, Sabrina; Cogdill, Lindsay

    2013-01-01

    When learning highly organized sequential patterns of information, humans and nonhuman animals learn rules regarding the hierarchical structures of these sequences. In three experiments, we explored the role of working memory in college students' sequential pattern learning and performance in a computerized task involving a sequential…

  4. Sequential Versus Simultaneous Market Delineation: The Relevant Antitrust Market for Salmon

    DEFF Research Database (Denmark)

    Haldrup, Niels; Peter, Møllgaard

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension will no... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed.

  5. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
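The Ensemble Kalman Filter mentioned above reduces to a few lines for a toy augmented-state problem. The sketch below jointly updates a state (a head value observed directly) and a correlated parameter in a single analysis step; the variable names, prior values, and linear observation operator are illustrative assumptions, not the Fehraltorf model.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ens, obs, obs_op, obs_var):
    """One EnKF analysis step on an augmented [state; parameter] ensemble.

    ens     : (d, N) ensemble matrix, one member per column
    obs     : observed value (scalar here, for clarity)
    obs_op  : (d,) linear observation operator H
    obs_var : observation-error variance R
    """
    d, N = ens.shape
    anom = ens - ens.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = anom @ anom.T / (N - 1)                           # sample covariance
    H = obs_op.reshape(1, -1)
    K = P @ H.T / (H @ P @ H.T + obs_var)                 # Kalman gain, shape (d, 1)
    perturbed = obs + rng.normal(0.0, obs_var ** 0.5, N)  # perturbed observations
    return ens + K @ (perturbed - H @ ens)

# Toy setting: a head h with true value 2.0 is observed directly; the parameter k
# is correlated with h in the prior, so the observation also calibrates k.
N = 500
h0 = rng.normal(0.0, 1.0, N)
k0 = 0.8 * h0 + rng.normal(0.0, 0.6, N)
ens = np.vstack([h0, k0])
ens = enkf_update(ens, obs=2.0, obs_op=np.array([1.0, 0.0]), obs_var=0.1)
print(ens.mean(axis=1))   # posterior means of the head and the jointly updated parameter
```

The prior correlation between state and parameter is what makes joint data assimilation and parameter calibration possible in this augmented-state formulation.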

  6. Rapid Spontaneously Resolving Acute Subdural Hematoma

    Science.gov (United States)

    Gan, Qi; Zhao, Hexiang; Zhang, Hanmei; You, Chao

    2017-01-01

    Introduction: This study reports a rare case of a rapid spontaneously resolving acute subdural hematoma. In addition, an analysis of potential clues for the phenomenon is presented with a review of the literature. Patient Presentation: A 1-year-and-2-month-old boy fell from a height of approximately 2 m. The patient was in a superficial coma with a Glasgow Coma Scale score of 8 when he was transferred to the authors’ hospital. Computed tomography revealed the presence of an acute subdural hematoma with a midline shift beyond 1 cm. His guardians refused invasive interventions and chose conservative treatment. Repeat imaging after 15 hours showed the evident resolution of the hematoma and midline reversion. Progressive magnetic resonance imaging demonstrated the complete resolution of the hematoma, without redistribution to a remote site. Conclusions: Even though this phenomenon has a low incidence, the probability of a rapid spontaneously resolving acute subdural hematoma should be considered when patients present with the following characteristics: children or elderly individuals suffering from mild to moderate head trauma; stable or rapidly recovered consciousness; and simple acute subdural hematoma with a moderate thickness and a particularly low-density band in computed tomography scans. PMID:28468224

  7. Endogenous sequential cortical activity evoked by visual stimuli.

    Science.gov (United States)

    Carrillo-Reid, Luis; Miller, Jae-Eun Kang; Hamm, Jordan P; Jackson, Jesse; Yuste, Rafael

    2015-06-10

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. Copyright © 2015 Carrillo-Reid et al.

  8. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computingIts potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing.Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  9. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as the sequential context effect, as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between perceptual behavior and diagnostic decisions on preceding cases and diagnostic decisions on the current case. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of the gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
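The Minkowski-Bouligand box-counting estimate used here to quantify scanpaths can be sketched as follows; the grid sizes and synthetic point sets are illustrative stand-ins, not the study's eye-tracking data.

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
    """Minkowski-Bouligand (box-counting) dimension estimate for a 2-D point
    set, e.g. gaze fixations normalized to the unit square."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(0)) / (pts.max(0) - pts.min(0) + 1e-12)  # map to [0, 1]^2
    counts = []
    for n in sizes:                                   # overlay an n x n grid of boxes
        idx = np.minimum((pts * n).astype(int), n - 1)
        counts.append(len({tuple(i) for i in idx}))   # boxes containing >= 1 point
    # The dimension is the slope of log(box count) against log(grid resolution).
    return float(np.polyfit(np.log(sizes), np.log(counts), 1)[0])

rng = np.random.default_rng(0)
line = np.column_stack([np.linspace(0, 1, 2000)] * 2)   # points along a line
cloud = rng.random((2000, 2))                           # space-filling point cloud
print(round(box_counting_dimension(line), 2))           # close to 1
print(round(box_counting_dimension(cloud), 2))          # closer to 2
```

A more convoluted scanpath fills the image plane more densely and so yields a higher estimated dimension, which is what makes the measure useful as a compact summary of visual search behavior.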

  10. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  11. [Sequential degradation of p-cresol by photochemical and biological methods].

    Science.gov (United States)

    Karetnikova, E A; Chaĭkovskaia, O N; Sokolova, I V; Nikitina, L I

    2008-01-01

    Sequential photo- and biodegradation of p-cresol was studied using a mercury lamp, as well as KrCl and XeCl excilamps. Preirradiation of p-cresol at a concentration of 10(-4) M did not affect the rate of its subsequent biodegradation. An increase in the concentration of p-cresol to 10(-3) M and in the duration of preliminary UV irradiation inhibited subsequent biodegradation. Biodegradation of p-cresol was accompanied by the formation of a product with a fluorescence maximum at 365 nm (λex 280 nm), and photodegradation yielded a compound fluorescing at 400 nm (λex 330 nm). Sequential UV and biodegradation led to the appearance of bands in the fluorescence spectra that were ascribed to p-cresol and its photolysis products. It was shown that sequential use of biological and photochemical degradation results in degradation of not only the initial toxicant but also the metabolites formed during its biodegradation.

  12. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed on lattices of phonemes obtained after an acoustic-phonetic decoding stage relying on a K-nearest-neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take the sequentiality of the input into account, to memorize the context and to treat noisy or erroneous inputs. (author) [fr
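The core of an Elman network is a hidden layer whose previous state is fed back as "context units", which is what lets the model memorize the context of a sequence. A minimal forward-pass sketch (random, untrained weights; toy one-hot inputs standing in for phonemes, not the dissertation's actual decoder) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanCell:
    """Forward pass of an Elman (simple recurrent) layer: the hidden state at
    step t-1 is fed back alongside the input at step t, so the final state
    depends on the whole sequence, including the order of its elements."""

    def __init__(self, n_in, n_hidden):
        self.W_x = rng.normal(0.0, 0.5, (n_hidden, n_in))    # input weights
        self.W_h = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # context weights
        self.b = np.zeros(n_hidden)

    def run(self, sequence):
        h = np.zeros(len(self.b))                 # context starts empty
        for x in sequence:                        # one step per input symbol
            h = np.tanh(self.W_x @ x + self.W_h @ h + self.b)
        return h                                  # final state summarizes the sequence

cell = ElmanCell(n_in=4, n_hidden=8)
one_hot = np.eye(4)                               # toy "phoneme" alphabet of size 4
h_ab = cell.run([one_hot[0], one_hot[1]])
h_ba = cell.run([one_hot[1], one_hot[0]])
print(bool(np.any(h_ab != h_ba)))                 # order matters: True
```

Reversing the input order changes the final hidden state, illustrating why such a network can exploit sequentiality at the input level rather than treating the inputs as a bag of symbols.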

  13. Effect of sequential isoproturon pulse exposure on Scenedesmus vacuolatus.

    Science.gov (United States)

    Vallotton, Nathalie; Eggen, Rik Ilda Lambertus; Chèvre, Nathalie

    2009-04-01

    Aquatic organisms are typically exposed to fluctuating concentrations of herbicides in streams. To assess the effects on algae of repeated peak exposure to the herbicide isoproturon, we subjected the alga Scenedesmus vacuolatus to two sequential pulse exposure scenarios. Effects on growth and on the inhibition of the effective quantum yield of photosystem II (PSII) were measured. In the first scenario, algae were exposed to short, 5-h pulses at high isoproturon concentrations (400 and 1000 microg/l), each followed by a recovery period of 18 h, while the second scenario consisted of 22.5-h pulses at lower concentrations (60 and 120 microg/l), alternating with short recovery periods (1.5 h). In addition, any changes in the sensitivity of the algae to isoproturon following sequential pulses were examined by determining the growth rate-EC(50) prior to and following exposure. In both exposure scenarios, we found that algal growth and its effective quantum yield were systematically inhibited during the exposures and that these effects were reversible. Sequential pulses to isoproturon could be considered a sequence of independent events. Nevertheless, a consequence of inhibited growth during the repeated exposures is the cumulative decrease in biomass production. Furthermore, in the second scenario, when the sequence of long pulses began to approach a scenario of continuous exposure, a slight increase in the tolerance of the algae to isoproturon was observed. These findings indicated that sequential pulses do affect algae during each pulse exposure, even if algae recover between the exposures. These observations could support an improved risk assessment of fluctuating exposures to reversibly acting herbicides.

  14. Sequential-Simultaneous Analysis of Japanese Children's Performance on the Japanese McCarthy.

    Science.gov (United States)

    Ishikuma, Toshinori; And Others

    This study explored the hypothesis that Japanese children perform significantly better on simultaneous processing than on sequential processing. The Kaufman Assessment Battery for Children (K-ABC) served as the criterion of the two types of mental processing. Regression equations to predict Sequential and Simultaneous processing from McCarthy…

  15. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    Science.gov (United States)

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  16. RapidRMSD: Rapid determination of RMSDs corresponding to motions of flexible molecules.

    Science.gov (United States)

    Neveu, Emilie; Popov, Petr; Hoffmann, Alexandre; Migliosi, Angelo; Besseron, Xavier; Danoy, Grégoire; Bouvry, Pascal; Grudinin, Sergei

    2018-03-15

    The root mean square deviation (RMSD) is one of the most used similarity criteria in structural biology and bioinformatics. Standard computation of the RMSD has a linear complexity with respect to the number of atoms in a molecule, making RMSD calculations time-consuming for large-scale modeling applications, such as assessment of molecular docking predictions or clustering of spatially proximate molecular conformations. Previously we introduced the RigidRMSD algorithm to compute the RMSD corresponding to the rigid-body motion of a molecule. In this study we go beyond the limits of the rigid-body approximation by taking into account conformational flexibility of the molecule. We model the flexibility with a reduced set of collective motions computed with, e.g., normal modes or principal component analysis. The initialization of our algorithm is linear in the number of atoms and all the subsequent evaluations of RMSD values between flexible molecular conformations depend only on the number of collective motions that are selected to model the flexibility. Therefore, our algorithm is much faster compared to the standard RMSD computation for large-scale modeling applications. We demonstrate the efficiency of our method on several clustering examples, including clustering of flexible docking results and molecular dynamics (MD) trajectories. We also demonstrate how to use the presented formalism to generate pseudo-random constant-RMSD structural molecular ensembles and how to use these in cross-docking. We provide the algorithm written in C++ as the open-source RapidRMSD library under a BSD-compatible license, which is available at http://team.inria.fr/nano-d/software/RapidRMSD/. The constant-RMSD structural ensemble application and clustering of MD trajectories is available at http://team.inria.fr/nano-d/software/nolb-normal-modes/. sergei.grudinin@inria.fr. Supplementary data are available at Bioinformatics.
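The speed-up from O(n) per evaluation to a cost depending only on the number of collective motions follows because, for a displacement that is a linear combination of fixed modes, the RMSD is a quadratic form in the mode amplitudes. The sketch below illustrates this idea with random synthetic coordinates and modes; it is not the RapidRMSD API itself.

```python
import numpy as np

def rmsd(a, b):
    """Standard RMSD between two conformations: O(n) in the number of atoms n."""
    d = np.asarray(a) - np.asarray(b)
    return float(np.sqrt((d ** 2).sum() / len(d)))

rng = np.random.default_rng(0)
n_atoms, K = 1000, 3                          # K collective motions (e.g. normal modes)
x0 = rng.random((n_atoms, 3))                 # reference conformation
modes = rng.normal(size=(K, n_atoms, 3))      # mode shapes (illustrative, not orthonormalized)

# Precompute the K x K Gram matrix of the modes once: O(n_atoms), done a single time.
G = np.einsum('kia,lia->kl', modes, modes) / n_atoms

# For a deformed conformation x0 + sum_k c_k * modes[k], RMSD(x0, x) = sqrt(c^T G c),
# so each evaluation costs only O(K^2), independent of the atom count.
c = np.array([0.1, -0.2, 0.05])               # mode amplitudes
fast = float(np.sqrt(c @ G @ c))
direct = rmsd(x0, x0 + np.einsum('k,kia->ia', c, modes))
print(abs(fast - direct) < 1e-9)              # the two computations agree: True
```

The identity holds exactly because the per-atom displacement cancels the reference coordinates; only the cross products of the mode vectors survive, and those are captured once in the Gram matrix.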

  17. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  18. Mining of high utility-probability sequential patterns from uncertain databases.

    Directory of Open Access Journals (Sweden)

    Binbin Zhang

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds.
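The projection mechanism can be illustrated with a generic prefix-projection step in the style of PrefixSpan-like sequential miners; this toy sketch ignores utilities and probabilities and is not the paper's actual algorithm.

```python
def project(database, prefix_item):
    """Projected database for one item: for every sequence containing the item,
    keep only the suffix after its first occurrence. Each projected database is
    smaller than the original, shrinking the search space when the current
    pattern is extended."""
    projected = []
    for seq in database:
        if prefix_item in seq:
            suffix = seq[seq.index(prefix_item) + 1:]
            if suffix:                       # empty suffixes cannot be extended
                projected.append(suffix)
    return projected

db = [['a', 'b', 'c'], ['b', 'a', 'c'], ['a', 'c'], ['b', 'c']]
print(project(db, 'a'))   # suffixes after the first 'a' in each sequence
```

In a full utility-probability miner, each suffix would also carry the remaining utility and the sequence's existence probability so that unpromising extensions can be pruned early.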

  19. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar

    2012-01-01

    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...

  20. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
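The RSA constraint (ii) on its own is easy to simulate. The sketch below deposits fixed-length segments on a line, one attempt per simulation time step, rejecting any attempt that overlaps a previously adsorbed segment; the segment length and attempt count are arbitrary illustrative choices (for segments small relative to the line, the covered fraction approaches Rényi's car-parking jamming limit of about 0.747).

```python
import random

def rsa_1d(segment=0.05, attempts=20000, seed=0):
    """Random sequential adsorption of fixed-length segments on [0, 1):
    one deposition attempt per time step; an attempt succeeds only if the
    new segment overlaps none of those already adsorbed."""
    rng = random.Random(seed)
    placed = []                                    # left endpoints of adsorbed segments
    for _ in range(attempts):
        x = rng.uniform(0.0, 1.0 - segment)
        if all(abs(x - p) >= segment for p in placed):
            placed.append(x)
    return len(placed) * segment                   # covered fraction of the line

print(round(rsa_1d(), 3))                          # near the jamming coverage
```

In the coupled model described above, the acceptance test would additionally be gated by the reaction probability derived from the macroscopic surface reaction rate, which is what ties each RSA time step to physical time.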

  1. Methodology for benzodiazepine receptor binding assays at physiological temperature. Rapid change in equilibrium with falling temperature

    International Nuclear Information System (INIS)

    Dawson, R.M.

    1986-01-01

    Benzodiazepine receptors of rat cerebellum were assayed with [³H]-labeled flunitrazepam at 37 °C, and assays were terminated by filtration in a cold room according to one of three protocols: keeping each sample at 37 °C until ready for filtration, taking the batch of samples (30) into the cold room and filtering sequentially in the order 1-30, or taking the batch of 30 samples into the cold room and filtering sequentially in the order 30-1. The results for each protocol were substantially different from each other, indicating that rapid disruption of equilibrium occurred as the samples cooled in the cold room while waiting to be filtered. Positive or negative cooperativity of binding was apparent, and misleading effects of gamma-aminobutyric acid on the affinity of diazepam were observed, unless each sample was kept at 37 °C until just prior to filtration.

  2. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bézier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithographic theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  3. Heterotic quantum and classical computing on convergence spaces

    Science.gov (United States)

    Patten, D. R.; Jakel, D. W.; Irwin, R. J.; Blair, H. A.

    2015-05-01

    Category-theoretic characterizations of heterotic models of computation, introduced by Stepney et al., combine computational models such as classical/quantum, digital/analog, synchronous/asynchronous, etc. to obtain increased computational power. A highly informative classical/quantum heterotic model of computation is represented by Abramsky's simple sequential imperative quantum programming language which extends the classical simple imperative programming language to encompass quantum computation. The mathematical (denotational) semantics of this classical language serves as a basic foundation upon which formal verification methods can be developed. We present a more comprehensive heterotic classical/quantum model of computation based on heterotic dynamical systems on convergence spaces. Convergence spaces subsume topological spaces but admit finer structure from which, in prior work, we obtained differential calculi in the Cartesian closed category of convergence spaces allowing us to define heterotic dynamical systems, given by coupled systems of first order differential equations whose variables are functions from the reals to convergence spaces.

  4. Sequential voluntary cough and aspiration or aspiration risk in Parkinson's disease.

    Science.gov (United States)

    Hegland, Karen Wheeler; Okun, Michael S; Troche, Michelle S

    2014-08-01

    Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson's disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, or sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using a pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration-aspiration score observed was used to determine whether the patient had dysphagia. There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential cough produced by participants with and without dysphagia. The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions.

  5. Droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling for simpler and faster PCR assay using wire-guided manipulations

    Directory of Open Access Journals (Sweden)

    You David J

    2012-09-01

    Full Text Available Abstract A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using the “wire-guided” method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting the sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 rounds per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to cleverly attract the extracted DNA from the droplet residing on a superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and vibration was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets

  6. Computer determination of event maps with application to auxiliary supply systems

    International Nuclear Information System (INIS)

    Wredenberg, L.; Billinton, R.

    1975-01-01

    A method of evaluating the reliability of sequential operations in systems containing standby and alternate supply facilities is presented. The method is based upon the use of a digital computer for automatic development of event maps. The technique is illustrated by application to a nuclear power plant auxiliary supply system. (author)

  7. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
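
A stripped-down version of the dictionary update stage can be sketched as a power-method rank-one approximation in which the temporal atom is confined to a low-order smooth basis. The cosine basis, sizes, and iteration count below are our assumptions for illustration; the paper's actual update also handles sparse codes and the full sequential sweep over atoms.

```python
import numpy as np

def smooth_rank1(E, n_basis=5, n_iter=50, seed=0):
    """Rank-one approximation of a residual matrix E (time x voxels) where
    the temporal atom is constrained to a low-order cosine basis -- a
    simplified stand-in for a regularized power-method dictionary update."""
    T, _ = E.shape
    t = np.arange(T)
    # Low-frequency cosine basis enforcing temporal smoothness (assumed form).
    B = np.stack([np.cos(np.pi * k * (t + 0.5) / T) for k in range(n_basis)], axis=1)
    B, _ = np.linalg.qr(B)                 # orthonormalize the basis columns
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T)
    for _ in range(n_iter):
        v = E.T @ u                        # code direction (no sparsity here)
        v /= np.linalg.norm(v)
        u = B @ (B.T @ (E @ v))            # power step projected onto smooth basis
        u /= np.linalg.norm(u)
    return u, v
```

On a synthetic residual built from one smooth temporal atom plus noise, the recovered atom stays within the smooth subspace while aligning with the planted signal.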

  8. Computer-assisted imaging of the fetus with magnetic resonance imaging.

    Science.gov (United States)

    Colletti, P M

    1996-01-01

    The purpose of this paper is to review the use of magnetic resonance imaging (MRI) of the fetus and to propose future techniques and applications. Under an institutional review board-approved protocol, MR images of the fetus were acquired in 66 patients with sonographically suspected fetal abnormalities. Axial, coronal, and sagittal short TR, short TE images were obtained. In addition, 12 studies were performed with rapid scans requiring 700-1200 ms using either GRASS or Spoiled GRASS techniques. Sequential studies demonstrating fetal motion were also performed. Three studies with 3D IR-prepped GRASS were performed; these allowed for orthogonal and non-orthogonal reformatted views and 3D display. Normal fetal structures were shown with MRI, including brain, heart, liver, stomach, intestines, and bladder. Gross fetal anomalies could generally be demonstrated with MRI. MRI may give additional information to that of sonography in fetal anomalies, particularly those involving the central nervous system, and in the detection of fat, blood, and meconium. MRI of the fetus can demonstrate normal and abnormal structures. Newer techniques with faster imaging will allow for greater possibility of computer-assisted manipulation of data.

  9. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.

    Science.gov (United States)

    Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Current research into self-control that is based on the sequential task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task with generally large effects and in the fifth study the strength of negative transfer effects to a working memory task were related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  10. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm

    Directory of Open Access Journals (Sweden)

    Madeleine M. Arber

    2017-09-01

    Full Text Available Current research into self-control that is based on the sequential task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task with generally large effects and in the fifth study the strength of negative transfer effects to a working memory task were related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  11. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
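
For axis-aligned (parallel) cubes, the intersection test reduces to an interval-overlap check per axis, which makes a minimal RSA packing simulation short; randomly oriented cubes, as also studied in the paper, would need a separating-axis test instead. Box size, cube side, and attempt count below are illustrative, well short of saturation.

```python
import random

def cubes_overlap(a, b, side):
    """Axis-aligned cubes overlap iff their extents overlap on every axis."""
    return all(abs(a[k] - b[k]) < side for k in range(3))

def rsa_cubes(box, side, attempts, seed=0):
    """Random sequential adsorption of aligned unit cubes in a cubic box:
    uniformly random positions, one placement attempt per step, rejection
    on overlap (an illustrative sketch, not the paper's setup)."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        c = tuple(rng.uniform(0.0, box - side) for _ in range(3))
        if not any(cubes_overlap(c, p, side) for p in placed):
            placed.append(c)
    return placed

packing = rsa_cubes(box=10.0, side=1.0, attempts=10000)
fraction = len(packing) * 1.0 ** 3 / 10.0 ** 3   # packing fraction, pre-saturation
```

Reaching the mean saturated packing fraction reported in such studies requires far longer runs (or the event-driven tricks the literature uses to detect saturation exactly).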

  12. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    Full Text Available We review the motivation for, the current state-of-the-art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

  13. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

    The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through multiple well-designed and systematic sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…

  14. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for sequentially approximating an unknown function on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular point distributions. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
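
The one-point-per-step, matrix-free flavor of the SA method can be illustrated with a Kaczmarz-style update: each data point triggers a rank-one correction of the coefficient vector, and no system matrix is ever assembled or inverted. The polynomial basis and sweep count are our choices for this sketch, not the authors' exact scheme.

```python
import numpy as np

def basis(x, degree):
    return np.array([x ** k for k in range(degree + 1)])

def sequential_fit(points, values, degree=3, sweeps=200):
    """Sequential approximation sketch: each step sees ONE data point and
    projects the current coefficient vector onto that point's constraint
    (a rank-one update), avoiding all matrix operations."""
    c = np.zeros(degree + 1)
    for _ in range(sweeps):
        for x, y in zip(points, values):
            phi = basis(x, degree)
            c += (y - phi @ c) * phi / (phi @ phi)
    return c

xs = np.linspace(-1.0, 1.0, 40)
ys = 1.0 + 2.0 * xs - 0.5 * xs ** 3   # target exactly representable in the basis
coeffs = sequential_fit(xs, ys)
```

For a consistent, exactly representable target such as this one, the iteration converges to the true coefficients; the NNR contribution of the paper concerns *which* point to use at each step when the point distribution is highly irregular.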

  15. Configural and component processing in simultaneous and sequential lineup procedures.

    Science.gov (United States)

    Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep

    2016-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.

  16. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    Science.gov (United States)

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between

  17. An algorithm of discovering signatures from DNA databases on a computer cluster.

    Science.gov (United States)

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database; they can be used as the basis for identifying different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, restricting the amount of data they can process and making them unable to handle very large databases. Moreover, those algorithms use sequential models and have slower discovery speeds, meaning that their efficiency can be improved. In this research, we debut the use of a divide-and-conquer strategy in signature discovery and propose a parallel signature discovery algorithm for a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the existing algorithms' inability to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases such as the human whole-genome EST database, which the existing algorithms could not process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
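
The signature notion itself — a subsequence occurring in exactly one database sequence — is simple to state in code; the paper's contribution is partitioning the work so that neither memory nor a single CPU becomes the bottleneck. The k-mer formulation below is a simplified stand-in for the actual discovery conditions.

```python
from collections import Counter

def find_signatures(db, k):
    """Return k-mers occurring in exactly one sequence of `db` -- the basic
    signature notion. (Illustrative only: the paper's algorithm additionally
    partitions the database divide-and-conquer style and distributes the
    parts across a cluster, so the whole database never sits in memory.)"""
    counts = Counter()
    for seq in db:
        for kmer in {seq[i:i + k] for i in range(len(seq) - k + 1)}:
            counts[kmer] += 1          # counted once per sequence
    return {kmer for kmer, n in counts.items() if n == 1}

db = ["ACGTACGT", "TTACGTAA", "CCCCCCCC"]
signatures = find_signatures(db, k=5)
```

Here "TACGT" and "ACGTA" appear in two sequences and are therefore rejected; the divide-and-conquer version runs this counting per partition and merges the counts.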

  18. Defining Biological Networks for Noise Buffering and Signaling Sensitivity Using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Shuqiang Wang

    2014-01-01

    Full Text Available Reliable information processing in cells requires high sensitivity to changes in the input signal but low sensitivity to random fluctuations in the transmitted signal. There are often many alternative biological circuits qualifying for this biological function. Distinguishing these biological models and finding the most suitable one are essential, as such model ranking, by experimental evidence, will help to judge the support of the working hypotheses forming each model. Here, we employ the approximate Bayesian computation (ABC) method based on sequential Monte Carlo (SMC) to search for biological circuits that can maintain signaling sensitivity while minimizing noise propagation, focusing on cases where the noise is characterized by rapid fluctuations. By systematically analyzing three-component circuits, we rank these biological circuits and identify three basic biological motifs that buffer noise while maintaining sensitivity to long-term changes in input signals. We discuss in detail a particular implementation in the control of nutrient homeostasis in yeast. A principal component analysis of the posterior provides insight into the nature of the reaction between nodes.
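
The simplest member of the ABC family conveys the idea: propose parameters from the prior, simulate, and accept when the simulated summary statistic falls within a tolerance of the observed one. ABC-SMC, as used in the paper, refines this by evolving a weighted particle population through a decreasing tolerance schedule. The toy model and all names below are invented for illustration.

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, eps, n_samples, seed=0):
    """Minimal ABC rejection sampler: keep parameters whose simulated
    summary statistic lies within eps of the observed statistic."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy model: the summary statistic is a noisy mean centered on theta.
posterior = abc_rejection(
    observed_stat=2.0,
    simulate=lambda th, rng: th + rng.gauss(0.0, 1.0) / 10,  # noise of a ~100-draw mean
    prior_sample=lambda rng: rng.uniform(-5.0, 5.0),
    eps=0.2,
    n_samples=200,
)
mean_est = sum(posterior) / len(posterior)
```

The accepted sample approximates the posterior; ranking competing circuit models then compares how readily each model generates data matching the observations.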

  19. Verification of computer system PROLOG - software tool for rapid assessments of consequences of short-term radioactive releases to the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kiselev, Alexey A.; Krylov, Alexey L.; Bogatov, Sergey A. [Nuclear Safety Institute (IBRAE), Bolshaya Tulskaya st. 52, 115191, Moscow (Russian Federation)

    2014-07-01

    In the case of nuclear and radiation accidents, emergency response authorities require a tool for rapid assessment of possible consequences. One of the most significant problems is the lack of data on the initial state of an accident. The lack can be especially critical if the accident occurred in a location that was not thoroughly studied beforehand (during transportation of radioactive materials, for example). One possible solution is a hybrid method in which a model that enables rapid assessments with a reasonable minimum of input data is used conjointly with observed data that can be collected shortly after the accident. The model is used to estimate parameters of the source and uncertain meteorological parameters on the basis of some observed data. For example, the field of fallout density can be observed and measured within hours after an accident. After that, the same model with the estimated parameters is used to assess doses and the necessity of recommended and mandatory countermeasures. The computer system PROLOG was designed to solve this problem. It is based on the widely used Gaussian model. The standard Gaussian model is supplemented with several sub-models that allow it to take into account: polydisperse aerosols, aerodynamic shading from buildings in the vicinity of the accident site, terrain orography, the initial size of the radioactive cloud, the effective height of the release, and the influence of countermeasures on the doses of radioactive exposure of humans. It uses modern GIS technologies and can use web map services. To verify the ability of PROLOG to solve the problem, it is necessary to test its ability to estimate the relevant parameters of real past accidents. Verification of the computer system against the data of the Chazhma Bay accident (Russian Far East, 1985) was published previously. In this work, verification was carried out on the basis of observed contamination from the Kyshtym disaster (PA Mayak, 1957) and the Tomsk accident (1993). Observations of Sr-90
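
PROLOG's core is the standard Gaussian plume model, and a bare-bones version (without any of the sub-models listed above) can be written directly from the textbook formula with ground reflection. All numerical values below are illustrative assumptions.

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) at
    crosswind offset y and height z, downwind of a continuous point release.

    Q                 release rate (Bq/s)
    u                 wind speed (m/s)
    H                 effective release height (m)
    sigma_y, sigma_z  dispersion coefficients (m) evaluated at the downwind
                      distance of interest (stability-class dependent)
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))     # direct plume
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return Q * lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)

# Ground-level concentration under a hypothetical 1 MBq/s release from a
# 50 m effective height; sigma values are assumed neutral-stability magnitudes.
c_ground = gaussian_plume(Q=1.0e6, u=5.0, y=0.0, z=0.0, H=50.0,
                          sigma_y=35.0, sigma_z=18.0)
```

In the hybrid scheme described above, such a forward model is run repeatedly while source and meteorological parameters are adjusted until the predicted fallout field matches the early measurements.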

  20. Rapid and reagent-saving immunoassay using innovative stirring actions of magnetic beads in microreactors in the sequential injection mode.

    Science.gov (United States)

    Tanaka, K; Imagawa, H

    2005-12-15

    We developed new ELISA techniques in sequential injection analysis (SIA) mode using microreactors with contents of a few microliters. We immobilized antibodies on magnetic beads 1.0 μm in diameter, injected the beads into microreactors, and applied rotating magnetic fields of several hundred gauss. Magnetic beads, suspended in liquid at a density of approximately 10⁹-10¹⁰ particles per millilitre, form a large number of thin rod clusters whose length-wise axes are oriented in parallel with the magnetic field. We rotate Nd magnets below the center of the microreactor with a tiny motor at about 2000-5000 rpm. These rotating clusters remarkably accelerate the binding rate of the antibodies with antigens in the liquid. The beads are trapped around the center of the rotating magnetic field even in flowing liquid. This newly found phenomenon enables easy bead handling in microreactors. Modification of the reactor walls with selected blocking reagents was essential, because protein-coated beads often stick to the wall surface and cannot move freely. Washing steps were also shortened.

  1. Comparison of Coregistration Accuracy of Pelvic Structures Between Sequential and Simultaneous Imaging During Hybrid PET/MRI in Patients with Bladder Cancer.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Balar, Arjun V; Huang, William C; Jackson, Kimberly; Friedman, Kent P

    2015-08-01

    The aim of this study was to compare coregistration of the bladder wall, bladder masses, and pelvic lymph nodes between sequential and simultaneous PET and MRI acquisitions obtained during hybrid ¹⁸F-FDG PET/MRI performed using a diuresis protocol in bladder cancer patients. Six bladder cancer patients underwent ¹⁸F-FDG hybrid PET/MRI, including IV Lasix administration and oral hydration before imaging to achieve bladder clearance. Axial T2-weighted imaging (T2WI) was obtained approximately 40 minutes before PET ("sequential") and concurrently with PET ("simultaneous"). Three-dimensional spatial coordinates of the bladder wall, bladder masses, and pelvic lymph nodes were recorded for PET and T2WI. Distances between these locations on PET and T2WI sequences were computed and used to compare in-plane (x-y plane) and through-plane (z-axis) misregistration relative to PET between T2WI acquisitions. The bladder increased in volume between T2WI acquisitions (sequential, 176 [139] mL; simultaneous, 255 [146] mL). Four patients exhibited a bladder mass, all with increased activity (SUV, 9.5-38.4). Seven pelvic lymph nodes in 4 patients showed increased activity (SUV, 2.2-9.9). The bladder wall exhibited substantially less misregistration relative to PET for simultaneous, compared with sequential, acquisitions in in-plane (2.8 [3.1] mm vs 7.4 [9.1] mm) and through-plane (1.7 [2.2] mm vs 5.7 [9.6] mm) dimensions. Bladder masses exhibited slightly decreased misregistration for simultaneous, compared with sequential, acquisitions in in-plane (2.2 [1.4] mm vs 2.6 [1.9] mm) and through-plane (0.0 [0.0] mm vs 0.3 [0.8] mm) dimensions. FDG-avid lymph nodes exhibited slightly decreased in-plane misregistration (1.1 [0.8] mm vs 2.5 [0.6] mm), although identical through-plane misregistration (4.0 [1.9] mm vs 4.0 [2.8] mm). Using hybrid PET/MRI, simultaneous imaging substantially improved bladder wall coregistration and slightly improved coregistration of bladder masses and

  2. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data structures and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach simplifies the implementation and integration of vision systems greatly. Examples in C++ and SML are given.
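
The "representing images as functions" idea mentioned above is easy to illustrate: an image is a function from coordinates to intensity, and a transformation builds a new function by composition, with no pixel buffer copied or mutated. A small Python sketch (the abstract's own examples are in C++ and SML):

```python
def disk(cx, cy, r):
    """An image is just a function from (x, y) coordinates to intensity."""
    return lambda x, y: 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0

def translate(img, dx, dy):
    """Transforming an image composes functions; no side effects occur."""
    return lambda x, y: img(x - dx, y - dy)

def brighten(img, factor):
    return lambda x, y: factor * img(x, y)

# A unit-intensity disk, shifted right by 10 and doubled in intensity:
scene = brighten(translate(disk(0.0, 0.0, 5.0), 10.0, 0.0), 2.0)
```

Because each combinator returns a fresh closure, intermediate "images" can be shared freely across threads, which is exactly the parallelization and encapsulation benefit the abstract claims for the functional style.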

  3. Development Of A Data Assimilation Capability For RAPID

    Science.gov (United States)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations, associated with the increasing ability to monitor surface water from space, motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early developments of such a data assimilation approach in RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the scarcity of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason-2) and upcoming satellite missions (e.g. SWOT), and ultimately to apply the scheme globally.
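
    The direct Kalman filter analysis step mentioned above can be sketched in a few lines. This is an illustrative toy, not the RAPID code: the state vector holds one streamflow value per river reach, and the reach count, gauge placement, covariances, and flow values are all invented for the example.

```python
import numpy as np

# Illustrative sketch of a direct Kalman filter analysis step used to
# correct simulated streamflow with gauge observations. NOT the RAPID code:
# all numbers below are invented example values.

def kalman_update(x, P, z, H, R):
    """Correct the state x (streamflow per reach) with observations z."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # corrected streamflow
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated error covariance
    return x_new, P_new

n = 4                                     # four river reaches
x = np.array([10.0, 12.0, 8.0, 15.0])     # simulated streamflow (m^3/s)
P = np.eye(n) * 4.0                       # forecast error covariance
H = np.array([[0, 1, 0, 0],               # gauges observe reaches 1 and 3
              [0, 0, 0, 1]], dtype=float)
R = np.eye(2) * 1.0                       # observation error covariance
z = np.array([14.0, 13.0])                # gauge observations

x_a, P_a = kalman_update(x, P, z, H, R)
# Gauged reaches are pulled toward the observations (12 -> 13.6, 15 -> 13.4);
# with a diagonal P, ungauged reaches are left unchanged.
```

    Because the routing model is linear and matrix-based, such an analysis step can be applied directly after each routing step, which is what preserves the high computational speed the abstract refers to.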

  4. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    Science.gov (United States)

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting one AVS method over the other. © 2017 European Society of Endocrinology.

  5. Polymeric microchip for the simultaneous determination of anions and cations by hydrodynamic injection using a dual-channel sequential injection microchip electrophoresis system.

    Science.gov (United States)

    Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C

    2014-04-01

    A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.

  6. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  7. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  8. Rapid and efficient radiosynthesis of [123I]I-PK11195, a single photon emission computed tomography tracer for peripheral benzodiazepine receptors

    International Nuclear Information System (INIS)

    Pimlott, Sally L.; Stevenson, Louise; Wyper, David J.; Sutherland, Andrew

    2008-01-01

    Introduction: [123I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [123I]I-PK11195 in order to develop a rapid and efficient method that obtains [123I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [123I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [123I]I-PK11195. Conclusions: [123I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved.

  9. Improving the identification accuracy of senior witnesses: do prelineup questions and sequential testing help?

    Science.gov (United States)

    Memon, Amina; Gabbert, Fiona

    2003-04-01

    Eyewitness research has identified sequential lineup testing as a way of reducing false lineup choices while maintaining accurate identifications. The authors examined the usefulness of this procedure for reducing false choices in older adults. Young and senior witnesses viewed a crime video and were later presented with target-present or target-absent lineups in a simultaneous or sequential format. In addition, some participants received prelineup questions about their memory for a perpetrator's face and about their confidence in their ability to identify the culprit or to correctly reject the lineup. The sequential lineup reduced false choosing rates among young and older adults in target-absent conditions. In target-present conditions, sequential testing significantly reduced the correct identification rate in both age groups.

  10. Simultaneous versus Sequential Intratympanic Steroid Treatment for Severe-to-Profound Sudden Sensorineural Hearing Loss.

    Science.gov (United States)

    Yoo, Myung Hoon; Lim, Won Sub; Park, Joo Hyun; Kwon, Joong Keun; Lee, Tae-Hoon; An, Yong-Hwi; Kim, Young-Jin; Kim, Jong Yang; Lim, Hyun Woo; Park, Hong Ju

    2016-01-01

    Severe-to-profound sudden sensorineural hearing loss (SSNHL) has a poor prognosis. We aimed to compare the efficacy of simultaneous and sequential oral and intratympanic steroids for this condition. Fifty patients with severe-to-profound SSNHL (>70 dB HL) were included from 7 centers. The simultaneous group (27 patients) received oral and intratympanic steroid injections for 2 weeks. The sequential group (23 patients) was treated with oral steroids for 2 weeks and intratympanic steroids for the subsequent 2 weeks. Pure-tone averages (PTA) and word discrimination scores (WDS) were compared before treatment and 2 weeks and 1 and 2 months after treatment. Treatment outcomes according to the modified American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) criteria were also analyzed. The improvement in PTA and WDS at the 2-week follow-up was 23 ± 21 dB HL and 20 ± 39% in the simultaneous group and 31 ± 29 dB HL and 37 ± 42% in the sequential group; this was not statistically significant. Complete or partial recovery at the 2-week follow-up was observed in 26% of the simultaneous group and 30% of the sequential group; this was also not significant. The improvement in PTA and WDS at the 2-month follow-up was 40 ± 20 dB HL and 37 ± 35% in the simultaneous group and 41 ± 25 dB HL and 48 ± 41% in the sequential group; this was not statistically significant. Complete or partial recovery at the 2-month follow-up was observed in 33% of the simultaneous group and 35% of the sequential group; this was also not significant. Seven patients in the sequential group did not need intratympanic steroid injections for sufficient improvement after oral steroids alone. Simultaneous oral/intratympanic steroid treatment yielded a recovery similar to that produced by sequential treatment. Because the addition of intratympanic steroids can be decided upon based on the improvement after an oral steroid, the sequential regimen can be recommended to avoid unnecessary

  11. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.
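
    As a concrete illustration of what a sequential diagnostic strategy looks like, the sketch below greedily builds a test tree using an information-gain-per-unit-execution-cost criterion, a classic baseline for single-mode systems. It does not model the paper's test placement cost or multimode extensions, and the fault priors, test costs, and pass/fail signature table are invented example data.

```python
import math

# Greedy sequential diagnosis sketch (hypothetical example data): each test
# splits the candidate fault set by its pass/fail signature; at every node,
# pick the test with the best information gain per unit cost.

faults = {"f1": 0.4, "f2": 0.3, "f3": 0.2, "f4": 0.1}   # fault priors
tests = {"t1": 1.0, "t2": 2.0}                           # test execution costs
# outcome[t][f] == 1 if test t fails when fault f is present
outcome = {"t1": {"f1": 1, "f2": 1, "f3": 0, "f4": 0},
           "t2": {"f1": 1, "f2": 0, "f3": 1, "f4": 0}}

def entropy(fs):
    total = sum(faults[f] for f in fs)
    return -sum((faults[f] / total) * math.log2(faults[f] / total) for f in fs)

def best_test(fs, ts):
    def gain_per_cost(t):
        fail = [f for f in fs if outcome[t][f]]
        pas = [f for f in fs if not outcome[t][f]]
        if not fail or not pas:
            return -1.0                  # test does not split this fault set
        w = sum(faults[f] for f in fs)
        after = sum((sum(faults[f] for f in side) / w) * entropy(side)
                    for side in (fail, pas))
        return (entropy(fs) - after) / tests[t]
    return max(ts, key=gain_per_cost)

def build_tree(fs, ts):
    if len(fs) == 1 or not ts:
        return sorted(fs)                # fault isolated (or ambiguous leaf)
    t = best_test(fs, ts)
    rest = [u for u in ts if u != t]
    return {t: {"fail": build_tree([f for f in fs if outcome[t][f]], rest),
                "pass": build_tree([f for f in fs if not outcome[t][f]], rest)}}

tree = build_tree(list(faults), list(tests))
# The cheaper, evenly-splitting test t1 ends up at the root of the tree.
```

    The diagnostic accuracy/cost trade-off the abstract mentions enters exactly here: a different scoring function at each node yields a different tree and hence a different expected testing cost.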

  12. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Science.gov (United States)

    Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin

    2015-01-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709

  13. Inhibitory effect of sequential combined chemotherapy and radiotherapy on growth of implanted tumor in mice

    International Nuclear Information System (INIS)

    Okada, Kouji

    1983-01-01

    Sequential chemotherapy using FT-207, adriamycin and mitomycin C followed by radiotherapy was attempted to achieve effective inhibition against implanted tumor in C57BL/6 black mice bearing YM-12 tumors. Sequential combined chemotherapy was more effective than single drug chemotherapy or combined chemotherapy of other drugs. Addition of radiotherapy to the sequential combined chemotherapy was successful for enhancing therapeutic effect. (author)

  14. Structural and Functional Impacts of ER Coactivator Sequential Recruitment.

    Science.gov (United States)

    Yi, Ping; Wang, Zhao; Feng, Qin; Chou, Chao-Kai; Pintilie, Grigore D; Shen, Hong; Foulds, Charles E; Fan, Guizhen; Serysheva, Irina; Ludtke, Steven J; Schmid, Michael F; Hung, Mien-Chie; Chiu, Wah; O'Malley, Bert W

    2017-09-07

    Nuclear receptors recruit multiple coactivators sequentially to activate transcription. This "ordered" recruitment allows different coactivator activities to engage the nuclear receptor complex at different steps of transcription. Estrogen receptor (ER) recruits the steroid receptor coactivator-3 (SRC-3) primary coactivator and the secondary coactivators p300/CBP and CARM1. CARM1 recruitment lags behind the binding of SRC-3 and p300 to ER. Combining cryo-electron microscopy (cryo-EM) structure analysis and biochemical approaches, we demonstrate that there is close crosstalk between early- and late-recruited coactivators. The sequential recruitment of CARM1 not only adds a protein arginine methyltransferase activity to the ER-coactivator complex, it also alters the structural organization of the pre-existing ERE/ERα/SRC-3/p300 complex. It induces a p300 conformational change and significantly increases p300 HAT activity on histone H3K18 residues, which, in turn, promotes CARM1 methylation activity on H3R17 residues to enhance transcriptional activity. This study reveals the structural role of sequential coactivator recruitment and its biochemical consequences in ER-mediated transcription. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in the hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of creation probabilities of s̄s to ūu (d̄d) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  16. [Professor GAO Yuchun's experience on "sequential acupuncture leads to smooth movement of qi"].

    Science.gov (United States)

    Wang, Yanjun; Xing, Xiao; Cui, Linhua

    2016-01-01

    Professor GAO Yuchun is considered the key successor of GAO's academic school of acupuncture and moxibustion in the Yanzhao region. Professor GAO's clinical experience of "sequential acupuncture" is introduced in detail in this article. In Professor GAO's opinion, an appropriate acupuncture sequence is the key to satisfactory clinical effects during treatment. Based on different acupoints, sequential acupuncture can achieve the aim of qi following the needles and the needles leading qi; based on different symptoms, sequential acupuncture can regulate qi movement; based on different body positions, sequential acupuncture can harmonize qi and blood, reinforce deficiency, and reduce excess. In all, according to differences in disease condition and constitution, and based on accurate acupoint selection and appropriate manipulation, it is essential to capture the nature of the disease and determine the order of acupuncture, which can achieve the aim of regulating qi movement, reinforcing deficiency, and reducing excess.

  17. A Fixpoint-Based Calculus for Graph-Shaped Computational Fields

    DEFF Research Database (Denmark)

    Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo

    2015-01-01

    topology is represented by a graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs represent interaction capabilities between nodes. We propose a calculus where computation is strictly synchronous and corresponds to sequential computations of fixpoints in the graph-shaped field. Under some conditions, those fixpoints can be computed by synchronised iterations, where in each iteration the attributes of a node are updated based on the attributes of its neighbours in the previous iteration. Basic constructs are reminiscent of the semiring μ-calculus, a semiring-valued generalisation of the modal μ-calculus, which provides a flexible mechanism to specify the neighbourhood range (according to path formulae) and the way attributes should be combined (through semiring operators). Additional control-flow constructs allow one to conveniently structure the fixpoint computations. We…
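
    The synchronised fixpoint iteration can be illustrated with a small example. Under the min-plus (tropical) semiring, updating every node from its neighbours' previous-round attributes until nothing changes computes distances from a source node. This is only a sketch of the iteration scheme, not the calculus itself, and the graph and weights are invented.

```python
# Synchronised fixpoint iteration on a graph-shaped field (illustrative
# sketch): every round, each node recomputes its attribute from its
# neighbours' previous-round values. With the min-plus semiring this
# computes shortest-path distances from a source.

INF = float("inf")
# arcs with weights: interaction capabilities between nodes (invented data)
arcs = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 5.0, ("c", "d"): 1.0}
nodes = {"a", "b", "c", "d"}

def fixpoint(source):
    attr = {n: (0.0 if n == source else INF) for n in nodes}
    while True:
        # synchronous update: all nodes read the previous iteration's field
        new = {n: min([attr[n]] + [attr[u] + w
                                   for (u, v), w in arcs.items() if v == n])
               for n in nodes}
        if new == attr:          # fixpoint reached: no attribute changed
            return attr
        attr = new

dist = fixpoint("a")   # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```

    Swapping the semiring operators (min/+ here) for others changes what the field computes, which is the flexibility the semiring μ-calculus connection provides.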

  18. TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION

    Directory of Open Access Journals (Sweden)

    Laurence J. Malone

    2000-01-01

    Full Text Available This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realized the commercial possibilities of electricity provided the foundation for an industry where technical knowledge, invention, and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light, and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process where a series of inventions and firms resulted from successful efforts to use scientific principles to create new commodities and markets.

  19. Results of simultaneous and sequential pediatric liver and kidney transplantation.

    Science.gov (United States)

    Rogers, J; Bueno, J; Shapiro, R; Scantlebury, V; Mazariegos, G; Fung, J; Reyes, J

    2001-11-27

    The indications for simultaneous and sequential pediatric liver (LTx) and kidney (KTx) transplantation have not been well defined. We herein report the results of our experience with these procedures in children with end-stage liver disease and/or subsequent end-stage renal disease. Between 1984 and 1995, 12 LTx recipients received 15 kidney allografts. Eight simultaneous and seven sequential LTx/KTx were performed. There were six males and six females, with a mean age of 10.9 years (1.5-23.7). One of the eight simultaneous LTx/KTx was part of a multivisceral allograft. Five KTx were performed at varied intervals after successful LTx, one KTx was performed after a previous simultaneous LTx/KTx, and one KTx was performed after previous sequential LTx/KTx. Immunosuppression was with tacrolimus or cyclosporine and steroids. Indications for LTx were oxalosis (four), congenital hepatic fibrosis (two), cystinosis (one), polycystic liver disease (one), A-1-A deficiency (one), Total Parenteral Nutrition (TPN)-related (one), cryptogenic cirrhosis (one), and hepatoblastoma (one). Indications for KTx were oxalosis (four), drug-induced (four), polycystic kidney disease (three), cystinosis (one), and glomerulonephritis (1). With a mean follow-up of 58 months (0.9-130), the overall patient survival rate was 58% (7/12). One-year and 5-year actuarial patient survival rates were 66% and 58%, respectively. Patient survival rates at 1 year after KTx according to United Network of Organ Sharing (liver) status were 100% for status 3, 50% for status 2, and 0% for status 1. The overall renal allograft survival rate was 47%. Actuarial renal allograft survival rates were 53% at 1 and 5 years. The overall hepatic allograft survival rate was equivalent to the overall patient survival rate (58%). Six of seven surviving patients have normal renal allograft function, and one patient has moderate chronic allograft nephropathy. All surviving patients have normal hepatic allograft function. Six

  20. Rapid Mental Computation System as a Tool for Developing Algorithmic Thinking in Elementary School Students

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    Full Text Available In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system lays the foundation for the study of computer science in secondary school.
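
    One classic example of the kind of readily memorized operation such systems contain (illustrative only; the paper's specific rule set is not reproduced here) is multiplying by 11: write the first digit, then the sums of adjacent digit pairs, then the last digit, resolving carries as you go.

```python
# Multiply-by-11 as a simple mental algorithm (illustrative example of a
# rapid mental computation rule, not taken from the paper): sum adjacent
# digits and resolve carries right-to-left, exactly as done mentally.

def times_11(n: int) -> int:
    digits = [int(d) for d in str(n)]
    sums = [digits[0]] + [digits[i] + digits[i + 1]
                          for i in range(len(digits) - 1)] + [digits[-1]]
    result, carry = [], 0
    for s in reversed(sums):             # resolve carries right-to-left
        s += carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return int("".join(map(str, reversed(result))))

assert times_11(52) == 572      # 5, 5+2, 2
assert times_11(87) == 957      # 8, 15, 7 -> carry gives 9, 5, 7
```

    Each such rule is a tiny, fully specified algorithm, which is exactly why practicing them exercises the algorithmic thinking the paper aims to develop.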

  1. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  2. Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection

    Directory of Open Access Journals (Sweden)

    Liogienė Tatjana

    2016-07-01

    Full Text Available The intensive research of speech emotion recognition has introduced a huge collection of speech emotion features. Large feature sets complicate the speech emotion recognition task. Among various feature selection and transformation techniques for one-stage classification, multiple classifier systems have been proposed. The main idea of multiple classifiers is to arrange the emotion classification process in stages. Besides parallel and serial cases, the hierarchical arrangement of multi-stage classification is most widely used for speech emotion recognition. In this paper, we present a sequential-forward-feature-selection-based multi-stage classification scheme. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) techniques were employed for every stage of the multi-stage classification scheme. Experimental testing of the proposed scheme was performed using the German and Lithuanian emotional speech datasets. Sequential-feature-selection-based multi-stage classification outperformed the single-stage scheme by 12–42 % for different emotion sets. The multi-stage scheme has shown higher robustness to the growth of the emotion set: the decrease in recognition rate with the increase in emotion set was lower by 10–20 % for the multi-stage scheme in comparison with the single-stage case. Differences between SFS and SFFS employment for feature selection were negligible.
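
    A minimal sketch of the SFS step used at each stage: greedily grow a feature subset, adding whichever candidate most improves a scoring function, and stop when nothing improves. The scorer below is a stand-in (in the paper it would be the emotion classifier's recognition rate), and the feature names, utilities, and redundancy rule are invented for illustration.

```python
# Sequential Forward Selection (SFS) sketch. SFFS additionally tries
# removing previously selected features after each addition (not shown).

def sfs(features, score, k):
    """Return up to k features chosen greedily by `score(subset)`."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break                        # no candidate improves the score
        selected.append(best)
    return selected

# Toy scorer (assumption for illustration): each feature has an individual
# utility, and a redundant pair contributes only its stronger member.
utility = {"pitch": 0.5, "energy": 0.4, "mfcc": 0.3, "pitch_var": 0.5}
redundant = {frozenset(("pitch", "pitch_var"))}

def score(subset):
    s = sum(utility[f] for f in subset)
    for pair in redundant:
        if pair <= set(subset):
            s -= min(utility[f] for f in pair)
    return s

chosen = sfs(list(utility), score, 3)    # -> ['pitch', 'energy', 'mfcc']
```

    Note how the redundant `pitch_var` is skipped even though its individual utility is high; this is the behaviour that distinguishes sequential selection from simple per-feature ranking.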

  3. Evaluation of rapid HIV test kits on whole blood and development of rapid testing algorithm for voluntary testing and counseling centers in Ethiopia.

    Science.gov (United States)

    Tegbaru, Belete; Messele, Tsehaynesh; Wolday, Dawit; Meles, Hailu; Tesema, Desalegn; Birhanu, Hiwot; Tesfaye, Girma; Bond, Kyle B; Martin, Robert; Rayfield, Mark A; Wuhib, Tadesse; Fekadu, Makonnen

    2004-10-01

    Five simple and rapid HIV antibody detection assays, viz. Determine, Capillus, Oraquick, Unigold and Hemastrip, were evaluated to examine their performance and to develop an alternative rapid-test-based testing algorithm for voluntary counseling and testing (VCT) in Ethiopia. All the kits were tested on whole blood, plasma and serum. The evaluation had three phases: primary lab review, piloting at point of service, and implementation. This report includes the results of the first two phases. A total of 2,693 specimens (both whole blood and plasma) were included in the evaluation. Results were compared to a double Enzyme-Linked Immuno-Sorbent Assay (ELISA) system. Discordant EIA results were resolved using Western blot. The assays had very good sensitivities and specificities, 99-100%, at the two different phases of the evaluation. A 98-100% result agreement was obtained between those tested at VCT centers and at the National Referral Laboratory for AIDS (NRLA) in the quality control phase of the evaluation. A testing strategy yielding 100% [95% CI: 98.9-100.0] sensitivity was achieved by the sequential use of the three rapid test kits. Direct cost comparison showed that a serial testing algorithm reduces the cost of testing by over 30% compared to parallel testing in the current situation. Determine, Capillus/Oraquick (depending on presence/absence of refrigeration) and Unigold were recommended as the screening, confirmatory and tiebreaker tests, respectively.
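
    The recommended serial algorithm can be written down directly: a screening test on every specimen, a confirmatory test only on reactive specimens, and a tiebreaker only on discordant results. This routing logic is a sketch inferred from the abstract; real-world protocols also handle invalid results and retesting, which are omitted here.

```python
# Serial (sequential) rapid-testing algorithm sketch. The cost saving over
# parallel testing comes from stopping after one kit for non-reactive
# screens, which are the majority of specimens in low-prevalence settings.

def serial_algorithm(screen, confirm=None, tiebreak=None):
    """Each argument is a kit's result on the specimen: True = reactive."""
    if not screen:
        return "negative"        # non-reactive screen: stop, 1 kit used
    if confirm:
        return "positive"        # concordant reactive results, 2 kits used
    # discordant screen/confirm: the tiebreaker decides, 3 kits used
    return "positive" if tiebreak else "negative"

# Example: screening kit reactive, confirmatory kit non-reactive,
# tiebreaker reactive -> reported positive.
result = serial_algorithm(True, False, True)
```

    A parallel algorithm would run two kits on every specimen up front, which is why the serial scheme cuts testing cost by the reported margin without losing sensitivity.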

  4. Tinnitus after Simultaneous and Sequential Bilateral Cochlear Implantation.

    Science.gov (United States)

    Ramakers, Geerte G J; Kraaijenga, Véronique J C; Smulders, Yvette E; van Zon, Alice; Stegeman, Inge; Stokroos, Robert J; Free, Rolien H; Frijns, Johan H M; Huinck, Wendy J; Van Zanten, Gijsbert A; Grolman, Wilko

    2017-01-01

    There is an ongoing global discussion on whether or not bilateral cochlear implantation should be standard care for bilateral deafness. Contrary to unilateral cochlear implantation, however, little is known about the effect of bilateral cochlear implantation on tinnitus. To investigate tinnitus outcomes 1 year after bilateral cochlear implantation. Secondarily, to compare tinnitus outcomes between simultaneous and sequential bilateral cochlear implantation and to investigate long-term follow-up (3 years). This study is a secondary analysis as part of a multicenter randomized controlled trial. Thirty-eight postlingually deafened adults were included in the original trial, in which the presence of tinnitus was not an inclusion criterion. All participants received cochlear implants (CIs) because of profound hearing loss. Nineteen participants received bilateral CIs simultaneously and 19 participants received bilateral CIs sequentially with an inter-implant interval of 2 years. The prevalence and severity of tinnitus before and after simultaneous and sequential bilateral cochlear implantation were measured preoperatively and each year after implantation with the Tinnitus Handicap Inventory (THI) and Tinnitus Questionnaire (TQ). The prevalence of preoperative tinnitus was 42% (16/38). One year after bilateral implantation, there was a median difference of -8 (inter-quartile range (IQR): -28 to 4) in THI score and -9 (IQR: -17 to -9) in TQ score in the participants with preoperative tinnitus. Induction of tinnitus occurred in five participants, all in the simultaneous group, in the year after bilateral implantation. Although the preoperative and also the postoperative median THI and TQ scores were higher in the simultaneous group, the median difference scores were equal in both groups. In the simultaneous group, tinnitus scores fluctuated in the 3 years after implantation. In the sequential group, four patients had an additional benefit of the second CI: a total

  5. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  6. Microstructural evolution of reduced-activation martensitic steel under single and sequential ion irradiations

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Fengfeng [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, Hubei Nuclear Solid Physics Key Laboratory and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Guo, Liping, E-mail: guolp@whu.edu.cn [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, Hubei Nuclear Solid Physics Key Laboratory and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Jin, Shuoxue; Li, Tiecheng; Zheng, Zhongcheng [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, Hubei Nuclear Solid Physics Key Laboratory and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Yang, Feng; Xiong, Xuesong; Suo, Jinping [State Key Laboratory of Mould Technology, Institute of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2013-07-15

    Microstructural evolution of super-clean reduced-activation martensitic steels irradiated with a single beam (Fe⁺) and sequential beams (Fe⁺ plus He⁺) at 350 °C and 550 °C was studied. Sequential-beam irradiation induced smaller size and larger number density of precipitates compared to single-beam irradiation at 350 °C. The largest size of cavities was observed after sequential-beam irradiation at 550 °C. The segregation of Cr and W and depletion of Fe in carbides were observed, and the maximum depletion of Fe and enrichment of Cr occurred under irradiation at 350 °C.

  7. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  8. Truly costly sequential search and oligopolistic pricing

    NARCIS (Netherlands)

    Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

    We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

  9. Making Career Decisions--A Sequential Elimination Approach.

    Science.gov (United States)

    Gati, Itamar

    1986-01-01

    Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…

  10. Sequential Computerized Mastery Tests--Three Simulation Studies

    Science.gov (United States)

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
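
    The 3-parameter logistic model named in this abstract is standard in IRT. As an illustration of the response model that would drive such a simulation (not the study's own code), the 3PL probability of a correct response can be written as:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3-parameter logistic
    (3PL) IRT model: ability theta, discrimination a, difficulty b, and
    guessing floor c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Example: an examinee of average ability on a moderately hard item.
p = p_correct_3pl(theta=0.0, a=1.2, b=0.5, c=0.2)
```

    Simulated examinee responses are then Bernoulli draws with this probability; the parameter values above are illustrative only.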

  11. A method of gravity and seismic sequential inversion and its GPU implementation

    Science.gov (United States)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a sequential gravity and seismic inversion method that inverts for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical relation, the Gardner equation. For large volumes of data, we use the GPU to accelerate the computation. The gravity inversion proceeds iteratively. First, we calculate the correlation imaging of the observed gravity anomaly, which yields values between -1 and +1, and multiply them by a small density increment to form the initial density model. We then compute the forward response of this model, calculate the correlation imaging of the misfit between the observed and forward data, multiply that result by the same small density increment, and add it to the current model; repeating this procedure yields the final inverted density model. The seismic inversion method is based on the linearity of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we obtain a good velocity result. The sequential inversion of gravity and seismic data requires a formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard languages such as C.
In our inversion processing
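
    The update loop described in this abstract can be sketched as follows. Here `forward` and `corr_image` are hypothetical stand-ins for the forward gravity operator and the correlation imaging step (which returns values in [-1, +1]); the step size and iteration count are illustrative:

```python
import numpy as np

def iterative_correlation_inversion(g_obs, forward, corr_image,
                                    d_rho=0.05, n_iter=50):
    """Sketch of the iterative correlation-imaging scheme described above
    (hypothetical interfaces): each pass images the current data misfit,
    scales the [-1, +1] correlation values by a small density step d_rho,
    and adds the result to the running density model."""
    rho = d_rho * corr_image(g_obs)          # initial model from the data
    for _ in range(n_iter):
        misfit = g_obs - forward(rho)        # residual gravity anomaly
        rho = rho + d_rho * corr_image(misfit)
    return rho
```

    In practice `corr_image` would cross-correlate the anomaly with the response of a unit density cell at each model position; with toy operators the loop behaves as a damped fixed-point iteration toward the data.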

  12. Sequential Low Cost Interventions Double Hand Hygiene Rates ...

    African Journals Online (AJOL)

    Sequential Low Cost Interventions Double Hand Hygiene Rates Among Medical Teams in a Resource Limited Setting. Results of a Hand Hygiene Quality Improvement Project Conducted At University Teaching Hospital of Kigali (Chuk), Kigali, Rwanda.

  13. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

    In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions such as total number of scalar multiplications, communication overhead in a multiprocessor environment, etc. For n matrices, our optimization procedure requires O(n³) arithmetic operations per cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
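
    The baseline O(n³) dynamic program for the classical cost function (total scalar multiplications) is sketched below; the paper's framework generalizes this recurrence to other cost criteria:

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications needed to multiply a chain
    of matrices, where matrix i has shape dims[i] x dims[i+1].  Classic
    O(n^3) dynamic program over subchains of increasing length."""
    n = len(dims) - 1                        # number of matrices
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # subchain length - 1
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)         # split point of the subchain
            )
    return cost[0][n - 1]

# (10x30)(30x5)(5x60): best is ((A B) C) = 1500 + 3000 = 4500 multiplications.
```

    Swapping the additive term for a different per-split cost (e.g. communication volume) gives the kind of alternative criterion the paper optimizes against.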

  14. Hybrid parallel computing architecture for multiview phase shifting

    Science.gov (United States)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this capability comes at very high computation cost, so 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The computationally expensive procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to realize coarse-grained and fine-grained parallel computing simultaneously. Experimental results verify that the developed system can perform real-time 3-D measurement at 50 frames per second with 260 K 3-D points per frame. A speedup of up to 180 times is obtained using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7-3770.

  15. Facing Two Rapidly Spreading Internet Worms

    CERN Multimedia

    IT Department

    2009-01-01

    The Internet is currently facing a growing number of computer infections due to two rapidly spreading worms. The "Conficker" and "Downadup" worms have infected an estimated 1.1 million PCs in a 24-hour period, bringing the total number of infected computers to 3.5 million [1]. Via a single USB stick, these worms were also responsible for the infection of about 40 laptops at the last EGEE conference in Istanbul. In order to reduce the impact of these worms on CERN Windows computers, the Computer Security Team has suggested several preventive measures described here.
    Disabling the Windows AutoRun and AutoPlay Features
    The Computer Security Team and the IT/IS group have decided to disable the "AutoRun" and "AutoPlay" functionality on all centrally-managed Windows computers at CERN. When inserting CDs, DVDs or USB sticks into a PC, "AutoRun" and "AutoPlay" are responsible for automatically playing music or films stored on these media, or ...

  16. Variation among heritage speakers: Sequential vs. simultaneous bilinguals

    Directory of Open Access Journals (Sweden)

    Teresa Lee

    2013-08-01

    This study examines the differences in the grammatical knowledge of two types of heritage speakers of Korean. Early simultaneous bilinguals are exposed to both English and the heritage language from birth, whereas early sequential bilinguals are exposed to the heritage language first and then to English upon schooling. A listening comprehension task involving relative clauses was conducted with 51 beginning-level Korean heritage speakers. The results showed that the early sequential bilinguals exhibited much more accurate knowledge than the early simultaneous bilinguals, who lacked rudimentary knowledge of Korean relative clauses. Drawing on the findings of adult and child Korean L1 data on the acquisition of relative clauses, the performance of each group is discussed with respect to attrition and incomplete acquisition of the heritage language.

  17. Innovative procedure for computer-assisted genioplasty: three-dimensional cephalometry, rapid-prototyping model and surgical splint.

    Science.gov (United States)

    Olszewski, R; Tranduy, K; Reychler, H

    2010-07-01

    The authors present a new procedure of computer-assisted genioplasty. They determined the anterior, posterior and inferior limits of the chin in relation to the skull and face with the newly developed and validated three-dimensional cephalometric planar analysis (ACRO 3D). Virtual planning of the osteotomy lines was carried out with Mimics (Materialize) software. The authors built a three-dimensional rapid-prototyping multi-position model of the chin area from a medical low-dose CT scan. The transfer of virtual information to the operating room consisted of two elements. First, the titanium plates on the 3D RP model were pre-bent. Second, a surgical guide for the transfer of the osteotomy lines and the positions of the screws to the operating room was manufactured. The authors present the first case of the use of this model on a patient. The postoperative results are promising, and the technique is fast and easy-to-use. More patients are needed for a definitive clinical validation of this procedure. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation of the skyline. However, they do not sufficiently minimize dominance tests and so are not competitive with state-of-the-art sequential algorithms. In this paper, we introduce a novel multi-core skyline algorithm, Hybrid, which processes points in blocks. It maintains a shared, global skyline among all threads...
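
    As an illustration of the query operator itself (not the paper's Hybrid algorithm), a naive quadratic skyline under minimization reads:

```python
def skyline(points):
    """Return the points not dominated by any other point, where p
    dominates q if p is <= q in every dimension and < in at least one
    (smaller is better).  A simple O(n^2) baseline; multi-core skyline
    algorithms aim to prune most of these dominance tests."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    For example, among the trade-off points (1, 4), (2, 2), (4, 1), (3, 3), (2, 5), the last two are dominated by (2, 2) and drop out of the skyline.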

  19. International assessment of functional computer abilities

    NARCIS (Netherlands)

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on goals of computer education

  20. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms.

    Science.gov (United States)

    Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian

    2017-09-29

    Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, only considering genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null-hypothesis, which avoids the costly computation of inversions. Additional tricks are a rearrangement of the order, when iterating through the different "omics" layers, and implementing this algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/ .
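
    The abstract does not spell out the correlation-coefficient shortcut. One standard way to obtain the interaction t-statistic from a single correlation, via the Frisch-Waugh-Lovell theorem, is sketched here; this illustrates the statistical equivalence only, not pulver's optimized C++ implementation:

```python
import numpy as np

def interaction_t_stat(y, x, z):
    """t-statistic for the x*z interaction term in y ~ x + z + x*z,
    computed from one Pearson correlation between residualized variables
    (Frisch-Waugh-Lovell): residualize y and the product x*z on the
    covariates, correlate the residuals, and convert r to t."""
    n = len(y)
    C = np.column_stack([np.ones(n), x, z])              # covariates only
    targets = np.column_stack([y, x * z])
    proj = C @ np.linalg.lstsq(C, targets, rcond=None)[0]
    ry, rw = (targets - proj).T                          # residualized y, x*z
    r = ry @ rw / np.sqrt((ry @ ry) * (rw @ rw))         # plain correlation
    return r * np.sqrt((n - 4) / (1.0 - r * r))          # 4 model parameters
```

    The resulting statistic matches the OLS t-value for the interaction coefficient in the full model, which is why a correlation suffices for screening.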

  1. Computer Technology-Integrated Projects Should Not Supplant Craft Projects in Science Education

    Science.gov (United States)

    Klopp, Tabatha J.; Rule, Audrey C.; Schneider, Jean Suchsland; Boody, Robert M.

    2014-01-01

    The current emphasis on computer technology integration and narrowing of the curriculum has displaced arts and crafts. However, the hands-on, concrete nature of craft work in science modeling enables students to understand difficult concepts and to be engaged and motivated while learning spatial, logical, and sequential thinking skills. Analogy…

  2. Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.

    Science.gov (United States)

    Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K

    2017-09-01

    In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical), followed by a sequential, combinatorial pretreatment strategy, were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars and generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF), which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, which was 1.7-fold higher than in native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Competitive interactions affect working memory performance for both simultaneous and sequential stimulus presentation.

    Science.gov (United States)

    Ahmad, Jumana; Swan, Garrett; Bowman, Howard; Wyble, Brad; Nobre, Anna C; Shapiro, Kimron L; McNab, Fiona

    2017-07-06

    Competition between simultaneously presented visual stimuli lengthens reaction time and reduces both the BOLD response and neural firing. In contrast, conditions of sequential presentation have been assumed to be free from competition. Here we manipulated the spatial proximity of stimuli (Near versus Far conditions) to examine the effects of simultaneous and sequential competition on different measures of working memory (WM) for colour. With simultaneous presentation, the measure of WM precision was significantly lower for Near items, and participants reported the colour of the wrong item more often. These effects were preserved when the second stimulus immediately followed the first, disappeared when they were separated by 500 ms, and were partly recovered (evident for our measure of mis-binding but not WM precision) when the task was altered to encourage participants to maintain the sequentially presented items together in WM. Our results show, for the first time, that competition affects the measure of WM precision, and challenge the assumption that sequential presentation removes competition.

  4. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also presents a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3-years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  5. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
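
    For readers new to ABC, the baseline i.i.d. sampler that the MLMC approach is compared against is plain rejection ABC, sketched here with illustrative interface names:

```python
import numpy as np

def abc_rejection(y_obs, prior_draw, simulate, distance, eps, n_draws=50_000):
    """Plain ABC rejection sampler: draw parameters from the prior,
    simulate data, and keep only draws whose simulated data fall within
    tolerance eps of the observation.  The kept draws approximate the
    ABC posterior; smaller eps means better accuracy but fewer accepts."""
    rng = np.random.default_rng(1)
    kept = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), y_obs) <= eps:
            kept.append(theta)
    return np.array(kept)
```

    For a fixed mean-square error the cost of shrinking eps grows quickly, which is the inefficiency the multilevel construction targets.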

  6. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  7. Interpretability degrees of finitely axiomatized sequential theories

    NARCIS (Netherlands)

    Visser, Albert

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed

  8. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

  9. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  10. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    Science.gov (United States)

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients, it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.

  11. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    Science.gov (United States)

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows us to study neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group conducted less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas they did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when acquired

  12. Portable computers - portable operating systems

    International Nuclear Information System (INIS)

    Wiegandt, D.

    1985-01-01

    Hardware development has made rapid progress over the past decade. Computers used to have attributes like "general purpose" or "universal"; nowadays they are labelled "personal" and "portable". Recently, a major manufacturing company started marketing a portable version of their personal computer. But even for these small computers the old truth still holds that the biggest disadvantage of a computer is that it must be programmed; hardware by itself does not make a computer. (orig.)

  13. Eyewitness identification in simultaneous and sequential lineups: an investigation of position effects using receiver operating characteristics.

    Science.gov (United States)

    Meisters, Julia; Diedenhofen, Birk; Musch, Jochen

    2018-04-20

    For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent's response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect's position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.

  14. Sequential test procedures for inventory differences

    International Nuclear Information System (INIS)

    Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

    1985-01-01

    By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF 6 -to-U 3 O 8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs
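
    Page's test referenced above is the one-sided CUSUM scheme. A minimal sketch follows; the reference value k and decision interval h are illustrative tuning constants, not values from the study:

```python
def page_cusum_alarm(diffs, k=0.5, h=4.0):
    """One-sided Page (CUSUM) test on a sequence of inventory differences:
    accumulate the excess of each difference over the reference value k,
    resetting at zero, and alarm at the first index where the cumulative
    sum crosses the decision threshold h.  Returns None if no alarm."""
    s = 0.0
    for i, d in enumerate(diffs):
        s = max(0.0, s + d - k)   # reset prevents drift from small noise
        if s > h:
            return i              # first index triggering the alarm
    return None
```

    On a sequence that jumps from zero differences to a sustained loss of 1.5 units per period, the statistic climbs by 1.0 per period and alarms a few periods after the shift, while small in-control differences never trigger it.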

  15. Plasmon-driven sequential chemical reactions in an aqueous environment.

    Science.gov (United States)

    Zhang, Xin; Wang, Peijie; Zhang, Zhenglong; Fang, Yurui; Sun, Mengtao

    2014-06-24

    Plasmon-driven sequential chemical reactions were successfully realized in an aqueous environment. In an electrochemical environment, sequential chemical reactions were driven by an applied potential and laser irradiation. Furthermore, the rate of the chemical reaction was controlled via pH, which provides indirect evidence that the hot electrons generated from plasmon decay play an important role in plasmon-driven chemical reactions. In acidic conditions, the hot electrons were captured by the abundant H(+) in the aqueous environment, which prevented the chemical reaction. The developed plasmon-driven chemical reactions in an aqueous environment will significantly expand the applications of plasmon chemistry and may provide a promising avenue for green chemistry using plasmon catalysis in aqueous environments under irradiation by sunlight.

  16. Better together: reduced compliance after sequential versus simultaneous bilateral hearing aids fitting.

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Attias, Joseph; Karni, Avi

    2014-03-01

    The purpose of this study was to determine the effects of sequential versus simultaneous bilateral hearing aids fitting on patient compliance. Thirty-six older adults with hearing impairment participated in this study. Twelve were fitted with bilateral hearing aids simultaneously. The remaining participants were fitted sequentially: One hearing aid (to the left or to the right ear) was used initially; 1 month later, the other ear was also fitted with a hearing aid for bilateral use. Self-reports on usefulness and compliance were elicited after the first and second months of hearing aid use. In addition, the number of hours the hearing aids were used was extracted from the data loggings of each device. Simultaneous fitting resulted in high levels of compliance and consistent usage throughout the study period. Sequential fitting resulted in abrupt reduction in compliance and hours of use once the second hearing aid was added, both in the clinical scoring and in the data loggings. Simultaneous fitting of bilateral hearing aids results in better compliance compared with sequential fitting. The addition of a second hearing aid after a relatively short period of monaural use may lead to inconsistent use of both hearing aids.

  17. Cloud Computing Law

    CERN Document Server

    Millard, Christopher

    2013-01-01

    This book is about the legal implications of cloud computing. In essence, ‘the cloud’ is a way of delivering computing resources as a utility service via the internet. It is evolving very rapidly with substantial investments being made in infrastructure, platforms and applications, all delivered ‘as a service’. The demand for cloud resources is enormous, driven by such developments as the deployment on a vast scale of mobile apps and the rapid emergence of ‘Big Data’. Part I of this book explains what cloud computing is and how it works. Part II analyses contractual relationships between cloud service providers and their customers, as well as the complex roles of intermediaries. Drawing on primary research conducted by the Cloud Legal Project at Queen Mary University of London, cloud contracts are analysed in detail, including the appropriateness and enforceability of ‘take it or leave it’ terms of service, as well as the scope for negotiating cloud deals. Specific arrangements for public sect...

  18. Adjuvant sequential chemo and radiotherapy improves the oncological outcome in high risk endometrial cancer

    Science.gov (United States)

    Signorelli, Mauro; Lissoni, Andrea Alberto; De Ponti, Elena; Grassi, Tommaso; Ponti, Serena

    2015-01-01

    Objective Evaluation of the impact of sequential chemoradiotherapy in high risk endometrial cancer (EC). Methods Two hundred fifty-four women with stage IB grade 3, II, and III EC (2009 FIGO staging) were included in this retrospective study. Results Disease was stage I, II, and III in 24%, 28.7%, and 47.3% of cases, respectively. Tumors were grade 3 in 53.2% of cases, and 71.3% had deep myometrial invasion. One hundred sixty-five women (65%) underwent pelvic (+/- aortic) lymphadenectomy and 58 (22.8%) had nodal metastases. Ninety-eight women (38.6%) underwent radiotherapy, 59 (23.2%) chemotherapy, 42 (16.5%) sequential chemoradiotherapy, and 55 (21.7%) were only observed. After a median follow-up of 101 months, 78 women (30.7%) relapsed and 91 women (35.8%) died. Sequential chemoradiotherapy improved survival rates in women who did not undergo nodal evaluation (disease-free survival [DFS], p=0.040; overall survival [OS], p=0.024) or pelvic (+/- aortic) lymphadenectomy (DFS, p=0.008; OS, p=0.021). Sequential chemoradiotherapy improved both DFS (p=0.015) and OS (p=0.014) in stage III, while only a trend was found for DFS (p=0.210) and OS (p=0.102) in stage I-II EC. In the multivariate analysis, only age (≤65 years) and sequential chemoradiotherapy were statistically related to prognosis. Conclusion Sequential chemoradiotherapy improves survival rates in high risk EC compared with chemotherapy or radiotherapy alone, in particular in stage III. PMID:26197768

  19. Eyewitness decisions in simultaneous and sequential lineups: a dual-process signal detection theory analysis.

    Science.gov (United States)

    Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H

    2005-07-01

    Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
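
The criterion shift the abstract describes can be illustrated with the standard Gaussian signal detection measures, d′ (discrimination accuracy) and c (response criterion). A minimal sketch with hypothetical hit and false-alarm rates (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return (d_prime, criterion_c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: sequential presentation yields fewer identifications
# overall (both hits and false alarms drop), not better discrimination.
d_sim, c_sim = sdt_measures(0.80, 0.20)   # simultaneous lineup
d_seq, c_seq = sdt_measures(0.68, 0.12)   # sequential lineup

print(f"simultaneous: d'={d_sim:.2f}, c={c_sim:.2f}")
print(f"sequential:   d'={d_seq:.2f}, c={c_seq:.2f}")
```

With these rates, d′ is nearly identical across lineup types while c is larger (more conservative) for sequential lineups, which is exactly the pattern consistent with a criterion shift rather than a sequential superiority effect.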

  20. ADVANCEMENT OF RAPID PROTOTYPING IN AEROSPACE INDUSTRY -A REVIEW

    OpenAIRE

    Vineet Kumar Vashishtha,; Rahul Makade,; Neeraj Mehla

    2011-01-01

    Rapid prototyping technology has emerged as an innovation that reduces the time and cost of mould fabrication by creating a 3D product directly from computer-aided design, so the designer is able to perform design validation and accuracy analysis easily in a virtual environment, as if using a physical model. The primary aim of this paper is to give the reader an overview of the current state of the art in rapid prototyping technology. The paper also deals with features of rapid prototyping in Aeros...

  1. Sequential and Simultaneous Processing Abilities of High-Functioning Autistic and Language-Impaired Children.

    Science.gov (United States)

    Allen, Mark H.; And Others

    1991-01-01

    This study found that a group of 20 children (ages 6-12) with autism and a group of 20 children with developmental receptive language disorder both manifested a relative sequential processing deficit. The groups did not differ significantly on overall sequential and simultaneous processing capabilities relative to their degree of language…

  2. Sequential boundaries approach in clinical trials with unequal allocation ratios

    Directory of Open Access Journals (Sweden)

    Ayatollahi Seyyed

    2006-01-01

    Background In clinical trials, both unequal randomization design and sequential analyses have ethical and economic advantages. In the single-stage design (SSD), however, if the sample size is not adjusted based on unequal randomization, the power of the trial will decrease, whereas with sequential analysis the power will always remain constant. Our aim was to compare the sequential boundaries approach with the SSD when the allocation ratio (R) was not equal. Methods We evaluated the influence of R, the ratio of the patients in the experimental group to the standard group, on the statistical properties of two-sided tests, including the two-sided single triangular test (TT), double triangular test (DTT), and SSD, by multiple simulations. The average sample size numbers (ASNs) and power (1-β) were evaluated for all tests. Results Our simulation study showed that choosing R = 2 instead of R = 1 increases the sample size of the SSD by 12% and the ASN of the TT and DTT by the same proportion. Moreover, when R = 2, compared to the adjusted SSD, using the TT or DTT allows one to retrieve the well-known reductions in ASN observed when R = 1, compared to the SSD. In addition, when R = 2, compared to the SSD, using the TT and DTT yields smaller reductions in ASN than when R = 1, but maintains the power of the test at its planned value. Conclusion This study indicates that when the allocation ratio is not equal among the treatment groups, sequential analysis could indeed serve as a compromise between ethicists, economists and statisticians.
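
The roughly 12% inflation quoted in the Results follows from the variance factor of a two-sample comparison, (1+R)²/(R·N): for R = 2 versus R = 1 it grows by (3²/2)/(2²/1) = 1.125. A quick sketch using a normal-approximation power formula (the total N, effect size δ, and α below are illustrative):

```python
from statistics import NormalDist

def fixed_design_power(total_n, ratio, delta, sigma=1.0, alpha=0.05):
    """Normal-approximation power of a two-arm single-stage design with
    allocation ratio `ratio` = n_experimental / n_standard."""
    nd = NormalDist()
    n_std = total_n / (1 + ratio)
    n_exp = total_n - n_std
    se = sigma * (1 / n_exp + 1 / n_std) ** 0.5   # (1+R)^2/(R*N) under the hood
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta / se - z_alpha)

# Same total N: moving from R=1 to R=2 loses power, so the unadjusted
# SSD must enlarge its sample by ~12% to compensate.
p_equal = fixed_design_power(120, ratio=1, delta=0.5)
p_unequal = fixed_design_power(120, ratio=2, delta=0.5)
print(f"power R=1: {p_equal:.3f}, power R=2: {p_unequal:.3f}")
```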

  3. Immediately sequential bilateral cataract surgery: advantages and disadvantages.

    Science.gov (United States)

    Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace

    2017-01-01

    The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.

  4. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing method for self-folding structures that can be folded sequentially and accurately. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are performed to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are fabricated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under a controlled stimulus (electric current) and have potential applications in electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  5. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    Science.gov (United States)

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection, and commercially available UV/Vis detection. Various experimental conditions for sequential OG-UV/Vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of OG-UV/Vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/Vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
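
Michaelis constants such as those reported can be extracted from rate-versus-substrate data. A minimal sketch using a Lineweaver-Burk linearization on synthetic Michaelis-Menten data (the Vmax, Km, and concentrations below are invented for illustration, not taken from the study):

```python
# Hypothetical substrate concentrations (mM) and initial rates, generated
# from Vmax=2.0, Km=0.5 so the sketch can check itself.
S = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
Vmax_true, Km_true = 2.0, 0.5
v = [Vmax_true * s / (Km_true + s) for s in S]

# Lineweaver-Burk: 1/v = (Km/Vmax)(1/S) + 1/Vmax  ->  linear least squares
x = [1 / s for s in S]
y = [1 / vi for vi in v]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

Vmax_est = 1 / intercept       # intercept is 1/Vmax
Km_est = slope * Vmax_est      # slope is Km/Vmax
print(f"Km ≈ {Km_est:.3f} mM, Vmax ≈ {Vmax_est:.3f}")
```

In practice a direct nonlinear fit of v = Vmax·S/(Km+S) is preferred over the double-reciprocal transform, which amplifies noise at low substrate concentrations.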

  6. Simultaneous Versus Sequential Side-by-Side Bilateral Metal Stent Placement for Malignant Hilar Biliary Obstructions.

    Science.gov (United States)

    Inoue, Tadahisa; Ishii, Norimitsu; Kobayashi, Yuji; Kitano, Rena; Sakamoto, Kazumasa; Ohashi, Tomohiko; Nakade, Yukiomi; Sumida, Yoshio; Ito, Kiyoaki; Nakao, Haruhisa; Yoneda, Masashi

    2017-09-01

    Endoscopic bilateral self-expandable metallic stent (SEMS) placement for malignant hilar biliary obstructions (MHBOs) is technically demanding, and a second SEMS insertion is particularly challenging. A simultaneous side-by-side (SBS) placement technique using a thinner delivery system may mitigate these issues. We aimed to examine the feasibility and efficacy of simultaneous SBS SEMS placement for treating MHBOs using a novel SEMS that has a 5.7-Fr ultra-thin delivery system. Thirty-four patients with MHBOs underwent SBS SEMS placement between 2010 and 2016. We divided the patient cohort into those who underwent sequential (conventional) SBS placement between 2010 and 2014 (sequential group) and those who underwent simultaneous SBS placement between 2015 and 2016 (simultaneous group), and compared the groups with respect to the clinical outcomes. The technical success rates were 71% (12/17) and 100% (17/17) in the sequential and simultaneous groups, respectively, a difference that was significant (P = .045). The median procedure time was significantly shorter in the simultaneous group (22 min) than in the sequential group (52 min) (P = .017). There were no significant group differences in the time to recurrent biliary obstruction (sequential group: 113 days; simultaneous group: 140 days) or other adverse event rates (sequential group: 12%; simultaneous group: 12%). Simultaneous SBS placement using the novel 5.7-Fr SEMS delivery system may be more straightforward and have a higher success rate compared to that with sequential SBS placement. This new method may be useful for bilateral stenting to treat MHBOs.

  7. Volume-based predictive biomarkers of sequential FDG-PET/CT for sunitinib in cancer of unknown primary: identification of the best benefited patients

    International Nuclear Information System (INIS)

    Ma, Yifei; Xu, Wei; Xiao, Jianru; Bai, Ruojing; Li, Yiming; Yu, Hongyu; Yang, Chunshan; Shi, Huazheng; Zhang, Jian; Li, Jidong; Wang, Chenguang

    2017-01-01

    To test the performance of sequential 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) in predicting survival after sunitinib therapies in patients with cancer of unknown primary (CUP). CUP patients were enrolled for sequential PET/CT scanning for sunitinib and a control group. Univariate and multivariate analysis were applied to test the efficacy of sunitinib therapy in CUP patients. Next, sequential analyses involving PET/CT parameters were performed to identify and validate sensitive PET/CT biomarkers for sunitinib therapy. Finally, time-dependent receiver operating characteristic (TDROC) analyses were performed to compare the predictive accuracy. Multivariate analysis proved that sunitinib group had significantly improved survival (p < 0.01) as compared to control group. After cycle 2 of therapy, multivariate analysis identified volume-based PET/CT parameters as sensitive biomarkers for sunitinib (p < 0.01). TDROC curves demonstrated whole-body total lesion glycolysis reduction (Δ WTLG) and follow-up WTLG to have good accuracy for efficacy prediction. This evidence was validated after cycle 4 of therapy with the same method. Sunitinib therapy proved effective in treatment of CUP. PET/CT volume-based parameters may help predict outcome of sunitinib therapy, in which Δ WTLG and follow-up WTLG seem to be sensitive biomarkers for sunitinib efficacy. Patients with greater reduction and lower WTLG at follow-up seem to have better survival outcome. (orig.)
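
At a fixed horizon t, and ignoring censoring, a time-dependent ROC analysis reduces to an ordinary rank-based AUC over patients who did or did not have the event by t. A toy sketch (the marker values and survival times below are invented):

```python
def auc_at_time(marker, event_time, t):
    """Rank-based AUC for predicting an event by time t from a marker
    (higher marker = higher risk). Censoring is ignored in this toy sketch;
    real TDROC methods weight for censoring."""
    cases = [m for m, et in zip(marker, event_time) if et <= t]
    controls = [m for m, et in zip(marker, event_time) if et > t]
    pairs = concordant = 0.0
    for case in cases:
        for control in controls:
            pairs += 1
            if case > control:
                concordant += 1
            elif case == control:
                concordant += 0.5
    return concordant / pairs

# Hypothetical follow-up WTLG values and survival times (months):
# higher residual WTLG should predict earlier death.
wtlg = [120, 95, 60, 40, 30, 10]
surv = [8, 12, 20, 30, 40, 60]
print(auc_at_time(wtlg, surv, t=24))
```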

  9. Bidding in sequential electricity markets: The Nordic case

    DEFF Research Database (Denmark)

    Boomsma, Trine Krogh; Juul, Nina; Fleten, Stein-Erik

    2014-01-01

    For electricity market participants trading in sequential markets with differences in price levels and risk exposure, coordinated bidding is highly relevant. We consider a Nordic power producer who engages in the day-ahead spot market and the near real-time balancing market. In both markets, clearing prices and dispatched volumes are unknown at the time of bidding. However, in the balancing market, the agent faces an additional risk of not being dispatched. Taking into account the sequential clearing of these markets and the gradual realization of market prices, we formulate the bidding problem as a multi-stage stochastic program. We investigate whether higher risk exposure can explain the hesitation, often observed in practice, to bid into the balancing market, even in cases of higher expected price levels. Furthermore, we quantify the gain from coordinated bidding, and by deriving...

  10. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

    A medium-speed (approximately 1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).
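
The look-up-table idea — precompute every acceptable hit pattern once, then reduce event selection to a single memory read — can be sketched as follows (the wire count and the straight-track acceptance rule are hypothetical, not the Omicron geometry):

```python
N_WIRES = 16  # hypothetical wires per plane

def pattern_address(hits):
    """Pack one hit index per plane (4 planes, 4 bits each) into a RAM address."""
    addr = 0
    for h in hits:
        addr = (addr << 4) | h
    return addr

# One byte per possible 4-plane pattern, playing the role of the RAM.
ram = bytearray(N_WIRES ** 4)

# Precompute acceptance: roughly straight tracks, i.e. wire indices that
# change by at most one unit from plane to plane.
for w in range(N_WIRES):
    for d1 in (-1, 0, 1):
        for d2 in (-1, 0, 1):
            for d3 in (-1, 0, 1):
                p = [w, w + d1, w + d1 + d2, w + d1 + d2 + d3]
                if all(0 <= x < N_WIRES for x in p):
                    ram[pattern_address(p)] = 1

# Event selection is now a single indexed read per candidate pattern.
print(ram[pattern_address([5, 5, 6, 6])])   # 1: accepted
print(ram[pattern_address([2, 9, 3, 14])])  # 0: rejected
```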

  11. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Directory of Open Access Journals (Sweden)

    Wei Wang

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable across architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than a 150x speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x over the sequential implementation and 30x over a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to parallelizing 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other
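
The kernel being parallelized in such work is a plain 2D stencil sweep. A minimal pure-Python sketch of one leapfrog update of the wave equation u_tt = c²(u_xx + u_yy) (grid size and coefficients are illustrative; in a C version, the loop nest below is the kind that would carry an OpenACC directive such as `#pragma acc parallel loop`):

```python
N, c, dt, dx = 32, 1.0, 0.1, 1.0
r2 = (c * dt / dx) ** 2  # Courant number squared; 0.01 satisfies the CFL limit

u = [[0.0] * N for _ in range(N)]
u[N // 2][N // 2] = 1.0           # point disturbance, zero initial velocity
u_prev = [row[:] for row in u]

def step(u, u_prev):
    """One leapfrog update on the interior; boundaries stay fixed at zero."""
    u_next = [row[:] for row in u]
    for i in range(1, N - 1):      # this loop nest is what gets offloaded
        for j in range(1, N - 1):
            lap = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                   - 4 * u[i][j])
            u_next[i][j] = 2 * u[i][j] - u_prev[i][j] + r2 * lap
    return u_next, u

for _ in range(10):
    u, u_prev = step(u, u_prev)
print(max(max(row) for row in u))  # peak amplitude after 10 steps
```

Each grid point's update depends only on the previous two time levels, so all (i, j) iterations of one step are independent; that independence is what lets OpenACC, OpenCL, and OpenMP all parallelize the same loop structure.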

  12. General Atomic HTGR fuel reprocessing pilot plant: results of initial sequential equipment operation

    International Nuclear Information System (INIS)

    1978-09-01

    In September 1977, the processing of 20 large high-temperature gas-cooled reactor (LHTGR) fuel elements was completed sequentially through the head-end cold pilot plant equipment. This report gives a brief description of the equipment and summarizes the results of the sequential operation of the pilot plant. 32 figures, 15 tables

  13. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Guiseppe, Cavaliere; Rahbæk, Anders; Taylor, A.M. Robert

    ...with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense ... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent co-integration rank estimation for general I(1) processes. Finite-sample Monte Carlo simulations show the proposed procedure performs well in practice.
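
The sequential logic — test rank r = 0, 1, … with bootstrap p-values and stop at the first non-rejection — can be illustrated on a toy stand-in problem: deciding how many columns of a data matrix have nonzero mean. This replaces the Johansen-type trace tests of the paper, which are beyond a short sketch, but the stopping rule and the bootstrap-under-the-null construction follow the same pattern:

```python
import random
random.seed(0)

def mean(c):
    return sum(c) / len(c)

def stat_beyond(cols, r):
    """Trace-like statistic: evidence of nonzero mean among the columns
    ranked after position r (columns sorted by |mean|, largest first)."""
    order = sorted(range(len(cols)), key=lambda j: -abs(mean(cols[j])))
    return sum(len(cols[j]) * mean(cols[j]) ** 2 for j in order[r:])

def null_resample(cols, r):
    """Bootstrap under H0 'rank <= r': keep the r strongest columns as they
    are, recentre the rest, then resample rows with replacement."""
    n = len(cols[0])
    order = sorted(range(len(cols)), key=lambda j: -abs(mean(cols[j])))
    keep = set(order[:r])
    prepared = [c[:] if j in keep else [x - mean(c) for x in c]
                for j, c in enumerate(cols)]
    idx = [random.randrange(n) for _ in range(n)]
    return [[c[i] for i in idx] for c in prepared]

def sequential_rank(cols, alpha=0.05, B=199):
    """Return the first r whose bootstrap p-value exceeds alpha."""
    for r in range(len(cols)):
        s0 = stat_beyond(cols, r)
        exceed = sum(stat_beyond(null_resample(cols, r), r) >= s0
                     for _ in range(B))
        if (1 + exceed) / (B + 1) > alpha:
            return r
    return len(cols)

# Toy data: three columns, two with clearly nonzero mean ("rank" 2).
cols = [[1.5 + e for e in (-0.5, 0.5) * 20],
        [1.2 + e for e in (-0.5, 0.5) * 20],
        [e for e in (-0.5, 0.5) * 20]]
print(sequential_rank(cols))
```

The consistency property discussed in the abstract is exactly about this loop: each bootstrap test in the sequence must have the right size and power asymptotically so that the first non-rejection lands on the true rank.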

  14. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  15. Evaluation of RAPID for a UNF cask benchmark problem

    Science.gov (United States)

    Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.

    2017-09-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on the analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on one core versus 9.5 days on 16 cores.

  16. Comparison of human embryomorphokinetic parameters in sequential or global culture media.

    Science.gov (United States)

    Kazdar, Nadia; Brugnon, Florence; Bouche, Cyril; Jouve, Guilhem; Veau, Ségolène; Drapier, Hortense; Rousseau, Chloé; Pimentel, Céline; Viard, Patricia; Belaud-Rotureau, Marc-Antoine; Ravel, Célia

    2017-08-01

    A prospective study of randomized patients was conducted to determine how morphokinetic parameters are altered in embryos grown in sequential versus global culture media. Eleven morphokinetic parameters of 160 single embryos transferred were analyzed by time-lapse imaging at two university-affiliated in vitro fertilization (IVF) centers. We found that the fading of the two pronuclei occurred earlier in global (22.56±2.15 hpi) versus sequential media (23.63±2.71 hpi; p=0.0297). Likewise, the first cleavage started earlier, at 24.52±2.33 hpi vs 25.76±2.95 hpi (p=0.0158). Also, the first cytokinesis was shorter in global medium, lasting 18±10.2 minutes in global versus 36±37.8 minutes in sequential culture medium (p ...). Our study highlights the need to adapt morphokinetic analysis according to the type of media used to best support human early embryo development.

  17. Probing finite coarse-grained virtual Feynman histories with sequential weak values

    Science.gov (United States)

    Georgiev, Danko; Cohen, Eliahu

    2018-05-01

    Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
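
In the usual two-state-vector notation (pre-selection |ψ⟩, post-selection |φ⟩), the sequential weak value of a chain of projectors and the completeness property the abstract invokes can be written as:

```latex
% Sequential weak value of a chain of projectors \Pi_{a_1},\dots,\Pi_{a_k}
% measured weakly at times t_1 < \dots < t_k:
(\Pi_{a_k},\dots,\Pi_{a_1})_w
  = \frac{\langle\phi|\,\Pi_{a_k}\cdots\Pi_{a_1}\,|\psi\rangle}
         {\langle\phi|\psi\rangle}

% Completeness at each intermediate time, \sum_{a_i}\Pi_{a_i} = \mathbb{1},
% gives the unit total sum over a complete set of orthogonal histories:
\sum_{a_1,\dots,a_k} (\Pi_{a_k},\dots,\Pi_{a_1})_w = 1
```

The second identity follows directly by summing the numerator over each index in turn, which collapses the product of projectors to the identity and leaves ⟨φ|ψ⟩/⟨φ|ψ⟩ = 1.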

  18. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
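
The sequential likelihood ratio test with two stopping thresholds is Wald's SPRT. A minimal sketch on photon interarrival times, assuming exponential interarrivals with different rates under the target and background hypotheses (the rates and error levels below are illustrative, not taken from the patent):

```python
import math

def wald_sprt(interarrivals, lam_target=5.0, lam_bg=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate the per-event log-likelihood ratio of
    exponential interarrival models until it crosses a decision threshold."""
    upper = math.log((1 - beta) / alpha)   # cross -> declare target present
    lower = math.log(beta / (1 - alpha))   # cross -> declare background only
    llr = 0.0
    for n, dt in enumerate(interarrivals, 1):
        # log f(dt | target) - log f(dt | background) for exponential rates
        llr += math.log(lam_target / lam_bg) - (lam_target - lam_bg) * dt
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "background", n
    return "undecided", len(interarrivals)

print(wald_sprt([0.2] * 50))  # fast arrivals -> ("target", 6)
print(wald_sprt([1.0] * 50))  # slow arrivals -> ("background", 2)
```

The appeal for low-count detection is visible in the stopping times: a clear source is declared after a handful of events rather than after a fixed counting interval, while Wald's thresholds bound the false-alarm and miss probabilities by roughly α and β.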

  19. Sequential activation of CD8+ T cells in the draining lymph nodes in response to pulmonary virus infection.

    Science.gov (United States)

    Yoon, Heesik; Legge, Kevin L; Sung, Sun-sang J; Braciale, Thomas J

    2007-07-01

    We have used a TCR-transgenic CD8+ T cell adoptive transfer model to examine the tempo of T cell activation and proliferation in the draining lymph nodes (DLN) in response to respiratory virus infection. The T cell response in the DLN differed for mice infected with different type A influenza strains with the onset of T cell activation/proliferation to the A/JAPAN virus infection preceding the A/PR8 response by 12-24 h. This difference in T cell activation/proliferation correlated with the tempo of accelerated respiratory DC (RDC) migration from the infected lungs to the DLN in response to influenza virus infection, with the migrant RDC responding to the A/JAPAN infection exhibiting a more rapid accumulation in the lymph nodes (i.e., peak migration for A/JAPAN at 18 h, A/PR8 at 24-36 h). Furthermore, in vivo administration of blocking anti-CD62L Ab at various time points before/after infection revealed that the virus-specific CD8+ T cells entered the DLN and activated in a sequential "conveyor belt"-like fashion. These results indicate that the tempo of CD8+ T cell activation/proliferation after viral infection is dependent on the tempo of RDC migration to the DLN and that T cell activation occurs in an ordered sequential fashion.

  20. Sequential accelerated tests: Improving the correlation of accelerated tests to module performance in the field

    Science.gov (United States)

    Felder, Thomas; Gambogi, William; Stika, Katherine; Yu, Bao-Ling; Bradley, Alex; Hu, Hongjie; Garreau-Iles, Lucie; Trout, T. John

    2016-09-01

    DuPont has been working steadily to develop accelerated backsheet tests that correlate with observations of solar panels in the field. This report updates efforts in sequential testing. Single-exposure tests are more commonly used and can be completed more quickly, and certain tests provide helpful predictions of certain backsheet failure modes. DuPont recommendations for single-exposure tests are based on 25-year exposure levels for UV and humidity/temperature, and form a good basis for sequential test development. We recommend a sequential exposure of damp heat followed by UV, then repetitions of thermal cycling and UVA. This sequence preserves 25-year exposure levels for humidity/temperature and UV, and correlates well with a large body of field observations. Measurements can be taken at intervals in the test, although the full test runs 10 months. A second, shorter sequential test based on damp heat and thermal cycling tests mechanical durability and correlates with the loss of mechanical properties seen in the field. Ongoing work is directed toward shorter sequential tests that preserve good correlation to field data.