WorldWideScience

Sample records for rapid sequential computed

  1. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  2. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games…
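
    At the heart of this line of work is a linear-programming formulation of minimax play. As a much-simplified illustration, the sketch below computes a minimax strategy for a normal-form zero-sum game with SciPy; the Koller-Megiddo-von Stengel algorithm instead operates on the far more compact sequence form of the extensive-form game, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_strategy(A):
    """Row player's maximin strategy for the zero-sum game with payoff matrix A.

    Solve: maximize v subject to A^T x >= v (componentwise), sum(x) = 1, x >= 0.
    linprog minimizes, so we minimize -v over the stacked variable (x, v).
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # -A^T x + v <= 0, per column
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                             # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the unique minimax strategy is uniform, with value 0.
rps = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x, v = minimax_strategy(rps)
```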

  3. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial…

  4. Effects of sequential and discrete rapid naming on reading in Japanese children with reading difficulty.

    Science.gov (United States)

    Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi

    2011-06-01

    To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how deep an influence the discrete decoding process has on reading, we performed discrete naming tasks and discrete hiragana reading tasks, as well as sequential naming tasks and sequential hiragana reading tasks, with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming, but no correlation was found between reading tasks and discrete naming tasks. The influence of rapid naming ability for objects and colors upon reading therefore seemed relatively small, and multi-item processing may play a role here. In contrast, in the digit naming task there was moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was moderate correlation between reading tasks and sequential digit naming tasks. Rapid naming ability for digits has a more direct effect on reading, while its effect on RAN is relatively limited. The extent to which rapid naming ability influences RAN and reading thus seems to vary with the kind of stimuli used. An assumption about the components of RAN which influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  5. Sequential decision making in computational sustainability via adaptive submodularity

    Science.gov (United States)

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: first, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios; second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
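
    The simple myopic policies referred to here are greedy rules. As a hedged illustration, the classical non-adaptive analogue is greedy maximization of a monotone submodular coverage objective, which carries the (1 - 1/e) approximation guarantee; the adaptive variant studied in the article additionally conditions each pick on the observations made so far. The monitoring scenario and names below are illustrative only.

```python
def greedy_max_cover(sets, k):
    """Greedily pick up to k named sets, maximizing the number of covered elements.

    Coverage is monotone submodular, so this myopic policy is guaranteed
    to reach at least a (1 - 1/e) fraction of the optimal coverage.
    """
    chosen, covered = [], set()
    for _ in range(k):
        gains = {name: len(elems - covered)
                 for name, elems in sets.items() if name not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break                       # nothing left to gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy monitoring scenario: which two "sensor sites" cover the most habitat cells?
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
chosen, covered = greedy_max_cover(sites, k=2)
```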

  6. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that, to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.

  7. Precise Sequential DNA Ligation on A Solid Substrate: Solid-Based Rapid Sequential Ligation of Multiple DNA Molecules

    Science.gov (United States)

    Takita, Eiji; Kohda, Katsunori; Tomatsu, Hajime; Hanano, Shigeru; Moriya, Kanami; Hosouchi, Tsutomu; Sakurai, Nozomu; Suzuki, Hideyuki; Shinmyo, Atsuhiko; Shibata, Daisuke

    2013-01-01

    Ligation, the joining of DNA fragments, is a fundamental procedure in molecular cloning and is indispensable to the production of genetically modified organisms that can be used for basic research, the applied biosciences, or both. Given that many genes cooperate in various pathways, incorporating multiple gene cassettes in tandem in a transgenic DNA construct for the purpose of genetic modification is often necessary when generating organisms that produce multiple foreign gene products. Here, we describe a novel method, designated PRESSO (precise sequential DNA ligation on a solid substrate), for the tandem ligation of multiple DNA fragments. We amplified donor DNA fragments with non-palindromic ends, and ligated the fragment to acceptor DNA fragments on solid beads. After the final donor DNA fragments, which included vector sequences, were joined to the construct that contained the array of fragments, the ligation product (the construct) was thereby released from the beads via digestion with a rare-cut meganuclease; the freed linear construct was circularized via an intra-molecular ligation. PRESSO allowed us to rapidly and efficiently join multiple genes in an optimized order and orientation. This method can overcome many technical challenges in functional genomics during the post-sequencing generation. PMID:23897972

  8. Temporal characteristics of radiologists’ and novices’ lesion detection in viewing medical images presented rapidly and sequentially

    Directory of Open Access Journals (Sweden)

    Ryoichi Nakashima

    2016-10-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention; rather, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal marking the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.

  9. Computation of Stackelberg Equilibria of Finite Sequential Games

    DEFF Research Database (Denmark)

    Bosanski, Branislav; Branzei, Simina; Hansen, Kristoffer Arnsfelt

    2015-01-01

    The Stackelberg equilibrium is a solution concept that describes optimal strategies to commit to: Player 1 (the leader) first commits to a strategy that is publicly announced, then Player 2 (the follower) plays a best response to the leader's choice. We study Stackelberg equilibria in finite sequential (i.e., extensive-form) games and provide new exact algorithms, approximate algorithms, and hardness results for finding equilibria for several classes of such two-player games…
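
    For intuition, the simplest special case — leader and follower both restricted to pure strategies in a one-shot bimatrix game — can be solved by brute-force enumeration: try each leader commitment and let the follower best-respond, breaking ties in the leader's favor (the usual "strong" Stackelberg convention). This toy sketch ignores mixed commitments and the extensive-form structure the paper actually treats.

```python
def pure_stackelberg(leader_payoff, follower_payoff):
    """Best pure commitment for the leader in a bimatrix game.

    leader_payoff[i][j], follower_payoff[i][j]: payoffs when the leader
    plays i and the follower plays j. Returns (leader value, action i).
    """
    best = None
    for i, row in enumerate(leader_payoff):
        fmax = max(follower_payoff[i])
        best_responses = [j for j, u in enumerate(follower_payoff[i]) if u == fmax]
        value = max(row[j] for j in best_responses)  # ties broken for the leader
        if best is None or value > best[0]:
            best = (value, i)
    return best

# Committing to the second row yields the leader 3 instead of 2.
leader = [[2, 4], [1, 3]]
follower = [[1, 0], [0, 1]]
value, action = pure_stackelberg(leader, follower)
```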

  10. Time sequential single photon emission computed tomography studies in brain tumour using thallium-201

    International Nuclear Information System (INIS)

    Ueda, Takashi; Kaji, Yasuhiro; Wakisaka, Shinichiro; Watanabe, Katsushi; Hoshi, Hiroaki; Jinnouchi, Seishi; Futami, Shigemi

    1993-01-01

    Time sequential single photon emission computed tomography (SPECT) studies using thallium-201 were performed in 25 patients with brain tumours to evaluate the kinetics of thallium in the tumour and the biological malignancy grade preoperatively. After acquisition and reconstruction of SPECT data from 1 min post injection to 48 h (1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 15-20 min, followed by 4-6, 24 and 48 h), the thallium uptake ratio in the tumour versus the homologous contralateral area of the brain was calculated and compared with findings of X-ray CT, magnetic resonance imaging, cerebral angiography and histological investigations. Early uptake of thallium in tumours was related to tumour vascularity and the disruption of the blood-brain barrier. High and rapid uptake and slow reduction of thallium indicated a hypervascular malignant tumour; however, high and rapid uptake but rapid reduction of thallium indicated a hypervascular benign tumour, such as meningioma. Hypovascular and benign tumours tended to show low uptake and slow reduction of thallium. Long-lasting retention or uptake of thallium indicates tumour malignancy. (orig.)

  11. Rapid Sequential in Situ Multiplexing with DNA Exchange Imaging in Neuronal Cells and Tissues.

    Science.gov (United States)

    Wang, Yu; Woehrstein, Johannes B; Donoghue, Noah; Dai, Mingjie; Avendaño, Maier S; Schackmann, Ron C J; Zoeller, Jason J; Wang, Shan Shan H; Tillberg, Paul W; Park, Demian; Lapan, Sylvain W; Boyden, Edward S; Brugge, Joan S; Kaeser, Pascal S; Church, George M; Agasti, Sarit S; Jungmann, Ralf; Yin, Peng

    2017-10-11

    To decipher the molecular mechanisms of biological function, it is critical to map the molecular composition of individual cells or even more importantly tissue samples in the context of their biological environment in situ. Immunofluorescence (IF) provides specific labeling for molecular profiling. However, conventional IF methods have finite multiplexing capabilities due to spectral overlap of the fluorophores. Various sequential imaging methods have been developed to circumvent this spectral limit but are not widely adopted due to the common limitation of requiring multiple rounds of slow (typically over 2 h at room temperature to overnight at 4 °C in practice) immunostaining. We present here a practical and robust method, which we call DNA Exchange Imaging (DEI), for rapid in situ spectrally unlimited multiplexing. This technique overcomes speed restrictions by allowing for single-round immunostaining with DNA-barcoded antibodies, followed by rapid (less than 10 min) buffer exchange of fluorophore-bearing DNA imager strands. The programmability of DEI allows us to apply it to diverse microscopy platforms (with Exchange Confocal, Exchange-SIM, Exchange-STED, and Exchange-PAINT demonstrated here) at multiple desired resolution scales (from ∼300 nm down to sub-20 nm). We optimized and validated the use of DEI in complex biological samples, including primary neuron cultures and tissue sections. These results collectively suggest DNA exchange as a versatile, practical platform for rapid, highly multiplexed in situ imaging, potentially enabling new applications ranging from basic science to drug discovery and clinical pathology.

  12. Foreword to Special Issue on "The Difference between Concurrent and Sequential Computation'' of Mathematical Structures

    DEFF Research Database (Denmark)

    Aceto, Luca; Longo, Giuseppe; Victor, Björn

    2003-01-01

    …tarpit’, and argued that some of the most crucial distinctions in computing methodology, such as sequential versus parallel, deterministic versus non-deterministic, and local versus distributed, disappear if all one sees in computation is pure symbol pushing. How can we express formally the difference between…

  13. Relative resilience to noise of standard and sequential approaches to measurement-based quantum computation

    Science.gov (United States)

    Gallagher, C. B.; Ferraro, A.

    2018-05-01

    A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.

  14. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  15. Catalysis of Silver catfish Major Hepatic Glutathione Transferase proceeds via rapid equilibrium sequential random Mechanism

    Directory of Open Access Journals (Sweden)

    Ayodele O. Kolawole

    Fish hepatic glutathione transferases are connected with the elimination of intracellular pollutants and the detoxification of organic micro-pollutants in the aquatic ecosystem. The two-substrate steady-state kinetic mechanism of the major hepatic glutathione transferase of silver catfish (Synodontis eupterus), purified to apparent homogeneity, was explored. The enzyme is a dimer with a subunit size of 25.6 kDa. Initial-velocity studies and product inhibition patterns by methyl glutathione and chloride with respect to GSH-CDNB, GSH-ρ-nitrophenylacetate and GSH-ethacrynic acid all conform to a rapid equilibrium sequential random Bi Bi kinetic mechanism rather than a steady-state sequential random Bi Bi mechanism; α was 2.96 ± 0.35 for the model. The pH profile of Vmax/KM (with saturating 1-chloro-2,4-dinitrobenzene and variable GSH concentrations) showed apparent pKa values of 6.88 and 9.86. Inhibition studies as a function of inhibitor concentration show that the enzyme is a homodimeric, near-neutral GST. The enzyme poorly conjugates 4-hydroxynonenal and cumene hydroperoxide and may not be involved in oxidative stress protection. The seGST is unique and overwhelmingly shows characteristics similar to those of homodimeric class Pi GSTs, as indicated by its kinetic mechanism, substrate specificity and inhibition studies. The rate-limiting step of the reaction, probably product release, is viscosity-dependent for both macro-viscosogens and micro-viscosogens. Keywords: Silver catfish, Glutathione transferase, Steady-state, Kinetic mechanism, Inhibition

  16. A Computer Program for Simplifying Incompletely Specified Sequential Machines Using the Paull and Unger Technique

    Science.gov (United States)

    Ebersole, M. M.; Lecoq, P. E.

    1968-01-01

    This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process, together with a description of the computer algorithms used to develop the program mechanization, is presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and user operating procedures are included as appendixes.
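
    The first step of the Paull and Unger process is finding all compatible state pairs: two states are compatible if their outputs agree wherever both are specified and every pair of next states they imply is itself compatible. A minimal sketch of that step follows (the machine encoding and names are illustrative; the full process also builds maximal compatibles and a minimal closed cover, which this sketch omits).

```python
from itertools import combinations

def compatible_pairs(states, inputs, next_state, output):
    """Compatible state pairs of an incompletely specified machine.

    next_state[s][i] and output[s][i] may be None where unspecified.
    Two states are compatible if specified outputs never conflict and
    every implied pair of next states is itself compatible.
    """
    def outputs_conflict(a, b):
        return any(output[a][i] is not None and output[b][i] is not None
                   and output[a][i] != output[b][i] for i in inputs)

    compat = {frozenset(p) for p in combinations(states, 2)
              if not outputs_conflict(*p)}
    changed = True
    while changed:                 # iteratively drop pairs with bad implications
        changed = False
        for pair in list(compat):
            a, b = tuple(pair)
            for i in inputs:
                na, nb = next_state[a][i], next_state[b][i]
                if na is not None and nb is not None and na != nb \
                        and frozenset((na, nb)) not in compat:
                    compat.discard(pair)
                    changed = True
                    break
    return compat

# A tiny 3-state, 2-input machine (illustrative): only A and B are compatible.
states, inputs = ["A", "B", "C"], [0, 1]
next_state = {"A": {0: "B", 1: "C"}, "B": {0: "A", 1: None}, "C": {0: None, 1: "A"}}
output = {"A": {0: "0", 1: None}, "B": {0: "0", 1: "1"}, "C": {0: "1", 1: "1"}}
pairs = compatible_pairs(states, inputs, next_state, output)
```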

  17. Development of computer-aided software engineering tool for sequential control of JT-60U

    International Nuclear Information System (INIS)

    Shimono, M.; Akasaka, H.; Kurihara, K.; Kimura, T.

    1995-01-01

    Discharge sequential control (DSC) is an essential control function for the intermittent and pulsed discharge operation of a tokamak device, so that the many subsystems work with each other in the correct order and/or synchronously. In the development of the DSC program, block diagrams of logical operations for sequential control are first drawn at the design stage. Then the logical operators and I/Os involved in the block diagrams are compiled and converted to a particular form. Since the block diagrams for sequential control amount to about 50 sheets in the case of a JT-60 upgrade tokamak (JT-60U) high-power discharge, and since these development steps have so far been performed manually, a great effort has been required for program development. In order to remove inefficiency from these development processes, a computer-aided software engineering (CASE) tool has been developed on a UNIX workstation. This paper reports how the authors designed it for the development of sequential control programs. The tool is composed of three parts: (1) an automatic drawing tool, (2) an editing tool, and (3) a trace tool. This CASE tool, an object-oriented programming tool with graphical formalism, can greatly accelerate the development cycle for the sequential control function commonly associated with pulsed discharge in a tokamak fusion device

  18. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming with two functional inputs
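
    Sequential bifurcation itself is easy to sketch: treat the inputs in groups, measure a group's aggregate effect with runs where all its members are switched high versus a baseline, and recursively split only the groups that show an effect. The sketch below assumes the textbook conditions (additive, non-negative effects) and uses illustrative names; it does not attempt the piecewise-constant functional-input construction of the paper.

```python
def sequential_bifurcation(f, n, threshold=1e-9):
    """Screen for influential inputs of f: {0,1}^n -> R by recursive group splitting.

    Textbook assumptions: effects are additive and non-negative, so a group
    whose joint effect is (near) zero can be discarded wholesale.
    """
    baseline = f([0] * n)

    def group_effect(group):
        hi = [1 if i in group else 0 for i in range(n)]
        return f(hi) - baseline

    influential, stack = [], [list(range(n))]
    while stack:
        group = stack.pop()
        if group_effect(set(group)) <= threshold:
            continue                      # whole group inert: discard
        if len(group) == 1:
            influential.append(group[0])
        else:
            mid = len(group) // 2
            stack += [group[:mid], group[mid:]]
    return sorted(influential)

# Illustrative response with two active inputs out of ten.
response = lambda x: 3 * x[2] + 5 * x[7]
active = sequential_bifurcation(response, 10)
```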

  19. Micro-computer based control system and its software for carrying out the sequential acceleration on SMCAMS

    International Nuclear Information System (INIS)

    Li Deming

    2001-01-01

    A micro-computer based control system and its software for carrying out sequential acceleration on SMCAMS are described, together with the establishment of the 14C particle measuring device and the improvement of the original power supply system

  20. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    Science.gov (United States)

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  1. Computer-assisted sequential quantitative analysis of gallium scans in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Rohatgi, P.K.; Bates, H.R.; Noss, R.W.

    1985-01-01

    Fifty-one sequential gallium citrate scans were performed in 22 patients with biopsy-proven sarcoidosis. A computer-assisted quantitative analysis of these scans was performed to obtain a gallium score. The changes in gallium score were correlated with changes in serum angiotensin converting enzyme (SACE) activity and objective changes in clinical status. There was a good concordance between changes in gallium score, SACE activity and clinical assessment in patients with sarcoidosis, and changes in gallium index were slightly superior to SACE index in assessing activity of sarcoidosis. (author)

  2. Computationally designed libraries for rapid enzyme stabilization

    NARCIS (Netherlands)

    Wijma, Hein J.; Floor, Robert J.; Jekel, Peter A.; Baker, David; Marrink, Siewert J.; Janssen, Dick B.

    The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants…

  3. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
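
    The Jacobi method named above takes only a few lines: split A into its diagonal D and off-diagonal remainder R, then iterate x ← D⁻¹(b − Rx) until successive iterates agree. A minimal dense NumPy sketch (sequential only; the paper's CUDA versions and the sparse case are not attempted here):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration for Ax = b; converges e.g. for diagonally dominant A."""
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# 2x2 diagonally dominant system with solution x = (1, 2).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 12.0])
x = jacobi(A, b)
```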

  4. More power : Accelerating sequential Computer Vision algorithms using commodity parallel hardware

    NARCIS (Netherlands)

    Jaap van de Loosdrecht; K. Dijkstra

    2014-01-01

    The last decade has seen an increasing demand from the industrial field for computerized visual inspection. Applications rapidly become more complex, often with more demanding real-time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer…

  5. "Ultra-rapid" sequential treatment in cholecystocholedocholithiasis: alternative same-day approach to laparoendoscopic rendezvous.

    Science.gov (United States)

    Borreca, Dario; Bona, Alberto; Bellomo, Maria Paola; Borasi, Andrea; De Paolis, Paolo

    2015-12-01

    There is still no consensus about the timing of laparoscopic cholecystectomy after endoscopic retrograde cholangiopancreatography in the treatment of cholecystocholedocholithiasis. The aim of our retrospective study is to analyze the optimal timing of surgical treatment in patients presenting with concurrent choledocholithiasis, choosing to perform a sequential endoscopic plus surgical approach and introducing a same-day two-stage alternative. All cases of cholecystocholedocholithiasis occurring between January 2007 and December 2014 in "Gradenigo" Hospital (Turin, Italy) were reviewed. Patients were divided into three groups, based on the timing of cholecystectomy after endoscopic retrograde cholangiopancreatography, and compared. Out of 2233 cholecystectomies performed in the mentioned time interval, 93 patients were identified who fulfilled the selection criteria. 36 patients were treated with a same-day approach, while 29 underwent surgery within the first 72 h and 28 had delayed surgery. The overall length of stay was significantly lower in patients treated with a same-day approach (4.7 days) compared with the other groups (p = 0.001), while no significant differences were found in terms of length of surgical intervention, intraoperative complications and conversions to open procedure, postoperative stay, morbidity and mortality. Patients treated with delayed surgery had an 18 % recurrence rate of biliary events, with an odds ratio of 14.13 (p = 0.018). The same-day two-stage approach should be performed in suitable patients at the index admission, reducing overall risks, improving patients' quality of life, preventing recurrence and leading to a significant cost reduction; furthermore, this approach achieves the same outcomes as laparoendoscopic rendezvous, while avoiding technical and organizational troubles.

  6. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files that are backed up every iteration; user-defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
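The likelihood-free idea behind astroABC can be illustrated with a minimal ABC rejection sampler, the building block that an ABC-SMC sampler refines by propagating a weighted particle population through a shrinking sequence of tolerance levels. This is a generic sketch, not astroABC's API; the Gaussian forward model, the mean-based summary statistic, and all parameter values are illustrative assumptions.

```python
import random
import statistics

def simulate(theta, rng, n=200):
    # Forward-model stand-in: n draws from N(theta, 1). In a cosmological
    # application this would be a full simulation of the data, systematics included.
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def distance(a, b):
    # Distance between summary statistics (here: sample means).
    return abs(statistics.mean(a) - statistics.mean(b))

def abc_rejection(observed, prior_draw, epsilon, n_accept, rng):
    """Keep prior draws whose simulated data fall within epsilon of the
    observed data; the likelihood is never evaluated."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), observed) < epsilon:
            accepted.append(theta)
    return accepted

rng = random.Random(42)
observed = simulate(3.0, rng)  # pretend data generated at theta = 3.0
posterior = abc_rejection(observed, lambda r: r.uniform(0.0, 6.0),
                          epsilon=0.2, n_accept=100, rng=rng)
print(round(statistics.mean(posterior), 1))  # concentrates near 3.0
```

An SMC version repeats this with a decreasing epsilon schedule, reweighting and perturbing the accepted particles at each stage instead of drawing from the prior anew.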

  7. Computer programmes for the control and data manipulation of a sequential x-ray-fluorescence spectrometer

    International Nuclear Information System (INIS)

    Spimpolo, G.F.

    1984-01-01

Two computer programmes have been written for use with a fully automated Siemens SRS200 sequential X-ray-fluorescence spectrometer. The first is used to control the spectrometer via an LC200 logic controller using a Data General Nova IV minicomputer; the second is used for the on-line evaluation of the intensity results and the printout of the analytical results. This system is an alternative to the systems offered by Siemens Ltd, which consist of a Process PR310 or Digital DEC PDP1103 computer and the Siemens Spectra 310 software package. The multibatch capabilities of the programmes, with the option of measuring one sample or a tray of samples before the results are calculated, give the new programmes a major advantage over the dedicated software and, together with the elimination of human error in calculation, have resulted in increased efficiency and quality in routine analyses. A description is given of the two programmes, as well as instructions and guidelines for the user.

  8. Sequential nearest-neighbor effects on computed {sup 13}C{sup {alpha}} chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Vila, Jorge A. [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States); Serrano, Pedro; Wuethrich, Kurt [The Scripps Research Institute, Department of Molecular Biology (United States); Scheraga, Harold A., E-mail: has5@cornell.ed [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States)

    2010-09-15

To evaluate sequential nearest-neighbor effects on quantum-chemical calculations of {sup 13}C{sup {alpha}} chemical shifts, we selected the structure of the nucleic acid binding (NAB) protein from the SARS coronavirus determined by NMR in solution (PDB id 2K87). NAB is a 116-residue {alpha}/{beta} protein, which contains 9 prolines and has 50% of its residues located in loops and turns. Overall, the results presented here show that sizeable nearest-neighbor effects are seen only for residues preceding proline, where Pro introduces an overestimation, on average, of 1.73 ppm in the computed {sup 13}C{sup {alpha}} chemical shifts. A new ensemble of 20 conformers representing the NMR structure of the NAB, which was calculated with an input containing backbone torsion angle constraints derived from the theoretical {sup 13}C{sup {alpha}} chemical shifts as supplementary data to the NOE distance constraints, exhibits very similar topology to, and comparable agreement with the NOE constraints as, the published NMR structure. However, the two structures differ in the patterns of differences between observed and computed {sup 13}C{sup {alpha}} chemical shifts, {Delta}{sub ca,i}, for the individual residues along the sequence. This indicates that the {Delta}{sub ca,i} values for the NAB protein are primarily a consequence of the limited sampling by the bundles of 20 conformers used, as in common practice, to represent the two NMR structures, rather than of local flaws in the structures.

  9. Rapid removal of fine particles from mine water using sequential processes of coagulation and flocculation

    Energy Technology Data Exchange (ETDEWEB)

    Jang, M.; Lee, H.J.; Shim, Y. [Korean Mine Reclamation Corporation MIRECO, Seoul (Republic of Korea)

    2010-07-01

The processes of coagulation and flocculation using high-molecular-weight long-chain polymers were applied to treat mine water containing fine flocs, about 93% of whose total mass was smaller than 3.02 {mu}m, representative of the size distribution of fine particles. Six different combinations of acryl-type anionic flocculants and polyamine-type cationic coagulants were selected for kinetic tests on turbidity removal in mine water. Optimization studies on the types and concentrations of the coagulant and flocculant showed that the highest rate of turbidity removal was obtained with 10 mg L{sup -1} FL-2949 (coagulant) and 12 mg L{sup -1} A333E (flocculant), which was about 14.4 and 866.7 times higher than that obtained with A333E alone and through natural precipitation by gravity, respectively. Under this optimized condition, the turbidity of the mine water was reduced to 0 NTU within 20 min. Zeta potential measurements were conducted to elucidate the removal mechanism of the fine particles; they revealed a strong linear relationship between the removal rate of each coagulant-flocculant pair and the zeta potential difference obtained by subtracting the zeta potential of flocculant-treated mine water from that of coagulant-treated mine water. Accordingly, with an optimized process, coagulation-flocculation using polymers can be advantageous for mine water treatment, because it rapidly removes fine particles and, owing to the short retention time, requires only a small-scale plant.

  10. Rapid removal of fine particles from mine water using sequential processes of coagulation and flocculation.

    Science.gov (United States)

    Jang, Min; Lee, Hyun-Ju; Shim, Yonsik

    2010-04-01

The processes of coagulation and flocculation using high-molecular-weight long-chain polymers were applied to treat mine water containing fine flocs, about 93% of whose total mass was smaller than 3.02 microm, representative of the size distribution of fine particles. Six different combinations of acryl-type anionic flocculants and polyamine-type cationic coagulants were selected for kinetic tests on turbidity removal in mine water. Optimization studies on the types and concentrations of the coagulant and flocculant showed that the highest rate of turbidity removal was obtained with 10 mg L(-1) FL-2949 (coagulant) and 12 mg L(-1) A333E (flocculant), which was about 14.4 and 866.7 times higher than that obtained with A333E alone and through natural precipitation by gravity, respectively. Under this optimized condition, the turbidity of the mine water was reduced to 0 NTU within 20 min. Zeta potential measurements were conducted to elucidate the removal mechanism of the fine particles; they revealed a strong linear relationship between the removal rate of each coagulant-flocculant pair and the zeta potential difference obtained by subtracting the zeta potential of flocculant-treated mine water from that of coagulant-treated mine water. Accordingly, with an optimized process, coagulation-flocculation using polymers can be advantageous for mine water treatment, because it rapidly removes fine particles and, owing to the short retention time, requires only a small-scale plant.

  11. Real sequential evaluation of materials balance data with the computer program PROSA

    International Nuclear Information System (INIS)

    Bicking, U.; Golly, W.; Seifert, R.

    1991-01-01

Material accountancy is an important tool for international nuclear safeguards. The aim is to detect a possible loss of material in a timely manner and with high probability. In this context, a computer program called PROSA (Program for Sequential Analysis of NRTA data) was developed at the Karlsruhe Nuclear Research Center. PROSA is a statistical tool that decides, on the basis of statistical considerations, whether or not a loss of material might have occurred in a given sequence of material balances. The evaluation of the material balance data (MUF values) is carried out with statistical test procedures. In the present PROSA version 4.0, three tests are applied at a time: Page's test, the CUMUF test and the GEMUF test. These three test procedures are the result of several years of research and are considered the most promising with respect to both the probability of detecting possible losses of material and the timeliness of such detection. PROSA version 4.0 is a user-friendly, menu-driven computer program suitable for routine field application. Data input - that is, MUF values and the measurement model - can be performed either from diskette or by keyboard entry. The output indicates whether or not an alarm is raised and can be displayed either numerically or graphically; a convenient graphical output utility is therefore attached to PROSA version 4.0. In this presentation the theoretical concepts implemented in PROSA are explained. Furthermore, the functioning of the program is presented and its performance demonstrated using balance data from a real reprocessing campaign. (J.P.N.)
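The flavor of the sequential tests PROSA applies can be conveyed with a one-sided CUSUM (Page's) test on a series of standardized MUF values: an alarm is raised when the cumulative excess over a reference drift crosses a threshold. This is a textbook sketch, not PROSA's implementation; the reference value k, threshold h, and the example sequences are made-up values.

```python
def page_test(muf, k=0.5, h=4.0):
    """One-sided Page (CUSUM) test: accumulate the excess of standardized
    MUF values over reference drift k, resetting at zero, and alarm when
    the cumulative sum exceeds threshold h."""
    s = 0.0
    for i, x in enumerate(muf):
        s = max(0.0, s + x - k)   # reset at zero; track positive drift only
        if s > h:
            return i              # alarm at balance period i (0-indexed)
    return None                   # no alarm over the whole sequence

in_control = [0.1, -0.3, 0.2, 0.0, -0.1, 0.3, -0.2, 0.1]
with_loss  = [0.1, -0.2, 1.2, 1.5, 1.1, 1.4, 1.3, 1.2]
print(page_test(in_control))  # → None
print(page_test(with_loss))   # → 7
```

The CUMUF test works directly on the cumulative MUF instead; the sequential character is the same: each new balance period updates the statistic and may trigger an alarm immediately.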

  12. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    Science.gov (United States)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

Bayesian sequential simulation (BSS) is a powerful geostatistical technique that has shown significant potential for the assimilation of datasets that are diverse with regard to their spatial resolution and mutual relationships. However, such applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which has so far limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral and superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points, using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type
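The log-linear pooling operator mentioned above is simple to state: the pooled distribution is the weighted geometric mean of the component distributions, renormalized. A minimal sketch with invented two-source probabilities:

```python
import math

def log_linear_pool(dists, weights):
    # Pooled p(s) is proportional to the product of p_i(s)**w_i over a
    # common set of discrete states, then renormalized.
    raw = [math.prod(p[s] ** w for p, w in zip(dists, weights))
           for s in range(len(dists[0]))]
    z = sum(raw)
    return [r / z for r in raw]

# Two data sources give conflicting probabilities for three classes
# (purely invented numbers); weighting the first source 0.7 pulls the
# pooled distribution toward it.
p_source_a = [0.6, 0.3, 0.1]
p_source_b = [0.2, 0.5, 0.3]
pooled = log_linear_pool([p_source_a, p_source_b], [0.7, 0.3])
print([round(p, 3) for p in pooled])  # → [0.469, 0.38, 0.151]
```

Setting all weights to 1 recovers the classical independence assumption (a plain product of the data likelihoods); lowering a weight discounts that data component.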

  13. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs

  14. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
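The core of the IP step can be illustrated on a toy network: given the supports (active reaction sets) of previously identified EMs, find a smallest set of reactions intersecting all of them, i.e. a minimal deletion set whose removal disables every known EM. The brute-force search below is an illustrative stand-in for the integer program, with a made-up four-reaction example; as the abstract notes, if no feasible flux distribution survives the deletion, such a set is a minimal cut set.

```python
from itertools import combinations

def minimal_deletion_set(em_supports, reactions):
    """Brute-force stand-in for the IP step of AILP: the smallest set of
    reactions that intersects (hence disables) every elementary mode found
    so far. Toy scale only; the paper solves this as an integer program."""
    for k in range(1, len(reactions) + 1):
        for cand in combinations(reactions, k):
            if all(set(cand) & em for em in em_supports):
                return set(cand)
    return None

# Supports (active reactions) of three previously identified EMs
# in a hypothetical four-reaction network:
ems = [{"r1", "r2"}, {"r2", "r3"}, {"r3", "r4"}]
cut = minimal_deletion_set(ems, ["r1", "r2", "r3", "r4"])
print(sorted(cut))  # → ['r1', 'r3']
```

In AILP the subsequent LP then searches for a flux distribution avoiding the deleted reactions: if one exists it is a new EM, otherwise the deletion set is reported as an MCS.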

  15. Efficacy of sequential or simultaneous interactive computer-tailored interventions for increasing physical activity and decreasing fat intake.

    Science.gov (United States)

    Vandelanotte, Corneel; De Bourdeaudhuij, Ilse; Sallis, James F; Spittaels, Heleen; Brug, Johannes

    2005-04-01

Little evidence exists about the effectiveness of "interactive" computer-tailored interventions, about the combined effectiveness of tailored interventions on physical activity and diet, or about whether they should be executed sequentially or simultaneously. The purpose of this study was to examine (a) the effectiveness of interactive computer-tailored interventions for increasing physical activity and decreasing fat intake and (b) which intervention mode, sequential or simultaneous, is most effective in behavior change. Participants (N = 771) were randomly assigned to receive (a) the physical activity and fat intake interventions simultaneously at baseline, (b) the physical activity intervention at baseline and the fat intake intervention 3 months later, (c) the fat intake intervention at baseline and the physical activity intervention 3 months later, or (d) a place in the control group. Six months postbaseline, the results showed that the tailored interventions produced significantly higher physical activity scores, F(2, 573) = 11.4; for the physical activity intervention, the simultaneous mode appeared to work better than the sequential mode.

  16. The relative timing between eye and hand rapid sequential pointing is affected by time pressure, but not by advance knowledge

    NARCIS (Netherlands)

    Deconinck, F.; van Polanen, V.; Savelsbergh, G.J.P.; Bennett, S.

    2011-01-01

    The present study examined the effect of timing constraints and advance knowledge on eye-hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were

  17. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  18. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of the processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application

  19. Sequential Injection Method for Rapid and Simultaneous Determination of 236U, 237Np, and Pu Isotopes in Seawater

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Steier, Peter

    2013-01-01

    An automated analytical method implemented in a novel dual-column tandem sequential injection (SI) system was developed for simultaneous determination of 236U, 237Np, 239Pu, and 240Pu in seawater samples. A combination of TEVA and UTEVA extraction chromatography was exploited to separate and purify...... target analytes, whereupon plutonium and neptunium were simultaneously isolated and purified on TEVA, while uranium was collected on UTEVA. The separation behavior of U, Np, and Pu on TEVA–UTEVA columns was investigated in detail in order to achieve high chemical yields and complete purification...

  20. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

The sequential neural-network approximation and orthogonal array (SNAOA) approach was used in this study to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer always remained below one. An orthogonal array was first used to obtain the initial solution set, which was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced by the use of the orthogonal array. In addition, a restart strategy was incorporated into the SNAOA so that the search process has a better opportunity to reach a near-global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors, such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter, were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.
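The role of the orthogonal array, supplying a small, balanced initial training sample, can be sketched with the standard L9(3^4) array: nine runs cover three levels of up to four factors so that every level of every factor appears equally often. The three cooling-design factor names, their level values, and the cost function below are hypothetical placeholders for the study's thermal simulation.

```python
L9 = [  # standard L9(3^4) orthogonal array, levels coded 0/1/2
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Hypothetical level settings for three cooling-design factors
# (arbitrary illustrative units):
flow_rate  = [10.0, 20.0, 30.0]
inlet_diam = [5.0, 7.5, 10.0]
outlet_wid = [2.0, 3.0, 4.0]

def cooling_time(q, d, w):
    # Toy stand-in for the simulated cooling time; the real study runs
    # a thermal simulation of the wafer chamber for each array row.
    return 100.0 / q + (d - 7.5) ** 2 + (w - 3.0) ** 2

runs = []
for row in L9:  # only the first three columns are used for three factors
    q, d, w = flow_rate[row[0]], inlet_diam[row[1]], outlet_wid[row[2]]
    runs.append(((q, d, w), cooling_time(q, d, w)))

best = min(runs, key=lambda r: r[1])
print(best[0])  # → (30.0, 7.5, 2.0)
```

These nine (setting, response) pairs would then seed the back-propagation network, which interpolates the feasible domain so the optimizer need not run the expensive simulation at every candidate point.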

  1. Flow Injection/Sequential Injection Analysis Systems: Potential Use as Tools for Rapid Liver Diseases Biomarker Study

    Directory of Open Access Journals (Sweden)

    Supaporn Kradtap Hartwell

    2012-01-01

Flow injection/sequential injection analysis (FIA/SIA) systems are suitable for carrying out automatic wet chemical/biochemical reactions with reduced volume and time consumption. Various parts of the system, such as the pump, valve, and reactor, may be built or adapted from available materials, so the systems can be lower in cost than other instrumentation-based analysis systems. Their applications to the determination of biomarkers for liver diseases have been demonstrated in various formats of operation, but only a few and limited types of biomarkers have been used as model analytes. This paper summarizes these applications for different types of reactions as a guide for using flow-based systems in more biomarker and/or multibiomarker studies.

  2. Performance of a Sequential and Parallel Computational Fluid Dynamic (CFD) Solver on a Missile Body Configuration

    National Research Council Canada - National Science Library

    Hisley, Dixie

    1999-01-01

    .... The goals of this report are: (1) to investigate the performance of message passing and loop level parallelization techniques, as they were implemented in the computational fluid dynamics (CFD...

  3. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data

    OpenAIRE

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2010-01-01

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, of...

  4. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  5. Rapid and reagent-saving immunoassay using innovative stirring actions of magnetic beads in microreactors in the sequential injection mode.

    Science.gov (United States)

    Tanaka, K; Imagawa, H

    2005-12-15

We developed new ELISA techniques in sequential injection analysis (SIA) mode using microreactors with volumes of a few microliters. We immobilized antibodies on magnetic beads 1.0 microm in diameter, injected the beads into microreactors and applied rotating magnetic fields of several hundred gauss. Magnetic beads, suspended in liquid at densities of approximately 10(9)-10(10) particles per millilitre, form a large number of thin rod-shaped clusters whose lengthwise axes are oriented parallel to the magnetic field. We rotated Nd magnets below the center of the microreactor with a tiny motor at about 2000-5000 rpm. These rotating clusters remarkably accelerate the binding rate of the antibodies with antigens in the liquid. The beads are trapped around the center of the rotating magnetic field even in flowing liquid; this newly found phenomenon enables easy bead handling in microreactors. Modification of the reactor walls with selected blocking reagents was essential, because protein-coated beads often stick to the wall surface and cannot move freely. Washing steps were also shortened.

  6. Rapid and simultaneous determination of neptunium and plutonium isotopes in environmental samples by extraction chromatography using sequential injection analysis and ICP-MS

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2010-01-01

    plutonium and neptunium in three reference materials were in agreement with the recommended or literature values at the 0.05 significance level. The developed method is suitable for the analysis of up to 10 g of soil and 20 g of seaweed samples. The extraction chromatographic separation within the SI system......This paper reports an automated analytical method for rapid and simultaneous determination of plutonium isotopes (239Pu and 240Pu) and neptunium (237Np) in environmental samples. An extraction chromatographic column packed with TrisKem TEVA® resin was incorporated in a sequential injection (SI...... for a single sample takes less than 1.5 h. As compared to batchwise procedures, the developed method significantly improves the analysis efficiency, reduces the labor intensity and expedites the simultaneous determination of plutonium and neptunium as demanded in emergency actions....

  7. Rapid Determination of Plutonium Isotopes in Environmental Samples Using Sequential Injection Extraction Chromatography and Detection by Inductively Coupled Plasma Mass Spectrometry

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2009-01-01

    This article presents an automated method for the rapid determination of 239Pu and 240Pu in various environmental samples. The analytical method involves the in-line separation of Pu isotopes using extraction chromatography (TEVA) implemented in a sequential injection (SI) network followed...... by detection of isolated analytes with inductively coupled plasma mass spectrometry (ICP-MS). The method has been devised for the determination of Pu isotopes at environmentally relevant concentrations, whereby it has been successfully applied to the analyses of large volumes/amounts of samples, for example......, 100−200 g of soil and sediment, 20 g of seaweed, and 200 L of seawater following analyte preconcentration. The investigation of the separation capability of the assembled SI system revealed that up to 200 g of soil or sediment can be treated using a column containing about 0.70 g of TEVA resin...

  8. Simulation of skill acquisition in sequential learning of a computer game

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Nielsen, Finn Ravnsbjerg; Rasmussen, Jens

    1995-01-01

    The paper presents some theoretical assumptions about the cognitive control mechanisms of subjects learning to play a computer game. A simulation model has been developed to investigate these assumptions. The model is an automaton, reacting to instruction-like cue action rules. The prototypical...... performances of 23 experimental subjects at succeeding levels of training are compared to the performance of the model. The findings are interpreted in terms of a general taxonomy for cognitive task analysis....

  9. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system can also lay the groundwork for the study of computer science in secondary school.
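The abstract does not list the specific operations, but a classic example of such a readily memorized rule (chosen here as an illustration, not necessarily one from the paper) is squaring a number ending in 5: for n = 10a + 5, n² = 100·a·(a+1) + 25, i.e., the digits of a·(a+1) followed by "25".

```python
def square_ending_in_5(n):
    """Square an integer ending in 5 via the rule (10a + 5)^2 = 100*a*(a+1) + 25."""
    assert n % 10 == 5, "the rule applies only to numbers ending in 5"
    a = n // 10
    return 100 * a * (a + 1) + 25

# 35^2: a = 3, so 3*4 = 12 followed by 25
print(square_ending_in_5(35))   # 1225
print(square_ending_in_5(115))  # 13225
```

The rule is itself a small algorithm, which is exactly the kind of object the authors argue can seed algorithmic thinking.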

  10. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data.

    Science.gov (United States)

    Tom, Jennifer A; Sinsheimer, Janet S; Suchard, Marc A

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework.
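A minimal sketch of the reuse-by-reweighting idea (not the authors' dynamic iterative reweighting MCMC itself): realizations drawn under one model are recycled toward a different target by attaching importance weights proportional to target density over original density. The distributions here are illustrative stand-ins.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Pretend these draws came from an earlier stratified run whose target was
# N(0, 1) -- the "available realizations" to be reused.
random.seed(0)
draws = [random.gauss(0.0, 1.0) for _ in range(50_000)]

# Recycle them toward a different (hypothetical) joint-model target, N(1, 1),
# by reweighting with importance weights w = target / original.
weights = [normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0) for x in draws]
total = sum(weights)
post_mean = sum(w * x for w, x in zip(weights, draws)) / total
print(round(post_mean, 2))  # close to the new target mean, 1.0
```

The weighted draws approximate expectations under the joint target without rerunning the sampler, which is the computational saving the abstract describes.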

  11. Accelerated Combinatorial High Throughput Star Polymer Synthesis via a Rapid One-Pot Sequential Aqueous RAFT (rosa-RAFT) Polymerization Scheme.

    Science.gov (United States)

    Cosson, Steffen; Danial, Maarten; Saint-Amans, Julien Rosselgong; Cooper-White, Justin J

    2017-04-01

    Advanced polymerization methodologies, such as reversible addition-fragmentation transfer (RAFT), allow unprecedented control over star polymer composition, topology, and functionality. However, using RAFT to produce high throughput (HTP) combinatorial star polymer libraries remains, to date, impracticable due to several technical limitations. Herein, the methodology "rapid one-pot sequential aqueous RAFT" or "rosa-RAFT," in which well-defined homo-, copolymer, and mikto-arm star polymers can be prepared in very low to medium reaction volumes (50 µL to 2 mL) via an "arm-first" approach in air within minutes, is reported. Due to the high conversion of a variety of acrylamide/acrylate monomers achieved during each successive short reaction step (each taking 3 min), the requirement for intermediary purification is avoided, drastically facilitating and accelerating the star synthesis process. The presented methodology enables RAFT to be applied to HTP polymeric bio/nanomaterials discovery pipelines, in which hundreds of complex polymeric formulations can be rapidly produced, screened, and scaled up for assessment in a wide range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Basic and clinical studies of dynamic sequential computed tomography during arterial portography in the diagnosis of hepatic cancers

    International Nuclear Information System (INIS)

    Matsui, Osamu

    1986-01-01

The author has developed dynamic sequential computed tomography with table incrementation during arterial portography (DSCTI-AP) for the precise diagnosis of liver cancers. 1. Basic Studies on DSCTI-AP; 1) The phantom experiment simulating DSCTI-AP revealed that it is possible to resolve a cylindrical object 5 mm in diameter with more than 50 Hounsfield Unit (HU) difference in the attenuation value compared to the surrounding area by a third generation CT scanner. 2) All macroscopically visible nodules of hepatocellular carcinoma (HCC) chemically induced in rat liver and VX2 tumor transplanted via the portal vein in rabbit liver were visualized as portal perfusion defects on portal microangiograms and nodular low density areas in CT during portography. 2. Analysis of the clinical usefulness of DSCTI-AP; 1) The smallest nodules of HCC and metastatic liver cancer detected by DSCTI-AP were 5 mm in diameter. 2) DSCTI-AP was superior to radionuclide liver scanning, ultrasound (US), computed tomography (CT), selective celiac arteriography (SCA) and infusion hepatic angiography (IHA) in the detection of small HCC. But IHA was superior to DSCTI-AP in visualizing extremely hypervascular HCC nodules, and all of the small HCCs were demonstrated by the combination of DSCTI-AP and IHA. 3) DSCTI-AP was superior to all other imaging modalities, including CT following intraarterial injection of iodized oil (Lipiodol CT), in detecting metastatic liver cancers, especially those less than 1 cm in diameter. 4) Lipiodol CT was superior to DSCTI-AP in visualizing small hypervascular HCC nodules. 5) DSCTI-AP was the most sensitive method in diagnosing peripheral intrahepatic portal tumor thrombus. 6) DSCTI-AP had the advantage of visualizing the location of hepatic tumors and their relation to major vessels objectively. (J.P.N.)

  13. Rapid isolation of plutonium in environmental solid samples using sequential injection anion exchange chromatography followed by detection with inductively coupled plasma mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

Qiao Jixin, E-mail: jixin.qiao@risoe.d [Radiation Research Division, Risø National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Hou Xiaolin; Roos, Per [Radiation Research Division, Risø National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Miro, Manuel [Department of Chemistry, Faculty of Sciences, University of the Balearic Islands, Carretera de Valldemossa km. 7.5, E-07122 Palma de Mallorca, Illes Balears (Spain)

    2011-01-31

This paper reports an automated analytical method for rapid determination of plutonium isotopes (239Pu and 240Pu) in environmental solid extracts. Anion exchange chromatographic columns were incorporated in a sequential injection (SI) system to undertake the automated separation of plutonium from matrix and interfering elements. The analytical results most distinctly demonstrated that the crosslinkage of the anion exchanger is a key parameter controlling the separation efficiency. AG 1-x4 type resin was selected as the most suitable sorbent material for analyte separation. Investigation of column size effect upon the separation efficiency revealed that small-sized (2 mL) columns sufficed to handle up to 50 g of environmental soil samples. Under the optimum conditions, chemical yields of plutonium exceeded 90% and the decontamination factors for uranium, thorium and lead ranged from 10³ to 10⁴. The determination of plutonium isotopes in three standard/certified reference materials (IAEA-375 soil, IAEA-135 sediment and NIST-4359 seaweed) and two reference samples (Irish Sea sediment and Danish soil) revealed a good agreement with reference/certified values. The SI column-separation method is straightforward and less labor intensive as compared with batch-wise anion exchange chromatographic procedures. Besides, the automated method features low consumption of ion-exchanger and reagents for column washing and elution, with the consequent decrease in the generation of acidic waste, thus bearing green chemical credentials.

  14. Sequential determination of fat- and water-soluble vitamins in Rhodiola imbricata root from trans-Himalaya with rapid resolution liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Tayade, Amol B; Dhar, Priyanka; Kumar, Jatinder; Sharma, Manu; Chaurasia, Om P; Srivastava, Ravi B

    2013-07-30

A rapid method was developed to determine both types of vitamins in Rhodiola imbricata root for the accurate quantification of free vitamin forms. Rapid resolution liquid chromatography/tandem mass spectrometry (RRLC-MS/MS) with an electrospray ionization (ESI) source operating in multiple reaction monitoring (MRM) mode was optimized for the sequential analysis of nine water-soluble vitamins (B1, B2, two B3 vitamins, B5, B6, B7, B9, and B12) and six fat-soluble vitamins (A, E, D2, D3, K1, and K2). Both types of vitamins were separated by ion-suppression reversed-phase liquid chromatography with gradient elution within 30 min and detected in positive ion mode. Intra- and inter-day deviations in precision were always below 0.6% for recoveries and 0.3% for retention time. Intra- and inter-day relative standard deviation (RSD) values of retention time for water- and fat-soluble vitamins ranged between 0.02-0.20% and 0.01-0.15%, respectively. The mean recoveries ranged between 88.95 and 107.07%. The sensitivity and specificity of this method allowed limits of detection (LOD) and limits of quantitation (LOQ) of the analytes at ppb levels. Linearity was achieved for fat- and water-soluble vitamins over 100-1000 ppb and 10-100 ppb, respectively. The vitamin B-complex and vitamin E were detected as the principal vitamins in the root of this adaptogen, which would be of great interest in developing novel foods from the Indian trans-Himalaya. Copyright © 2013 Elsevier B.V. All rights reserved.
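The abstract reports ppb-level LODs and LOQs without giving the formula. A common convention (an assumption here, not necessarily the one used in the paper) is the IUPAC-style estimate from blank noise and calibration slope:

```python
def lod_loq(sigma_blank, slope):
    """IUPAC-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
    where sigma is the blank (or residual) standard deviation and
    S is the calibration-curve slope."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# Hypothetical numbers: blank noise 0.6 signal units, slope 0.2 units per ppb
lod, loq = lod_loq(0.6, 0.2)
print(lod, loq)  # 9.9 30.0  (ppb)
```

Any analyte whose working range starts above its LOQ can then be quantified reliably, which is the sense in which the abstract reports ppb-level sensitivity.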

  15. A study on computer-aided diagnosis based on temporal subtraction of sequential chest radiographs (in Japanese)

    International Nuclear Information System (INIS)

    Kano, Akiko

    2001-01-01

    An automated digital image subtraction technique for use with pairs of temporally sequential chest radiographs has been developed to aid radiologists in the detection of interval changes. Automated image registration based on nonlinear geometric warping is performed prior to subtraction in order to deal with complicated radiographic misregistration. Processing includes global matching, to achieve rough registration between the entire lung fields in the two images, and local matching, based on a cross-correlation method, to determine local shift values for a number of small regions. A proper warping of x,y-coordinates is determined by fitting two-dimensional polynomials to the distributions of the shift values. One image is warped and then subtracted from the other. The resultant subtraction images were able to enhance the conspicuity of various types of interval changes. Improved global matching based on a weighted template matching method achieved robust registration even with photofluorographs taken in chest mass screening programs, which had previously presented us with a relatively large number of poor-registration images. The new method was applied to 129 pairs of chest mass screening images, and offered registration accuracy as good as manual global matching. An observer test using 114 cases including 57 lung cancer cases presented better sensitivity and specificity on average compared to conventional comparison readings. In addition, newly developed image processing that eliminates the rib edge artifacts in subtraction images was applied to 26 images having pathological interval changes; results showed the potential for application to automated schemes for the detection of interval change patterns. With its capacity to improve the diagnostic accuracy of chest radiographs, the chest temporal subtraction technique promises to become an important element of computer-aided diagnosis (CAD) systems
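The local-matching step described above can be sketched as a brute-force normalized cross-correlation search for the best integer shift of each small region. This toy version (synthetic images, hypothetical window sizes) omits the global matching and the two-dimensional polynomial warping:

```python
import numpy as np

def best_local_shift(prev_roi, curr, center, search=3):
    """Return the integer (dy, dx) within +/-search that maximizes the
    normalized cross-correlation between prev_roi and the current image."""
    h, w = prev_roi.shape
    cy, cx = center
    t = (prev_roi - prev_roi.mean()).ravel()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = curr[cy + dy: cy + dy + h, cx + dx: cx + dx + w]
            p = (patch - patch.mean()).ravel()
            denom = np.linalg.norm(t) * np.linalg.norm(p)
            score = float(t @ p / denom) if denom else -np.inf
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
prev = rng.random((32, 32))
curr = np.roll(prev, (2, -1), axis=(0, 1))     # simulate a (2, -1) misregistration
roi = prev[10:18, 10:18]
print(best_local_shift(roi, curr, (10, 10)))   # recovers (2, -1)
```

In the method described above, such shift values from many small regions would then be fitted with two-dimensional polynomials to define the warp applied before subtraction.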

  16. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

The capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for the realization of fast-calculation algorithms that use tabulated functions, exemplified on GPU/CPU architecture-based processors. Comparison is made between the operating efficiencies of the GPU and CPU when employed for tabular calculations under different conditions of use. Recommendations are formulated for the use of graphical and central processors to speed up scientific and engineering computations through the use of tabulated functions.
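As an illustration of the tabulated-function approach (not the authors' code), a precomputed table with linear interpolation replaces repeated evaluation of an expensive function; on a GPU the lookups would be performed in parallel:

```python
import math

# Precompute a sine table on [0, pi] -- the "tabulated function"
N = 1024
STEP = math.pi / N
TABLE = [math.sin(i * STEP) for i in range(N + 1)]

def table_sin(x):
    """Linear interpolation in the precomputed table (x in [0, pi])."""
    i = min(int(x / STEP), N - 1)
    frac = x / STEP - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac

x = 1.2345
print(abs(table_sin(x) - math.sin(x)) < 1e-5)  # True: interpolation error ~h^2/8
```

The trade-off studied in the record is exactly this one: table memory and lookup cost versus direct evaluation cost, which lands differently on GPU and CPU.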

  17. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    Science.gov (United States)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is the simulation. Performing high level simulation of a system can help to identify, insolate and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support to migrate them to an FPGA environment for algorithm acceleration and embedded processes purposes. The tool includes an integrated library of previous coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previous compiled soft-operators in a high level process chain, and code his own operators. Additional to the prototyping tool, Uranus offers FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC based system. The Uranus environment is intended for rapid prototyping of machine vision and the migration to FPGA accelerator platform, and it is distributed for academic purposes.

  18. Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion

    Science.gov (United States)

    Erickson, W. D.; Prabhu, R. K.

    1986-01-01

    A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.
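The abstract does not give the reduced equation, but the solution step it names is a scalar Newton iteration. A generic sketch, with a toy stand-in for the single governing equation:

```python
import math

def newton(f, x0, tol=1e-10, max_iter=50):
    """Scalar Newton iteration with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        h = 1e-6 * max(abs(x), 1.0)
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
        step = f(x) / dfdx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy stand-in for the reduced equilibrium equation: x*exp(x) - 5 = 0
root = newton(lambda x: x * math.exp(x) - 5.0, 1.0)
print(round(root, 6))  # ~1.3267
```

Collapsing the ten governing equations to one such scalar root-find, instead of minimizing free energy over all species, is where the reported ~80x speedup comes from.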

  19. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    Science.gov (United States)

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  20. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves algebraically combining seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.

  1. Computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226 Ra concentration in soil samples using a 6 x 9-in. NaI(Tl) crystal containing a 3.25-in. deep by 3.5-in. diameter well. This gamma detection system is controlled by a mini-computer with a dual floppy disk storage medium. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer and BASIC language is used for data processing

  2. Computed tomographic demonstration of rapid changes in fatty infiltration of the liver

    International Nuclear Information System (INIS)

    Bashist, B.; Hecht, H.L.; Harely, W.D.

    1982-01-01

    Two alcoholic patients in whom computed tomography (CT) demonstrated reversal of fatty infiltration of the liver are described. The rapid reversibility of fatty infiltration can be useful in monitoring alcoholics with fatty livers. Focal fatty infiltration can mimic focal hepatic lesions and repeat scans can be utilized to assess changes in CT attenuation values when this condition is suspected

  3. Computer-facilitated rapid HIV testing in emergency care settings: provider and patient usability and acceptability.

    Science.gov (United States)

    Spielberg, Freya; Kurth, Ann E; Severynen, Anneleen; Hsieh, Yu-Hsiang; Moring-Parris, Daniel; Mackenzie, Sara; Rothman, Richard

    2011-06-01

    Providers in emergency care settings (ECSs) often face barriers to expanded HIV testing. We undertook formative research to understand the potential utility of a computer tool, "CARE," to facilitate rapid HIV testing in ECSs. Computer tool usability and acceptability were assessed among 35 adult patients, and provider focus groups were held, in two ECSs in Washington State and Maryland. The computer tool was usable by patients of varying computer literacy. Patients appreciated the tool's privacy and lack of judgment and their ability to reflect on HIV risks and create risk reduction plans. Staff voiced concerns regarding ECS-based HIV testing generally, including resources for follow-up of newly diagnosed people. Computer-delivered HIV testing support was acceptable and usable among low-literacy populations in two ECSs. Such tools may help circumvent some practical barriers associated with routine HIV testing in busy settings though linkages to care will still be needed.

  4. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    Science.gov (United States)

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days during the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.

  5. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    Science.gov (United States)

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  6. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  7. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
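A minimal sketch of Wald's classical SPRT for choosing between two hypothesized means of a normally distributed simulation output (boundaries from the standard Wald approximations; the paper's conservative and robust variants differ from this textbook form):

```python
import math

def sprt_normal_mean(observations, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    Processes outputs one at a time and returns 'accept H0', 'accept H1',
    or 'continue' if neither boundary is crossed."""
    a = math.log(beta / (1.0 - alpha))   # lower (accept-H0) boundary
    b = math.log((1.0 - beta) / alpha)   # upper (accept-H1) boundary
    llr = 0.0
    for x in observations:
        # log-likelihood-ratio increment for a normal observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr <= a:
            return "accept H0"
        if llr >= b:
            return "accept H1"
    return "continue"

# Simulated outputs with true mean near 1.0 should favor H1: mean = 1
data = [0.9, 1.1, 1.2, 0.8, 1.3, 1.0, 1.1, 0.9, 1.2, 1.0]
print(sprt_normal_mean(data, mu0=0.0, mu1=1.0, sigma=0.5))  # accept H1
```

The sequential character is the point: the test stops as soon as the accumulated evidence crosses a boundary, so it consumes only as many simulation outputs as the data demand.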

  8. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report is Volume 2 of the three volume documentation of the Seismic Module of CARES and represents the User's Manual. 14 refs

  9. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 3 of the three volume documentation of the Seismic Module of CARES. It presents three sample problems typically encountered in Soil-Structure Interaction analyses. 14 refs., 36 figs., 2 tabs

  10. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 1 of the three volume documentation of the Seismic Module of CARES. It concentrates on the theoretical basis of the system and presents modeling assumptions and limitations as well as solution schemes and algorithms of CARES. 31 refs., 6 figs.

  11. Development of a rapid multi-line detector for industrial computed tomography

    International Nuclear Information System (INIS)

    Nachtrab, Frank; Firsching, Markus; Hofmann, Thomas; Uhlmann, Norman; Neubauer, Harald; Nowak, Arne

    2015-01-01

In this paper we present the development of a rapid multi-row detector optimized for industrial computed tomography. With a high frame rate, high spatial resolution and the ability to use up to 450 kVp, it is particularly suitable for applications such as fast acquisition of large objects, inline CT or time-resolved 4D CT. (Contains PowerPoint slides.)

  12. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block-interleaved outer Reed-Solomon codes with a non-uniform profile. With this scheme, decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  13. A computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities (RASA) Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226 Ra concentration in soil samples using a 6 x 9 inch NaI(Tl) crystal containing a 3.25 inch deep by 3.5 inch diameter well. This gamma detection system is controlled by a minicomputer with a dual floppy disk storage medium, line printer, and optional X-Y plotter. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC language is used for data processing. The computer system is a Commodore Business Machines (CBM) Model 8032 personal computer with CBM peripherals. Control and data signals are utilized via the parallel user's port to the interface unit. The analog-to-digital converter (ADC) is controlled in machine language, bootstrapped to high memory, and is addressed through the BASIC program. The BASIC program is designed to be ''user friendly'' and provides the operator with several modes of operation such as background and analysis acquisition. Any number of energy regions-of-interest (ROI) may be analyzed with automatic background subtraction. Also employed in the BASIC program are the 226 Ra algorithms which utilize linear and polynomial regression equations for data conversion and look-up tables for radon equilibrating coefficients. The optional X-Y plotter may be used with two- or three-dimensional curve programs to enhance data analysis and presentation. A description of the system is presented and typical applications are discussed.
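The ROI analysis with automatic background subtraction can be sketched as follows. The spectrum, channel bounds, and live time are hypothetical, and the 226Ra-specific regression and radon-equilibrium steps of the BASIC program are omitted:

```python
def roi_net_counts(spectrum, roi, background, live_time):
    """Net count rate in an energy region of interest (ROI), in counts/s,
    after subtracting a previously acquired background spectrum."""
    lo, hi = roi
    gross = sum(spectrum[lo:hi + 1])
    bkg = sum(background[lo:hi + 1])
    return (gross - bkg) / live_time

# Hypothetical 10-channel spectrum around a gamma peak of interest
spectrum   = [5, 6, 8, 40, 95, 90, 35, 9, 6, 5]
background = [5, 5, 5,  6,  7,  6,  5, 5, 5, 5]
rate = roi_net_counts(spectrum, (3, 6), background, live_time=100.0)
print(rate)  # 2.36 net counts/s in the ROI
```

In the system described above, such a net rate would then be converted to a 226 Ra soil concentration via the calibrated regression equations.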

  14. Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity

    International Nuclear Information System (INIS)

    Schouman, Thomas; Courvoisier, Delphine S.; Imholz, Benoit; Van Issum, Christopher; Scolozzi, Paolo

    2012-01-01

Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9%, according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; (2) this method has the potential of being routinely used to standardize the post-traumatic evaluation of orbital fractures
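Dahlberg's formula, used above for precision across observers, is d = sqrt(Σdᵢ²/2n) over n pairs of repeated measurements, often reported as a percentage of the grand mean. A short sketch with hypothetical measurements:

```python
import math

def dahlberg(first, second):
    """Dahlberg's error of the method: sqrt(sum(d_i^2) / (2n)) over paired
    repeated measurements of the same quantity."""
    assert len(first) == len(second)
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

# Hypothetical repeated fracture-area measurements (cm^2) by two observers
obs1 = [1.20, 0.85, 2.10, 1.55]
obs2 = [1.25, 0.80, 2.00, 1.60]
d = dahlberg(obs1, obs2)
grand_mean = (sum(obs1) + sum(obs2)) / (2 * len(obs1))
print(round(100 * d / grand_mean, 1))  # method error as % of the mean
```

Reported alongside the ICC, this percentage expresses the same test–retest agreement in the units readers care about.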

  15. Development of a rapid method for the sequential extraction and subsequent quantification of fatty acids and sugars from avocado mesocarp tissue.

    Science.gov (United States)

    Meyer, Marjolaine D; Terry, Leon A

    2008-08-27

    Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.

  16. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    Science.gov (United States)

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  17. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...
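
    The computational effort of sequential decoding is classically Pareto-distributed, which is why the probability of computational overflow is the key figure of merit. A small simulation sketch under an assumed Pareto exponent (the value of `rho` and the budget below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_computations(rho, size):
    """Draws of decoding effort C with the Pareto tail P(C > N) = N**(-rho),
    the classical model for sequential decoding, via inverse-CDF sampling."""
    u = 1.0 - rng.random(size)          # uniform on (0, 1]
    return u ** (-1.0 / rho)

rho = 1.2                   # assumed Pareto exponent (grows with SNR margin)
C = sample_computations(rho, 100_000)
budget = 100.0              # hypothetical per-frame computation budget
p_overflow = float(np.mean(C > budget))   # empirical P(overflow) ~ budget**(-rho)
```

    The side information supplied by the outer RS iterations effectively increases the working SNR margin, raising rho and shrinking this tail.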

  18. Rapid prototyping of an EEG-based brain-computer interface (BCI).

    Science.gov (United States)

    Guger, C; Schlögl, A; Neuper, C; Walterspacher, D; Strein, T; Pfurtscheller, G

    2001-03-01

    The electroencephalogram (EEG) is modified by motor imagery and can be used by patients with severe motor impairments (e.g., late stage of amyotrophic lateral sclerosis) to communicate with their environment. Such a direct connection between the brain and the computer is known as an EEG-based brain-computer interface (BCI). This paper describes a new type of BCI system that uses rapid prototyping to enable a fast transition of various types of parameter estimation and classification algorithms to real-time implementation and testing. Rapid prototyping is possible by using Matlab, Simulink, and the Real-Time Workshop. It is shown how to automate real-time experiments and perform the interplay between on-line experiments and offline analysis. The system is able to process multiple EEG channels on-line and operates under Windows 95 in real-time on a standard PC without an additional digital signal processor (DSP) board. The BCI can be controlled over the Internet, LAN or modem. This BCI was tested on 3 subjects whose task it was to imagine either left or right hand movement. A classification accuracy between 70% and 95% could be achieved with two EEG channels after some sessions with feedback using an adaptive autoregressive (AAR) model and linear discriminant analysis (LDA).
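
    The parameter-estimation side of such a BCI can be miniaturized: the sketch below tracks AR coefficients adaptively with a normalized LMS update, a simplified stand-in for the Kalman-based AAR estimator used in the paper; the synthetic AR(2) signal and step size are illustrative.

```python
import numpy as np

def aar_nlms(x, p=2, mu=0.2, eps=1e-8):
    """Track adaptive autoregressive (AAR) coefficients with a normalized LMS
    update; returns the coefficient trajectory (one row per sample)."""
    a = np.zeros(p)
    traj = []
    for t in range(p, len(x)):
        past = x[t - p:t][::-1]              # x[t-1], ..., x[t-p]
        err = x[t] - a @ past                # one-step prediction error
        a = a + mu * err * past / (eps + past @ past)
        traj.append(a.copy())
    return np.array(traj)

rng = np.random.default_rng(1)
# Synthetic stationary AR(2) "EEG": x[t] = 1.5 x[t-1] - 0.8 x[t-2] + noise
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.8 * x[t - 2] + rng.normal(scale=0.1)
A = aar_nlms(x)
```

    In the BCI, the per-sample coefficient vector (one per EEG channel) is the feature vector fed to the LDA classifier.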

  19. Quantitative evaluation of the disintegration of orally rapid disintegrating tablets by X-ray computed tomography.

    Science.gov (United States)

    Otsuka, Makoto; Yamanaka, Azusa; Uchino, Tomohiro; Otsuka, Kuniko; Sadamoto, Kiyomi; Ohshima, Hiroyuki

    2012-01-01

    To measure the rapid disintegration of Oral Disintegrating Tablets (ODT), a new test (XCT) was developed using X-ray computing tomography (X-ray CT). Placebo ODT, rapid disintegration candy (RDC) and Gaster®-D-Tablets (GAS) were used as model samples. All these ODTs were used to measure oral disintegration time (DT) in distilled water at 37±2°C by XCT. DTs were affected by the width of mesh screens, and degree to which the tablet holder vibrated from air bubbles. An in-vivo tablet disintegration test was performed for RDC using 11 volunteers. DT by the in-vivo method was significantly longer than that using the conventional tester. The experimental conditions for XCT such as the width of the mesh screen and degree of vibration were adjusted to be consistent with human DT values. Since DTs by the XCT method were almost the same as the human data, this method was able to quantitatively evaluate the rapid disintegration of ODT under the same conditions as inside the oral cavity. The DTs of four commercially available ODTs were comparatively evaluated by the XCT method, conventional tablet disintegration test and in-vivo method.

  20. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    Science.gov (United States)

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT) assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) that underwent 2 MDCT at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. In-house software YACTA quantified automatically the lung density histogram (10th-90th percentile in 5th percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, visual overall extent of parenchymal abnormalities (OE) increased in median by 5%/year (interquartile: 0%/y; +11%/y). Substantial difference was found between treated and untreated patients in HU changes of the 40th and of the 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed the highest correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69). Conclusions Histogram analysis is informative at one year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of ground-glass pattern; furthermore Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
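
    The histogram indices used above are straightforward to compute; a sketch with synthetic attenuation samples (the HU distributions are invented for illustration, and YACTA's internals are not reproduced here):

```python
import numpy as np

def density_percentiles(hu_values, lo=10, hi=90, step=5):
    """10th-90th percentiles of an attenuation histogram in 5-percentile
    steps, mirroring the YACTA output described above."""
    qs = np.arange(lo, hi + 1, step)
    return qs, np.percentile(hu_values, qs)

# Invented lung HU samples: follow-up slightly denser than baseline
rng = np.random.default_rng(0)
baseline = rng.normal(-800.0, 120.0, 50_000)
followup = rng.normal(-780.0, 130.0, 50_000)
qs, p0 = density_percentiles(baseline)
_, p1 = density_percentiles(followup)
delta = p1 - p0                      # per-percentile change in HU
d40 = float(delta[qs == 40][0])      # the "Δ 40th percentile" index
```

    A positive Δ 40th percentile, as here, corresponds to an overall densification of the lung parenchyma between the two scans.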

  1. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
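
    A toy version of the ABC idea with a shrinking tolerance schedule captures the SMC flavor: survivors of one tolerance seed perturbed proposals for the next. The exponential-decay model, prior, tolerances, and perturbation kernel below are illustrative, not the metabolic network model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(k, t):
    """One-parameter toy model: exponential decay with rate k."""
    return np.exp(-k * t)

t = np.linspace(0.0, 5.0, 20)
k_true = 0.7
data = simulate(k_true, t) + rng.normal(scale=0.02, size=t.size)

def distance(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

# ABC with a decreasing tolerance schedule: accept parameters whose simulated
# output lies within eps of the data, then perturb survivors to propose anew
particles = rng.uniform(0.0, 3.0, 2000)            # draws from the prior on k
for eps in (0.3, 0.1, 0.05):
    accepted = [k for k in particles
                if distance(simulate(k, t), data) < eps]
    particles = rng.choice(accepted, 2000) + rng.normal(scale=0.05, size=2000)
    particles = np.clip(particles, 0.0, 3.0)

posterior_mean = float(np.mean(accepted))          # approximate posterior mean
```

    Credible intervals follow from percentiles of `accepted`; the full ABC-SMC algorithm additionally weights particles to correct for the perturbation kernel.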

  2. OVERVIEW OF DEVELOPMENT OF P-CARES: PROBABILISTIC COMPUTER ANALYSIS FOR RAPID EVALUATION OF STRUCTURES

    International Nuclear Information System (INIS)

    NIE, J.; XU, J.; COSTANTINO, C.; THOMAS, V.

    2007-01-01

    Brookhaven National Laboratory (BNL) undertook an effort to revise the CARES (Computer Analysis for Rapid Evaluation of Structures) program under the auspices of the US Nuclear Regulatory Commission (NRC). The CARES program provided the NRC staff a capability to quickly check the validity and/or accuracy of the soil-structure interaction (SSI) models and associated data received from various applicants. The aim of the current revision was to implement various probabilistic simulation algorithms in CARES (referred to hereinafter as P-CARES [1]) for performing probabilistic site response and soil-structure interaction (SSI) analyses. This paper provides an overview of the development process of P-CARES, including the various probabilistic simulation techniques used to incorporate the effect of site soil uncertainties into the seismic site response and SSI analyses, and an improved graphical user interface (GUI).

  3. Rapid genetic algorithm optimization of a mouse computational model: Benefits for anthropomorphization of neonatal mouse cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Corina Teodora Bot

    2012-11-01

    While the mouse presents an invaluable experimental model organism in biology, its usefulness in cardiac arrhythmia research is limited in some aspects due to major electrophysiological differences between murine and human action potentials (APs). As previously described, these species-specific traits can be partly overcome by application of a cell-type transforming clamp (CTC) to anthropomorphize the murine cardiac AP. CTC is a hybrid experimental-computational dynamic clamp technique, in which a computationally calculated time-dependent current is inserted into a cell in real time, to compensate for the differences between sarcolemmal currents of that cell (e.g., murine) and the desired species (e.g., human). For effective CTC performance, mismatch between the measured cell and a mathematical model used to mimic the measured AP must be minimal. We have developed a genetic algorithm (GA) approach that rapidly tunes a mathematical model to reproduce the AP of the murine cardiac myocyte under study. Compared to a prior implementation that used a template-based model selection approach, we show that GA optimization to a cell-specific model results in a much better recapitulation of the desired AP morphology with CTC. This improvement was more pronounced when anthropomorphizing neonatal mouse cardiomyocytes to human-like APs than to guinea pig APs. CTC may be useful for a wide range of applications, from screening effects of pharmaceutical compounds on ion channel activity, to exploring variations in the mouse or human genome. Rapid GA optimization of a cell-specific mathematical model improves CTC performance and may therefore expand the applicability and usage of the CTC technique.
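
    The GA loop itself is compact; a minimal sketch that tunes a two-parameter surrogate "AP" to a target trace using elitism, averaging crossover, and Gaussian mutation (the surrogate model, population size, and rates are illustrative, not the cardiomyocyte model of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def model(params, t):
    """Toy action-potential surrogate: amplitude * exp(-t / tau)."""
    amp, tau = params
    return amp * np.exp(-t / tau)

t = np.linspace(0.0, 1.0, 100)
target = model((1.2, 0.25), t)          # the trace we want to reproduce

def fitness(params):
    return -np.mean((model(params, t) - target) ** 2)   # higher is better

pop = rng.uniform([0.1, 0.05], [3.0, 1.0], size=(60, 2))  # initial population
for generation in range(40):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-15:]]                 # keep the best quarter
    # offspring: elementwise average of two elite parents + Gaussian mutation
    parents = elite[rng.integers(0, 15, size=(45, 2))]
    children = parents.mean(axis=1) + rng.normal(scale=0.05, size=(45, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```

    In the real application the fitness is the mismatch between the recorded AP and the cardiomyocyte model output, so the loop yields a cell-specific parameter set for the clamp.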

  4. Cone-beam computed tomography evaluation of dentoskeletal changes after asymmetric rapid maxillary expansion.

    Science.gov (United States)

    Baka, Zeliha Muge; Akin, Mehmet; Ucar, Faruk Izzet; Ileri, Zehra

    2015-01-01

    The aims of this study were to quantitatively evaluate the changes in arch widths and buccolingual inclinations of the posterior teeth after asymmetric rapid maxillary expansion (ARME) and to compare the measurements between the crossbite and the noncrossbite sides with cone-beam computed tomography (CBCT). From our clinic archives, we selected the CBCT records of 30 patients with unilateral skeletal crossbite (13 boys, 14.2 ± 1.3 years old; 17 girls, 13.8 ± 1.3 years old) who underwent ARME treatment. A modified acrylic bonded rapid maxillary expansion appliance including an occlusal locking mechanism was used in all patients. CBCT records had been taken before ARME treatment and after a 3-month retention period. Fourteen angular and 80 linear measurements were taken for the maxilla and the mandible. Frontally clipped CBCT images were used for the evaluation. Paired sample and independent sample t tests were used for statistical comparisons. Comparisons of the before-treatment and after-retention measurements showed that the arch widths and buccolingual inclinations of the posterior teeth increased significantly on the crossbite side of the maxilla and on the noncrossbite side of the mandible (P < 0.05). After ARME treatment, the crossbite side of the maxilla and the noncrossbite side of the mandible were more affected than were the opposite sides. Copyright © 2015. Published by Elsevier Inc.

  5. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that accompany exhaustive manual calculation. (author)
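
    The replicate computation is just the indirect Fick relation evaluated once per predicted oxygen consumption. A sketch with hypothetical saturations and assumed VO2 predictions standing in for the matrix's five equations (the 1.36 mL O2/g Hb constant is the usual convention):

```python
def o2_content(hb_g_dl, sat_frac):
    """O2 content in mL/L: 1.36 mL O2 per g Hb, x10 to convert g/dL to g/L."""
    return 1.36 * hb_g_dl * 10.0 * sat_frac

def fick_flows(vo2_ml_min, hb, sat_ao, sat_mv, sat_pa, sat_pv):
    """Systemic (Qs) and pulmonary (Qp) blood flow, L/min, by the indirect
    Fick principle: flow = VO2 / arteriovenous O2 content difference."""
    qs = vo2_ml_min / (o2_content(hb, sat_ao) - o2_content(hb, sat_mv))
    qp = vo2_ml_min / (o2_content(hb, sat_pv) - o2_content(hb, sat_pa))
    return qs, qp

# Replicate estimates across several predicted VO2 values (hypothetical numbers)
vo2_predictions = [110.0, 125.0, 140.0]            # mL/min
results = [fick_flows(v, hb=12.0, sat_ao=0.98, sat_mv=0.70,
                      sat_pa=0.85, sat_pv=0.98) for v in vo2_predictions]
qp_qs = [qp / qs for qs, qp in results]
```

    Note that Qp/Qs is identical across the replicates: predicted VO2 cancels in the ratio, so only the absolute flows inherit the uncertainty that the matrix's likely ranges quantify.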

  6. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10's activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to that of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  7. SU-F-J-102: Lower Esophagus Margin Implications Based On Rapid Computational Algorithm for SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, M; Mazur, T; Li, H; Mutic, S; Bradley, J; Tsien, C; Green, O [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: To quantify inter-fraction esophagus-variation. Methods: Computed tomography and daily on-treatment 0.3-T MRI data sets for 7 patients were analyzed using a novel Matlab-based (Mathworks, Natick, MA) rapid computational method. Rigid registration was performed from the cricoid to the gastro-esophageal junction. CT and MR-based contours were compared at slice intervals of 3 mm. Variation was quantified by “expansion,” defined as additional length in any radial direction from CT contour to MR contour. Expansion computations were performed with 360° of freedom in each axial slice. We partitioned expansions into left anterior, right anterior, right posterior, and left posterior quadrants (LA, RA, RP, and LP, respectively). Sample means were compared by analysis of variance (ANOVA) and Fisher’s Protected Least Significant Difference test. Results: Fifteen fractions and 1121 axial slices from 7 patients undergoing SBRT for primary lung cancer (3) and metastatic lung disease (4) were analyzed, generating 41,970 measurements. Mean LA, RA, RP, and LP expansions were 4.30±0.05 mm, 3.71±0.05 mm, 3.17±0.07 mm, and 3.98±0.06 mm, respectively. 50.13% of all axial slices showed variation > 5 mm in one or more directions. Variation was greatest in the lower esophagus, with mean LA, RA, RP, and LP expansions of 5.98±0.09 mm, 4.59±0.09 mm, 4.04±0.16 mm, and 5.41±0.16 mm, respectively. The difference was significant compared to mid and upper esophagus (p<.0001). The 95th percentiles of expansion for LA, RA, RP, LP were 13.36 mm, 9.97 mm, 11.29 mm, and 12.19 mm, respectively. Conclusion: Analysis of on-treatment MR imaging of the lower esophagus during thoracic SBRT suggests margin expansions of 13.36 mm LA, 9.97 mm RA, 11.29 mm RP, 12.19 mm LP would account for 95% of measurements. Our novel algorithm for rapid assessment of margin expansion for critical structures with 360° of freedom in each axial slice enables continuously adaptive patient-specific margins which may
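
    Per-quadrant summaries with 360° of freedom reduce to binning radial expansions by angle. A sketch under an assumed angle convention (the 0°-anterior orientation, quadrant boundaries, and sample values below are illustrative, not the authors' code):

```python
import numpy as np

def quadrant(angle_deg):
    """Assumed convention: 0° starts the left-anterior sector, counter-
    clockwise; successive 90° sectors map to LA, LP, RP, RA."""
    a = angle_deg % 360
    if a < 90:
        return "LA"
    if a < 180:
        return "LP"
    if a < 270:
        return "RP"
    return "RA"

def quadrant_stats(angles_deg, expansions_mm, q=95):
    """Mean and q-th percentile of radial CT-to-MR expansion per quadrant."""
    out = {}
    for name in ("LA", "LP", "RP", "RA"):
        vals = [e for a, e in zip(angles_deg, expansions_mm)
                if quadrant(a) == name]
        out[name] = (float(np.mean(vals)), float(np.percentile(vals, q)))
    return out

# Hypothetical per-slice measurements: one expansion sample per degree
rng = np.random.default_rng(5)
angles = np.arange(0, 360, 1.0)
expansions = np.clip(rng.normal(4.0, 2.5, angles.size), 0, None)
stats = quadrant_stats(angles, expansions)
```

    Pooling such per-slice statistics over all fractions yields the quadrant means and 95th percentiles reported above.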

  8. Sequential method for the assessment of innovations in computer assisted industrial processes; Metodo secuencial para evaluacion de innovaciones en procesos industriales asistido por computadora

    Energy Technology Data Exchange (ETDEWEB)

    Suarez Antola, R [Universidad Catolica del Uruguay, Montevideo (Uruguay); Artucio, G [Ministerio de Industria Energia y Mineria. Direccion Nacional de Tecnologia Nuclear, Montevideo (Uruguay)

    1995-08-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs.

  9. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-06-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles.

  11. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    International Nuclear Information System (INIS)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-01-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles

  12. 128-slice Dual-source Computed Tomography Coronary Angiography in Patients with Atrial Fibrillation: Image Quality and Radiation Dose of Prospectively Electrocardiogram-triggered Sequential Scan Compared with Retrospectively Electrocardiogram-gated Spiral Scan.

    Science.gov (United States)

    Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming

    2013-01-01

    Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for a second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader1 and 197/245 (80%) vs. 174/214 (81%) by reader2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader1 and 6/245 (2%) vs. 4/214 (2%) by reader2. Overall averaged IQ per patient in the sequential and spiral groups was equally good (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was reduced significantly compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with retrospectively
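
    The dose comparison can be reproduced from the summary statistics alone. The sketch below uses Welch's unequal-variance t statistic; the abstract's t=-5.372 was presumably computed with a pooled-variance test, so the Welch value comes out close but not identical.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's unequal-variance t statistic and Satterthwaite degrees of
    freedom for two independent samples summarized by mean, SD, and n."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Effective dose summaries from the abstract:
# sequential 4.88±1.77 mSv (n=17) vs. spiral 10.20±3.64 mSv (n=15)
t, df = welch_t(4.88, 1.77, 17, 10.20, 3.64, 15)
```

    Welch's form is the safer default here because the two groups' standard deviations differ by a factor of two.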

  13. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra.

  14. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.
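
    The core WITNESS contrast is easy to sketch: both procedures draw match values from the same memory distributions but differ in decision rule and criterion placement. The distributions and criteria below are illustrative, not Clark's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def lineup_match(target_present, n=6):
    """Match values in the spirit of WITNESS: foils drawn from N(0,1); when
    the guilty suspect is present (position 0) his match is N(1.2, 1)."""
    m = rng.normal(0.0, 1.0, n)
    if target_present:
        m[0] = rng.normal(1.2, 1.0)
    return m

def simultaneous(m, c):
    """Relative judgment: pick the best match if it clears the criterion."""
    i = int(np.argmax(m))
    return i if m[i] > c else None

def sequential(m, c):
    """Absolute judgment: the first member to clear the criterion is chosen."""
    for i, v in enumerate(m):
        if v > c:
            return i
    return None

def rates(rule, c, trials=20_000):
    hits = sum(rule(lineup_match(True), c) == 0 for _ in range(trials))
    fas = sum(rule(lineup_match(False), c) is not None for _ in range(trials))
    return hits / trials, fas / trials

hit_sim, fa_sim = rates(simultaneous, c=0.8)   # liberal simultaneous choosing
hit_seq, fa_seq = rates(sequential, c=1.4)     # conservative sequential choosing
```

    With this pairing the sequential rule trades some hits for a much lower choosing rate in target-absent lineups, which is exactly the criterion-based route to a "sequential advantage" described above.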

  15. A promising split-lesion technique for rapid tattoo removal using a novel sequential approach of a single sitting of pulsed CO(2) followed by Q-switched Nd: YAG laser (1064 nm).

    Science.gov (United States)

    Sardana, Kabir; Garg, Vijay K; Bansal, Shivani; Goel, Khushbu

    2013-12-01

    Laser tattoo removal conventionally uses Q-switched (QS) lasers, but they require multiple sittings, and the end results depend largely on the type of tattoo treated. In pigmented skin, owing to the competing epidermal pigment, laser treatment of tattoos is slow and the results inadequate. To evaluate the efficacy of the combined use of an ultrapulse CO2 laser and a QS Nd:YAG (1064 nm) laser in the treatment of tattoos in Indian skin, a split-lesion trial was carried out in five patients, with the left side of the tattoo receiving the QS Nd:YAG (1064 nm) laser and the right side a sequential combination of ultrapulse CO2 and QS Nd:YAG at 6-week intervals, with a maximum of six sittings. Outcome assessment was carried out by a blinded assessor using standardized photography. Physician improvement scores, side-effect scores, and patient satisfaction scores were recorded during and at the end of the study. There was a statistically significant improvement on the combination side (physician improvement score 3.7 vs. 1.87; P = 0.0019), which occurred earlier and with fewer sittings (1.7 vs. 6). There was no statistically significant difference in side effects. A combination of an ultrapulse CO2 laser with a QS Nd:YAG laser is a promising tool for rapid and effective removal of blue-black/blue amateur tattoos in pigmented skin. © 2013 Wiley Periodicals, Inc.

  16. Construction of computational program of aging in insulating materials for searching reversed sequential test conditions to give damage equivalent to simultaneous exposure of heat and radiation

    International Nuclear Information System (INIS)

    Fuse, Norikazu; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

    Two consecutive numerical calculations of degradation in polymeric insulations under thermal and radiation environments were carried out to simulate the so-called reversed sequential acceleration test. The aim of the calculation is to search for test conditions that produce material damage equivalent to that from simultaneous exposure to heat and radiation. At least the following four parameters need to be considered in the sequential method: dose rate and exposure time for the irradiation, as well as temperature and aging time for the heating. The present paper discusses the handling of these parameters and shows some trial calculation results. (author)
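
    One way such a search can be set up numerically: assume an additive Arrhenius-plus-radiation damage model and solve for the heating time that reproduces the simultaneous-exposure damage. The rate constants, activation energy, and the additivity assumption itself are all illustrative; real polymer data often show synergistic dose-rate effects that a production calculation must model explicitly.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def thermal_rate(T_kelvin, A=1e7, Ea=1.0):
    """Arrhenius thermal degradation rate; pre-factor A and activation
    energy Ea (eV) are assumed values, not measured ones."""
    return A * math.exp(-Ea / (K_B * T_kelvin))

def simultaneous_damage(T, dose_rate, hours, k_rad=2e-3):
    """Additive heat + radiation damage accumulated over an exposure."""
    return (thermal_rate(T) + k_rad * dose_rate) * hours

def equivalent_heating_time(T_seq, dose_rate, irr_hours, target, k_rad=2e-3):
    """Heating time at T_seq in an irradiate-then-heat sequential test that
    matches a given simultaneous-exposure damage level."""
    radiation_part = k_rad * dose_rate * irr_hours
    return (target - radiation_part) / thermal_rate(T_seq)

target = simultaneous_damage(T=390.0, dose_rate=100.0, hours=1000.0)
t_heat = equivalent_heating_time(T_seq=410.0, dose_rate=100.0,
                                 irr_hours=1000.0, target=target)
```

    Raising the sequential heating temperature above the simultaneous-exposure temperature shortens the required heating time, which is the trade-off the four-parameter search sweeps over.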

  17. Long-term Risedronate Treatment Normalizes Mineralization and Continues to Preserve Trabecular Architecture: Sequential Triple Biopsy Studies with Micro-Computed Tomography

    International Nuclear Information System (INIS)

    Borah, B.; Dufresne, T.; Ritman, E.; Jorgensen, S.; Liu, S.; Chmielewski, P.; Phipps, R.; Zhou, X.; Sibonga, J.; Turner, R.

    2006-01-01

    The objective of the study was to assess the time course of changes in bone mineralization and architecture using sequential triple biopsies from women with postmenopausal osteoporosis (PMO) who received long-term treatment with risedronate. Transiliac biopsies were obtained from the same subjects (n = 7) at baseline and after 3 and 5 years of treatment with 5 mg daily risedronate. Mineralization was measured using 3-dimensional (3D) micro-computed tomography (micro-CT) with synchrotron radiation and was compared to levels in healthy premenopausal women (n = 12). Compared to the untreated PMO women at baseline, the premenopausal women had higher average mineralization (Avg-MIN) and peak mineralization (Peak-MIN) by 5.8% (P = 0.003) and 8.0% (P = 0.003), respectively, and lower ratios of low- to high-mineralized bone volume (BMR-V) and surface area (BMR-S) by 73.3% (P = 0.005) and 61.7% (P = 0.003), respectively. Relative to baseline, 3 years of risedronate treatment significantly increased Avg-MIN (4.9 ± 1.1%, P = 0.016) and Peak-MIN (6.2 ± 1.5%, P = 0.016), and significantly decreased BMR-V (-68.4 ± 7.3%, P = 0.016) and BMR-S (-50.2 ± 5.7%, P = 0.016) in the PMO women. The changes were maintained at the same level when treatment was continued up to 5 years. These results are consistent with the significant reduction of turnover observed after 3 years of treatment, which was similarly maintained through 5 years of treatment. Risedronate restored the degree of mineralization and the ratios of low- to high-mineralized bone to premenopausal levels after 3 years of treatment, suggesting that treatment reduced bone turnover in PMO women to healthy premenopausal levels. Conventional micro-CT analysis further demonstrated that bone volume (BV/TV) and trabecular architecture did not change from baseline up to 5 years of treatment, suggesting that risedronate provided long-term preservation of trabecular architecture in the PMO women. Overall, risedronate provided sustained

  18. P-CARES 2.0.0, Probabilistic Computer Analysis for Rapid Evaluation of Structures

    International Nuclear Information System (INIS)

    2008-01-01

    1 - Description of program or function: P-CARES 2.0.0 (Probabilistic Computer Analysis for Rapid Evaluation of Structures) was developed for NRC staff use to determine the validity and accuracy of the analysis methods used by various utilities for structural safety evaluations of nuclear power plants. P-CARES provides the capability to effectively evaluate the probabilistic seismic response using simplified soil and structural models and to quickly check the validity and/or accuracy of the SSI data received from applicants and licensees. The code is organized in a modular format, with the basic modules of the system performing static, seismic, and nonlinear analysis. 2 - Methods: P-CARES is an update of the CARES program developed at Brookhaven National Laboratory during the 1980s. A major improvement is the enhanced analysis capability, in which a probabilistic algorithm has been implemented to perform the probabilistic site response and soil-structure interaction (SSI) analyses. This is accomplished using several sampling techniques such as Latin Hypercube sampling (LHC), engineering LHC, the Fekete Point Set method, and also the traditional Monte Carlo simulation. This new feature enhances the site response and SSI analysis such that the effect of uncertainty in local site soil properties can now be quantified. Another major addition to P-CARES is a graphical user interface (GUI), which significantly improves the performance of P-CARES in terms of the inter-relations among different functions of the program and facilitates input/output processing and execution management. It also provides many user-friendly features that allow an analyst to quickly develop insights from the analysis results. 3 - Restrictions on the complexity of the problem: None noted
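    As a concrete illustration of the stratified sampling named in the methods, here is a plain, textbook Latin Hypercube sampler over the unit hypercube. This is generic code, not the P-CARES implementation; the engineering LHC and Fekete Point Set variants are not shown.

```python
import random


def latin_hypercube(n_samples, n_dims, rng=None):
    """Plain Latin Hypercube sample on the unit hypercube.

    Each dimension is split into n_samples equal strata; every stratum
    is hit exactly once, with a uniform jitter inside it.
    """
    rng = rng or random.Random(0)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # independent stratum permutation per dimension
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples


# e.g. 10 samples over two uncertain soil properties (normalized to [0, 1))
pts = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, this guarantees that each marginal distribution is evenly covered even for small sample counts, which is why such schemes are attractive for costly site-response runs.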

  19. Ergonomic assessment of musculoskeletal disorders risk among the computer users by Rapid Upper Limb Assessment method

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2016-01-01

    Conclusion: This study showed that the frequency of musculoskeletal problems in the neck, back, elbow, and wrist was generally high among our subjects, and that ergonomic interventions such as redesigning computer workstations, educating users about ergonomic principles of computer work, and reducing the hours of computer work must be carried out.

  20. Rapid phenotyping of crop root systems in undisturbed field soils using X-ray computed tomography.

    Science.gov (United States)

    Pfeifer, Johannes; Kirchgessner, Norbert; Colombi, Tino; Walter, Achim

    2015-01-01

    X-ray computed tomography (CT) has become a powerful tool for root phenotyping. Compared to rather classical, destructive methods, CT offers various advantages. In pot experiments the growth and development of the same individual root can be followed over time, and in addition the unaltered configuration of the 3D root system architecture (RSA) interacting with a real field soil matrix can be studied. Yet the throughput, which is essential for a more widespread application of CT in basic research or breeding programs, suffers from the bottleneck of rapid and standardized segmentation methods to extract root structures. Using available methods, root segmentation is done to a large extent manually, as it requires a lot of interactive parameter optimization and interpretation, and therefore takes a lot of time. Based on commercially available software, this paper presents a protocol that is faster, more standardized, and more versatile compared to existing segmentation methods, particularly if used to analyse field samples collected in situ. To the authors' knowledge, this is the first study attempting to develop a comprehensive segmentation method suitable for comparatively large columns sampled in situ which contain complex, not necessarily connected root systems from multiple plants grown in undisturbed field soil. Root systems from several crops were sampled in situ, and CT volumes determined with the presented method were compared to the root dry matter of washed root samples. A highly significant (P < 0.01) and strong correlation (R² = 0.84) was found, demonstrating the value of the presented method in the context of field research. Subsequent to segmentation, a method for the measurement of root thickness distribution has been used. Root thickness is a central RSA trait for various physiological research questions such as root growth in compacted soil or under oxygen-deficient soil conditions, but hardly assessable in high throughput until today, due
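    The reported R² = 0.84 is a squared Pearson correlation between paired measurements (CT-derived root volume vs. dry matter of the washed roots). For reference, the computation on hypothetical paired data looks like this:

```python
def pearson_r2(xs, ys):
    """Squared Pearson correlation coefficient between paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)


# hypothetical CT volumes (cm^3) and root dry matter (g), illustration only
ct_volumes = [2.1, 3.4, 4.0, 5.2, 6.8]
dry_matter = [0.8, 1.3, 1.4, 2.0, 2.5]
r2 = pearson_r2(ct_volumes, dry_matter)
```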

  1. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    Science.gov (United States)

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereolithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  2. Rapid Detection of Biological and Chemical Threat Agents Using Physical Chemistry, Active Detection, and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung; Dong, Li; Fu, Rong; Liotta, Lance; Narayanan, Aarthi; Petricoin, Emanuel; Ross, Mark; Russo, Paul; Zhou, Weidong; Luchini, Alessandra; Manes, Nathan; Chertow, Jessica; Han, Suhua; Kidd, Jessica; Senina, Svetlana; Groves, Stephanie

    2007-01-01

    Basic technologies have been successfully developed within this project: rapid collection of aerosols and a rapid ultra-sensitive immunoassay technique. Water-soluble, humidity-resistant polyacrylamide nano-filters were shown to (1) capture aerosol particles as small as 20 nm, (2) work in humid air and (3) completely liberate their captured particles in an aqueous solution compatible with the immunoassay technique. The immunoassay technology developed within this project combines electrophoretic capture with magnetic bead detection. It allows detection of as few as 150-600 analyte molecules or viruses in only three minutes, something no other known method can duplicate. The technology can be used in a variety of applications where speed of analysis and/or extremely low detection limits are of great importance: in rapid analysis of donor blood for hepatitis, HIV and other blood-borne infections in emergency blood transfusions, in trace analysis of pollutants, or in search of biomarkers in biological fluids. Combined in a single device, the water-soluble filter and ultra-sensitive immunoassay technique may solve the problem of early warning type detection of aerosolized pathogens. These two technologies are protected with five patent applications and are ready for commercialization.

  3. Addressing unmet need for HIV testing in emergency care settings: a role for computer-facilitated rapid HIV testing?

    Science.gov (United States)

    Kurth, Ann E; Severynen, Anneleen; Spielberg, Freya

    2013-08-01

    HIV testing in emergency departments (EDs) remains underutilized. The authors evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Nonacute adult ED patients were randomly assigned to a computer tool (CARE) and rapid HIV testing before a standard visit (n = 258) or to a standard visit (n = 259) with chart access. The authors assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-White and 58% male; median age was 37 years. In the CARE arm, nearly all (251/258) of the patients completed the session and received HIV results; four declined to consent to the test. HIV risks were reported by 54% of users; one participant was confirmed HIV-positive, and two were confirmed false-positive (seroprevalence 0.4%, 95% CI [0.01, 2.2]). Half (55%) of the patients preferred computerized rather than face-to-face counseling for future HIV testing. In the standard arm, one HIV test and two referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches.
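    A confidence interval of the kind reported for the seroprevalence (one confirmed positive among roughly 251 completed sessions, 95% CI [0.01%, 2.2%]) is an exact binomial (Clopper-Pearson) interval. The sketch below reproduces it with only the standard library, by bisection on the binomial CDF; the denominator of 251 is taken from the abstract's completion count, which is an assumption about how the interval was computed.

```python
from math import comb


def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval via bisection."""
    def solve(f, lo, hi):
        # f is True below the root and False above it; bisect 60 times
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if f(mid):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = 0.0 if k == 0 else solve(
        lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2, 0.0, 1.0)
    upper = 1.0 if k == n else solve(
        lambda p: binom_cdf(k, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper


lo, hi = clopper_pearson(1, 251)  # one confirmed positive among 251 tested
```

With these inputs the interval works out to roughly 0.01% to 2.2%, matching the values in the abstract.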

  4. Rapid Computation of Thermodynamic Properties over Multidimensional Nonbonded Parameter Spaces Using Adaptive Multistate Reweighting.

    Science.gov (United States)

    Naden, Levi N; Shirts, Michael R

    2016-04-12

    We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, since neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free
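    The linear-basis-function idea at the heart of the speedup can be illustrated with a toy Lennard-Jones example: because U(σ, ε; x) = 4ε(σ¹² h₁₂(x) − σ⁶ h₆(x)), the two configuration-dependent sums h₁₂ and h₆ are computed once per stored configuration, and the energy at any (σ, ε) follows by a dot product with no re-simulation. The distances below are hypothetical, and the actual study also handles Coulomb terms and layers MBAR on top of this.

```python
def basis_sums(distances):
    """Configuration-dependent basis functions h12 = sum r^-12, h6 = sum r^-6."""
    h12 = sum(r ** -12 for r in distances)
    h6 = sum(r ** -6 for r in distances)
    return h12, h6


def lj_energy(sigma, eps, h12, h6):
    """Reconstruct the LJ energy for arbitrary parameters from the basis sums."""
    return 4.0 * eps * (sigma ** 12 * h12 - sigma ** 6 * h6)


# one stored configuration: its pair distances (made-up values)
dists = [1.0, 1.2, 1.5]
h12, h6 = basis_sums(dists)

# sweep a parameter grid at negligible cost, reusing the same basis sums
grid = [(s, e) for s in (0.9, 1.0, 1.1) for e in (0.5, 1.0)]
energies = {p: lj_energy(*p, h12, h6) for p in grid}
```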

  5. Computer aided detection of suspicious regions on digital mammograms : rapid segmentation and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, C; Giacomini, M; Sacile, R [DIST - Department of Communication Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova (Italy); Rosselli Del Turco, M [Centro per lo studio e la prevenzione oncologica, Firenze (Italy)

    1999-12-31

    A method is presented for rapid detection of suspicious regions, consisting of two steps. The first step is segmentation based on texture analysis, comprising histogram equalization, Laws filtering for texture analysis, Gaussian blur and median filtering to enhance differences between tissues in different respects, histogram thresholding to obtain a binary image, logical masking in order to detect regions to be discarded from the analysis, and edge detection. This method has been tested on 60 images, obtaining 93% successful detection of suspicious regions. (authors) 4 refs, 9 figs, 1 tab.
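    The "histogram thresholding to obtain a binary image" step in pipelines like this is commonly implemented with Otsu's method; that specific choice is an assumption here, since the abstract does not name the thresholding rule. A compact version:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's histogram threshold: pick t maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]          # pixels in the "background" class
        if w0 == 0:
            continue
        w1 = total - w0        # pixels in the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t


# toy bimodal image: dark tissue vs. bright candidate region
px = [10] * 100 + [200] * 100
t = otsu_threshold(px)
binary = [1 if p > t else 0 for p in px]
```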

  6. A computational tool for the rapid design and prototyping of propellers for underwater vehicles

    OpenAIRE

    D'Epagnier, Kathryn Port.

    2007-01-01

    An open source, MATLAB-based propeller design code, MPVL, was improved to include rapid prototyping capabilities as well as other upgrades as part of this effort. The resulting code, OpenPVL, is described in this thesis. In addition, results from the development code BasicPVL are presented. An intermediate code, BasicPVL, was created by the author while OpenPVL was under development, and it provides guidance for initial propeller designs and propeller efficiency analysis. OpenPVL i...

  7. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method called the complex-plane strategy is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method called the multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  8. Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs

    Science.gov (United States)

    Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan

    2014-01-01

    Computational Fluid Dynamics (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV) Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and airworthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier-Stokes (N-S) flow solvers (USM3D, Kestrel, and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then the data can be replaced or adjusted as predictions from higher-order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown, and the methods will be evaluated on their utility during each portion of the flight envelope.

  9. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing the sequential reaction products in F82H, pure vanadium, and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated ones. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections correspond to the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones. There were large discrepancies between estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  10. An Open Source Rapid Computer Aided Control System Design Toolchain Using Scilab, Scicos and RTAI Linux

    Science.gov (United States)

    Bouchpan-Lerust-Juéry, L.

    2007-08-01

    Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy ...) which must meet high standards of reliability and predictability as well as safety. All these requirements demand a considerable amount of effort and cost from the space software industry. This paper first presents a free Open Source integrated solution for developing RTAI applications covering analysis, design, simulation, and direct implementation using code generation based on Open Source tools; it then summarises the suggested approach, its results, and the conclusions for further work.

  11. [COMPUTER ASSISTED DESIGN AND ELECTRON BEAM MELTING RAPID PROTOTYPING METAL THREE-DIMENSIONAL PRINTING TECHNOLOGY FOR PREPARATION OF INDIVIDUALIZED FEMORAL PROSTHESIS].

    Science.gov (United States)

    Liu, Hongwei; Weng, Yiping; Zhang, Yunkun; Xu, Nanwei; Tong, Jing; Wang, Caimei

    2015-09-01

    To study the feasibility of preparing an individualized femoral prosthesis through computer assisted design and electron beam melting rapid prototyping (EBM-RP) metal three-dimensional (3D) printing technology. One adult male left femur specimen was scanned with 64-slice spiral CT; the tomographic image data were imported into Mimics 15.0 software to reconstruct a 3D femoral model, and the 3D model of the individualized femoral prosthesis was then designed with UG 8.0 software. Finally, the 3D model data were imported into an EBM-RP metal 3D printer to print the individualized sleeve. According to the 3D model of the individualized prosthesis, a customized sleeve was successfully prepared through EBM-RP metal 3D printing and assembled with the standard handle component of the SR modular femoral prosthesis to make the individualized femoral prosthesis. A customized femoral prosthesis accurately matching the metaphyseal cavity can be designed through thin-slice CT scanning and computer assisted design technology. A titanium alloy personalized prosthesis with a complex 3D shape, porous surface, and good matching with the metaphyseal cavity can be manufactured by EBM-RP metal 3D printing, and the technology offers the advantages of convenience, speed, and accuracy.

  12. Accuracy of using computer-aided rapid prototyping templates for mandible reconstruction with an iliac crest graft

    Science.gov (United States)

    2014-01-01

    Background This study aimed to evaluate the accuracy of surgical outcomes in free iliac crest mandibular reconstructions that were carried out with virtual surgical plans and rapid prototyping templates. Methods This study evaluated eight patients who underwent mandibular osteotomy and reconstruction with free iliac crest grafts using virtual surgical planning and designed guiding templates. Operations were performed using the prefabricated guiding templates. Postoperative three-dimensional computer models were overlaid and compared with the preoperatively designed models in the same coordinate system. Results Compared to the virtual osteotomy, the mean error of distance of the actual mandibular osteotomy was 2.06 ± 0.86 mm. When compared to the virtual harvested grafts, the mean error volume of the actual harvested grafts was 1412.22 ± 439.24 mm3 (9.12% ± 2.84%). The mean error between the volume of the actual harvested grafts and the shaped grafts was 2094.35 ± 929.12 mm3 (12.40% ± 5.50%). Conclusions The use of computer-aided rapid prototyping templates for virtual surgical planning appears to positively influence the accuracy of mandibular reconstruction. PMID:24957053

  13. Renal parenchyma thickness: a rapid estimation of renal function on computed tomography

    International Nuclear Information System (INIS)

    Kaplon, Daniel M.; Lasser, Michael S.; Sigman, Mark; Haleblian, George E.; Pareek, Gyan

    2009-01-01

    Purpose: To define the relationship between renal parenchyma thickness (RPT) on computed tomography and renal function on nuclear renography in chronically obstructed renal units (ORUs), and to define a minimal thickness ratio associated with adequate function. Materials and Methods: Twenty-eight consecutive patients undergoing both nuclear renography and CT during a six-month period between 2004 and 2006 were included. All patients with a diagnosis of unilateral obstruction were included for analysis. RPT was measured in the following manner: the parenchyma thickness at three discrete levels of each kidney was measured using calipers on a CT workstation, and the mean of these three measurements was defined as RPT. The RPT ratio of the ORUs to the non-obstructed renal units (NORUs) was calculated, and this was compared to the observed function on MAG-3 Lasix renogram. Results: A total of 28 patients were evaluated. Mean parenchyma thickness was 1.82 cm and 2.25 cm in the ORUs and NORUs, respectively. The mean relative renal function of ORUs was 39%. Linear regression analysis comparing renogram function to RPT ratio revealed a correlation coefficient of 0.48; a thickness ratio of 0.68 corresponded to 20% renal function. Conclusion: RPT on computed tomography appears to be a powerful predictor of relative renal function in ORUs. Assessment of RPT is a useful and readily available clinical tool for surgical decision making (renal salvage therapy versus nephrectomy) in patients with ORUs. (author)

  14. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
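    The Green-Kubo formalism referred to above obtains κ = V/(k_B T²) ∫₀^∞ ⟨J(0)·J(t)⟩ dt from the heat-flux autocorrelation function. A bare-bones numerical version of that recipe follows, with a synthetic flux series standing in for an MD heat flux; units, windowing, and convergence handling (the hard parts the abstract addresses) are deliberately glossed over.

```python
import random


def hfacf(flux, max_lag):
    """Unnormalized heat-flux autocorrelation <J(0) J(t)> at integer lags."""
    n = len(flux)
    return [sum(flux[i] * flux[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]


def green_kubo_kappa(flux, dt, volume, temperature, max_lag=200,
                     k_b=1.380649e-23):
    """Trapezoidal integral of the HFACF, scaled by V / (k_B T^2)."""
    acf = hfacf(flux, max_lag)
    integral = dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))
    return volume / (k_b * temperature ** 2) * integral


# synthetic flux series in place of an ab initio MD heat flux (illustrative)
rng = random.Random(1)
flux = [rng.gauss(0.0, 1.0) for _ in range(5000)]
kappa = green_kubo_kappa(flux, dt=1e-15, volume=1e-27, temperature=300.0)
```

In real GK analyses the slow decay of the HFACF is exactly what makes time and size convergence expensive, which is the bottleneck the accelerated formalism above targets.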

  15. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

    International Nuclear Information System (INIS)

    Ermer, J.J.; Mosher, J.C.; Baillet, S.; Leahy, R.M.

    2001-01-01

    Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace based inverse methods like MUSIC (6), the total number of forward model evaluations can often approach an order of 10³ or 10⁴. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form: F = GQ + N (1) where the observed forward field F (M-sensors x N-time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3xP-dipoles x N-time samples) and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models (7) (or fast approximations described in (1), (7)) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, the use of a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres results in added complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the scalp
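    Under the linear model F = GQ + N quoted above, the dipole moment for a known source position is the least-squares solution Q̂ = (GᵀG)⁻¹GᵀF. The sketch below builds a tiny made-up gain matrix (four sensors, one dipole, noise-free data) and recovers the moment with standard-library linear algebra only; it illustrates one forward/inverse evaluation, not the MUSIC scan itself.

```python
def matmul_t(a, b):
    """Compute a^T b for lists-of-rows a (m x p) and b (m x q)."""
    m, p, q = len(a), len(a[0]), len(b[0])
    return [[sum(a[k][i] * b[k][j] for k in range(m)) for j in range(q)]
            for i in range(p)]


def solve(a, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x


# hypothetical 4-sensor gain matrix for a single dipole (3 moment components)
G = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [1.0, 1.0, 0.0],
     [0.5, 0.5, 1.0]]
q_true = [2.0, -1.0, 0.5]
F = [sum(G[i][j] * q_true[j] for j in range(3)) for i in range(4)]  # noise-free

gtg = matmul_t(G, G)                                  # G^T G  (3 x 3)
gtf = [row[0] for row in matmul_t(G, [[f] for f in F])]  # G^T F
q_hat = solve(gtg, gtf)                               # normal equations
```

In practice G comes from the head model (spherical or tessellated realistic), and subspace methods such as MUSIC repeat evaluations like this over thousands of candidate source locations, which is why rapidly re-computable forward models matter.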

  16. Evaluation of apical canal shapes produced sequentially during instrumentation with stainless steel hand and Ni-Ti rotary instruments using Micro-computed tomography

    Directory of Open Access Journals (Sweden)

    Woo-Jin Lee

    2011-05-01

    Objectives: The purpose of this study was to determine the optimal master apical file size with minimal transportation and optimal efficiency in removing infected dentin. We evaluated the transportation of the canal center and the change in untouched areas after sequential preparation with #25 to #40 files using 3 different instruments: stainless steel K-type hand files (SS K-file), ProFile, and LightSpeed, using micro-computed tomography (MCT). Materials and Methods: Thirty extracted human mandibular molars with separated orifices and apical foramens on the mesial canals were used. Teeth were randomly divided into three groups (SS K-file, ProFile, LightSpeed) and the root canals were instrumented with the corresponding instruments from #20 to #40. All teeth were scanned with MCT before and after instrumentation. Cross-section images were used to evaluate canal transportation and untouched area at the 1-, 2-, 3-, and 5-mm levels from the apex. Data were statistically analyzed according to a 'repeated nested design' and the Mann-Whitney test (p = 0.05). Results: In the SS K-file group, canal transportation was significantly increased with instruments over #30. In the ProFile group, canal transportation was significantly increased after preparation with the #40 instrument at the 1- and 2-mm levels. The LightSpeed group showed better centering ability than the ProFile group after preparation with the #40 instrument at the 1- and 2-mm levels. Conclusions: SS K-file, ProFile, and LightSpeed showed differences in the degree of apical transportation depending on the size of the master apical file.
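    Canal-center transportation of the kind measured here is typically quantified from segmented pre- and post-instrumentation cross-sections. Below is a generic centroid-shift metric on toy binary masks; this is one common formulation, not necessarily the exact formula used in the study.

```python
def centroid(mask):
    """Centroid (row, col) of a binary 2D mask given as lists of 0/1."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)


def transportation(pre_mask, post_mask):
    """Euclidean distance the canal center moved after instrumentation."""
    (r0, c0), (r1, c1) = centroid(pre_mask), centroid(post_mask)
    return ((r1 - r0) ** 2 + (c1 - c0) ** 2) ** 0.5


# toy 5x5 cross-sections: canal lumen before and after preparation
pre = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
post = [[0, 0, 0, 0, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
shift = transportation(pre, post)  # the canal center moved one pixel laterally
```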

  17. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    Science.gov (United States)

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods relies greatly on the skill and experience of the dental technician. The quality and accuracy of the final product depend mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model, based on a layer-based imaging technique called abrasive computer tomography (ACT), was designed in-house and proposed for the design of custom dental restorations. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT-scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge, MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed-restoration manufacturing protocol integrating the proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification, and advanced fixed-restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  18. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.

  19. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding, as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC), an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, shows a considerable reduction in computational demand compared to the exact CGC, and produces very similar patterns in simulated data. We implemented our BSMC model within the new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots.
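    The sequential Markov idea that the BSMC builds on can be illustrated in its simplest setting: for two sequences under the plain SMC, the pairwise coalescence time evolves as a Markov process along the genome. The sketch below is a hypothetical toy implementation of that idea (crossover recombination, not the authors' BSMC or FastSimBac); the function name and the scaled recombination rate `rho` are illustrative assumptions.

```python
import numpy as np

def pairwise_smc(seq_len, rho, seed=0):
    """Toy sketch of the sequential Markov coalescent (SMC) for TWO
    sequences: moving left to right, the current coalescence time t
    persists until a recombination event, where the branch is cut at a
    uniform height and the detached lineage recoalesces."""
    rng = np.random.default_rng(seed)
    breakpoints, times = [0.0], []
    t = rng.exponential(1.0)                 # coalescence time at the left end
    pos = 0.0
    while True:
        # recombination events occur along the sequence at rate rho * t
        pos += rng.exponential(1.0 / (rho * t))
        if pos >= seq_len:
            break
        times.append(t)
        breakpoints.append(pos)
        u = rng.uniform(0.0, t)              # cut the branch at uniform height
        t = u + rng.exponential(1.0)         # detached lineage recoalesces
    times.append(t)
    breakpoints.append(float(seq_len))
    return breakpoints, times
```

The Markov property along the sequence is what makes this linear-time, in contrast to the exact CGC, which must track correlations between all pairs of loci.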

  20. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms.

    Science.gov (United States)

    Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian

    2017-09-29

    Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, only considering genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null hypothesis, which avoids the costly computation of matrix inversions. Additional tricks are a rearrangement of the iteration order through the different "omics" layers and implementation of the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/.
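    The correlation-based shortcut described above can be sketched in a few lines: by the Frisch-Waugh-Lovell theorem, the t-test for the interaction coefficient in y ~ x + z + x·z is equivalent to a correlation test between the residuals of y and of x·z after regressing out the main effects. The following is an illustrative Python reimplementation of that idea (pulver itself is written in R and C++); the function name is hypothetical.

```python
import numpy as np
from scipy import stats

def interaction_pvalue(x, z, y):
    """p-value for the x*z interaction term in y ~ x + z + x*z,
    computed via the residual correlation (Frisch-Waugh-Lovell)."""
    X = np.column_stack([np.ones_like(x), x, z])         # main-effects design
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]    # residualize y
    rxz = x * z - X @ np.linalg.lstsq(X, x * z, rcond=None)[0]
    r = np.corrcoef(rxz, ry)[0, 1]
    df = len(y) - X.shape[1] - 1                         # n - 4 for the full model
    t = r * np.sqrt(df / (1.0 - r * r))
    return 2.0 * stats.t.sf(abs(t), df)
```

Because only a correlation and two least-squares fits against a fixed design are needed, the expensive per-pair matrix inversion of a full regression is avoided, which is the essence of the speed-up the abstract describes.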

  1. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
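    The generalized Lotka-Volterra dynamics underlying such binding networks can be simulated directly. The sketch below is a hypothetical minimal example integrating dx_i/dt = x_i (sigma_i - sum_j rho_ij x_j) with a simple Euler scheme; the growth rates `sigma` and interaction matrix `rho` are illustrative placeholders, not the paper's values.

```python
import numpy as np

def simulate_glv(x0, sigma, rho, dt=1e-3, steps=20000):
    """Euler integration of the generalized Lotka-Volterra system
    dx_i/dt = x_i * (sigma_i - sum_j rho_ij * x_j)."""
    x = np.asarray(x0, float).copy()
    for _ in range(steps):
        x = x + dt * x * (sigma - rho @ x)
        x = np.maximum(x, 0.0)   # clip numerical undershoot below zero
    return x
```

With `sigma = 1` and `rho = I` the system decouples into logistic equations that approach the fixed point x_i = 1; asymmetric choices of `rho` produce the saddle equilibria and heteroclinic switching between them that the abstract describes.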

  2. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  3. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  4. Case of small hepatocellular carcinoma detected only by dynamic sequential computed tomography with table incrementation during arterial portography (DSCTI-AP)

    Energy Technology Data Exchange (ETDEWEB)

    Nishijima, Hiroshi; Ida, Masahiro; Takayama, Shigeru; Yanaghi, Sekiya; Kuroda, Yuzuru [Fukui Saiseikai Hospital (Japan)

    1984-05-01

    A 62-year-old woman was admitted for liver cirrhosis with esophageal varices. Radionuclide liver scintigraphy, ultrasonography, computed tomography and infusion hepatic angiography showed no signs of hepatocellular carcinoma (HCC), but DSCTI-AP detected a small solid mass lesion about 1.5 cm in diameter in the right lobe of the liver. It was difficult to diagnose this lesion as HCC because of the low titer of serum α1-fetoprotein. After esophageal transection for the esophageal varices, she was followed as an outpatient. Six months later, serum α1-fetoprotein was slightly elevated (130 ng/ml), and all imaging methods were carried out for the detection of HCC. Ultrasonography, computed tomography, infusion hepatic angiography and DSCTI-AP all showed a hepatocellular carcinoma, about 4.5 cm in diameter, at the same site where a small nodule had been visualized by the DSCTI-AP performed 5 months earlier. This case suggests that DSCTI-AP is a sensitive and useful method for the detection of some types of small liver cancer.

  5. Focal Stenosis in Right Upper Lobe Bronchus in a Recurrently Wheezing Child Sequentially Studied by Multidetector-row Spiral Computed Tomography and Scintigraphy

    Directory of Open Access Journals (Sweden)

    I-Chen Chen

    2009-12-01

    Lower respiratory tract infections associated with wheezing are not uncommon in infants and young children. Among the wheezing-associated disorders, allergic etiologies are more commonly encountered than anatomic anomalies. We present a 3-year-old girl with a sudden attack of asthmatic symptoms including dyspnea, cyanosis and diffuse wheezing. Based on a history of choking, and atelectasis in the right upper lobe detected by chest films, flexible tracheobronchoscopy was arranged and incidentally detected a stenotic orifice in the right upper lobe bronchus. Multidetector-row spiral computed tomography and pulmonary scintigraphy subsequently also disclosed the focal stenosis. She suffered from recurrent wheezing, pneumonia and lung atelectasis during 1 year of follow-up. We emphasize the diagnosis, clinical course and management of focal stenosis in the right upper lobe bronchus.

  6. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also present a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  7. Novel method of fabricating individual trays for maxillectomy patients by computer-aided design and rapid prototyping.

    Science.gov (United States)

    Huang, Zhi; Wang, Xin-zhi; Hou, Yue-Zhong

    2015-02-01

    Making impressions for maxillectomy patients is an essential but difficult task. This study developed a novel method to fabricate individual trays by computer-aided design (CAD) and rapid prototyping (RP) to simplify the process and enhance patient safety. Five unilateral maxillectomy patients were recruited for this study. For each patient, a computed tomography (CT) scan was taken. Based on the 3D surface reconstruction of the target area, an individual tray was manufactured by CAD/RP. With a conventional custom tray as control, two final impressions were made using the different types of tray for each patient. The trays were sectioned, and in each section the thickness of the material was measured at six evenly distributed points. Descriptive statistics and paired t-test were used to examine the difference of the impression thickness. SAS 9.3 was applied in the statistical analysis. Afterwards, all casts were then optically 3D scanned and compared digitally to evaluate the feasibility of this method. Impressions of all five maxillectomy patients were successfully made with individual trays fabricated by CAD/RP and traditional trays. The descriptive statistics of impression thickness measurement showed slightly more uneven results in the traditional trays, but no statistical significance was shown. A 3D digital comparison showed acceptable discrepancies within 1 mm in the majority of cast areas. The largest difference of 3 mm was observed in the buccal wall of the defective areas. Moderate deviations of 1 to 2 mm were detected in the buccal and labial vestibular groove areas. This study confirmed the feasibility of a novel method of fabricating individual trays by CAD/RP. Impressions made by individual trays manufactured using CAD/RP had a uniform thickness, with an acceptable level of accuracy compared to those made through conventional processes. © 2014 by the American College of Prosthodontists.

  8. Replacing Heavily Damaged Teeth by Third Molar Autotransplantation With the Use of Cone-Beam Computed Tomography and Rapid Prototyping.

    Science.gov (United States)

    Verweij, Jop P; Anssari Moin, David; Wismeijer, Daniel; van Merkesteyn, J P Richard

    2017-09-01

    This article describes the autotransplantation of third molars to replace heavily damaged premolars and molars. Specifically, this article reports on the use of preoperative cone-beam computed tomographic planning and 3-dimensional (3D) printed replicas of donor teeth to prepare artificial tooth sockets. In the present case, an 18-year-old patient underwent autotransplantation of 3 third molars to replace 1 premolar and 2 molars that were heavily damaged after trauma. Approximately 1 year after the traumatic incident, autotransplantation with the help of 3D planning and rapid prototyping was performed. The right maxillary third molar replaced the right maxillary first premolar. The 2 mandibular wisdom teeth replaced the left mandibular first and second molars. During the surgical procedure, artificial tooth sockets were prepared with the help of 3D printed donor tooth copies to prevent iatrogenic damage to the actual donor teeth. These replicas of the donor teeth were designed based on the preoperative cone-beam computed tomogram and manufactured with the help of 3D printing techniques. The use of a replica of the donor tooth resulted in a predictable and straightforward procedure, with extra-alveolar times shorter than 2 minutes for all transplantations. The transplanted teeth were placed in infraocclusion and fixed with a suture splint. Postoperative follow-up showed physiologic integration of the transplanted teeth and a successful outcome for all transplants. In conclusion, this technique facilitates a straightforward and predictable procedure for autotransplantation of third molars. The use of printed analogues of the donor teeth decreases the risk of iatrogenic damage and the extra-alveolar time of the transplanted tooth is minimized. This facilitates a successful outcome. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  10. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  11. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    International Nuclear Information System (INIS)

    Um, Ki Doo; Lee, Byung Do

    2004-01-01

    This study evaluated the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken with a multi-detector spiral CT. Slice thicknesses were 1, 2, 3 and 4 mm, respectively. Three-dimensional image model reconstruction using 3-D visualization medical software (V-works 3.0) and RP model fabrication followed. The two RP models were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were made and compared according to slice thickness and RP model type. The relative error percentages, in absolute value, between linear measurements of the dry skull and the image models of 1, 2, and 3 mm slice thickness were 0.97, 1.98, and 3.83, respectively. The relative error percentage, in absolute value, between linear measurements of the dry skull and the SLA model was 0.79. The relative error difference, in absolute value, between linear measurements of the dry skull and the 3D printing model was 2.52. These results indicate that a 3-dimensional image model with thin slice thickness and a stereolithographic RP model show relatively high accuracy.


  13. Autotransplantation of immature third molars using a computer-aided rapid prototyping model: a report of 4 cases.

    Science.gov (United States)

    Jang, Ji-Hyun; Lee, Seung-Jong; Kim, Euiseong

    2013-11-01

    Autotransplantation of immature teeth can be an option for premature tooth loss in young patients as an alternative to immediately replacing teeth with fixed or implant-supported prostheses. The present case series reports 4 successful autotransplantation cases using computer-aided rapid prototyping (CARP) models with immature third molars. The compromised upper and lower molars (n = 4) of patients aged 15-21 years were transplanted with third molars using CARP models. Postoperatively, the pulp vitality and the development of the roots were examined clinically and radiographically. The patient follow-up period was 2-7.5 years after surgery. The long-term follow-up showed that all of the transplants were asymptomatic and functional. Radiographic examination indicated that the apices developed continuously and the root length and thickness increased. The final follow-up examination revealed that all of the transplants retained their vitality, and the apices were fully developed with normal periodontal ligaments and trabecular bony patterns. Based on long-term follow-up observations, our 4 cases of autotransplantation of immature teeth using CARP models resulted in favorable prognoses. The CARP model assisted in minimizing the extraoral time and the possible Hertwig epithelial root sheath injury of the transplanted tooth. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  14. Innovative procedure for computer-assisted genioplasty: three-dimensional cephalometry, rapid-prototyping model and surgical splint.

    Science.gov (United States)

    Olszewski, R; Tranduy, K; Reychler, H

    2010-07-01

    The authors present a new procedure of computer-assisted genioplasty. They determined the anterior, posterior and inferior limits of the chin in relation to the skull and face with the newly developed and validated three-dimensional cephalometric planar analysis (ACRO 3D). Virtual planning of the osteotomy lines was carried out with Mimics (Materialize) software. The authors built a three-dimensional rapid-prototyping multi-position model of the chin area from a medical low-dose CT scan. The transfer of virtual information to the operating room consisted of two elements. First, the titanium plates were pre-bent on the 3D RP model. Second, a surgical guide for the transfer of the osteotomy lines and the positions of the screws to the operating room was manufactured. The authors present the first case of the use of this model on a patient. The postoperative results are promising, and the technique is fast and easy to use. More patients are needed for a definitive clinical validation of this procedure. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Cone-beam computed tomography evaluation of dental, skeletal, and alveolar bone changes associated with bonded rapid maxillary expansion

    Directory of Open Access Journals (Sweden)

    Namrata Dogra

    2016-01-01

    Aims and Objectives: To evaluate skeletal changes in the maxilla and its surrounding structures, changes in the maxillary dentition, and maxillary alveolar bone changes produced by bonded rapid maxillary expansion (RME) using cone-beam computed tomography (CBCT). Materials and Methods: The sample consisted of 10 patients (6 males and 4 females) with an age range of 12 to 15 years treated with bonded RME. CBCT scans were performed at T1 (pretreatment) and at T2 (immediately after expansion) to evaluate the dental, skeletal, and alveolar bone changes. Results: RME treatment increased the overall skeletal parameters such as interorbital, zygomatic, nasal, and maxillary widths. A significant increase in buccal maxillary width was observed at the first premolar, second premolar, and first molar levels. There was a significant increase in arch width on both the palatal and buccal sides. Significant tipping of the right and left maxillary first molars was seen. There was a significant reduction in buccal bone plate thickness and an increase in palatal bone plate thickness. Conclusions: The total expansion achieved with RME was a combination of dental, skeletal and alveolar bone changes. At the first molar level, 28.45% orthopedic, 16.03% alveolar bone bending, and 55.5% orthodontic changes were observed.

  16. A rapid, computational approach for assessing interfraction esophageal motion for use in stereotactic body radiation therapy planning

    Directory of Open Access Journals (Sweden)

    Michael L. Cardenas, MD

    2018-04-01

    Purpose: We present a rapid computational method for quantifying interfraction motion of the esophagus in patients undergoing stereotactic body radiation therapy on a magnetic resonance (MR) guided radiation therapy system. Methods and materials: Patients who underwent stereotactic body radiation therapy had simulation computed tomography (CT) and on-treatment MR scans performed. The esophagus was contoured on each scan. CT contours were transferred to MR volumes via rigid registration. Digital Imaging and Communications in Medicine files containing contour points were exported to MATLAB. In-plane CT and MR contour points were spline interpolated, yielding boundaries with centroid positions C_CT and C_MR. MR contour points lying outside of the CT contour were extracted. For each such point B_MR(j), a segment from C_CT intersecting B_MR(j) was produced; its intersection with the CT contour, B_CT(i), was calculated. The length of the segment S_ij, between B_CT(i) and B_MR(j), was found. The orientation θ was calculated from the components of S_ij: θ = arctan(S_ij,y / S_ij,x). A set of segments {S_ij} was produced for each slice and binned by quadrant, with 0° < θ ≤ 90°, 90° < θ ≤ 180°, 180° < θ ≤ 270°, and 270° < θ ≤ 360° corresponding to the left anterior, right anterior, right posterior, and left posterior quadrants, respectively. Slices were binned into upper, middle, and lower esophageal (LE) segments. Results: Seven patients, each having 3 MR scans, were evaluated, yielding 1629 axial slices and 84,716 measurements. The LE segment exhibited the greatest magnitude of motion. The mean LE measurements in the left anterior, left posterior, right anterior, and right posterior quadrants were 5.2 ± 0.07 mm, 6.0 ± 0.09 mm, 4.8 ± 0.08 mm, and 5.1 ± 0.08 mm, respectively. There was considerable interpatient variability. Conclusions: The LE segment exhibited the greatest magnitude of mobility compared with the middle and upper esophageal segments.
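    The segment-length and quadrant-binning step can be sketched computationally; using atan2 rather than a plain arctan avoids quadrant ambiguity when the displacement has a negative x component. This is an illustrative reconstruction in Python (the study used MATLAB), with hypothetical function and variable names.

```python
import numpy as np

QUADRANTS = ("left anterior", "right anterior", "right posterior", "left posterior")

def bin_segments(c_ct, boundary_mr):
    """For MR boundary points outside the CT contour, measure the
    displacement segment from the CT centroid and bin it by quadrant:
    (0, 90] LA, (90, 180] RA, (180, 270] RP, (270, 360] LP."""
    d = np.asarray(boundary_mr, float) - np.asarray(c_ct, float)
    lengths = np.hypot(d[:, 0], d[:, 1])                  # |S_ij|
    theta = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    quadrant = np.digitize(theta, [90.0, 180.0, 270.0], right=True)
    return lengths, theta, quadrant
```

Averaging `lengths` per quadrant and per esophageal level then reproduces the kind of per-quadrant motion summary reported in the results.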

  17. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
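    The telescoping identity at the heart of MLMC, E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], can be illustrated with a toy example. Below, P_l is the midpoint projection of a Uniform(0,1) draw onto a grid of width h_l = 2^-l, and the coupling reuses the same draws at adjacent levels so that the correction terms have small variance. This is a didactic sketch under invented settings, not the paper's SMC-based sampler.

```python
import numpy as np

def discretize(u, level):
    """Midpoint projection of u onto a grid with spacing h_l = 2**-level."""
    h = 2.0 ** -level
    return np.floor(u / h) * h + h / 2.0

def mlmc_mean(L, n_per_level, seed=0):
    """Estimate E[P_L] via the MLMC telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}], coupling adjacent levels
    by evaluating both on the same uniform draws."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for level in range(L + 1):
        u = rng.random(n_per_level)          # shared draws = the coupling
        if level == 0:
            est += discretize(u, 0).mean()
        else:
            est += (discretize(u, level) - discretize(u, level - 1)).mean()
    return est
```

Because |P_l - P_{l-1}| is bounded by the coarser grid spacing, the finer levels need only small correction estimates, which is where MLMC saves computational effort relative to sampling everything at the finest level h_L.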


  19. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  20. Assessment of Universal Healthcare Coverage in a District of North India: A Rapid Cross-Sectional Survey Using Tablet Computers.

    Science.gov (United States)

    Singh, Tarundeep; Roy, Pritam; Jamir, Limalemla; Gupta, Saurav; Kaur, Navpreet; Jain, D K; Kumar, Rajesh

    2016-01-01

    A rapid survey was carried out in Shaheed Bhagat Singh Nagar District of Punjab state in India to ascertain health seeking behavior and out-of-pocket health expenditures. Using multistage cluster sampling design, 1,008 households (28 clusters x 36 households in each cluster) were selected proportionately from urban and rural areas. Households were selected through a house-to-house survey during April and May 2014 whose members had (a) experienced illness in the past 30 days, (b) had illness lasting longer than 30 days, (c) were hospitalized in the past 365 days, or (d) had women who were currently pregnant or experienced childbirth in the past two years. In these selected households, trained investigators, using a tablet computer-based structured questionnaire, enquired about the socio-demographics, nature of illness, source of healthcare, and healthcare and household expenditure. The data was transmitted daily to a central server using wireless communication network. Mean healthcare expenditures were computed for various health conditions. Catastrophic healthcare expenditure was defined as more than 10% of the total annual household expenditure on healthcare. Chi square test for trend was used to compare catastrophic expenditures on hospitalization between households classified into expenditure quartiles. The mean monthly household expenditure was 15,029 Indian Rupees (USD 188.2). Nearly 14.2% of the household expenditure was on healthcare. Fever, respiratory tract diseases, gastrointestinal diseases were the common acute illnesses, while heart disease, diabetes mellitus, and respiratory diseases were the more common chronic diseases. Hospitalizations were mainly due to cardiovascular diseases, gastrointestinal problems, and accidents. Only 17%, 18%, 20% and 31% of the healthcare for acute illnesses, chronic illnesses, hospitalizations and childbirth was sought in the government health facilities. 
Average expenditure in government health facilities was 16.6% less

  1. Assessment of Universal Healthcare Coverage in a District of North India: A Rapid Cross-Sectional Survey Using Tablet Computers.

    Directory of Open Access Journals (Sweden)

    Tarundeep Singh

Full Text Available A rapid survey was carried out in Shaheed Bhagat Singh Nagar District of Punjab state in India to ascertain health seeking behavior and out-of-pocket health expenditures. Using a multistage cluster sampling design, 1,008 households (28 clusters x 36 households in each cluster) were selected proportionately from urban and rural areas. Households were selected through a house-to-house survey during April and May 2014 whose members had (a) experienced illness in the past 30 days, (b) had illness lasting longer than 30 days, (c) were hospitalized in the past 365 days, or (d) had women who were currently pregnant or had experienced childbirth in the past two years. In these selected households, trained investigators, using a tablet computer-based structured questionnaire, enquired about the socio-demographics, nature of illness, source of healthcare, and healthcare and household expenditure. The data were transmitted daily to a central server using a wireless communication network. Mean healthcare expenditures were computed for various health conditions. Catastrophic healthcare expenditure was defined as healthcare spending exceeding 10% of the total annual household expenditure. A chi-square test for trend was used to compare catastrophic expenditures on hospitalization between households classified into expenditure quartiles. The mean monthly household expenditure was 15,029 Indian Rupees (USD 188.2). Nearly 14.2% of the household expenditure was on healthcare. Fever, respiratory tract diseases, and gastrointestinal diseases were the common acute illnesses, while heart disease, diabetes mellitus, and respiratory diseases were the more common chronic diseases. Hospitalizations were mainly due to cardiovascular diseases, gastrointestinal problems, and accidents. Only 17%, 18%, 20% and 31% of the healthcare for acute illnesses, chronic illnesses, hospitalizations and childbirth, respectively, was sought in government health facilities.
Average expenditure in government health facilities was
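The survey's catastrophic-expenditure definition and quartile classification can be sketched in a few lines. The function names, update gain, and household figures below are illustrative, not taken from the study's dataset:

```python
# Sketch of the catastrophic-expenditure definition used in the survey:
# a household's annual healthcare spending is "catastrophic" when it
# exceeds 10% of total annual household expenditure. All figures below
# are invented for illustration.

def is_catastrophic(health_spend, total_spend, threshold=0.10):
    """Return True if healthcare spending exceeds the threshold share."""
    return health_spend > threshold * total_spend

def quartile(value, cutoffs):
    """Assign a household to an expenditure quartile (0..3) given the
    three quartile cutoffs of the sample distribution."""
    return sum(value > c for c in cutoffs)

households = [
    {"total": 120000, "health": 9000},   # 7.5%  -> not catastrophic
    {"total": 90000,  "health": 15000},  # 16.7% -> catastrophic
    {"total": 200000, "health": 30000},  # 15%   -> catastrophic
]
flags = [is_catastrophic(h["health"], h["total"]) for h in households]
print(flags)  # [False, True, True]
print(quartile(95000, cutoffs=[80000, 120000, 180000]))  # 1
```

With quartile labels and catastrophic flags per household, the study's chi-square test for trend compares the catastrophic proportion across the four quartile groups.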

  2. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of the creation probability of anti s s pairs to that of anti u u (anti d d) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  3. Evaluation of the rapid and slow maxillary expansion using cone-beam computed tomography: a randomized clinical trial

    Directory of Open Access Journals (Sweden)

    Juliana da S. Pereira

Full Text Available ABSTRACT OBJECTIVE: The aim of this randomized clinical trial was to evaluate the dental, dentoalveolar, and skeletal changes occurring right after rapid maxillary expansion (RME) and slow maxillary expansion (SME) treatment using a Haas-type expander. METHODS: All subjects underwent cone-beam computed tomography (CBCT) before installation of the expanders (T1) and right after screw stabilization (T2). Patients who did not follow the research parameters were excluded. The final sample comprised 21 patients in the RME group (mean age of 8.43 years) and 16 patients in the SME group (mean age of 8.70 years). Based on the skewness and kurtosis statistics, the variables were judged to be normally distributed, and paired t-tests and Student's t-tests were performed at a significance level of 5%. RESULTS: The intermolar angle changed significantly due to treatment, and RME showed greater buccal tipping than SME. RME showed significant changes in four other measurements due to treatment: the maxilla moved forward and the mandible showed backward rotation, and at the transversal level both skeletal and dentoalveolar measurements showed significant changes due to maxillary expansion. SME showed significant dentoalveolar changes due to maxillary expansion. CONCLUSIONS: Only the intermolar angle showed a significant difference between the two modalities of maxillary expansion, with greater buccal tipping for RME. Also, RME produced skeletal maxillary expansion and SME did not. Both maxillary expansion modalities were efficient in promoting transversal gain at the dentoalveolar level. Sagittal and vertical measurements did not show differences between groups, but RME promoted a forward movement of the maxilla and backward rotation of the mandible.

  4. Rapid estimation of the vertebral body volume: a combination of the Cavalieri principle and computed tomography images

    International Nuclear Information System (INIS)

    Odaci, Ersan; Sahin, Buenyamin; Sonmez, Osman Fikret; Kaplan, Sueleyman; Bas, Orhan; Bilgic, Sait; Bek, Yueksel; Erguer, Hayati

    2003-01-01

Objective: The exact volume of the vertebral body is necessary for the evaluation, treatment, and surgical management of the affected vertebral body. Thereby, volume changes of the vertebral body can be monitored, such as in infectious diseases of the vertebra and traumatic or non-traumatic fractures and deformities of the spine. Several studies have assessed vertebral body size based on different criteria of the spine using different techniques. However, we have not found any detailed study in the literature describing the combination of the Cavalieri principle and vertebral body volume estimation. Materials and methods: In the present study we describe a rapid, simple, accurate, and practical technique for estimating the volume of the vertebral body. Two specimens comprising ten lumbar vertebrae were taken from cadavers and scanned in axial, sagittal, and coronal section planes by a computed tomography (CT) machine. Consecutive sections of 5 and 3 mm thickness were used to estimate the total volume of the vertebral bodies by means of the Cavalieri principle. Furthermore, to evaluate inter-observer differences, the volume estimations were carried out by three performers. Results: There were no significant differences between the performers' estimates and the real volumes of the vertebral bodies (P>0.05), nor between the performers' volume estimates (P>0.05). The section thickness and the section planes did not affect the accuracy of the estimates (P>0.05). A high correlation was seen between the performers' estimates and the real volumes of the vertebral bodies (r=0.881). Conclusion: We conclude that the combination of CT scanning with the Cavalieri principle is a direct and accurate technique that can be safely applied to estimate the volume of the vertebral body, with a mean workload of 5 min and 11 s per vertebra.
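The Cavalieri estimator used in this record reduces to multiplying the section thickness by the summed cross-sectional areas of consecutive parallel sections. A minimal sketch, with hypothetical areas standing in for values measured on the CT images:

```python
# Cavalieri principle: volume = section thickness * sum of the
# cross-sectional areas on consecutive parallel sections. The areas
# below are invented; in practice each area is estimated from the CT
# section (e.g. by point counting or planimetry).

def cavalieri_volume(section_areas_mm2, thickness_mm):
    """Estimate volume (mm^3) from consecutive parallel sections."""
    return thickness_mm * sum(section_areas_mm2)

# Hypothetical areas (mm^2) of a vertebral body on 5 mm thick sections.
areas = [480.0, 510.0, 530.0, 525.0, 495.0]
volume = cavalieri_volume(areas, thickness_mm=5.0)
print(volume)  # 12700.0 (mm^3)
```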

  5. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    International Nuclear Information System (INIS)

    Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki

    2011-01-01

Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
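The SRV definition in the abstract is a simple ratio; the sketch below, with invented kidney volumes, shows how SRV and the dominant side follow from it:

```python
# Split renal volume (SRV) as defined in the record: the ratio of one
# kidney's volume to the combined volume of both kidneys. The volumes
# here are hypothetical.

def split_renal_volume(left_ml, right_ml):
    """Return (left SRV %, right SRV %)."""
    total = left_ml + right_ml
    return 100.0 * left_ml / total, 100.0 * right_ml / total

left, right = 148.0, 152.0
srv_left, srv_right = split_renal_volume(left, right)
dominant = "left" if srv_left > srv_right else "right"
print(round(srv_left, 1), dominant)  # 49.3 right
```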

  6. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)

    2011-07-15

Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.

  7. Sequential pattern recognition by maximum conditional informativity

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2014-01-01

Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords : Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

  8. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.

  9. Rapid Computer Aided Ligand Design and Screening of Precious Metal Extractants from TRUEX Raffinate with Experimental Validation

    International Nuclear Information System (INIS)

    Clark, Aurora Sue; Wall, Nathalie; Benny, Paul

    2015-01-01

    design of a software program that uses state-of-the-art computational combinatorial chemistry, and is developed and validated with experimental data acquisition; the resulting tool allows for rapid design and screening of new ligands for the extraction of precious metals from SNF. This document describes the software that has been produced, ligands that have been designed, and fundamental new understandings of the extraction process of Rh(III) as a function of solution phase conditions (pH, nature of acid, etc.).

  10. Rapid Computer Aided Ligand Design and Screening of Precious Metal Extractants from TRUEX Raffinate with Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Aurora Sue [Washington State Univ., Pullman, WA (United States); Wall, Nathalie [Washington State Univ., Pullman, WA (United States); Benny, Paul [Washington State Univ., Pullman, WA (United States)

    2015-11-16

    through the design of a software program that uses state-of-the-art computational combinatorial chemistry, and is developed and validated with experimental data acquisition; the resulting tool allows for rapid design and screening of new ligands for the extraction of precious metals from SNF. This document describes the software that has been produced, ligands that have been designed, and fundamental new understandings of the extraction process of Rh(III) as a function of solution phase conditions (pH, nature of acid, etc.).

  11. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de

  12. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
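The multilevel idea the abstract builds on is the telescoping identity E[g_L] = E[g_0] + Σ_l E[g_l − g_{l−1}], with samples at successive resolutions coupled by reusing the same underlying draw. A toy plain Monte-Carlo sketch (the paper itself works within Sequential Monte-Carlo; the target function and level structure below are invented for illustration):

```python
# Toy multilevel Monte-Carlo: estimate E[X^2] for X ~ U(0,1), where the
# level-l approximation g_l rounds X to a grid of spacing h_l = 2**-l.
# Coupling: each correction term feeds the SAME draw x into g_l and
# g_{l-1}, so their difference has small variance and needs few samples.
import random

def g(x, level):
    """Approximate x**2 by rounding x to a grid of spacing 2**-level."""
    h = 2.0 ** -level
    return (round(x / h) * h) ** 2

def multilevel_estimate(levels, samples_per_level, seed=0):
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        n = samples_per_level[l]
        draws = [rng.uniform(0.0, 1.0) for _ in range(n)]
        if l == 0:
            est += sum(g(x, 0) for x in draws) / n
        else:
            # Coupled correction term E[g_l - g_{l-1}].
            est += sum(g(x, l) - g(x, l - 1) for x in draws) / n
    return est

# E[X^2] = 1/3; sample counts shrink with level since corrections do.
print(multilevel_estimate(5, [4000, 2000, 1000, 500, 250, 125]))
```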

  13. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.

  14. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
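The adaptive idea in the abstract can be sketched schematically (this is an illustration, not the patented circuit): the controller keeps a compensation time for the breaker's response delay and nudges it by the timing error observed between the zero crossing and the last detected transient. The function names, update gain, and millisecond figures are assumptions:

```python
# Schematic sketch of the adaptive compensation update: each operation,
# move the stored compensation time a fraction of the observed timing
# error, so the breaker tracks drift from aging and ambient conditions.

def update_compensation(comp_ms, observed_error_ms, gain=0.5):
    """Move the compensation time a fraction of the observed error."""
    return comp_ms + gain * observed_error_ms

comp = 30.0  # assumed initial estimate of breaker closing delay, ms
for error in [4.0, 2.0, 1.0, 0.5]:  # measured offsets from zero crossing
    comp = update_compensation(comp, error)
print(comp)  # 33.75
```

The shrinking error sequence mimics convergence: as the compensation approaches the true delay, switching lands closer to the zero crossing and the residual transient shrinks.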

  15. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  16. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  17. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  18. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.
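The synchronism condition in the abstract amounts to firing each pulse-forming line's switch when the beam front reaches its section of the tube, i.e. at t_i = z_i / v_beam. A back-of-the-envelope sketch; the geometry and beam speed are invented for illustration:

```python
# Trigger schedule for a sequentially pulsed accelerator: each switch
# fires when the beam arrives at its section, so the axial field
# travels in synchronism with the beam.

def trigger_times_ns(section_starts_m, beam_speed_m_per_s):
    """Firing time (ns) for each section's switch, beam entering at t=0."""
    return [1e9 * z / beam_speed_m_per_s for z in section_starts_m]

sections = [0.0, 0.1, 0.2, 0.3]           # start of each section, m
v_beam = 2.0e8                             # assumed beam speed, m/s
print(trigger_times_ns(sections, v_beam))  # [0.0, 0.5, 1.0, 1.5]
```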

  19. Modifications of ORNL's computer programs MSF-21 and VTE-21 for the evaluation and rapid optimization of multistage flash and vertical tube evaporators

    Energy Technology Data Exchange (ETDEWEB)

    Glueckstern, P.; Wilson, J.V.; Reed, S.A.

    1976-06-01

    Design and cost modifications were made to ORNL's Computer Programs MSF-21 and VTE-21 originally developed for the rapid calculation and design optimization of multistage flash (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. The modifications include additional design options to make possible the evaluation of desalting plants based on current technology (the original programs were based on conceptual designs applying advanced and not yet proven technological developments and design features) and new materials and equipment costs updated to mid-1975.

  20. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    International Nuclear Information System (INIS)

    Iliopoulos, AS; Sun, X; Pitsianis, N; Yin, FF; Ren, L

    2015-01-01

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub
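The static/dynamic separation described in the abstract can be illustrated in miniature: the projection geometry (which voxels each ray crosses, with what weights) is frozen once into an operator, and per-iteration work is just applying that operator to the current volume. The tiny 1D "volume" and ray weights below are invented, and this sketch ignores the GPU and FDK specifics:

```python
# Static part, computed once per treatment plan: freeze each ray's
# (voxel index, weight) pairs. Dynamic part, run every iteration:
# apply the frozen operator to whatever volume data is current.

def precompute_operator(rays, n_voxels):
    """Static: keep only in-bounds (voxel index, weight) pairs per ray."""
    return [[(i, w) for i, w in ray if 0 <= i < n_voxels] for ray in rays]

def project(operator, volume):
    """Dynamic: weighted sum along each precomputed ray."""
    return [sum(w * volume[i] for i, w in ray) for ray in operator]

rays = [[(0, 0.5), (1, 1.0)], [(1, 0.5), (2, 0.5)]]
op = precompute_operator(rays, n_voxels=3)          # done once
for volume in ([1.0, 2.0, 4.0], [2.0, 2.0, 2.0]):  # changes per iteration
    print(project(op, volume))  # [2.5, 3.0] then [3.0, 2.0]
```

In the full system the frozen operator corresponds to a (sparse) system matrix reusable across iterations and patients, which is why only the cheap matrix-apply remains on the per-iteration critical path.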

  1. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Iliopoulos, AS; Sun, X [Duke University, Durham, NC (United States); Pitsianis, N [Aristotle University of Thessaloniki (Greece); Duke University, Durham, NC (United States); Yin, FF; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub

  2. Use of computer tomography and 3DP Rapid Prototyping technique in cranioplasty planning - analysis of accuracy of bone defect modeling

    International Nuclear Information System (INIS)

    Markowska, O.; Gardzinska, A.; Miechowicz, S.; Chrzan, R.; Urbanik, A.; Miechowicz, S.

    2009-01-01

Background: Accuracy considerations for the alignment of a skull bone defect and an artificial implant model are presented. In standard surgical treatment, the application of prefabricated alloplastic implants requires complicated procedures during surgery, especially additional geometry processing to provide better adjustment of the implant. Rapid Prototyping can be used as an effective tool to generate complex 3D medical models and to improve and simplify surgical treatment planning. The operation time can also be significantly reduced. The aim of the study is an adjustment accuracy analysis based on measurements of the fissure between the bone defect and the implant. Material/Methods: The 3D numerical model was obtained from CT imaging with a Siemens Sensation 10 CT scanner. The physical models were fabricated with 3DP Rapid Prototyping technology. The measurements were performed at determined points of the bone defect and implant borders. Results: The maximal width of the fissure between the bone defect and the implant was 1.8 mm and the minimal 0 mm. The average width was 0.714 mm, with a standard deviation of 0.663 mm. Conclusions: The accuracy of the 3DP technique is sufficient to create medical models in selected fields of medicine. Models created using RP methods may then be used to produce implants of biocompatible material, for example by vacuum casting. Use of the suggested method may allow shortening of presurgery and surgery time. (authors)

  3. Characterization of a Reconfigurable Free-Space Optical Channel for Embedded Computer Applications with Experimental Validation Using Rapid Prototyping Technology

    Directory of Open Access Journals (Sweden)

    Rafael Gil-Otero

    2007-02-01

    Full Text Available Free-space optical interconnects (FSOIs are widely seen as a potential solution to current and future bandwidth bottlenecks for parallel processors. In this paper, an FSOI system called optical highway (OH is proposed. The OH uses polarizing beam splitter-liquid crystal plate (PBS/LC assemblies to perform reconfigurable beam combination functions. The properties of the OH make it suitable for embedding complex network topologies such as a completely connected mesh or hypercube. This paper proposes the use of rapid prototyping technology for implementing an optomechanical system suitable for studying the reconfigurable characteristics of a free-space optical channel. Additionally, it reports how the limited contrast ratio of the optical components can affect the attenuation of the optical signal and the crosstalk caused by misdirected signals. Different techniques are also proposed in order to increase the optical modulation amplitude (OMA of the system.

  4. Characterization of a Reconfigurable Free-Space Optical Channel for Embedded Computer Applications with Experimental Validation Using Rapid Prototyping Technology

    Directory of Open Access Journals (Sweden)

    Lim Theodore

    2007-01-01

    Full Text Available Free-space optical interconnects (FSOIs are widely seen as a potential solution to current and future bandwidth bottlenecks for parallel processors. In this paper, an FSOI system called optical highway (OH is proposed. The OH uses polarizing beam splitter-liquid crystal plate (PBS/LC assemblies to perform reconfigurable beam combination functions. The properties of the OH make it suitable for embedding complex network topologies such as a completely connected mesh or hypercube. This paper proposes the use of rapid prototyping technology for implementing an optomechanical system suitable for studying the reconfigurable characteristics of a free-space optical channel. Additionally, it reports how the limited contrast ratio of the optical components can affect the attenuation of the optical signal and the crosstalk caused by misdirected signals. Different techniques are also proposed in order to increase the optical modulation amplitude (OMA of the system.

  5. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial state estimates for an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
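
The sequential technique named in the abstract, the Ensemble Kalman Filter, treats the forecast model as a black box: it only needs an ensemble of model states. A textbook stochastic-EnKF analysis step might look like the sketch below; the toy numbers and scalar observation are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    # One stochastic EnKF analysis step; the forecast model is a black box.
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies (n, d)
    Y = X @ H.T                                     # anomalies in obs space
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(H.shape[0])
    P_xy = X.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain (d, n_obs)
    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(2)
forecast = rng.normal(0.0, 1.0, size=(100, 1))      # ensemble forecast near 0
analysis = enkf_update(forecast, np.array([5.0]), 0.01, np.eye(1), rng)
```

With an accurate observation (small `obs_var`), the analysis ensemble mean is pulled almost all the way to the observed value, which is the behavior the filter exploits to correct model states between forecast steps.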

  6. Fast and accurate non-sequential protein structure alignment using a new asymmetric linear sum assignment heuristic.

    Science.gov (United States)

    Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi

    2016-02-01

    The three-dimensional tertiary structure of a protein at near-atomic resolution provides insight into its function and evolution. As protein structure determines functionality, similarity in structure usually implies similarity in function, so structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate at which new, experimentally determined structures become available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-square distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Rapid communication: Computational simulation and analysis of a candidate for the design of a novel silk-based biopolymer.

    Science.gov (United States)

    Golas, Ewa I; Czaplewski, Cezary

    2014-09-01

    This work theoretically investigates the mechanical properties of a novel silk-derived biopolymer as polymerized in silico from sericin and elastin-like monomers. Molecular Dynamics simulations and Steered Molecular Dynamics were the principal computational methods used, the latter of which applies an external force onto the system and thereby enables an observation of its response to stress. The models explored herein are single-molecule approximations, and primarily serve as tools in a rational design process for the preliminary assessment of properties in a new material candidate. © 2014 Wiley Periodicals, Inc.

  8. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    Science.gov (United States)

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

    We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in ml/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). Then the method was applied to 16 single scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF as well as maps of left (f_LV) and right (f_RV) ventricular spill-over fractions were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K_1r) and stress (K_1s) MBF estimates obtained from fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy. The simulation did not show that bias was introduced by the use of a predefined two-dimensional lookup table. In experimental data, parametric maps demonstrated good statistical quality and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K_1map(LM) = 1.019 × K_1ROI(NLM) + 0.019, R² = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc). We developed a table lookup method for fast
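
The idea of replacing per-voxel nonlinear fitting with a table lookup can be illustrated generically: precompute model time-activity curves over a grid of parameter values once, then estimate a voxel's parameters by finding the grid entry that best matches its measured curve. The one-tissue-compartment model and toy input function below are illustrative assumptions, not the authors' single-scan rest-stress model.

```python
import numpy as np

def build_table(k1_grid, k2_grid, t, aif):
    # Precompute one-compartment TACs C(t) = K1 * conv(aif, exp(-k2*t)).
    dt = t[1] - t[0]
    table = np.empty((len(k1_grid), len(k2_grid), len(t)))
    for i, k1 in enumerate(k1_grid):
        for j, k2 in enumerate(k2_grid):
            irf = np.exp(-k2 * t)
            table[i, j] = k1 * np.convolve(aif, irf)[: len(t)] * dt
    return table

def lookup_fit(tac, table):
    # Pick the grid entry minimizing the residual sum of squares.
    rss = ((table - tac) ** 2).sum(axis=-1)
    return np.unravel_index(np.argmin(rss), rss.shape)

t = np.linspace(0.0, 60.0, 61)
aif = t * np.exp(-0.1 * t)                  # toy arterial input function
k1_grid = np.linspace(0.2, 1.0, 9)
k2_grid = np.linspace(0.01, 0.1, 10)
table = build_table(k1_grid, k2_grid, t, aif)
i, j = lookup_fit(table[4, 7], table)       # a noiseless "measured" TAC
```

The table depends only on the grid and input function, so the per-voxel cost collapses to a vectorized residual computation, which is why such methods are fast enough for whole-image parametric mapping.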

  9. Rapid and efficient radiosynthesis of [123I]I-PK11195, a single photon emission computed tomography tracer for peripheral benzodiazepine receptors

    Energy Technology Data Exchange (ETDEWEB)

    Pimlott, Sally L. [Department of Clinical Physics, West of Scotland Radionuclide Dispensary, Western Infirmary, G11 6NT Glasgow (United Kingdom)], E-mail: s.pimlott@clinmed.gla.ac.uk; Stevenson, Louise [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom); Wyper, David J. [Institute of Neurological Sciences, Southern General Hospital, G51 4TF Glasgow (United Kingdom); Sutherland, Andrew [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom)

    2008-07-15

    Introduction: [123I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [123I]I-PK11195 in order to develop a rapid and efficient method that obtains [123I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [123I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [123I]I-PK11195. Conclusions: [123I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved.

  10. Rapid and efficient radiosynthesis of [123I]I-PK11195, a single photon emission computed tomography tracer for peripheral benzodiazepine receptors

    International Nuclear Information System (INIS)

    Pimlott, Sally L.; Stevenson, Louise; Wyper, David J.; Sutherland, Andrew

    2008-01-01

    Introduction: [123I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [123I]I-PK11195 in order to develop a rapid and efficient method that obtains [123I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [123I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [123I]I-PK11195. Conclusions: [123I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved.

  11. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. Different kinds of ''statistical inference'' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats concerning sequential designs are considered, especially in relation to utilitarianism

  12. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in the case where only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the Bregman projections onto hyperplanes employed therein are replaced by Bregman projections onto stripes whose width is on the order of the noise level

  13. Rapid data processing for ultrafast X-ray computed tomography using scalable and modular CUDA based pipelines

    Science.gov (United States)

    Frust, Tobias; Wagner, Michael; Stephan, Jan; Juckeland, Guido; Bieberle, André

    2017-10-01

    Ultrafast X-ray tomography is an advanced imaging technique for the study of dynamic processes, based on the principles of electron beam scanning. A typical application is the study of multiphase flows, that is, flows of mixtures of substances such as gas-liquid flows in pipelines or chemical reactors. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) a number of such tomography scanners are operated. Currently, two main points limit their application in some fields. First, after each CT scan sequence the radiation detector data must be downloaded from the scanner to a data processing machine. Second, the current data processing is time-consuming compared to the CT scan sequence interval. To enable online observations or to use this technique to control actuators in real time, a modular and scalable data processing tool has been developed, consisting of user-definable stages working together in a so-called data processing pipeline that keeps up with the CT scanner's maximal frame rate of up to 8 kHz. The newly developed data processing stages are freely programmable and combinable. In order to achieve the highest processing performance, all relevant data processing steps required for a standard slice image reconstruction were individually implemented in separate stages using Graphics Processing Units (GPUs) and NVIDIA's CUDA programming language. Data processing performance tests on different high-end GPUs (Tesla K20c, GeForce GTX 1080, Tesla P100) showed excellent performance. Program Files doi:http://dx.doi.org/10.17632/65sx747rvm.1 Licensing provisions: LGPLv3 Programming language: C++/CUDA Supplementary material: Test data set, used for the performance analysis. Nature of problem: Ultrafast computed tomography is performed with a scan rate of up to 8 kHz. To obtain cross-sectional images from projection data computer-based image reconstruction algorithms must be applied. The

  14. Self-adaptive method to distinguish inner and outer contours of industrial computed tomography image for rapid prototype

    International Nuclear Information System (INIS)

    Duan Liming; Ye Yong; Zhang Xia; Zuo Jian

    2013-01-01

    A self-adaptive identification method is proposed for more accurate and efficient judgment of whether contours in industrial computed tomography (CT) slice images are inner or outer contours. The convexity-concavity of the single-pixel-wide closed contour is first identified with the angle method. Then, contours with concave vertices are distinguished as inner or outer contours with the ray method, and contours without concave vertices are distinguished with the extreme-coordinate-value method. The method automatically selects the distinguishing technique by identifying the convexity and concavity of the contours. Thus, the disadvantages of the individual methods, such as the ray method's long computation time and the extreme-coordinate method's fallibility, can be avoided. The experiments prove the adaptability, efficiency, and accuracy of the self-adaptive method. (authors)
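
The ray method mentioned above is the classic even-odd test: cast a ray from a query point and count how many times it crosses the contour; an odd count means the point lies inside. A minimal sketch (illustrative only, not the paper's implementation) on a polygonal contour:

```python
def point_in_polygon(pt, poly):
    # Ray method: cast a horizontal ray to the right of pt and count
    # edge crossings; an odd count means pt is inside the closed contour.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
point_in_polygon((2, 2), square)   # inside the contour
point_in_polygon((5, 2), square)   # outside the contour
```

The crossing count must be evaluated for every edge of every contour, which is why the paper characterizes the ray method as time-consuming and reserves it for the cases (contours with concave vertices) where the cheaper extreme-coordinate test is unreliable.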

  15. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential: first the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: relevant market, econometric delineation

  16. Comparative use of the computer-aided angiography and rapid prototyping technology versus conventional imaging in the management of the Tile C pelvic fractures.

    Science.gov (United States)

    Li, Baofeng; Chen, Bei; Zhang, Ying; Wang, Xinyu; Wang, Fei; Xia, Hong; Yin, Qingshui

    2016-01-01

    Computed tomography (CT) scanning with three-dimensional (3D) reconstruction has been used to evaluate complex fractures in pre-operative planning. In this study, rapid prototyping of a life-size model based on 3D reconstructions including bone and vessel was applied to evaluate the feasibility and prospects of these new technologies in the surgical therapy of Tile C pelvic fractures by observing intra- and perioperative outcomes. The authors conducted a retrospective study on a group of 157 consecutive patients with Tile C pelvic fractures. Seventy-six patients were treated with conventional pre-operative preparation (group A) and 81 patients were treated with the help of computer-aided angiography and rapid prototyping technology (group B). Assessment of the two groups considered the following perioperative parameters: length of surgical procedure, intra-operative complications, intra- and postoperative blood loss, postoperative pain, postoperative nausea and vomiting (PONV), length of stay, and type of discharge. The two groups were homogeneous when compared in relation to mean age, sex, body weight, injury severity score, associated injuries and pelvic fracture severity score. Surgery in group B was performed in less time (105 ± 19 minutes vs. 122 ± 23 minutes) and with less blood loss (31.0 ± 8.2 g/L vs. 36.2 ± 7.4 g/L) compared with group A. Patients in group B experienced less pain (2.5 ± 2.3 NRS score vs. 2.8 ± 2.0 NRS score), and PONV affected only 8% versus 10% of cases. Times to discharge were shorter (7.8 ± 2.0 days vs. 10.2 ± 3.1 days) in group B, and most patients were discharged to home. In our study, patients with Tile C pelvic fractures treated with computer-aided angiography and rapid prototyping technology had a better perioperative outcome than patients treated with conventional pre-operative preparation. Further studies are necessary to investigate the advantages in terms of clinical results in the short and long run.

  17. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  18. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  19. Pepsi-SAXS: an adaptive method for rapid and accurate computation of small-angle X-ray scattering profiles.

    Science.gov (United States)

    Grudinin, Sergei; Garkavenko, Maria; Kazennov, Andrei

    2017-05-01

    A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist-Shannon-Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion order, this method has the same quadratic dependence on the number of atoms in the model as the Debye-based approach, but with a much smaller prefactor in the computational complexity. The method has been systematically validated on a large set of over 50 models collected from the BioIsis and SASBDB databases. Using a laptop, it was demonstrated that Pepsi-SAXS is about seven, 29 and 36 times faster compared with CRYSOL, FoXS and the three-dimensional Zernike method in SAStbx, respectively, when tested on data from the BioIsis database, and is about five, 21 and 25 times faster compared with CRYSOL, FoXS and SAStbx, respectively, when tested on data from SASBDB. On average, Pepsi-SAXS demonstrates comparable accuracy in terms of χ² to CRYSOL and FoXS when tested on BioIsis and SASBDB profiles. Together with a small allowed variation of adjustable parameters, this demonstrates the effectiveness of the method. Pepsi-SAXS is available at http://team.inria.fr/nano-d/software/pepsi-saxs.
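
The Debye-based approach that Pepsi-SAXS is compared against evaluates I(q) = Σᵢ Σⱼ fᵢ fⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ) over all atom pairs, which is where the quadratic dependence on the number of atoms comes from. A direct sketch of that baseline formula (uniform toy form factors, q > 0 assumed; this is the textbook Debye formula, not Pepsi-SAXS itself):

```python
import numpy as np

def debye_intensity(coords, f, q_values):
    # Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij).
    # Pairwise distances make this O(N^2) in the number of atoms.
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    intensities = []
    for q in q_values:
        with np.errstate(invalid="ignore", divide="ignore"):
            s = np.where(r > 0, np.sin(q * r) / (q * r), 1.0)  # sinc, 1 on diagonal
        intensities.append((f[:, None] * f[None, :] * s).sum())
    return np.array(intensities)

# Two unit scatterers 1 apart: I(q) = 2 + 2*sin(q)/q, so I(pi) = 2 exactly.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
f = np.array([1.0, 1.0])
I = debye_intensity(coords, f, np.array([np.pi]))
```

Per the abstract, Pepsi-SAXS keeps this quadratic scaling but shrinks the constant factor by truncating a multipole expansion at an order matched to the model size and data resolution.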

  20. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology

    Science.gov (United States)

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-01

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhanced resolution, combined with chemometrics, for qualitative identification of pearl powder originating from three different quality grades of pearls and quantitative prediction of the proportion of shell powder adulterating pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait of visual color category. Though the different grades of pearl powder and adulterated pearl powder have almost identical IR spectra, the SD-IR peak intensity at about 861 cm⁻¹ (v2 band) exhibited regular enhancement with increasing quality grade of pearls, while the 1082 cm⁻¹ (v1 band), 712 cm⁻¹ and 699 cm⁻¹ (v4 band) peaks showed the reverse trend. In contrast, only the peak intensity at 862 cm⁻¹ was enhanced regularly with increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm⁻¹, 730-680 cm⁻¹) and (830-880 cm⁻¹, 690-725 cm⁻¹) could serve as exclusive ranges to discriminate the three distinct pearl powders and to identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra were established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner.
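
The PCA step behind the qualitative classification model can be sketched generically: mean-center the spectra, take the SVD, and project onto the leading components, in whose score space samples of different grades cluster apart. The two synthetic "grades" below, differing only in one band's intensity, are purely hypothetical stand-ins for real IR spectra.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    # PCA via SVD of the mean-centered data matrix; rows are samples,
    # columns are wavenumber channels.
    X = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T          # scores on leading components

# Hypothetical spectra: two grades differing in one band's intensity,
# mimicking the diagnostic-band behavior described in the abstract.
rng = np.random.default_rng(1)
grade_a = rng.normal(0.0, 0.01, (5, 50)); grade_a[:, 10] += 1.0
grade_b = rng.normal(0.0, 0.01, (5, 50)); grade_b[:, 10] += 2.0
scores = pca_scores(np.vstack([grade_a, grade_b]))
```

Because the between-grade variance is concentrated in the diagnostic band, the first principal component aligns with that band and the two groups separate along it, which is the behavior a classification model then exploits.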

  1. Fast Megavoltage Computed Tomography: A Rapid Imaging Method for Total Body or Marrow Irradiation in Helical Tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Magome, Taiki [Department of Radiological Sciences, Faculty of Health Sciences, Komazawa University, Tokyo (Japan); Department of Radiology, The University of Tokyo Hospital, Tokyo (Japan); Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota (United States); Haga, Akihiro [Department of Radiology, The University of Tokyo Hospital, Tokyo (Japan); Takahashi, Yutaka [Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota (United States); Department of Radiation Oncology, Osaka University, Osaka (Japan); Nakagawa, Keiichi [Department of Radiology, The University of Tokyo Hospital, Tokyo (Japan); Dusenbery, Kathryn E. [Department of Therapeutic Radiology, University of Minnesota, Minneapolis, Minnesota (United States); Hui, Susanta K., E-mail: shui@coh.org [Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota (United States); Department of Therapeutic Radiology, University of Minnesota, Minneapolis, Minnesota (United States); Department of Radiation Oncology and Beckman Research Institute, City of Hope, Duarte, California (United States)

    2016-11-01

    Purpose: Megavoltage computed tomographic (MVCT) imaging has been widely used for the 3-dimensional (3-D) setup of patients treated with helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, the result of slow couch speeds of approximately 1 mm/s, which can be difficult for patients to tolerate. We sought to develop an MVCT imaging method allowing faster couch speeds and to assess its accuracy for image guidance for HT. Methods and Materials: Three cadavers were scanned 4 times with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm with a penalty term of total variation and with a conventional filtered back projection (FBP) algorithm. The MVCT images were registered with kilovoltage CT images, and the registration errors from the 2 reconstruction algorithms were compared. This fast MVCT imaging was tested in 3 cases of total marrow irradiation as a clinical trial. Results: The 3-D registration errors of the MVCT images reconstructed with the IR algorithm were smaller than the errors of images reconstructed with the FBP algorithm at fast couch speeds (2, 3, 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those of a conventional coarse-mode scan. For patient imaging, faster MVCT (3 mm/s couch speed) scanning reduced the imaging time and still generated images useful for anatomic registration. Conclusions: Fast MVCT with the IR algorithm is clinically feasible for large 3-D target localization, which may reduce the overall time for the treatment procedure. This technique may also be useful for calculating daily dose distributions or organ motion analyses in HT treatment over a wide area. Automated integration of this imaging is still needed to further assess its clinical benefits.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  3. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and makes events harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing tiers performed their expected functions. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference, where a large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  5. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months, activities focused on data operations, on testing and reinforcing shift and operational procedures for data production and transfer, on MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at Fermilab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  7. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. 
The coverage of the initial and boost target volumes in the simultaneously optimized
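The simultaneous-optimization idea can be sketched with invented geometry (a toy dose-deposition matrix, not the paper's TPS workflow): both fluence vectors are updated by a single gradient descent, with the tolerance penalty applied to the dose summed over the two plans in the critical structure.

```python
import numpy as np

# Toy sketch: simultaneous gradient-search optimisation of an initial
# and a boost IMRT plan sharing one critical-structure tolerance dose.
rng = np.random.default_rng(0)
D = rng.uniform(0.1, 1.0, size=(3, 4))   # invented dose-deposition coefficients
# Rows: 0 = initial target, 1 = boost target, 2 = critical structure.
d_init, d_boost, d_tol = 50.0, 20.0, 30.0

w1 = np.ones(4)                          # initial-plan fluence weights
w2 = np.ones(4)                          # boost-plan fluence weights

def cost(w1, w2):
    r1 = D[0] @ w1 - d_init                    # initial-target misdose
    r2 = D[1] @ w2 - d_boost                   # boost-target misdose
    r3 = max(0.0, D[2] @ (w1 + w2) - d_tol)    # summed critical-dose excess
    return r1 * r1 + r2 * r2 + r3 * r3

start = cost(w1, w2)
lr = 0.02
for _ in range(2000):
    r1 = D[0] @ w1 - d_init
    r2 = D[1] @ w2 - d_boost
    r3 = max(0.0, D[2] @ (w1 + w2) - d_tol)
    # Projected gradient step; the projection keeps fluences non-negative.
    w1 = np.maximum(0.0, w1 - lr * 2 * (r1 * D[0] + r3 * D[2]))
    w2 = np.maximum(0.0, w2 - lr * 2 * (r2 * D[1] + r3 * D[2]))
final = cost(w1, w2)
```

The key design point mirrors the abstract: because the critical-structure penalty is computed on the sum of the two dose distributions, the optimizer divides the tolerance between the plans automatically rather than by a fixed proportional split.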

  8. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Hasnain, S.B.

    1992-01-01

    Rapid advances in computing resulting from the microchip revolution have increased its application manifold, particularly in computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems, which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic-processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence points towards parallel processing. An overview of parallel processing, with emphasis on the transputer, is also provided. A fuzzy knowledge-based controller, implemented on a transputer, for drug delivery in muscle-relaxant anaesthesia is described. 4 figs. (author)

  9. Synthesizing genetic sequential logic circuit with clock pulse generator.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang

    2014-05-28

    Rhythmic clocks occur widely in biological systems, where they control several aspects of cell physiology; different cell types run at different rhythmic frequencies. Synthesizing a specific clock signal is a preliminary but necessary step towards the future development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an integer fraction of that of the genetic oscillator. Analogous to an electronic waveform-shaping circuit, a series of genetic buffers shapes the logic high/low levels of an oscillation input within a basic sinusoidal cycle and generates a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffers, a genetic clock pulse signal with a frequency matching that of the genetic oscillator is synthesized. A synchronous genetic counter circuit based on the topology of digital sequential logic circuits is then triggered by this clock pulse to synthesize a clock signal whose frequency is an integer fraction of the oscillator's. The circuit acts like the frequency divider of electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. By analogy with digital sequential logic circuits, genetic sequential logic circuits constructed with the proposed approach can generate various clock signals from a single oscillation signal.
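The counter's role as a frequency divider is the same as in electronic circuits; a minimal digital sketch (our own analogy, not a model of the genetic implementation) of a mod-N synchronous counter:

```python
# Digital analogy of the frequency divider described above: a mod-N
# counter emits one output pulse per N input clock pulses, so the
# output frequency is f_in / N.

def frequency_divider(clock_pulses, n):
    """Return the output waveform: 1 on every n-th input pulse, else 0."""
    count = 0
    out = []
    for pulse in clock_pulses:
        if pulse:                        # rising edge of the input clock
            count = (count + 1) % n
        out.append(1 if (pulse and count == 0) else 0)
    return out

clock = [1] * 12                         # 12 input pulses
divided = frequency_divider(clock, 4)    # one output pulse per 4 inputs
```

Here `sum(divided)` is 3: twelve input pulses divided by four give three output pulses, i.e. a quarter of the input frequency.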

  10. Programme for test generation for combinatorial and sequential systems

    International Nuclear Information System (INIS)

    Tran Huy Hoan

    1973-01-01

    This research thesis reports the computer-assisted search for tests aimed at detecting failures in combinatorial and sequential logic circuits. To handle complex circuits with many modules, such as those found in large-scale integrated (LSI) circuits, the author uses propagation paths. A method is developed that is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. The method is implemented on an IBM 360/91 computer in PL/1. The memory space used is limited and adjustable with respect to circuit size, and computing time is short compared with that of other programmes. The solution is practical and efficient for failure testing and localisation.
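The idea of searching for failure-detecting tests can be illustrated on a toy combinational circuit (our own example, not the thesis's PL/1 programme): a test vector for a stuck-at fault is any input on which the faulty and fault-free circuits disagree.

```python
from itertools import product

# Toy test generation: find an input vector that distinguishes a
# fault-free circuit from one with a stuck-at fault on an internal line.

def circuit(a, b, c, stuck=None):
    # Small circuit: n1 = a AND b; out = n1 OR c.
    n1 = a & b
    if stuck is not None:            # force internal line n1 to 0 or 1
        n1 = stuck
    return n1 | c

def find_test(fault_value):
    # Exhaustive search is fine for tiny circuits; real programmes use
    # propagation-path (path sensitisation) methods to scale to LSI.
    for a, b, c in product((0, 1), repeat=3):
        if circuit(a, b, c) != circuit(a, b, c, stuck=fault_value):
            return (a, b, c)
    return None                      # fault is undetectable

test_sa0 = find_test(0)              # detects n1 stuck-at-0
test_sa1 = find_test(1)              # detects n1 stuck-at-1
```

The vector for stuck-at-0 must both excite the fault (set n1 = 1, so a = b = 1) and propagate it to the output (c = 0 so the OR gate is transparent), which is exactly the propagation-path reasoning the thesis automates.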

  11. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently devoted to activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, the emphasis is on increasing the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  13. Dancing Twins: Stellar Hierarchies That Formed Sequentially?

    Science.gov (United States)

    Tokovinin, Andrei

    2018-04-01

    This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).

  14. Verification of computer system PROLOG - software tool for rapid assessments of consequences of short-term radioactive releases to the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kiselev, Alexey A.; Krylov, Alexey L.; Bogatov, Sergey A. [Nuclear Safety Institute (IBRAE), Bolshaya Tulskaya st. 52, 115191, Moscow (Russian Federation)

    2014-07-01

    In case of nuclear and radiation accidents, emergency response authorities require a tool for rapid assessment of possible consequences. One of the most significant problems is the lack of data on the initial state of an accident. This lack can be especially critical if the accident occurred in a location that was not thoroughly studied beforehand (during transportation of radioactive materials, for example). One possible solution is a hybrid method in which a model that enables rapid assessments with a reasonable minimum of input data is used conjointly with observed data that can be collected shortly after the accident. The model is used to estimate parameters of the source and uncertain meteorological parameters on the basis of the observed data. For example, the field of fallout density can be observed and measured within hours after an accident. The same model, with the estimated parameters, is then used to assess doses and the necessity of recommended and mandatory countermeasures. The computer system PROLOG was designed to solve this problem. It is based on the widely used Gaussian model. The standard Gaussian model is supplemented with several sub-models that take into account: polydisperse aerosols, the aerodynamic shadow of buildings in the vicinity of the accident site, terrain orography, the initial size of the radioactive cloud, the effective height of the release, and the influence of countermeasures on the radiation doses received by humans. It uses modern GIS technologies and can use web map services. To verify the ability of PROLOG to solve the problem, it is necessary to test its ability to assess the necessary parameters of real past accidents. Verification of the computer system on the data of the Chazhma Bay accident (Russian Far East, 1985) was published previously. In this work, verification was carried out on the basis of observed contamination from the Kyshtym disaster (PA Mayak, 1957) and the Tomsk accident (1993). Observations of Sr-90
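The Gaussian model that PROLOG builds on has a standard closed form; a minimal sketch for a ground-reflected point release (the dispersion coefficients below are illustrative stand-ins, not PROLOG's calibrated sub-models):

```python
import math

# Standard Gaussian plume with ground reflection; units are illustrative.

def plume_concentration(q, u, x, y, z, h):
    """Concentration at (x, y, z) in (units of q)/m^3.

    q: release rate, u: wind speed (m/s), x/y/z: downwind, crosswind and
    vertical coordinates (m), h: effective release height (m).
    """
    # Illustrative power-law dispersion coefficients; a real assessment
    # would make these depend on atmospheric stability class.
    sigma_y = 0.08 * x ** 0.9
    sigma_z = 0.06 * x ** 0.85
    lateral = math.exp(-y * y / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # reflection term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 1 km downwind of a 30 m release:
c = plume_concentration(q=1.0, u=5.0, x=1000.0, y=0.0, z=0.0, h=30.0)
```

In the hybrid scheme described above, such a forward model would be run repeatedly while fitting the source term and uncertain meteorological parameters to the measured fallout field, then re-run with the fitted values to predict doses.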

  15. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  16. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
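The RSA procedure described above can be sketched in a few lines; a 2-D analogue (axis-aligned squares rather than cubes, our own simplification) keeps the overlap test trivial:

```python
import random

# Random sequential adsorption sketch: place objects at random, keep
# only those that do not overlap anything already placed, and stop
# after many consecutive rejections (near saturation).

def rsa_squares(box=1.0, side=0.1, max_failures=2000, seed=1):
    random.seed(seed)
    placed = []
    failures = 0
    while failures < max_failures:
        x = random.uniform(0, box - side)
        y = random.uniform(0, box - side)
        # Two equal axis-aligned squares overlap iff both centre offsets
        # are smaller than the side length.
        if any(abs(x - px) < side and abs(y - py) < side
               for px, py in placed):
            failures += 1            # rejected: overlaps an earlier square
        else:
            placed.append((x, y))
            failures = 0
    # Packing fraction: covered area over box area.
    return len(placed) * side * side / (box * box)

fraction = rsa_squares()
```

The stopping rule is a crude stand-in for the saturation criterion; the finite-size fraction obtained this way sits near, but below, the known jamming coverage of aligned squares (about 0.56).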

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion programme. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per RAW event. The central collisions are more complex and...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  19. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
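The benefit of sequential, pseudo-greedy assembly over random whole-structure sampling can be illustrated with a toy scoring problem (our own illustration, not SAINT2's fragment library or energy function):

```python
import random

# Toy comparison: sequential pseudo-greedy assembly from one terminus
# versus random whole-chain sampling. The greedy build needs only
# |positions| * |library| scorings; random complete "decoys" rarely
# stumble on the target.

random.seed(0)
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]        # the "native" fragment choices
LIBRARY = list(range(10))                # fragment library per position

def score(chain):
    # Lower is better: mismatches against the native structure.
    return sum(a != b for a, b in zip(chain, TARGET))

# Sequential assembly: extend from the terminus, keep the best fragment.
chain, seq_evals = [], 0
for _ in TARGET:
    chain.append(min(LIBRARY, key=lambda c: score(chain + [c])))
    seq_evals += len(LIBRARY)

# Random sampling: each decoy is a complete chain, scored once.
best_random, rand_evals = len(TARGET), 0
while best_random > 0 and rand_evals < 20_000:
    best_random = min(best_random, score([random.choice(LIBRARY) for _ in TARGET]))
    rand_evals += 1
```

With this toy scoring the sequential build recovers the target in 80 scorings, whereas random sampling would need on the order of 10^8 decoys to do so; real folding landscapes are far less forgiving of greed, which is why the paper's approach is pseudo-greedy rather than strictly greedy.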

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise, and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real-data reconstruction, re-reconstructions and large-scale data transfers were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps needed to efficiently access and analyse data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenge (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successes prepared the ground for the second phase of the CCRC’08 campaign in May. The Computing Software and Analysis challen...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  7. Usefulness of Computed Tomography in pre-surgical evaluation of maxillo-facial pathology with rapid prototyping and surgical pre-planning by virtual reality

    International Nuclear Information System (INIS)

    Toso, Francesco; Zuiani, Chiara; Vergendo, Maurizio; Bazzocchi, Massimo; Salvo, Iolanda; Robiony, Massimo; Politi, Massimo

    2005-01-01

    Purpose. To validate a protocol for creating virtual models to be used in the construction of solid prototypes useful for the planning and simulation of maxillo-facial surgery, in particular for very complex anatomical and pathological problems, and to optimise communications between the radiology, engineering and surgical laboratories. Methods and materials. We studied 16 patients with different clinical problems of the maxillo-facial district. Exams were performed with multidetector computed tomography (MDCT) and single slice computed tomography (SDCT) with axial scans, collimation of 0.5-2 mm, and a reconstruction interval of 1 mm. Subsequently we performed 2D multiplanar reconstructions and 3D volume-rendering reconstructions. We exported the DICOM images to the engineering laboratory, where the bony structures were recognised and isolated by software. With these data the solid prototypes were generated using stereolithography. To date, surgery has been performed on 12 patients after simulation of the procedure on the stereolithography model. Results. The solid prototypes constructed in the difficult cases were sufficiently detailed despite problems related to the artefacts generated by dental fillings and prostheses. In the remaining cases the MPR/3D images were sufficiently detailed for surgical planning. The surgical results were excellent in all patients who underwent surgery, and the surgeons were satisfied with the improvement in quality and the reduction in time required for the procedure. Conclusions. MDCT enables rapid prototyping using solid replication, which was very helpful in maxillo-facial surgery, despite problems related to artefacts due to dental fillings and prostheses within the acquisition field; solutions for this problem are a work in progress. The protocol used for communication between the different laboratories was valid and reproducible

  8. Diffusible iodine-based contrast-enhanced computed tomography (diceCT): an emerging tool for rapid, high-resolution, 3-D imaging of metazoan soft tissues.

    Science.gov (United States)

    Gignac, Paul M; Kley, Nathan J; Clarke, Julia A; Colbert, Matthew W; Morhardt, Ashley C; Cerio, Donald; Cost, Ian N; Cox, Philip G; Daza, Juan D; Early, Catherine M; Echols, M Scott; Henkelman, R Mark; Herdina, A Nele; Holliday, Casey M; Li, Zhiheng; Mahlow, Kristin; Merchant, Samer; Müller, Johannes; Orsbon, Courtney P; Paluh, Daniel J; Thies, Monte L; Tsai, Henry P; Witmer, Lawrence M

    2016-06-01

    Morphologists have historically had to rely on destructive procedures to visualize the three-dimensional (3-D) anatomy of animals. More recently, however, non-destructive techniques have come to the forefront. These include X-ray computed tomography (CT), which has been used most commonly to examine the mineralized, hard-tissue anatomy of living and fossil metazoans. One relatively new and potentially transformative aspect of current CT-based research is the use of chemical agents to render visible, and differentiate between, soft-tissue structures in X-ray images. Specifically, iodine has emerged as one of the most widely used of these contrast agents among animal morphologists due to its ease of handling, cost effectiveness, and differential affinities for major types of soft tissues. The rapid adoption of iodine-based contrast agents has resulted in a proliferation of distinct specimen preparations and scanning parameter choices, as well as an increasing variety of imaging hardware and software preferences. Here we provide a critical review of the recent contributions to iodine-based, contrast-enhanced CT research to enable researchers just beginning to employ contrast enhancement to make sense of this complex new landscape of methodologies. We provide a detailed summary of recent case studies, assess factors that govern success at each step of the specimen storage, preparation, and imaging processes, and make recommendations for standardizing both techniques and reporting practices. Finally, we discuss potential cutting-edge applications of diffusible iodine-based contrast-enhanced computed tomography (diceCT) and the issues that must still be overcome to facilitate the broader adoption of diceCT going forward. © 2016 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  9. Rapid freeze-drying cycle optimization using computer programs developed based on heat and mass transfer models and facilitated by tunable diode laser absorption spectroscopy (TDLAS).

    Science.gov (United States)

    Kuu, Wei Y; Nail, Steven L

    2009-09-01

    Computer programs in FORTRAN were developed to rapidly determine the optimal shelf temperature, T(f), and chamber pressure, P(c), to achieve the shortest primary drying time. The constraint for the optimization is that the product temperature profile, T(b), remain below the target temperature, T(target). Five percent mannitol was chosen as the model formulation. After the optimal sets of T(f) and P(c) were obtained, each cycle was assigned a rank number according to the length of its drying time. Further optimization was achieved by dividing the drying time into a series of ramping steps for T(f), in a cascading manner (termed the cascading T(f) cycle), to shorten the cycle time. To demonstrate the validity of the optimized T(f) and P(c), four cycles with different predicted lengths of drying time, along with the cascading T(f) cycle, were chosen for experimental cycle runs. Tunable diode laser absorption spectroscopy (TDLAS) was used to continuously measure the sublimation rate. As predicted, maximum product temperatures were controlled slightly below the target temperature of -25 degrees C, and the cascading T(f)-ramping cycle was the most efficient cycle design. In addition, the experimental cycle rank order closely matches that determined by modeling.
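    The optimization described above, minimizing primary drying time over shelf temperature and chamber pressure subject to a product-temperature ceiling, can be sketched as a constrained grid search. The heat- and mass-transfer relations below are hypothetical stand-ins for the paper's FORTRAN models, and all parameter values are illustrative:

```python
# Toy search for the shortest primary-drying time subject to the product
# temperature staying at or below T_target.  product_temp() and
# drying_time() are hypothetical models, not the paper's.

def product_temp(T_f, P_c):
    """Hypothetical steady-state product temperature T(b), degC."""
    return T_f - 18.0 + 40.0 * P_c  # warmer shelf / higher pressure -> warmer product

def drying_time(T_f, P_c):
    """Hypothetical primary-drying time, hours (faster at a warmer shelf)."""
    return 600.0 / (T_f + 45.0) + 20.0 * P_c

def optimize(T_target=-25.0):
    best = None
    for T_f in range(-30, 21):                 # shelf temperature, degC
        for P_c in (0.05, 0.10, 0.15, 0.20):   # chamber pressure, Torr
            if product_temp(T_f, P_c) <= T_target:
                t = drying_time(T_f, P_c)
                if best is None or t < best[0]:
                    best = (t, T_f, P_c)
    return best  # (drying time, T_f, P_c)
```

    With these stand-in models the search settles on the warmest shelf temperature that still respects the temperature ceiling, mirroring the paper's strategy of operating as close to T(target) as possible.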

  10. On the sequentiality of the multiple Coulomb-excitation process

    International Nuclear Information System (INIS)

    Dannhaeuser, G.; Boer, J. de

    1978-01-01

    This paper describes the results of 'computer experiments' illustrating the meaning of a new concept called 'sequentiality'. This concept applies to processes in which the excitation of a given state is accomplished mainly through a large number of steps, and it addresses the question of the extent to which a transition close to the ground state occurs before one between the highest excited states. (orig.) [de

  11. Sequential Triangle Strip Generator based on Hopfield Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Lněnička, Radim

    2009-01-01

    Roč. 21, č. 2 (2009), s. 583-617 ISSN 0899-7667 R&D Projects: GA MŠk(CZ) 1M0545; GA AV ČR 1ET100300517; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : sequential triangle strip * combinatorial optimization * Hopfield network * minimum energy * simulated annealing Subject RIV: IN - Informatics, Computer Science Impact factor: 2.175, year: 2009

  12. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  13. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  14. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  16. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts, which monitor the services and infrastructure and interface to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  17. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, and conversion to RAW format; the samples were then run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  20. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is as yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
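    The sequential firing-rate patterns mentioned above can be illustrated with a minimal generalized Lotka-Volterra ("winnerless competition") sketch; the three-unit network and its connectivity values are illustrative assumptions, not the paper's antennal-lobe model:

```python
import numpy as np

def lv_step(x, rho, dt=0.01):
    # Generalized Lotka-Volterra: dx_i/dt = x_i * (1 - sum_j rho_ij * x_j)
    return x + dt * x * (1.0 - rho @ x)

def simulate(n=3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # Asymmetric inhibition (strong from the successor unit, weak from the
    # predecessor) produces cyclic, sequential activation of the units.
    rho = np.ones((n, n))
    for i in range(n):
        rho[i, (i + 1) % n] = 2.0   # unit i is strongly inhibited by unit i+1
        rho[i, (i - 1) % n] = 0.5   # ... and weakly inhibited by unit i-1
    x = 0.1 + 0.01 * rng.random(n)
    traj = np.empty((steps, n))
    for t in range(steps):
        x = lv_step(x, rho)
        traj[t] = x
    return traj
```

    Each unit transiently dominates in turn; this kind of sequential pattern is what the full model combines with Bayesian online inference for odor recognition.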

  1. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set for the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_{nμ}, and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ, and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 to 10^-16 s for medium and heavy nuclei and about 10^-20 s for the intermediate resonances. (auth.)

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS and its components are now also deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  3. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  5. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to significantly increase false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  6. The composite sequential clustering technique for analysis of multispectral scanner data

    Science.gov (United States)

    Su, M. Y.

    1972-01-01

    The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
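    The two-part scheme can be sketched as below: a one-pass sequential clustering that opens a new cluster whenever a sample lies far from every existing cluster mean (a simple distance-threshold stand-in for the paper's sequential variance analysis), followed by iterative K-means refinement of those initial clusters:

```python
import numpy as np

def sequential_pass(X, threshold):
    """One pass over the data; start a new cluster when no mean is close."""
    means, counts = [], []
    for x in X:
        if means:
            d = [np.linalg.norm(x - m) for m in means]
            k = int(np.argmin(d))
            if d[k] < threshold:
                counts[k] += 1
                means[k] += (x - means[k]) / counts[k]  # running-mean update
                continue
        means.append(x.astype(float).copy())
        counts.append(1)
    return np.array(means)

def kmeans_refine(X, means, iters=20):
    """K-means iterations starting from the sequential-pass means."""
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        means = np.array([X[labels == k].mean(0) if np.any(labels == k)
                          else means[k] for k in range(len(means))])
    return means, labels
```

    The sequential pass fixes the number and rough location of clusters without supervision; K-means then polishes the assignments, matching the "output of (1) is input to (2)" structure of the composite technique.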

  7. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    Science.gov (United States)

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using the laser Doppler imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure, with and without the silicone liner, was readily quantified as statistically significant changes in mean cutaneous blood flow, demonstrating that perfusion can be detected through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.

  8. A practical method of modeling a treatment couch using cone-beam computed tomography for intensity-modulated radiation therapy and RapidArc treatment delivery

    Energy Technology Data Exchange (ETDEWEB)

    Aldosary, Ghada, E-mail: ghada.aldosary@mail.mcgill.ca [Medical Physics Unit, McGill University Health Centre, Montreal, Quebec (Canada); Nobah, Ahmad; Al-Zorkani, Faisal [Biomedical Physics Department, King Faisal Specialist Hospital and Research Center, Riyadh (Saudi Arabia); Devic, Slobodan [Department of Radiation Oncology, Jewish General Hospital, McGill University, Montreal, Quebec (Canada); Moftah, Belal [Medical Physics Unit, McGill University Health Centre, Montreal, Quebec (Canada); Biomedical Physics Department, King Faisal Specialist Hospital and Research Center, Riyadh (Saudi Arabia)

    2015-01-01

    The effect of a treatment couch on dose perturbation is not always fully considered in intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). In the course of inverse planning radiotherapy techniques, beam parameter optimization may change in the absence of the couch, causing errors in the calculated dose distributions. Although modern treatment planning systems (TPS) include data for the treatment couch components, couches are not manufactured identically, so variations in their Hounsfield unit (HU) values may exist. Moreover, a radiotherapy facility may wish to have a third-party custom tabletop installed that is not included by the TPS vendor. This study demonstrates a practical and simple method of acquiring reliable computed tomography (CT) data for the treatment couch and shows how the absorbed dose calculated with the modeled treatment couch can differ from that with the default treatment couch found in the TPS. We also experimentally verified that neglecting to incorporate the treatment couch completely in the treatment planning process might result in dose differences of up to 9.5% and 7.3% for 4-MV and 10-MV photon beams, respectively. Furthermore, 20 RapidArc and IMRT cases were used to quantify the change in calculated dose distributions caused by using either the default or the modeled couch. From 2-dimensional (2D) ionization chamber array measurements, we observed large differences between measured and calculated dose distributions when the couch was omitted, varying according to the planning technique and anatomic site. Thus, incorporating the treatment couch in the dose calculation phase of treatment planning significantly decreases dose calculation errors.

  9. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...

  10. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
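    A minimal lattice RSA simulation is easy to write. Visiting each bond once in random order is statistically equivalent to random sequential attempts, because a rejected deposition could never succeed later; for dimers on a line the jamming coverage is known to approach 1 - e^-2 ≈ 0.8647 (the site names below are illustrative):

```python
import random

def rsa_dimers(n_sites, seed=0):
    """Irreversible random sequential adsorption of dimers on a 1-D lattice."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    bonds = list(range(n_sites - 1))   # bond b covers sites (b, b + 1)
    rng.shuffle(bonds)                 # random sequential order of attempts
    for b in bonds:
        if not occupied[b] and not occupied[b + 1]:
            occupied[b] = occupied[b + 1] = True
    return sum(occupied) / n_sites     # coverage of the jammed state

coverage = rsa_dimers(100_000)
```

    The jammed state leaves isolated empty sites that can never be filled, which is why the saturation coverage stays below 1 and why its statistics are not described by an equilibrium Gibbs measure.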

  11. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
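    For context, the classical infinite-horizon age-replacement policy that the sequential finite-horizon version refines minimizes the long-run cost rate C(T) = (cp*R(T) + cf*F(T)) / integral of R(t) from 0 to T, where R = 1 - F is the survival function, cp the preventive-replacement cost, and cf the failure cost. A sketch with an illustrative Weibull lifetime (these parameters are assumptions, not the paper's examples):

```python
import math

def cost_rate(T, beta=2.0, eta=1.0, cp=1.0, cf=5.0, n=2000):
    """Long-run cost per unit time of replacing at age T (Weibull lifetime)."""
    R = lambda t: math.exp(-(t / eta) ** beta)   # survival function
    F_T = 1.0 - R(T)
    # E[min(lifetime, T)] = integral of R(t) on [0, T], trapezoidal rule
    h = T / n
    expected_cycle = h * (0.5 * (R(0.0) + R(T)) + sum(R(i * h) for i in range(1, n)))
    return (cp * R(T) + cf * F_T) / expected_cycle

# Crude line search for the optimal replacement age
T_star = min((0.05 * k for k in range(1, 80)), key=cost_rate)
```

    With an increasing Weibull hazard and cf > cp, a finite optimum exists: replacing too early wastes useful life, while replacing too late pays the failure cost too often.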

  12. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case-scenario assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the sequential optimization and reliability assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increases computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet-metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  13. Double-blind photo lineups using actual eyewitnesses: an experimental test of a sequential versus simultaneous lineup procedure.

    Science.gov (United States)

    Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E

    2015-02-01

    Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.

  14. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
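    In the fixed-effect case, the required information size these methods start from is just the standard two-arm sample-size formula for a minimally relevant difference delta; a hedged sketch (the heterogeneity inflation 1/(1 - I^2) is one common adjustment, included here as an assumption, and the function name is illustrative):

```python
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, beta=0.10, i2=0.0):
    """Total participants needed across both arms of a meta-analysis."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(1 - beta)
    n_total = 4.0 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2  # both arms
    return n_total / (1.0 - i2)  # inflate for between-trial heterogeneity
```

    For delta = 0.5*sigma at a 5% two-sided alpha and 90% power this gives roughly 170 participants in total, matching the familiar per-group figure of about 85; sequential monitoring boundaries are then spent against this information size as trials accumulate.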

  15. Cortical responses following simultaneous and sequential retinal neurostimulation with different return configurations.

    Science.gov (United States)

    Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J

    2016-08-01

    Researchers continue to develop visual prostheses towards safer and more efficacious systems. However, limitations still exist in the number of stimulating channels that can be integrated; there is therefore a need for spatial and time multiplexing techniques to improve the performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was implanted suprachoroidally in three normally sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing; thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.

  16. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs: low bias values result in excellent performance at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variation of complexity with SNR shows an interesting trend, with room for considerable improvement. Our work is compared against linear decoders (LDs) aided by element-based lattice reduction (ELR) and complex Lenstra-Lenstra-Lovász (CLLL) reduction. © 2014 IFIP.

  17. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  18. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from the first to the second lap when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap rather than double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  19. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.

  20. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
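
For readers unfamiliar with the construction, the standard sequential product of two effects A and B is A ∘ B = A^{1/2} B A^{1/2}. A minimal numerical sketch on a 2x2 toy example of our own choosing:

```python
import numpy as np

# Sketch of the standard sequential product of quantum effects,
# A ∘ B = sqrt(A) B sqrt(A). An effect is a Hermitian matrix whose
# eigenvalues lie in [0, 1]; the sequential product of two effects is
# again an effect, but the product is not commutative in general.
def psd_sqrt(a):
    w, v = np.linalg.eigh(a)                 # Hermitian eigendecomposition
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def seq_product(a, b):
    ra = psd_sqrt(a)
    return ra @ b @ ra

A = np.array([[0.5, 0.0], [0.0, 1.0]])       # an effect
B = np.array([[0.25, 0.25], [0.25, 0.25]])   # another effect

C = seq_product(A, B)
eigs = np.linalg.eigvalsh(C)                 # all in [0, 1]: C is an effect
```

Computing seq_product(B, A) for the same pair gives a different matrix, which is exactly why the order of measurements matters in this theory.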

  1. Sequential Detection of Fission Processes for Harbor Defense

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Walston, S E; Chambers, D H

    2015-02-12

    With the large increase in terrorist activities throughout the world, the timely and accurate detection of special nuclear material (SNM) has become an extremely high priority for many countries concerned with national security. The detection of radionuclide contraband based on its γ-ray emissions has been attacked vigorously with some interesting and feasible results; however, the fission process of SNM has not received as much attention due to its inherent complexity and required predictive nature. In this paper, on-line sequential Bayesian detection and parameter estimation techniques are developed to rapidly and reliably detect unknown fissioning sources with high statistical confidence.
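
The flavor of on-line sequential Bayesian detection can be sketched with a toy Poisson count model. This is our own invented example, not the paper's physics-based statistics; the background rate, source rate, and counts below are all hypothetical.

```python
import math

# Hedged sketch: each second we observe a Poisson photon count. Under H0
# the rate is the background b; under H1 a source adds rate s. We update
# the posterior probability of "source present" sequentially and declare
# a detection once it crosses a confidence threshold.
def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def detect(counts, b=5.0, s=3.0, prior=0.5, threshold=0.99):
    log_odds = math.log(prior / (1 - prior))
    for n, k in enumerate(counts, 1):
        # Bayesian update: add the log-likelihood ratio of this count
        log_odds += poisson_logpmf(k, b + s) - poisson_logpmf(k, b)
        p = 1 / (1 + math.exp(-log_odds))
        if p >= threshold:
            return True, n          # declared with >= 99% posterior confidence
    return False, len(counts)

detected, n = detect([9, 8, 10, 7, 9, 11, 8, 9])   # counts near b + s = 8
```

Because the update is a running sum of log-likelihood ratios, the detector needs only one number of state per hypothesis, which is what makes the on-line formulation attractive.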

  2. Sequential Scintigraphy in Renal Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

    1969-05-15

    Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 μCi of ¹³¹I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

  3. Sequential provisional implant prosthodontics therapy.

    Science.gov (United States)

    Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

    2012-01-01

    The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed implant-supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed to prevent movement of adjacent and opposing teeth and pressure on the implant site, as well as to protect the freshly placed implant body from micromovement. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of the soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures; it can likewise be used with late failures and/or when the definitive prosthesis must be repaired. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed, as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses require a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot perform the diagnosis and treatment planning, surgery, and laboratory techniques, he or she should employ the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

  4. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  5. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    We define two sequential transforms on a function space Ca,b[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on Ca,b[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on Ca,b[0,T].

  6. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    Science.gov (United States)

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

    Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, and especially in the section on Automatic Methods of Analysis provided by chemistry…

  7. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  8. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
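
The contrast between sequential Merkle-Damgard chaining and parallel hash tree modes can be sketched with a small binary Merkle tree. This is a generic illustration of the idea using SHA-256 from the standard library, not the paper's Keccak prototype.

```python
import hashlib

# Sketch of a binary hash tree (Merkle tree). Leaf blocks can be hashed
# independently (and therefore in parallel); only the shallow tree of
# combining steps is inherently sequential, unlike Merkle-Damgard
# chaining, where every block depends on the previous digest.
def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    # Hash all leaves first -- this loop is embarrassingly parallel.
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

data = [b"block-%d" % i for i in range(8)]
root = merkle_root(data)
```

A further benefit mentioned in the security literature is that a tree root commits to block positions, so reordered blocks produce a different root.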

  9. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
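
Two of the ideas highlighted above, eliminating exact duplicates before any pairwise comparison and linking similar records into a graph whose connected components form clusters, can be sketched as follows. The records, the similarity threshold, and the use of a hash table in place of radix sort are our own simplifications.

```python
from collections import defaultdict

# Toy records: (name, date of birth)
records = [("john smith", "1970-01-01"),
           ("jon smith",  "1970-01-01"),
           ("john smith", "1970-01-01"),   # exact duplicate of record 0
           ("mary jones", "1985-06-15")]

# Step 1: collapse exact duplicates before any further processing (the
# paper uses radix sorting; a hash table has the same effect here).
unique, seen = [], {}
for r in records:
    if r not in seen:
        seen[r] = len(unique)
        unique.append(r)

# Step 2: connect records whose names differ by a small edit distance,
# then read off connected components with union-find.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

parent = list(range(len(unique)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

for i in range(len(unique)):
    for j in range(i + 1, len(unique)):
        if edit_distance(unique[i][0], unique[j][0]) <= 2:
            parent[find(i)] = find(j)

clusters = defaultdict(list)
for i in range(len(unique)):
    clusters[find(i)].append(unique[i])
```

In this sketch "john smith" and "jon smith" land in one cluster and "mary jones" in another; the real algorithms replace the quadratic comparison loop with blocking and parallel work division.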

  10. Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian

    2014-01-01

    This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer-level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with ...... per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s....

  11. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  12. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
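
The paper's large-scale machinery (reduced Hessian approximations, null-space bases, incomplete QP solves) is beyond a snippet, but the core SQP iteration, solving the KKT system of a quadratic subproblem for a step and a new multiplier estimate, can be sketched on a toy equality-constrained problem of our own choosing.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): project the point (2, 1)
# onto the unit circle, i.e. minimize f(x) = ||x - (2, 1)||^2 subject to
# c(x) = x1^2 + x2^2 - 1 = 0.
target = np.array([2.0, 1.0])

def f_grad(x):      # gradient of the objective
    return 2.0 * (x - target)

def c(x):           # equality constraint value
    return x @ x - 1.0

def c_jac(x):       # constraint Jacobian (a single row here)
    return 2.0 * x

def sqp(x, lam, iters=20):
    """Basic full-space SQP: each step solves the KKT system
    [H A^T; A 0] [p; nu] = [-grad f; -c] and updates x and lambda."""
    n = x.size
    for _ in range(iters):
        H = (2.0 + 2.0 * lam) * np.eye(n)   # exact Hessian of the Lagrangian
        A = c_jac(x).reshape(1, n)
        kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = np.concatenate([-f_grad(x), [-c(x)]])
        sol = np.linalg.solve(kkt, rhs)
        x = x + sol[:n]                     # take the full SQP step
        lam = sol[n]                        # new multiplier estimate
    return x, lam

x_opt, lam_opt = sqp(np.array([0.5, 0.5]), 0.0)
print(x_opt)   # converges to (2, 1)/sqrt(5), the projection onto the circle
```

A production SQP method adds a line search or trust region and, as in the paper, replaces the exact Hessian with a quasi-Newton approximation of its reduced form.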

  13. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
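
The Monte Carlo core of such reliability studies can be illustrated with a deliberately tiny non-sequential sketch. The two-unit system, load level, and outage rates below are our own invented numbers, far simpler than the paper's pseudo-sequential bilateral-market model.

```python
import random

# Toy non-sequential Monte Carlo reliability sketch: two generators with
# given forced-outage rates serve a fixed load; estimate the
# loss-of-load probability (LOLP) by sampling component states.
random.seed(42)
capacity = [60.0, 40.0]     # MW per unit
for_rate = [0.05, 0.10]     # forced outage rate of each unit
load = 70.0                 # MW

def sample_lolp(n_samples=100_000):
    shortfalls = 0
    for _ in range(n_samples):
        # Sample each unit up/down independently (non-sequential sampling).
        available = sum(cap for cap, q in zip(capacity, for_rate)
                        if random.random() > q)
        if available < load:
            shortfalls += 1
    return shortfalls / n_samples

lolp = sample_lolp()
# Analytic check for this toy system: load is met only if both units are
# up, so LOLP = 1 - 0.95 * 0.90 = 0.145.
```

Pseudo-sequential simulation keeps this cheap state sampling but reconstructs the chronology only around sampled failure states, which is how it models market trading sequences without paying for a full sequential simulation.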

  14. Computational Cognitive Neuroscience Modeling of Sequential Skill Learning

    Science.gov (United States)

    2016-09-21

    learning during declarative control. 8. Journal of Experimental Psychology: Learning, Memory, and Cognition. 9. Crossley, M. J., Ashby, F. G., & Maddox...learning: Sensitivity to feedback timing. Frontiers in Psychology – Cognitive Science, 5, article 643, 1-9. 15. Worthy, D.A. & Maddox, W.T. (2014). A...Learning, Memory, and Cognition. Crossley, M. J., Ashby, F. G., & Maddox, W. T. (2014). Context-dependent savings in procedural category learning

  15. Sequential computed tomographic imaging of a transplantable rabbit brain tumor

    International Nuclear Information System (INIS)

    Kumar, A.J.; Rosenbaum, A.E.; Beck, T.J.; Ahn, H.S.; Anderson, J.

    1986-01-01

    The accuracy of CT imaging in evaluating VX-2 tumor growth in the rabbit brain was assessed. CT scanning was performed in 5 outbred New Zealand white male rabbits before and at 4, 7, 9 and 13 (in 3 animals) days after surgical implantation of 3 x 10^5 viable VX-2 tumor cells in the frontoparietal lobes. The CT studies were correlated with gross pathology in each. The tumor was visualized with CT in all 5 rabbits by the 9th day post implantation, when the tumor ranged in size from 4-6 x 3-4 x 2-3 mm. Between the 9th and 13th day, the tumor increased 6-fold in two rabbits and 12-fold in the third rabbit. CT is a useful technique to evaluate brain tumor growth in this model and should be valuable in documenting the efficacy of chemotherapy on tumor growth. (orig.)

  16. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  17. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  18. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  19. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    Science.gov (United States)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
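
The underlying Wald sequential probability ratio test is simple enough to sketch directly. The version below is our generic Gaussian-mean form with invented parameters; the paper's compound collision-probability hypotheses are more involved, but reduce to the same ratio-test structure.

```python
import math

# Minimal Wald SPRT sketch: decide between H0 (mean mu0) and H1 (mean mu1)
# for unit-variance Gaussian observations, with targeted false-alarm rate
# alpha and missed-detection rate beta setting the two thresholds.
def sprt(observations, mu0=0.0, mu1=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood ratio increment for N(mu1, 1) vs N(mu0, 1)
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)

decision, n_used = sprt([1.0] * 20)   # LLR drifts +0.5 per sample under H1
print(decision, n_used)               # declares H1 after 10 samples
```

The appeal for conjunction assessment is the same as in Wald's original setting: the test stops as soon as the accumulated evidence clears either threshold, so decisions typically arrive well within an operational timeline.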

  20. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  1. epSICAR: An Emerging Patterns based Approach to Sequential, Interleaved and Concurrent Activity Recognition

    DEFF Research Database (Denmark)

    Gu, Tao; Wu, Zhanqing; Tao, Xianping

    2009-01-01

    Recognizing human activity from sensor readings has recently attracted much research interest in pervasive computing. Human activity recognition is particularly challenging because activities are often performed in not only simple (i.e., sequential), but also complex (i.e., interleaved...

  2. An Undergraduate Survey Course on Asynchronous Sequential Logic, Ladder Logic, and Fuzzy Logic

    Science.gov (United States)

    Foster, D. L.

    2012-01-01

    For a basic foundation in computer engineering, universities traditionally teach synchronous sequential circuit design, using discrete gates or field programmable gate arrays, and a microcomputers course that includes basic I/O processing. These courses, though critical, expose students to only a small subset of tools. At co-op schools like…

  3. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial

  4. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  5. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments for the logical modeling of memory in digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they fit organically into the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  6. Sequential MR images of uterus after Gd-DTPA injection

    International Nuclear Information System (INIS)

    Okada, Susumu; Kato, Tomoyasu; Yamada, Keiko; Sawano, Seishi; Yamashita, Takashi; Hirai, Yasuo; Hasumi, Katsuhiko

    1993-01-01

    To investigate the sequential changes in signal intensity (SI) of normal and abnormal uteri, T1-weighted images were taken repeatedly after the injection of Gd-diethylenetriaminepentaacetic acid (DTPA). Six volunteers and 19 patients with known uterine body malignancy (18 carcinomas, one carcinosarcoma) were examined. The results in volunteers were as follows. In the secretory phase, SI of the endometrium was stronger in the late images than in the early ones, whereas in the proliferative phase, SI was stronger in the early images. SI of the myometrium decreased rapidly and there were no differences in SI between menstrual phases. In 17 of 18 endometrial carcinomas, the tumors showed hypointensity relative to the myometrium, and the contrast between the tumor and the myometrium was better in the early images. In the remaining two cases, the tumor showed hyperintensity and the contrast was better in the late images. After the injection of Gd-DTPA, the endometrium appeared differently according to the menstrual cycle in normal volunteers, and the appearance of uterine structures and endometrial malignant tumors changed sequentially. These findings must be kept in mind when evaluating uterine diseases by Gd-DTPA enhanced MRI. (author)

  7. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    Science.gov (United States)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  8. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  9. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy via signal detection and ROC analyses, along with the tendency to choose a face, for both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for the false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher-confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control

    Directory of Open Access Journals (Sweden)

    Wodziński Marek

    2017-06-01

This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and finding the shortest path in a directed acyclic graph, using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of sequential data and a quadrocopter was the controlled object. The study includes the creation of a conceptual model and the practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of the chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
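The graph-search stage described above can be sketched as a plain A* shortest-path search over a directed acyclic graph. This is a minimal illustration only: the node names, edge costs, and zero heuristic below are made-up stand-ins, not the authors' gesture model or regression heuristic.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* shortest path. graph: node -> list of (neighbor, edge_cost).
    h must be an admissible heuristic (never overestimates remaining cost)."""
    frontier = [(h(start), 0.0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(frontier, (g_new + h(nbr), g_new, nbr, path + [nbr]))
    return float("inf"), []

# Toy DAG: layers could stand for per-frame gesture hypotheses, with edge
# costs derived from classifier scores (names and costs here are invented).
dag = {
    "s":  [("a1", 0.2), ("a2", 0.7)],
    "a1": [("b1", 0.4), ("b2", 0.1)],
    "a2": [("b1", 0.1)],
    "b1": [("t", 0.3)],
    "b2": [("t", 0.6)],
}
total_cost, best_path = a_star(dag, lambda node: 0.0, "s", "t")
```

With the zero heuristic the search degenerates to Dijkstra's algorithm; the paper's regression-based heuristic would replace the lambda to prune the frontier.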

  12. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory (like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB) have suprema. This partially answers a question

  13. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

    Science.gov (United States)

    CICIARELLI, V; LEONARD, JOSEPH

A sequential mathematics program beginning with the basic fundamentals at the fourth-grade level is presented. Included are an understanding of our number system and the basic operations of working with whole numbers: addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…

  14. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  15. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in the model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems, because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy, considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
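The multivariate idea above, sampling uncertain parameters and counting how often the base-case policy remains optimal, can be sketched on a toy MDP. Everything below is a hypothetical illustration: the two-state model, the uncertain transition probability and its range, and the sample count are assumptions, not the authors' case study.

```python
import random

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a small MDP; P[a][s][t] transition probs, R[a][s] rewards.
    Returns the optimal policy as a list: one action index per state."""
    n_states, n_actions = len(R[0]), len(R)
    V = [0.0] * n_states
    while True:
        Q = [[R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n_states))
              for a in range(n_actions)]
             for s in range(n_states)]
        V_new = [max(qs) for qs in Q]
        if max(abs(V_new[s] - V[s]) for s in range(n_states)) < tol:
            return [max(range(n_actions), key=lambda a: Q[s][a])
                    for s in range(n_states)]
        V = V_new

def build(p):
    """Hypothetical 2-state, 2-action model: state 0 = healthy, state 1 = sick.
    Action 1 ('treat') pays less now but keeps the patient healthy w.p. p."""
    P = [[[0.6, 0.4], [0.5, 0.5]],    # action 0: no treatment
         [[p, 1.0 - p], [0.7, 0.3]]]  # action 1: treat (p is uncertain)
    R = [[1.0, 0.0], [0.8, -0.1]]
    return P, R

rng = random.Random(0)
base_policy = value_iteration(*build(0.9))

# Probabilistic check: sample the uncertain parameter and count how often
# the base-case policy remains optimal -> a confidence estimate.
draws = 200
agree = sum(value_iteration(*build(rng.uniform(0.7, 0.99))) == base_policy
            for _ in range(draws))
confidence = agree / draws
```

Sweeping a stakeholder acceptance threshold over such confidence values is, roughly, what the paper's policy acceptability curve summarizes.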

  16. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  17. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  18. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

This work examines the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity.

  19. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  20. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  2. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  3. Sequential Analysis: Hypothesis Testing and Changepoint Detection

    Science.gov (United States)

    2014-07-11

maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are

  4. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

We review the motivation for, and the current state of the art in, convergence results and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

  5. Truly costly sequential search and oligopolistic pricing

    NARCIS (Netherlands)

    Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

    We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

  6. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  7. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

One of the unconventional features of Wittgenstein's Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein's own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  8. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  9. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed, systematic, multiple sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…

  10. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  11. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  12. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique involves a considerably high computational cost, yet it cannot adequately measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, and thus it can be used to define a stopping criterion for the sequential sampling of metamodels.
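For context, the leave-one-out (k = 1) cross-validation baseline discussed above can be sketched for a simple-kriging metamodel. This is a minimal illustration under assumptions of my own (zero-mean simple kriging, a Gaussian covariance with an arbitrary length scale, a toy test function); it does not reproduce the authors' proposed mean-variance technique.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, x0, length=0.5):
    """Simple-kriging mean at x0 (zero prior mean, Gaussian covariance)."""
    cov = lambda a, b: math.exp(-(((a - b) / length) ** 2))
    K = [[cov(a, b) for b in xs] for a in xs]
    w = solve(K, [cov(a, x0) for a in xs])
    return sum(wi * yi for wi, yi in zip(w, ys))

# Leave-one-out cross-validation error on a toy function.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2.0 * math.pi * x) for x in xs]
sq_errors = []
for i in range(len(xs)):
    xs_i = xs[:i] + xs[i + 1:]     # drop the i-th sample,
    ys_i = ys[:i] + ys[i + 1:]
    pred = krige(xs_i, ys_i, xs[i])  # refit, and predict it
    sq_errors.append((pred - ys[i]) ** 2)
loo_rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
```

The need to refit the model once per held-out sample is exactly the computational cost the abstract objects to.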

  13. Mining Sequential Update Summarization with Hierarchical Text Analysis

    Directory of Open Access Journals (Sweden)

    Chunyun Zhang

    2016-01-01

The outbreak of unexpected news events such as large-scale accidents or natural disasters creates an information access problem where traditional approaches fail. News of such events is typically sparse early on and redundant later. Hence, it is very important to provide individuals with timely updates and important information about these incidents as they develop, especially in wireless and mobile Internet of Things (IoT) applications. In this paper, we define the problem of sequential update summarization extraction and present a new hierarchical update mining system which can broadcast useful, new, and timely sentence-length updates about a developing event. The system employs a novel method which incorporates techniques from topic-level and sentence-level summarization. To evaluate the performance of the proposed system, we apply it to the sequential update summarization task of the temporal summarization (TS) track at the Text Retrieval Conference (TREC) 2013 and compute four measurements of the update mining system: expected gain, expected latency gain, comprehensiveness, and latency comprehensiveness. Experimental results show that the proposed method performs well.

  14. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

We propose an optimal electric energy management scheme for a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden while still making the optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operating conditions and are illustrated with a physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by ancillary internal trading between the microgrids, which reduces the extra cost of unnecessary external trading by adjusting the electric energy production of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  15. Application of computer-aided three-dimensional skull model with rapid prototyping technique in repair of zygomatico-orbito-maxillary complex fracture.

    Science.gov (United States)

    Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu

    2009-06-01

With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma care has benefited the treatment and repair of maxillofacial fractures and the reconstruction of maxillofacial defects. For a patient with a zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging, and the 3D skull model was reproduced using the RP technique. In line with the Mimics design, a rehearsal surgery was performed on the 3D skull model, and a semi-coronal incision was then used for reduction of the ZOMC fracture based on the outcome of this rehearsal. Postoperative CT and clinical images revealed significant correction of the zygomatic collapse and elevation of the zygomatic arch, with well-restored facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fracture.

  16. Autotransplantation of teeth using computer-aided rapid prototyping of a three-dimensional replica of the donor tooth: a systematic literature review.

    Science.gov (United States)

    Verweij, J P; Jongkees, F A; Anssari Moin, D; Wismeijer, D; van Merkesteyn, J P R

    2017-11-01

    This systematic review provides an overview of studies on autotransplantation techniques using rapid prototyping for preoperative fabrication of donor tooth replicas for preparation of the neo-alveolus. Different three-dimensional autotransplantation techniques and their treatment outcomes are discussed. The systematic literature search yielded 19 articles that satisfied the criteria for inclusion. These papers described one case-control study, four clinical observational studies, one study with a clinical and in vitro part, four in vitro studies, and nine case reports. The in vitro studies reported high accuracy for the printing and planning processes. The case reports all reported successful transplantation without any pathological signs. The clinical studies reported a short extraoral time of the donor tooth, with subsequent success and survival rates of 80.0-91.1% and 95.5-100%, respectively. The case-control study reported a significant decrease in extraoral time and high success rates with the use of donor tooth replicas. In conclusion, the use of a preoperatively designed surgical guide for autotransplantation enables accurate positional planning, increases the ease of surgery, and decreases the extraoral time. However, the quality of the existing body of evidence is low. Further research is therefore required to investigate the clinical advantages of this innovative autotransplantation technique. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
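The constant-path idea, solving the kriging systems once and reusing the weights across all realizations, can be sketched as follows. This is a deliberately simplified 1-D illustration with assumptions of my own (simple kriging with zero mean, an exponential covariance, and all previously simulated nodes as neighbors); practical SGS implementations use limited search neighborhoods and, as the paper recommends, multi-grid paths.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def cov(h):
    """Exponential covariance model, unit sill, range parameter 3."""
    return math.exp(-abs(h) / 3.0)

def plan_weights(path, coords):
    """Pre-compute simple-kriging weights for every step of a fixed path.
    This is the expensive part of SGS; with a constant path it runs once."""
    plan, done = [], []
    for idx in path:
        if done:
            K = [[cov(coords[i] - coords[j]) for j in done] for i in done]
            k = [cov(coords[i] - coords[idx]) for i in done]
            w = solve(K, k)
            var = cov(0.0) - sum(wi * ki for wi, ki in zip(w, k))
        else:
            w, var = [], cov(0.0)
        plan.append((list(done), w, max(var, 0.0)))
        done.append(idx)
    return plan

def realize(plan, path, rng):
    """Draw one realization by reusing the cached weights."""
    z = {}
    for (nbrs, w, var), idx in zip(plan, path):
        mean = sum(wi * z[j] for wi, j in zip(w, nbrs))
        z[idx] = mean + math.sqrt(var) * rng.gauss(0.0, 1.0)
    return [z[i] for i in sorted(z)]

coords = list(range(8))             # 1-D grid of 8 nodes
rng = random.Random(1)
path = list(range(8))
rng.shuffle(path)                   # one random path, shared by all realizations
plan = plan_weights(path, coords)   # kriging systems solved once
realizations = [realize(plan, path, rng) for _ in range(3)]
```

Only the Gaussian draws differ between realizations; the neighborhood configurations and weights are identical, which is the source of the computational savings the paper quantifies.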

  18. Deciphering Intrinsic Inter-subunit Couplings that Lead to Sequential Hydrolysis of F1-ATPase Ring

    Science.gov (United States)

    Dai, Liqiang; Flechsig, Holger; Yu, Jin

    2017-10-01

The rotary sequential hydrolysis of the metabolic machine F1-ATPase is a prominent feature revealing high coordination among the multiple chemical sites on the stator F1 ring, which also contributes to tight coupling between the chemical reaction and rotation of the central γ-shaft. High-speed AFM experiments discovered that sequential hydrolysis is maintained on the F1 ring even in the absence of the γ rotor. To explore how this intrinsic sequential performance arises, we computationally investigated essential inter-subunit couplings on the hexameric ring of mitochondrial and bacterial F1. We first reproduced the sequential hydrolysis schemes as experimentally detected, by simulating tri-site ATP hydrolysis cycles on the F1 ring upon kinetically imposing inter-subunit couplings to substantially promote release of the hydrolysis products. We found that it is key for certain ATP binding and hydrolysis events to facilitate the neighbor-site ADP and Pi release to support the sequential hydrolysis. The kinetically feasible couplings were then scrutinized through atomistic molecular dynamics simulations as well as coarse-grained simulations, in which we enforced targeted conformational changes for the ATP binding or hydrolysis. Notably, we detected the asymmetrical neighbor-site opening that would facilitate the ADP release upon the enforced ATP binding, and computationally captured the complete Pi release through charge hopping upon the enforced neighbor-site ATP hydrolysis. The ATP-hydrolysis-triggered Pi release revealed in the current targeted molecular dynamics (TMD) simulations confirms a recent prediction, made from statistical analyses of single-molecule experimental data, regarding the role ATP hydrolysis plays. Our studies therefore elucidate both the concerted chemical kinetics and the underlying structural dynamics of the inter-subunit couplings that lead to the rotary sequential hydrolysis of the F1 ring.

  19. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  20. Constructing Multiply Substituted Arenes Using Sequential Pd(II)-Catalyzed C–H Olefination**

    Science.gov (United States)

    Engle, Keary M.; Wang, Dong-Hui; Yu, Jin-Quan

    2011-01-01

Complementary catalytic systems have been developed in which the reactivity/selectivity balance in Pd(II)-catalyzed ortho-C–H olefination can be modulated through ligand control. This allows for sequential C–H functionalization for the rapid preparation of 1,2,3-trisubstituted arenes. Additionally, a rare example of iterative C–H activation, in which a newly installed functional group directs subsequent C–H activation, has been demonstrated. PMID:20632344

  1. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and does not require the second derivatives of the functions of the discussed problem; such a subproblem can be formulated as a second-order cone program, which can be solved by interior-point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported.

  2. Modeling two-phase ferroelectric composites by sequential laminates

    International Nuclear Information System (INIS)

    Idiart, Martín I

    2014-01-01

    Theoretical estimates are given for the overall dissipative response of two-phase ferroelectric composites with complex particulate microstructures under arbitrary loading histories. The ferroelectric behavior of the constituent phases is described via a stored energy density and a dissipation potential in accordance with the theory of generalized standard materials. An implicit time-discretization scheme is used to generate a variational representation of the overall response in terms of a single incremental potential. Estimates are then generated by constructing sequentially laminated microgeometries of particulate type whose overall incremental potential can be computed exactly. Because they are realizable, by construction, these estimates are guaranteed to conform with any material constraints, to satisfy all pertinent bounds and to exhibit the required convexity properties with no duality gap. Predictions for representative composite and porous systems are reported and discussed in the light of existing experimental data. (paper)

  3. Acute hydrocarbon pneumonia after white spirit aspiration: sequential HRCT findings

    Energy Technology Data Exchange (ETDEWEB)

    Facon, David; Coumbaras, Jean; Bigot, Emmanuelle; Bahlouli, Fouad; Bellin, Marie-France [Universite Paris 11, Department of Radiology, Hopital Paul-Brousse, AP-HP, Villejuif Cedex (France); Boissonnas, Alain [Universite Paris 11, Department of Internal Medicine, Hopital Paul-Brousse, AP-HP, Villejuif Cedex (France)

    2005-01-01

    Hydrocarbon pneumonia is a very uncommon condition resulting from aspiration of mineral oil into the lung. We report the first description of early and sequential high-resolution computed tomographic (HRCT) findings of hydrocarbon pneumonia following attempted suicide by white spirit aspiration. Initial HRCT showed patchy opacities of coalescing masses with well-defined walls. They were visible in the middle lobe, lingula and lower lobes. Follow-up CT showed regression of the alveolar opacities, the presence of pneumatoceles and right asymptomatic pneumothorax. After 23 months of follow-up, the patient remained asymptomatic, and the follow-up CT scan was considered normal. The radiological features and a review of the relevant literature are briefly discussed. (orig.)

  4. Utilizing a sequential injection system furnished with an extraction microcolumn as a novel approach for executing sequential extractions of metal species in solid samples

    DEFF Research Database (Denmark)

    Chomchoei, R.; Hansen, Elo Harald; Shiowatana, J.

    2007-01-01

This communication presents a novel approach to perform sequential extraction of elements in solid samples by using a sequential injection (SI) system incorporating a specially designed extraction microcolumn. Based on the operation of the syringe pump, different modes of extraction are potentially...... that the system entails many advantages such as being fully automated, and besides being characterised by rapidity, ease of operation and robustness, it is less prone to risks of contamination and personal errors as encountered in traditional batch systems. Moreover, improvement of the precision and accuracy...... of the chemical fractionation of metals in solids as compared with previous reports is obtained. The system ensures that extraction is performed at designated pH values. Variations of the sample weight to column volume ratio do not affect the amounts of extractable metals, nor do extraction flow rates ranging from 50...

  5. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied for both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied for mean values. (author).
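
For concreteness, the classical SPRT that the OSST reparametrizes can be sketched as follows. This is a minimal, hypothetical implementation of plain Wald SPRT for the mean of a Gaussian signal with known variance, not the one-sided parametrization of the paper; all names are our own.

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classical Wald SPRT for H0: mean = mu0 vs H1: mean = mu1 (known sigma).

    Returns ("H0" | "H1" | "undecided", number of samples consumed).
    """
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```

With observations lying clearly at one of the two hypothesized means, the test terminates after only a handful of samples, which is the efficiency that motivates sequential testing in failure detection.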

  6. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

    On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from...... a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created...... as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

  7. PyCSP - Communicating Sequential Processes for Python

    DEFF Research Database (Denmark)

    Vinter, Brian; Bjørndalen, John Markus; Anshus, Otto Johan

The Python programming language is effective for rapidly specifying programs and experimenting with them. It is increasingly being used in computational sciences, and in teaching computer science. CSP is effective for describing concurrency. It has become especially relevant with the emergence of commodity multi-core architectures. We are interested in exploring how a combination of Python and CSP can benefit both the computational sciences and the hands-on teaching of distributed and parallel computing in computer science. To make this possible, we have developed PyCSP, a CSP library for Python. PyCSP presently supports the core CSP abstractions. We introduce the PyCSP library, its implementation, a few performance benchmarks, and show example code using PyCSP. An early prototype of PyCSP has been used in this year's Extreme Multiprogramming Class at the CS department, University of Copenhagen.
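
The core CSP idea of processes that interact only via synchronizing channels can be sketched with just the standard library (this is not the PyCSP API; `queue.Queue` merely stands in for a CSP channel, and the process/channel names are our own):

```python
import threading
import queue

def producer(chan, items):
    """CSP-style process: communicates only over its channel."""
    for item in items:
        chan.put(item)        # send on the channel
    chan.put(None)            # poison pill signals termination

def consumer(chan, results):
    """Receives until the poison pill arrives; get() blocks, i.e. synchronizes."""
    while True:
        item = chan.get()
        if item is None:
            break
        results.append(item * 2)

chan = queue.Queue()          # stands in for a CSP channel
results = []
t1 = threading.Thread(target=producer, args=(chan, [1, 2, 3]))
t2 = threading.Thread(target=consumer, args=(chan, results))
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the processes share no state other than the channel, the network is deterministic regardless of thread scheduling, which is what makes the CSP style attractive for teaching concurrency.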

  8. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  9. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  10. Decision-making in research tasks with sequential testing.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    Full Text Available BACKGROUND: In a recent controversial essay, published by JPA Ioannidis in PLoS Medicine, it has been argued that in some research fields, most of the published findings are false. Based on theoretical reasoning it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, where single tests are used to evaluate independent hypotheses. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios. In these scenarios, research tasks are solved sequentially, i.e. subsequent tests can be chosen depending on previous results. We investigate simple sequential testing and scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that for the tasks analyzed in this study, the fraction of false among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations. CONCLUSIONS/SIGNIFICANCE: For the research tasks studied here, findings tend to become more reliable over time. We also find that the performance in those experimental settings where not all performed tests could be published turned out to be surprisingly inefficient. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
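
The decline of false positives under repeated informative testing can be illustrated with a simple Bayes computation (a hypothetical numeric example, not the authors' simulation): if independent tests with fixed sensitivity and false-positive rate are applied in sequence and only positives proceed, the fraction of true hypotheses among the surviving positives rises with each round.

```python
def positive_predictive_value(prior, sensitivity, false_positive_rate, rounds):
    """Probability a hypothesis is true given it passed `rounds` independent tests."""
    p_true = prior * sensitivity ** rounds
    p_false = (1 - prior) * false_positive_rate ** rounds
    return p_true / (p_true + p_false)

# Illustrative parameters: 10% prior, 80% sensitivity, 5% false-positive rate.
ppvs = [positive_predictive_value(0.1, 0.8, 0.05, k) for k in range(1, 5)]
```

Even with a low prior, a few rounds of testing drive the positive predictive value close to one, matching the simulated decline in the fraction of false positives.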

  11. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985807 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

  12. Comparing two Poisson populations sequentially: an application

    International Nuclear Information System (INIS)

    Halteman, E.J.

    1986-01-01

Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process
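
A common way to compare two Poisson means sequentially is the conditional-binomial reduction: given totals x + y, x follows a Binomial(x + y, p) law with p = λ₁/(λ₁ + λ₂), so the SPRT runs on a binomial proportion. The sketch below is a hypothetical illustration of that idea with a crude forced decision at truncation, not the Rocky Flats procedure itself; names and parameters are our own.

```python
import math

def poisson_ratio_sprt(pairs, ratio_alt=2.0, alpha=0.05, beta=0.05, max_stages=50):
    """Truncated SPRT comparing two Poisson means via the conditional binomial trick.

    Given counts (x, y) per stage, x | x+y ~ Binomial(x+y, p) with
    p = lam1 / (lam1 + lam2).  H0: lam1 = lam2 (p0 = 0.5) vs
    H1: lam1 = ratio_alt * lam2 (p1 = ratio_alt / (1 + ratio_alt)).
    """
    p0, p1 = 0.5, ratio_alt / (1.0 + ratio_alt)
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr = 0.0
    for stage, (x, y) in enumerate(pairs, start=1):
        llr += x * math.log(p1 / p0) + y * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", stage
        if llr <= lower:
            return "H0", stage
        if stage >= max_stages:
            # crude truncation: decide by the sign of the statistic
            return ("H1" if llr > 0 else "H0"), stage
    return "undecided", len(pairs)
```

Counts that consistently favor one population terminate the test after a few stages, which is the source of the time savings reported in the abstract.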

  13. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  14. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
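
The standard bootstrap SMC loop that such bridging schemes build on can be sketched as follows. This is the plain propagate/weight/resample recursion for a toy one-dimensional Gaussian random-walk model, not the intermediate-weighting scheme of the paper; all names and parameters are illustrative.

```python
import math
import random

def bootstrap_smc(observations, n_particles=500, step_sd=1.0, obs_sd=1.0, seed=0):
    """Minimal bootstrap SMC for a 1-D Gaussian random-walk state-space model.

    Returns the posterior mean estimate of the final state.
    """
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    for y in observations:
        # propagate each particle through the transition kernel
        particles = [x + rng.gauss(0.0, step_sd) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # multinomial resampling concentrates particles near the observation
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles

est = bootstrap_smc([0.5, 1.0, 1.5, 2.0])
```

When an observation is highly informative (small `obs_sd`), almost all prior-propagated particles receive negligible weight, which is exactly the degeneracy that the proposed schedule of intermediate weighting and resampling times is designed to mitigate.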

  15. Sequential test procedures for inventory differences

    International Nuclear Information System (INIS)

    Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

    1985-01-01

By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF6-to-U3O8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs

  16. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  17. Moving Synergistically Acting Drug Combinations to the Clinic by Comparing Sequential versus Simultaneous Drug Administrations.

    Science.gov (United States)

    Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P

    2018-03-01

    Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino- N -[(1 S )-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7 H -pyrrolo[2,3- d ]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1 H -pyrazolo[3,4- d ]pyrimidin-3(2 H )-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. 

  18. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

This paper introduces probabilistic choice to synchronous parallel machine models; in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity

  19. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.
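
The degree-based splitting that underlies such bounds can be illustrated with a toy evaluator (a sketch of the general idea, not the Valiant-Skyum construction): splitting a polynomial into low and high halves makes the two halves independent subproblems, so the evaluation has logarithmic depth instead of Horner's linear chain.

```python
def eval_split(coeffs, x):
    """Evaluate sum(c_i * x**i) by balanced splitting.

    The two recursive calls are independent, so each level of recursion
    could run in parallel: O(log d) depth for a degree-d polynomial.
    """
    n = len(coeffs)
    if n == 1:
        return coeffs[0]
    m = n // 2
    # p(x) = low(x) + x**m * high(x); low and high are evaluable in parallel
    return eval_split(coeffs[:m], x) + x ** m * eval_split(coeffs[m:], x)

def horner(coeffs, x):
    """Sequential baseline: Horner's rule is an O(d)-long dependency chain."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```

Both evaluators compute the same value; only the shape of the dependency graph differs, which is what determines parallel time.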

  20. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    Directory of Open Access Journals (Sweden)

    Kanghoon Jung

    2014-08-01

Full Text Available A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.
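
Preferential attachment in choice sequences can be sketched as a Pólya-urn process (a hypothetical illustration of the "rich get richer" mechanism, not the authors' dual-control model): each option is chosen with probability proportional to one plus the number of times it was chosen before.

```python
import random

def polya_urn_choices(n_choices, n_options=4, seed=1):
    """Pólya-urn sketch of preferential attachment among n_options foods.

    Each option's selection probability is proportional to (1 + past count),
    so early random advantages are amplified into a biased distribution.
    """
    rng = random.Random(seed)
    counts = [0] * n_options
    for _ in range(n_choices):
        weights = [c + 1 for c in counts]
        chosen = rng.choices(range(n_options), weights=weights, k=1)[0]
        counts[chosen] += 1
    return counts

counts = polya_urn_choices(200)
```

Across runs, such a process tends to concentrate choices on a few options, producing the heavy-tailed, biased choice distributions described in the abstract.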

  1. [Co-composting high moisture vegetable waste and flower waste in a sequential fed operation].

    Science.gov (United States)

    Zhang, Xiangfeng; Wang, Hongtao; Nie, Yongfeng

    2003-11-01

Co-composting of high moisture vegetable wastes (celery and cabbage) and flower wastes (carnation) were studied in a sequential fed bed. The preliminary materials of composting were celery and carnation wastes. The sequential fed materials of composting were cabbage wastes and were fed every 4 days. Moisture content of mixture materials was between 60% and 70%. Composting was done in an aerobic static bed of composting based on temperature feedback and control via aeration rate regulation. Aeration was ended when temperature of the pile was about 40 degrees C. Changes of composting temperature, aeration rate, water content, organic matter, ash, pH, volume, NH4(+)-N, and NO3(-)-N were studied. Results show that co-composting of high moisture vegetable wastes and flower wastes, in a sequential fed aerobic static bed based on temperature feedback and control via aeration rate regulation, can stabilize organic matter and remove water rapidly. The sequential fed operation is effective in overcoming the difficulties that prevent traditional composting from being applied successfully where high moisture vegetable wastes greatly exceed flower wastes, as in the Dianchi coastal area.

  2. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    Science.gov (United States)

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498

  3. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    Science.gov (United States)

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear if the sequential graft can be used between the right and left coronary artery system. The purpose of this paper is to clarify the possibility of right coronary artery system anastomosis to left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to right coronary artery (RCA) and the sequential graft was adopted to anastomose left anterior descending (LAD) and left circumflex artery (LCX). While in Model 2, the single graft was anastomosed to LAD and the sequential graft was adopted to anastomose RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and two post-operative models. Flow rates in the coronary artery and grafts were obtained. The hemodynamic parameters were also showed, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows optimistic hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of sequential graft have better long-term patency than the posterior segments. With rational spatial position of the heart vessels, the last anastomosis of sequential graft should be connected to the main branch.

  4. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u){sup 197}Au projectile with {sup 197}Au target

    Energy Technology Data Exchange (ETDEWEB)

    Nasir, Tabassum [Gomal University, Dera Ismail Khan (Pakistan). Dept. of Physics; Khan, Ehsan Ullah [COMSATS Institute of Information Technology (CIIT), Islamabad (Pakistan). Dept. of Physics; Baluch, Javaid Jahan [COMSATS Institute of Information Technology (CIIT), Abbottabad, (Pakistan). Dept. of Environmental Sciences; Shafi-Ur-Rehman, [PAEC, Dera Ghazi Khan (Pakistan). ISL Project; Matiullah, [PINSTECH, Nilore, Islamabad (Pakistan). Physics Div.; Rafique, Muhammad [University of Azad Jammu and Kashmir, Muzaffarabad (Pakistan). Dept. of Physics

    2009-09-15

    The heavy ion interaction of 11.67 MeV/u {sup 197}Au+ {sup 197}Au has been investigated using mica as a passive detector. By employing Solid State Nuclear Track Detection Technique the data of elastic scattering as well as inelastic reaction channel was collected. The off-line data analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on event-by-event basis. Multi pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY based on the procedure of internal calibration, it was possible to derive quantities like mass transfer, total kinetic energy loss and scattering angles. (author)

  5. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u)197Au projectile with 197Au target

    International Nuclear Information System (INIS)

    Nasir, Tabassum; Khan, Ehsan Ullah; Baluch, Javaid Jahan; Shafi-Ur-Rehman; Matiullah; Rafique, Muhammad

    2009-01-01

The heavy ion interaction of (11.67 MeV/u) 197Au + 197Au has been investigated using mica as a passive detector. By employing Solid State Nuclear Track Detection Technique the data of elastic scattering as well as inelastic reaction channel was collected. The off-line data analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on event-by-event basis. Multi-pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY based on the procedure of internal calibration, it was possible to derive quantities like mass transfer, total kinetic energy loss and scattering angles. (author)

  6. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985556 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

  7. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling being the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may trigger quarter-hourly price volatility to decrease by a factor close to four.

  8. THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED

    Directory of Open Access Journals (Sweden)

    Stanislav LICHOROBIEC

    2016-06-01

    Full Text Available This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges results in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.

  9. Pass-transistor asynchronous sequential circuits

    Science.gov (United States)

    Whitaker, Sterling R.; Maki, Gary K.

    1989-01-01

Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realizations for each next state variable and output variable are identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

  10. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why

  11. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

A medium speed (approximately 1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).

  12. Boundary conditions in random sequential adsorption

    Science.gov (United States)

    Cieśla, Michał; Ziff, Robert M.

    2018-04-01

    The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
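
The random sequential adsorption algorithm with periodic boundaries can be sketched in a few lines (a minimal illustration with assumed parameters, not the authors' production code): candidate disk centers are drawn uniformly, and a disk is kept only if it overlaps no previously accepted disk, with distances wrapped around the unit square.

```python
import random

def rsa_disks(radius=0.05, attempts=2000, seed=0):
    """Random sequential adsorption of disks on the unit square.

    Periodic boundary conditions: distances wrap around both axes.
    Returns the list of accepted disk centers.
    """
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        ok = True
        for px, py in placed:
            dx = min(abs(x - px), 1 - abs(x - px))  # periodic wrap in x
            dy = min(abs(y - py), 1 - abs(y - py))  # periodic wrap in y
            if dx * dx + dy * dy < (2 * radius) ** 2:
                ok = False   # overlaps an accepted disk: reject
                break
        if ok:
            placed.append((x, y))
    return placed

disks = rsa_disks()
```

Swapping the wrap-around distance for a plain Euclidean one (open boundaries), or additionally rejecting disks that cross the square's edge (wall boundaries), reproduces the boundary conditions compared in the abstract.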

  13. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main ideas is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plant is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planing tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. 
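
    The plan representation described above, a set of actions plus a partial order constraining execution, can be illustrated with a small sketch. The Python snippet below linearizes such a plan by topological sorting; the action names and ordering constraints are invented for illustration, and the real method's derivation of the order from action structures is not modeled here.

```python
from collections import defaultdict, deque

def linearize(actions, order):
    """Topologically sort a partial-order plan.

    actions: set of action names; order: set of (a, b) pairs meaning
    'a must execute before b'.  Returns one valid execution sequence.
    """
    succ = defaultdict(list)
    indeg = {a: 0 for a in actions}
    for a, b in order:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(sorted(a for a in actions if indeg[a] == 0))
    seq = []
    while ready:
        a = ready.popleft()
        seq.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    if len(seq) != len(actions):
        raise ValueError("partial order contains a cycle")
    return seq

# Hypothetical plan: open_valve and start_pump are unordered (may run
# in parallel), but both must precede heat_on.
plan = {"open_valve", "start_pump", "heat_on"}
before = {("open_valve", "heat_on"), ("start_pump", "heat_on")}
print(linearize(plan, before))
```

    Actions left unordered by the partial order are exactly the ones a maximally parallel plan may execute concurrently; a GRAFCET translation would render them as parallel branches.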

  14. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase in both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming such as loops, branches or recursion are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the used programming language, and in a generic way.
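
    As a minimal illustration of the idea, the same element-wise computation can be written as an explicit sequential loop or as a map pattern handed to a parallel executor. The Python sketch below is only an example; the talk's point is language-independent, and the worker count is an arbitrary choice.

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    return x * x  # stand-in for real per-element work

data = list(range(8))

# Sequential idiom: an explicit loop fixes the execution order.
seq_result = []
for x in data:
    seq_result.append(transform(x))

# Parallel pattern: express *what* is computed (a map) and let the
# executor decide how to schedule it across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    par_result = list(pool.map(transform, data))

assert seq_result == par_result  # the pattern preserves the semantics
```

    Because the map pattern carries no ordering dependencies between elements, the same code scales naturally from one worker to many.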

  15. Sequential extraction of uranium metal contamination

    International Nuclear Information System (INIS)

    Murry, M.M.; Spitz, H.B.; Connick, W.B.

    2016-01-01

    Samples of uranium-contaminated dirt collected from the dirt floor of an abandoned metal rolling mill were analyzed for uranium using a sequential extraction protocol involving a series of five increasingly aggressive solvents. The quantity of uranium extracted from the contaminated dirt by each reagent can aid in predicting the fate and transport of the uranium contamination in the environment. Uranium was separated from each fraction using anion exchange and electrodeposition, and analyzed by alpha spectrometry. Results demonstrate that approximately 77% of the uranium was extracted using NH4Ac in 25% acetic acid. (author)

  16. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing, and 2) MMSE-GDFE preprocessing followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  17. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as the sequential context effect, as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore whether perceptual behavior and diagnostic decisions on preceding cases correlate with decisions on the current case. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
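
    The box-counting estimate of fractal dimension used above to quantify gaze scanpaths can be sketched in a few lines. This is a simplified Python illustration, not the authors' code; the grid sizes and the synthetic point sets are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a
    2-D point set, e.g. a gaze scanpath normalized to the unit square.

    points: (n, 2) array with coordinates in [0, 1).
    sizes:  grid resolutions; the box side length is 1/size.
    """
    pts = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        cells = np.floor(pts * s).astype(int)       # box index per point
        counts.append(len({tuple(c) for c in cells}))
    # D is the slope of log N(eps) against log(1/eps) = log(size).
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

# A dense space-filling cloud should come out near D = 2.
rng = np.random.default_rng(0)
cloud = rng.random((20000, 2))
print(round(box_counting_dimension(cloud), 2))  # → 2.0
```

    A scanpath confined to a curve would instead yield a dimension near 1, so the estimate summarizes how thoroughly gaze covers the image.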

  18. Sequential Logic Model Deciphers Dynamic Transcriptional Control of Gene Expressions

    Science.gov (United States)

    Yeo, Zhen Xuan; Wong, Sum Thai; Arjunan, Satya Nanda Vel; Piras, Vincent; Tomita, Masaru; Selvarajoo, Kumar; Giuliani, Alessandro; Tsuchiya, Masa

    2007-01-01

    Background Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factors activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. Methodology Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. Principal Findings SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during transition from specification to differentiation in wild type endo16 expression profile, SLM reveals three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest detailed mechanism of R switch activity where indirect dependency occurs in between UI activity and R switch during specification to differentiation stage. Conclusions/Significance The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet providing rich biological
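
    The finite-state-machine view underlying the SLM can be illustrated with a toy sketch. The transition rules below are invented for illustration only; they are not the inferred endo16 logic, and the three binary inputs merely borrow the binding-site names UI, R and Otx from the abstract.

```python
def slm_step(state, ui, r, otx):
    """One clock tick of a toy sequential logic model (SLM).

    state: current expression output ('off' | 'on'); ui, r, otx: binary
    activities of three binding sites.  Illustrative transition table:
    repression dominates, either activator site turns the gene on, and
    with no active input the previous output is held (the memory that
    makes the model sequential rather than combinational).
    """
    if r:
        return "off"
    if ui or otx:
        return "on"
    return state

def run(inputs, state="off"):
    trace = []
    for ui, r, otx in inputs:
        state = slm_step(state, ui, r, otx)
        trace.append(state)
    return trace

print(run([(1, 0, 0), (0, 0, 0), (0, 1, 0), (0, 0, 0)]))
```

    The second and fourth steps show the sequential character: the identical input (0, 0, 0) yields different outputs depending on the history of the machine.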

  19. Sequential logic model deciphers dynamic transcriptional control of gene expressions.

    Directory of Open Access Journals (Sweden)

    Zhen Xuan Yeo

    Full Text Available BACKGROUND: Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factors activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. METHODOLOGY: Based on the concept of finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expressions. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs, the dependency and cooperativity, such as synergy effect, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. PRINCIPAL FINDINGS: SLM is verified by a set of well studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during the embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during transition from specification to differentiation in wild type endo16 expression profile, SLM reveals three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and additional activities of binding sites are required. Further analyses suggest detailed mechanism of R switch activity where indirect dependency occurs in between UI activity and R switch during specification to differentiation stage. CONCLUSIONS/SIGNIFICANCE: The sequential logic formalism allows for a simplification of regulation network dynamics going from a continuous to a discrete representation of gene activation in time. In effect our SLM is non-parametric and model-independent, yet

  20. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
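
    The RSA rule, one attachment attempt per time step accepted only if the new molecule overlaps none already adsorbed, can be sketched for hard disks. This is a Python illustration with arbitrary parameters; the paper's coupling of the simulation time step to diffusion in the solution is not modeled here.

```python
import math
import random

def rsa_disks(radius, attempts, seed=1):
    """Random sequential adsorption (RSA) of hard disks on the unit square.

    One attachment attempt per simulation time step: propose a random
    center and accept it only if the new disk overlaps no previously
    adsorbed disk (the geometrical constraint (ii) above).
    """
    rng = random.Random(seed)
    placed = []
    min_d2 = (2 * radius) ** 2          # centers closer than 2r overlap
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        if all((x - px) ** 2 + (y - py) ** 2 >= min_d2 for px, py in placed):
            placed.append((x, y))
    return placed

disks = rsa_disks(radius=0.05, attempts=5000)
coverage = len(disks) * math.pi * 0.05 ** 2
print(len(disks), round(coverage, 2))
```

    As the number of attempts grows, the accepted fraction stalls near the RSA jamming limit for disks (roughly 0.547 of the plane, ignoring the edge effects of this finite square).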

  1. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  2. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects.

    Science.gov (United States)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-11-21

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  3. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects

    International Nuclear Information System (INIS)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-01-01

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  4. Simultaneous or Early Sequential Rupture of Multiple Intracranial Aneurysms: A Rare and Insufficiently Understood Entity.

    Science.gov (United States)

    Hou, Kun; Zhao, Jinchuan; Zhang, Yang; Zhu, Xiaobo; Zhao, Yan; Li, Guichen

    2016-05-01

    Simultaneous or early sequential rupture of multiple intracranial aneurysms (MIAs) is encountered rarely, with no more than 10 cases having been reported. Because of its rarity, many questions concerning this entity remain unanswered. A 67-year-old woman was admitted to the First Hospital of Jilin University (Eastern Division) from a local hospital after a sudden onset of severe headache, nausea, and vomiting. Head computed tomography (CT) at the local hospital revealed diffuse subarachnoid hemorrhage (SAH) that was concentrated predominately in the suprasellar cistern and interhemispheric fissure. During her transfer to our hospital, she experienced another episode of sudden headache. CT on admission to our hospital revealed that the SAH had increased, with 2 isolated hematomas, one in the interhemispheric fissure and one in the left paramedian frontal lobe. Further CT angiography and intraoperative findings were in favor of early sequential rupture of 2 intracranial aneurysms. To further elucidate the characteristics, mechanism, management, and prognosis of this specific entity, we conducted a comprehensive review of the literature. The mechanism of simultaneous or early sequential rupture of MIAs is still obscure. Transient elevation of blood pressure might play a role in the process, and preventing the sudden elevation of blood pressure might be beneficial for patients with aneurysmal SAH and MIAs. The management of simultaneously or early sequentially ruptured aneurysms is more complex because of the difficulty of identifying the responsible aneurysm, the urgency of treatment, the demands of intraoperative manipulation, and the poor prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    Science.gov (United States)

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This also holds for monetary-incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
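
    For reference, the Cumulative Prospect Theory certainty equivalent that such modeling tests can be computed from the standard Tversky and Kahneman (1992) functional forms. The Python sketch below handles only a binary gain lottery (win x with probability p, else nothing); the parameter values are their published median estimates, and none of this reproduces the paper's memory-constrained model.

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def certainty_equivalent(x, p, alpha=0.88, gamma=0.61):
    """CE of a binary lottery (x with probability p, else 0) under CPT
    with power value function v(x) = x**alpha:
    CE = v^-1( w(p) * v(x) ) = (w(p) * x**alpha) ** (1 / alpha).
    """
    return (w(p) * x ** alpha) ** (1 / alpha)

# Overweighting of small probabilities: the CE exceeds the expected value.
print(round(certainty_equivalent(100, 0.05), 2))  # expected value is 5.0
```

    Comparing reported certainty equivalents against this benchmark is one way to quantify how far sequential presentation pushes valuations beyond what CPT predicts.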

  6. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
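
    The core of such a controller, Wald's sequential probability ratio test applied to Poisson counts, can be sketched as follows. This is a Python illustration; the count rates and error levels are invented for the example and are not the monitors' actual settings.

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.1):
    """Wald SPRT for Poisson count data in a radiation monitor.

    lam0: background mean counts per interval; lam1: alarm-level mean;
    alpha/beta: false-alarm and missed-detection probabilities.
    Returns ('alarm' | 'clear' | 'continue', intervals used).  Each
    interval adds k*ln(lam1/lam0) - (lam1 - lam0) to the log-likelihood
    ratio, which is compared to Wald's two stopping thresholds.
    """
    upper = math.log((1 - beta) / alpha)   # cross: decide alarm (H1)
    lower = math.log(beta / (1 - alpha))   # cross: decide background (H0)
    llr = 0.0
    for n, k in enumerate(counts, 1):
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "clear", n
    return "continue", len(counts)

print(sprt_poisson([3, 2, 4, 3, 2, 3], lam0=3.0, lam1=8.0))  # → ('clear', 2)
print(sprt_poisson([9, 11, 10], lam0=3.0, lam1=8.0))         # → ('alarm', 2)
```

    The shortened monitoring times reported above come exactly from this mechanism: clear background or strong transients cross a threshold after a few intervals, so the average test length is far below the fixed-interval worst case.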

  7. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  8. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

  9. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
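
    A simplified version of this kind of pattern query can be sketched in a few lines of Python. The event codes and the wildcard convention below are invented for illustration and are not C-quence's actual query syntax.

```python
def pattern_rate(sequence, pattern, wildcard="*"):
    """Count a categorical pattern in sequential data, C-quence style.

    sequence: list of event codes; pattern: list of codes, where the
    wildcard matches any single event.  Returns (count, rate): the
    number of matches and matches per possible window position.
    """
    windows = len(sequence) - len(pattern) + 1
    if windows <= 0:
        return 0, 0.0
    count = sum(
        all(p == wildcard or p == s for p, s in zip(pattern, sequence[i:]))
        for i in range(windows)
    )
    return count, count / windows

# Hypothetical coded interaction: find 'gaze, anything, speak'.
events = ["gaze", "smile", "speak", "gaze", "nod", "speak", "gaze", "smile"]
print(pattern_rate(events, ["gaze", "*", "speak"]))
```

    A real tool adds a query interface and richer pattern operators on top, but the underlying computation is this sliding-window match over categorical codes.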

  10. Rapid Prototyping Enters Mainstream Manufacturing.

    Science.gov (United States)

    Winek, Gary

    1996-01-01

    Explains rapid prototyping, a process that uses computer-assisted design files to create a three-dimensional object automatically, speeding the industrial design process. Five commercially available systems and two emerging types--the 3-D printing process and repetitive masking and depositing--are described. (SK)

  11. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  12. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  13. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements.
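
    The error-control idea behind such interim analyses can be illustrated with an alpha-spending function. The Python sketch below implements the Lan-DeMets O'Brien-Fleming-type spending function for an overall two-sided alpha of 0.05; the information fractions are arbitrary example values, and this is not the cited study's analysis plan.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def obf_alpha_spent(t):
    """Lan-DeMets O'Brien-Fleming-type spending function for overall
    two-sided alpha = 0.05: the Type I error 'spent' by information
    fraction t in (0, 1] is alpha(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t)).
    """
    z = 1.959964  # Phi^-1(0.975)
    return 2.0 - 2.0 * norm_cdf(z / math.sqrt(t))

# Very little alpha is spent at early looks, so early stopping demands
# extreme evidence and most of the error budget is kept for the end.
for t in (0.25, 0.5, 0.75, 1.0):
    print(t, round(obf_alpha_spent(t), 4))
```

    This is why a group-sequential design can terminate a trial early without inflating the overall Type I error: each interim look is charged against the same fixed alpha budget.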

  14. Group sequential and confirmatory adaptive designs in clinical trials

    CERN Document Server

    Wassmer, Gernot

    2016-01-01

    This book provides an up-to-date review of the general principles of and techniques for confirmatory adaptive designs. Confirmatory adaptive designs are a generalization of group sequential designs. With these designs, interim analyses are performed in order to stop the trial prematurely under control of the Type I error rate. In adaptive designs, it is also permissible to perform a data-driven change of relevant aspects of the study design at interim stages. This includes, for example, a sample-size reassessment, a treatment-arm selection or a selection of a pre-specified sub-population. Essentially, this adaptive methodology was introduced in the 1990s. Since then, it has become popular and the object of intense discussion and still represents a rapidly growing field of statistical research. This book describes adaptive design methodology at an elementary level, while also considering designing and planning issues as well as methods for analyzing an adaptively planned trial. This includes estimation methods...

  15. Sequential infiltration synthesis for advanced lithography

    Energy Technology Data Exchange (ETDEWEB)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

    2017-10-10

    A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

  16. Clinical evaluation of synthetic aperture sequential beamforming

    DEFF Research Database (Denmark)

    Hansen, Peter Møller; Hemmsen, Martin Christian; Lange, Theis

    2012-01-01

    In this study, clinically relevant ultrasound images generated with synthetic aperture sequential beamforming (SASB) are compared to images generated with a conventional technique. The advantage of SASB is the ability to produce high-resolution ultrasound images with a high frame rate while massively reducing the amount of generated data. SASB was implemented in a system consisting of a conventional ultrasound scanner connected to a PC via a research interface. This setup enables simultaneous recording with both SASB and the conventional technique. Eighteen volunteers were ultrasound scanned abdominally, and 84 sequence pairs were recorded. Each sequence pair consists of two simultaneous recordings of the same anatomical location with SASB and conventional B-mode imaging. The images were evaluated in terms of spatial resolution, contrast, unwanted artifacts, and penetration depth...

  17. Sequential cooling insert for turbine stator vane

    Science.gov (United States)

    Jones, Russel B

    2017-04-04

    A sequential flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece from a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radial extending impingement cooling air holes alternating with rows of radial extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radial extending cooling air supply channels and return air channels that form a series of impingement cooling on the pressure side followed by the suction side of the insert.

  18. Gleason-Busch theorem for sequential measurements

    Science.gov (United States)

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
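
    The Kraus-operator form of sequential measurement and the state update rule discussed above can be illustrated numerically. The Python/NumPy sketch below uses projective qubit measurements, the simplest special case of the Kraus form, with a |+> input state chosen for the example.

```python
import numpy as np

# Kraus operators for a projective qubit measurement in the Z basis.
K0 = np.array([[1, 0], [0, 0]], dtype=complex)   # outcome 0
K1 = np.array([[0, 0], [0, 1]], dtype=complex)   # outcome 1

def measure(rho, kraus):
    """Born-rule probabilities and post-measurement states:
    p_i = tr(K_i rho K_i^dag), rho_i = K_i rho K_i^dag / p_i."""
    out = []
    for K in kraus:
        post = K @ rho @ K.conj().T
        p = post.trace().real
        out.append((p, post / p if p > 1e-12 else post))
    return out

# Density matrix of |+><+|, then two sequential Z measurements.
plus = np.full((2, 2), 0.5, dtype=complex)
for p1, rho1 in measure(plus, [K0, K1]):
    for p2, _ in measure(rho1, [K0, K1]):
        print(round(p1, 2), round(p2, 2))
```

    The joint probabilities p1*p2 over the four outcome pairs sum to one, and the second measurement reproduces the first outcome with certainty: precisely the state update rule that the axiomatic treatment derives rather than postulates.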

  19. Sequential Stereotype Priming: A Meta-Analysis.

    Science.gov (United States)

    Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L

    2017-08-01

    Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect: whether priming is due to a relation between the prime and target stimuli or between the prime and target response, and the roles of participant task, stereotype dimension, stimulus onset asynchrony (SOA), and stimulus type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.

  20. Sequential Acral Lentiginous Melanomas of the Foot

    Directory of Open Access Journals (Sweden)

    Jiro Uehara

    2010-12-01

    Full Text Available A 64-year-old Japanese woman had a lightly brown-blackish pigmented macule (1.2 cm in diameter) on the left sole of her foot. She received surgical excision following a diagnosis of acral lentiginous melanoma (ALM), which was confirmed histopathologically. One month after the operation, a second melanoma lesion was noticed adjacent to the grafted site. Histopathologically, the two lesions had no continuity, but HMB-45 and cyclin D1 double-positive cells were detected not only in aggregates of atypical melanocytes but also in single cells near the cutting edge of the first lesion. The unique occurrence of a sequential lesion of a primary melanoma might be caused by stimulated subclinical field cells during the wound healing process following the initial operation. This case warrants further investigation to establish the appropriate surgical margin of ALM lesions.

  1. Sequential Therapy in Metastatic Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Bradford R Hirsch

    2016-04-01

    Full Text Available The treatment of metastatic renal cell carcinoma (mRCC has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.

  2. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing, which comprises hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processing materials

  3. Prosody and alignment: a sequential perspective

    Science.gov (United States)

    Szczepek Reed, Beatrice

    2010-12-01

    In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. Lack of prosodic matching and other specific prosodic patterns are interpreted as features of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of `positive', or aligning, actions.

  4. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870+ Q- (P870+ is the oxidized primary electron donor, a bacteriochlorophyll special pair, and Q- is the reduced primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870+ I- (I- is the reduced intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (1H or 2H) on P870, I and Q affects the ESP of the EPR spectrum of P870+ Q-, observed at two different microwave frequencies, in Fe2+-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870+ Q- radical pair interactions are the dominant source of ESP production in 2H bacterial reaction centers

  5. Sequential reduction of external networks for the security- and short circuit monitor in power system control centers

    Energy Technology Data Exchange (ETDEWEB)

    Dietze, P [Siemens A.G., Erlangen (Germany, F.R.). Abt. ESTE

    1978-01-01

    For the evaluation of the effects of switching operations, or for the simulation of line, transformer, and generator outages, the influence of interconnected neighboring networks is modelled by network equivalents in the process computer. The basic passive conductivity model is produced by sequential reduction and adapted to fit the active network behavior. The reduction routine uses the admittance matrix, sparse-matrix techniques, and optimal ordering, and is suitable for process-computer applications.
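    The passive network equivalent described above can be illustrated with Kron reduction of a nodal admittance matrix, eliminating external buses one at a time. This is a generic textbook sketch, not the Siemens routine itself; the 4-bus matrix is invented for illustration.

```python
import numpy as np

# Toy 4-bus nodal admittance matrix (symmetric, diagonally dominant).
# Buses 0-1 are internal (kept); buses 2-3 belong to the external network.
Y = np.array([
    [ 3.0, -1.0, -1.0,  0.0],
    [-1.0,  4.0, -1.0, -1.0],
    [-1.0, -1.0,  3.0, -1.0],
    [ 0.0, -1.0, -1.0,  3.0],
])
keep, elim = [0, 1], [2, 3]

# One-shot block reduction: Y_red = Y_kk - Y_ke Y_ee^-1 Y_ek
Ykk = Y[np.ix_(keep, keep)]
Yke = Y[np.ix_(keep, elim)]
Yek = Y[np.ix_(elim, keep)]
Yee = Y[np.ix_(elim, elim)]
Y_block = Ykk - Yke @ np.linalg.solve(Yee, Yek)

def kron_eliminate(Y, k):
    """Eliminate bus k from the admittance matrix (one sequential step)."""
    idx = [i for i in range(Y.shape[0]) if i != k]
    return Y[np.ix_(idx, idx)] - np.outer(Y[idx, k], Y[k, idx]) / Y[k, k]

# Sequential reduction: eliminate the external buses one at a time
Y_seq = kron_eliminate(kron_eliminate(Y, 3), 2)
assert np.allclose(Y_seq, Y_block)  # sequential and block reduction agree
```

    The equality of the sequential and block results is the Schur-complement identity that makes bus-by-bus elimination (which pairs naturally with sparse storage and optimal ordering) equivalent to the full matrix reduction.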

  6. Automatic quantitative analysis of liver functions by a computer system

    International Nuclear Information System (INIS)

    Shinpo, Takako

    1984-01-01

    In the previous paper, we confirmed the clinical usefulness of hepatic clearance (hepatic blood flow), i.e., the hepatic uptake and blood disappearance rate coefficients. These were obtained from the initial slope index of each minute during a period of five frames of a hepatogram after injecting 37 MBq of 99mTc-Sn-colloid. To obtain this information simply, rapidly, and accurately, we developed an automatic quantitative analysis of liver function. Information was obtained every quarter minute during a period of 60 frames of the sequential image. Sequential counts were measured for the heart, the whole liver, and the left and right lobes using a computer connected to a scintillation camera. We measured the effective hepatic blood flow as the disappearance rate multiplied by the percentage of hepatic uptake, computed as (liver counts)/(total counts of the field). Our method automatically recorded the disappearance curve and the uptake curve on the basis of the heart and whole-liver counts, respectively, and computed the results using the BASIC language. This method makes it possible to obtain an image of the initial uptake of 99mTc-Sn-colloid into the liver with a small dose. (author)
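    A minimal sketch of the described computation, with hypothetical time-activity curves standing in for the scintillation-camera counts: the disappearance rate is taken from the early log-slope of the heart (blood-pool) curve, and the uptake percentage is liver counts divided by total counts of the field.

```python
import numpy as np

# Hypothetical time-activity curves (counts per frame), illustrative only.
t = np.arange(0, 15, 0.25)                    # quarter-minute frames
heart = 5000 * np.exp(-0.35 * t) + 500        # blood-pool disappearance
liver = 6000 * (1 - np.exp(-0.30 * t)) + 300  # hepatic uptake
total_field = heart + liver + 2000            # remaining counts in the field

# Disappearance rate: slope of log(heart counts) over the early frames
early = t < 5
slope, _ = np.polyfit(t[early], np.log(heart[early]), 1)
k_disappear = -slope                          # per minute

# Hepatic uptake fraction: (liver counts) / (total counts of the field)
uptake_frac = liver[-1] / total_field[-1]

# Effective hepatic blood flow index = disappearance rate x uptake fraction
ehbf = k_disappear * uptake_frac
print(f"k = {k_disappear:.3f}/min, uptake = {uptake_frac:.2f}, index = {ehbf:.3f}")
```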

  7. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
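    The batch-versus-sequential comparison can be illustrated on a toy linear estimation problem: with zero process noise and a diffuse prior, a sequential (Kalman-style recursive least squares) filter converges to the batch least-squares solution once all measurements are processed. This is a generic sketch, not RTOD/E or GTDS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "orbit": a quantity that is linear in time, measured with noise.
t = np.linspace(0, 10, 200)
H = np.column_stack([np.ones_like(t), t])      # design matrix [1, t]
x_true = np.array([5.0, -1.2])                 # intercept, slope
z = H @ x_true + rng.normal(0, 0.5, t.size)    # noisy measurements

# Batch least squares: all measurements processed at once
x_batch, *_ = np.linalg.lstsq(H, z, rcond=None)

# Sequential estimation: recursive least squares (Kalman update, Q = 0)
x = np.zeros(2)
P = np.eye(2) * 1e6                            # diffuse prior covariance
R = 0.5 ** 2                                   # measurement variance
for h, zi in zip(H, z):
    S = h @ P @ h + R                          # innovation variance
    K = P @ h / S                              # gain
    x = x + K * (zi - h @ x)                   # measurement update
    P = P - np.outer(K, h @ P)                 # covariance update

print(x_batch, x)  # near-identical once all data are filtered through
```

    This is the sense in which the forward-filtered sequential solution is expected to agree with the definitive batch solution after the filter reaches steady state.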

  9. Pricing for a Durable-Goods Monopolist Under Rapid Sequential Innovation

    OpenAIRE

    Laura J. Kornish

    2001-01-01

    A durable-goods monopolist who will be introducing new and improved versions of his product must decide how to price his products, keeping in mind the relative attractiveness of the current and future products. Dhebar (1994) has shown that if technology is changing too quickly and the producer cannot credibly commit to future prices and quality, then no equilibrium strategy exists. That is, there is no credible strategy for the future product that the producer can commit to in the first perio...

  10. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    textabstractWe define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector

  11. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. No significant difference was also found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  12. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  13. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential

  15. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns
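    Winnerless competition is often modeled with generalized Lotka-Volterra dynamics whose asymmetric inhibition produces a cycle of transient "winners". The sketch below uses the classic three-unit May-Leonard parameterization as an illustrative stand-in for the paper's two-layer network; the parameters and integration scheme are assumptions, not the authors'.

```python
import numpy as np

# Generalized Lotka-Volterra network with asymmetric inhibition:
# rho[i][j] is the inhibition of unit i by unit j (May-Leonard values).
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])

def step(a, dt=0.01):
    """One Euler step of da_i/dt = a_i * (1 - sum_j rho_ij a_j)."""
    return a + dt * a * (1.0 - rho @ a)

a = np.array([0.6, 0.3, 0.1])
history = []
for _ in range(20000):
    a = step(a)
    history.append(a.copy())
history = np.array(history)

# Activity switches sequentially: each unit is the momentary "winner"
# (largest activity) at some point along the trajectory.
winners = history.argmax(axis=1)
print(sorted(set(winners.tolist())))
```

    The sequence of winners, rather than a fixed-point attractor, is what encodes the stored pattern sequence in this kind of model.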

  16. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Boolean operators and swept surfaces can create objects capable of reflecting light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
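    For a lossless sequential array tapping off N equal beams, one common closed form sets the k-th reflectivity to R_k = 1/(N - k + 1), so the final element is a full mirror. The sketch below verifies that this rule yields equal output powers; it is an illustration under the lossless assumption, not necessarily the paper's own derivation.

```python
# Equal-power reflectivities for a sequential line of n beam splitters:
# R_k = 1 / (n - k + 1), k = 1..n (the last element is a mirror, R_n = 1).
def equal_power_reflectivities(n):
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def output_powers(reflectivities, p_in=1.0):
    """Propagate the beam down the line, tapping off the reflected power."""
    outputs, remaining = [], p_in
    for r in reflectivities:
        outputs.append(remaining * r)   # reflected (tapped) beam
        remaining *= (1.0 - r)          # transmitted power continues on
    return outputs

refl = equal_power_reflectivities(5)    # [1/5, 1/4, 1/3, 1/2, 1]
powers = output_powers(refl)
print(refl, powers)                     # five equal outputs of 0.2 each
```

    For example, with five splitters the remaining power before splitter k is (n - k + 1)/n, so reflecting the fraction 1/(n - k + 1) always taps off exactly 1/n of the input.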

  17. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

  18. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  19. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  20. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  1. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    Science.gov (United States)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

    Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural-balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.

  2. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    Science.gov (United States)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high performance professional applications such as computer monitors and electronic cinema. A fast responding light-valve is, clearly, essential for a well performing system. Response speed of transmissive LC light-valves has been marginal thus far for good color rendition. Recently, Sevchenko Institute has made some very fast reflective LC cells which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector emulation testbed. In our presentation we describe our highly efficient color sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  3. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    Science.gov (United States)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
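    A brute-force illustration of the RSA process for hard disks: place disks at random, reject overlaps, and stop after many consecutive failures. As the abstract notes, this naive stopping rule only approximates saturation; removing that limitation is exactly what the paper's algorithm does, so the sketch below shows the baseline process, not the paper's method.

```python
import math
import random

def rsa_disks(radius=0.05, max_failures=20000, seed=1):
    """Random sequential adsorption of equal hard disks in the unit square
    (disks kept fully inside; no periodic boundary)."""
    random.seed(seed)
    centers, failures = [], 0
    while failures < max_failures:
        x = random.uniform(radius, 1 - radius)
        y = random.uniform(radius, 1 - radius)
        # Accept only if the new disk overlaps none of the placed disks
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))
            failures = 0
        else:
            failures += 1
    return centers

centers = rsa_disks()
density = len(centers) * math.pi * 0.05 ** 2   # covered area fraction
print(len(centers), f"covered fraction ~ {density:.3f}")
```

    Raising `max_failures` pushes the packing closer to saturation but with rapidly diminishing returns, which is why extrapolation (or the paper's exact finite-time construction) is needed to reach the true limit.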

  4. JINR rapid communications

    International Nuclear Information System (INIS)

    1999-01-01

    The present collection of rapid communications from JINR, Dubna, contains seven separate records: on measurements of the total cross section difference Δσ_L(np) at 1.59, 1.79, and 2.20 GeV; on the estimation of angular distributions of doubly charged spectator fragments in nucleus-nucleus interactions at superhigh energies; on simulated dE/dx analysis results for the silicon inner tracking system of the ALICE set-up at the LHC accelerator; on high-multiplicity processes; on triggering of high-multiplicity events using calorimetry; on ORBIT-3.0, a computer code for simulation and correction of the closed orbit and first turn in synchrotrons; and on determination of memory performance

  5. Articulated Human Motion Tracking Using Sequential Immune Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yi Li

    2013-01-01

    Full Text Available We formulate human motion tracking as a high-dimensional constrained optimization problem. A novel generative method is proposed for human motion tracking in the framework of evolutionary computation. The main contribution is that we introduce the immune genetic algorithm (IGA) for pose optimization in the latent space of human motion. First, we perform human motion analysis in the learnt latent space of human motion. As the latent space is low dimensional and contains the prior knowledge of human motion, it makes pose analysis more efficient and accurate. Then, in the search strategy, we apply IGA for pose optimization. Compared with the genetic algorithm and other evolutionary methods, its main advantage is the ability to use the prior knowledge of human motion. We design an IGA-based method to estimate human pose from static images for initialization of motion tracking, and we propose a sequential IGA (S-IGA) algorithm for motion tracking by incorporating temporal continuity information into the traditional IGA. Experimental results on different videos of different motion types show that our IGA-based pose estimation method can be used for initialization of motion tracking. The S-IGA-based motion tracking method can achieve accurate and stable tracking of 3D human motion.

  6. Transparent thin-film transistor exploratory development via sequential layer deposition and thermal annealing

    International Nuclear Information System (INIS)

    Hong, David; Chiang, Hai Q.; Presley, Rick E.; Dehuff, Nicole L.; Bender, Jeffrey P.; Park, Cheol-Hee; Wager, John F.; Keszler, Douglas A.

    2006-01-01

    A novel deposition methodology is employed for exploratory development of a class of high-performance transparent thin-film transistor (TTFT) channel materials involving oxides composed of heavy-metal cations with (n-1)d^10 ns^0 (n ≥ 4) electronic configurations. The method involves sequential radio-frequency sputter deposition of thin, single-cation oxide layers and subsequent post-deposition annealing in order to obtain a multi-component oxide thin film. The viability of this rapid materials development methodology is demonstrated through the realization of high-performance TTFTs with channel layers composed of zinc oxide/tin oxide and tin oxide/indium oxide

  7. Sinonasal carcinoma presenting as chronic sinusitis and sequential bilateral visual loss

    Directory of Open Access Journals (Sweden)

    Wei-Yu Chiang

    2015-01-01

    Full Text Available Sinonasal undifferentiated carcinoma-related rhinogenic optic neuropathy is rare and may lead to visual loss. To the best of our knowledge, this is the first report of bilateral sequential visual loss induced by this etiology. It is important to differentiate between chronic sinusitis and malignancy on the basis of specific findings on magnetic resonance images. Surgical decompression with multidisciplinary therapy, including steroids, chemotherapy, and radiotherapy, is indicated. However, no visual improvement was noted in this case, emphasizing the rapid disease progression and importance of early diagnosis and treatment.

  8. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to stay up to date. Computer people do not

  9. Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization

    Science.gov (United States)

    Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.

    2008-01-01

    The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% of entrapment efficiency. These novel paclitaxel nanoparticles were stable at 4°C over three months and in PBS at 37°C over 102 hours as measured by physical stability. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have similar anticancer activities compared to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder comprised only of PX BTM NPs in water could be rapidly rehydrated with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential Simplex Optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929

  10. RAPID TRANSFER ALIGNMENT USING FEDERATED KALMAN FILTER

    Institute of Scientific and Technical Information of China (English)

    GU Dong-qing; QIN Yong-yuan; PENG Rong; LI Xin

    2005-01-01

    The dimension of the centralized Kalman filter (CKF) for rapid transfer alignment (TA) is as high as 21 if the aircraft wing flexure motion is considered. The 21-dimensional CKF imposes a heavy calculation burden on the computer and makes it difficult to meet the high filtering update rate desired for rapid TA. A federated Kalman filter (FKF) for rapid TA is proposed to solve this dilemma. The structure and the algorithm of the FKF, which can perform parallel computation and has a smaller calculation burden, are designed. The wing flexure motion is modeled, and then the 12th-order velocity-matching local filter and the 15th-order attitude-matching local filter are devised. Simulation results show that the proposed FKF for rapid TA has almost the same performance as the CKF, while its calculation burden is markedly decreased.
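    The master-filter fusion step of a federated Kalman filter can be sketched for a scalar state: local estimates are combined with information (inverse-covariance) weights. The numbers below are hypothetical, and the paper's 12th- and 15th-order local filters are far beyond this toy.

```python
# Information-weighted fusion of local filter estimates: the master-filter
# step of a federated Kalman filter, for a scalar state.
def fuse(estimates, variances):
    info = [1.0 / p for p in variances]            # information weights
    p_fused = 1.0 / sum(info)                      # fused variance
    x_fused = p_fused * sum(i * x for i, x in zip(info, estimates))
    return x_fused, p_fused

# Two local filters tracking the same quantity with different accuracy
# (hypothetical numbers, loosely named after the paper's local filters)
x1, p1 = 1.02, 0.04   # "velocity-matching" local estimate and variance
x2, p2 = 0.95, 0.09   # "attitude-matching" local estimate and variance

x, p = fuse([x1, x2], [p1, p2])
print(x, p)   # fused estimate lies between x1 and x2; variance below both
```

    Because each local filter runs independently and only this cheap fusion step combines them, the structure parallelizes naturally, which is the calculation-burden advantage the abstract describes.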

  11. Rapid shallow breathing

    Science.gov (United States)

    Tachypnea; Breathing - rapid and shallow; Fast shallow breathing; Respiratory rate - rapid and shallow ... Shallow, rapid breathing has many possible medical causes, including: Asthma Blood clot in an artery in the ...

  12. Constrained treatment planning using sequential beam selection

    International Nuclear Information System (INIS)

    Woudstra, E.; Storchi, P.R.M.

    2000-01-01

    In this paper an algorithm is described for automated treatment plan generation. The algorithm aims at delivery of the prescribed dose to the target volume without violation of constraints for target, organs at risk and the surrounding normal tissue. Pre-calculated dose distributions for all candidate orientations are used as input. Treatment beams are selected in a sequential way. A score function designed for beam selection is used for the simultaneous selection of beam orientations and weights. In order to determine the optimum choice for the orientation and the corresponding weight of each new beam, the score function is first redefined to account for the dose distribution of the previously selected beams. Addition of more beams to the plan is stopped when the target dose is reached or when no additional dose can be delivered without violating a constraint. In the latter case the score function is modified by importance factor changes to enforce better sparing of the organ with the limiting constraint and the algorithm is run again. (author)
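
    The sequential selection loop described above can be sketched as a greedy search over pre-calculated unit-weight dose distributions. This is an illustrative toy, not the authors' implementation: the voxel data, weighting rule, and score function (coldest target voxel) are hypothetical simplifications.

```python
import numpy as np

def select_beams(doses, target, oar, d_presc, d_oar_max, max_beams=4):
    """Greedy sequential beam selection (illustrative sketch). doses holds
    the pre-calculated dose per unit weight for every candidate orientation;
    each iteration adds the beam whose weighted addition best improves the
    target without violating the organ-at-risk (OAR) constraint."""
    total = np.zeros(doses.shape[1])
    chosen = []
    while len(chosen) < max_beams and total[target].mean() < d_presc:
        deficit = d_presc - total[target].mean()
        best, best_w, best_score = None, 0.0, -np.inf
        for b in range(len(doses)):
            if b in chosen:
                continue
            w = deficit / doses[b, target].mean()  # weight closing the deficit
            trial = total + w * doses[b]
            if trial[oar].max() > d_oar_max:       # constraint would be violated
                continue
            score = trial[target].min()            # score: coldest target voxel
            if score > best_score:
                best, best_w, best_score = b, w, score
        if best is None:                           # no admissible beam left
            break
        chosen.append(best)
        total += best_w * doses[best]
    return chosen, total

# 3 candidate orientations, 4 voxels: [0, 1] target, [2, 3] organ at risk
doses = np.array([[1.0, 1.0, 0.8, 0.0],
                  [1.0, 1.0, 0.0, 0.8],
                  [0.5, 0.5, 0.1, 0.1]])
chosen, total = select_beams(doses, [0, 1], [2, 3], d_presc=60.0, d_oar_max=30.0)
```

    Here beams 0 and 1 would each overdose the organ at risk at the weight needed to reach the prescription, so the OAR-sparing orientation is selected.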

  13. Phenomenology of the next sequential lepton

    International Nuclear Information System (INIS)

    Rizzo, T.G.

    1980-01-01

    We consider the phenomenology of a sequential, charged lepton in the mass range 6-13 GeV. We find the semileptonic branching ratio of such a lepton to be approx. 13%; the dominant two-body modes are found to include the decay L → ν_L F* with a branching ratio approx. 6%. In this analysis we assume that the mass of the lepton under consideration is lighter than the t quark, such that decays such as L → ν_L t-bar q, where q = (d, s, or b), are kinematically forbidden. We also find that decays such as L → ν_L B* (c-bar b) can also be as large as approx. 6% depending on the mixing angles; the lifetime of such a lepton is found to be approx. 2.6 x 10^-12 M_L^-5 sec, where M_L is in GeV

  14. The Origin of Sequential Chromospheric Brightenings

    Science.gov (United States)

    Kirk, M. S.; Balasubramaniam, K. S.; Jackiewicz, J.; Gilbert, H. R.

    2017-06-01

    Sequential chromospheric brightenings (SCBs) are often observed in the immediate vicinity of erupting flares and are associated with coronal mass ejections. Since their initial discovery in 2005, there have been several subsequent investigations of SCBs. These studies have used differing detection and analysis techniques, making it difficult to compare results between studies. This work employs the automated detection algorithm of Kirk et al. (Solar Phys. 283, 97, 2013) to extract the physical characteristics of SCBs in 11 flares of varying size and intensity. We demonstrate that the magnetic substructure within the SCB appears to have a significantly smaller area than the corresponding Hα emission. We conclude that SCBs originate in the lower corona around 0.1 R_{⊙} above the photosphere, propagate away from the flare center at speeds of 35 - 85 km s^{-1}, and have peak photosphere magnetic intensities of 148±2.9 G. In light of these measurements, we infer SCBs to be distinctive chromospheric signatures of erupting coronal mass ejections.

  15. Social Influences in Sequential Decision Making.

    Directory of Open Access Journals (Sweden)

    Markus Schöbel

    Full Text Available People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.
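
    The Bayesian benchmark for such an information cascade reduces to log-odds arithmetic, and the study's over/under-weighting idea can be expressed as free weights on the social and private terms. A sketch (the weight values are illustrative parameters, not the paper's fitted estimates):

```python
from math import log

def cascade_choice(prior_log_odds, private_signal, p=2/3,
                   w_social=1.0, w_private=1.0):
    """One agent's decision in an information cascade. prior_log_odds is the
    log-odds implied by predecessors' public choices; private_signal is +1
    (favours option A) or -1 (favours B), correct with probability p.
    w_social and w_private model over-/under-weighting of social vs. private
    information (both 1.0 gives the normative Bayesian agent)."""
    llr_private = private_signal * log(p / (1 - p))
    posterior = w_social * prior_log_odds + w_private * llr_private
    return ('A' if posterior > 0 else 'B'), posterior

# two predecessors publicly chose A; the agent's own signal says B
public_evidence = 2 * log(2)              # two inferred A-signals at p = 2/3
bayes_choice, _ = cascade_choice(public_evidence, -1)
private_heavy, _ = cascade_choice(public_evidence, -1, w_private=3.0)
```

    The Bayesian agent follows the crowd despite a contrary private signal (a cascade), while an agent that overweights private information breaks the cascade, which is the pattern the study reports.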

  16. Social Influences in Sequential Decision Making

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448

  17. Sequential acquisition of mutations in myelodysplastic syndromes.

    Science.gov (United States)

    Makishima, Hideki

    2017-01-01

    Recent progress in next-generation sequencing technologies allows us to discover frequent mutations throughout the coding regions of myelodysplastic syndromes (MDS), potentially providing us with virtually a complete spectrum of driver mutations in this disease. As shown by many study groups these days, such driver mutations are acquired in a gene-specific fashion. For instance, DDX41 mutations are observed in germline cells long before MDS presentation. In blood samples from healthy elderly individuals, somatic DNMT3A and TET2 mutations are detected as age-related clonal hematopoiesis and are believed to be a risk factor for hematological neoplasms. In MDS, mutations of genes such as NRAS and FLT3, designated as Type-1 genes, may be significantly associated with leukemic evolution. Another type (Type-2) of genes, including RUNX1 and GATA2, are related to progression from low-risk to high-risk MDS. Overall, various driver mutations are sequentially acquired in MDS, at a specific time, in either germline cells, normal hematopoietic cells, or clonal MDS cells.

  18. Building a Lego wall: Sequential action selection.

    Science.gov (United States)

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Social Influences in Sequential Decision Making.

    Science.gov (United States)

    Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael

    2016-01-01

    People often make decisions in a social environment. The present work examines social influence on people's decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others' authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.

  20. Computer-Based Career Interventions.

    Science.gov (United States)

    Mau, Wei-Cheng

    The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
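
    A sequential-elimination strategy of the kind such a program integrates can be sketched in a few lines: options are screened against one attribute cutoff at a time, most important attribute first. The career options, attributes, and cutoffs below are invented for illustration:

```python
def sequential_elimination(options, thresholds):
    """Sequential elimination: drop options that fail each attribute cutoff,
    in order of attribute importance; stop once one option remains. A round
    that would eliminate every remaining option is skipped."""
    survivors = dict(options)
    for attr, cutoff in thresholds:            # most important attribute first
        passed = {name: a for name, a in survivors.items() if a[attr] >= cutoff}
        if passed:                             # never eliminate everything
            survivors = passed
        if len(survivors) == 1:
            break
    return survivors

careers = {
    'engineer': {'salary': 8, 'fit': 6},
    'teacher':  {'salary': 4, 'fit': 9},
    'analyst':  {'salary': 7, 'fit': 7},
}
left = sequential_elimination(careers, [('salary', 6), ('fit', 7)])
```

    Any options surviving all rounds could then be ranked by an expected-utility rule, as in the combined strategy the paper investigates.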

  1. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

    Directory of Open Access Journals (Sweden)

    Christian Konrad

    2010-04-01

    Full Text Available Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM), consisting of three paralogous cyst wall proteins (CWP1-3), via organelles termed encystation-specific vesicles (ESVs). Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

  2. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  3. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Full Text Available Abstract Background There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages, based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patient's basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results A total of 2,354 patients with complete data were identified. From the patient's basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow Coma Scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P + A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P + A + B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P + A + B1 + B2 = 0.90). The explanatory power - a tool for the assessment of the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma

  4. Immediate versus Delayed Sequential Bilateral Cataract Surgery: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Monali S Malvankar-Mehta

    Full Text Available Immediately sequential bilateral cataract surgery (ISBCS), cataract surgery performed in both eyes in the same session, is gaining popularity worldwide compared to the traditional treatment paradigm, delayed sequential bilateral cataract surgery (DSBCS), in which each eye is operated on a different day as a completely separate operation. ISBCS provides advantages to patients and patients' families in the form of fewer hospital visits. Additionally, patients enjoy rapid rehabilitation and lack of anisometropia - potentially reducing accidents and falls - and avoid suboptimal visual function in daily life. The hospital may benefit due to lower cost. To perform a systematic review and meta-analysis to evaluate ISBCS and DSBCS. Databases including MEDLINE, EMBASE, BIOSIS, CINAHL, the Health Economic Evaluations Database (HEED), ISI Web of Science (Thomson-Reuters) and the Cochrane Library were searched. Not applicable. Literature was systematically reviewed using the EPPI-Reviewer 4 gateway. Meta-analysis was conducted using STATA v. 13.0. Standardized mean differences (SMD) and 95% confidence intervals (CI) were calculated and heterogeneity was assessed using I2 statistics. Fixed-effect and random-effect models were computed based on heterogeneity. Meta-analysis was done by instrument used to calculate utility score. In total, 9,133 records were retrieved from multiple databases and an additional 128 records were identified through grey literature search. Eleven articles with 3,657 subjects were included for analysis. Our meta-analysis results indicated significant improvement in post-operative utility score using TTO, EQ5D, HUI3, VF-7, and VF-14, and a non-significant improvement using the Catquest questionnaire, for both surgeries.
    For ISBCS versus DSBCS, the utility-specific fixed-effect model provided an overall SMD of the utility score using the TTO method of 0.12 (95% CI: -0.15, 0.40), EQ5D of 0.14 (95% CI: -0.14, 0.41), HUI3 of 0.12 (95% CI: -0.15, 0.40), VF
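
    The fixed-effect pooling used in such a meta-analysis is inverse-variance weighting of the per-study standardized mean differences. A sketch with made-up study values (not the review's data):

```python
import math

def fixed_effect_pool(smds, ses):
    """Fixed-effect (inverse-variance) pooling of standardized mean
    differences. Returns the pooled SMD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# three hypothetical studies (invented numbers, not the review's data)
pooled, (lo, hi) = fixed_effect_pool([0.12, 0.14, 0.12], [0.14, 0.14, 0.14])
```

    When the pooled confidence interval spans zero, as here, the ISBCS-versus-DSBCS difference is read as non-significant, which is the pattern the review reports.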

  5. Sequential determination of actinides in a variety of matrices

    International Nuclear Information System (INIS)

    Olsen, S.C.

    2002-01-01

    A large number of analytical procedures for the actinides have been published, each catering for a specific need. Due to the bioassay programme in our laboratory, a need arose for a method to determine natural (Th and U) and anthropogenic actinides (Np, Pu and Am/Cm) together in a variety of samples. The method would have to be suitable for routine application: simple, inexpensive, rapid and robust. In some cases, the amount of material available is not sufficient for the determination of separate groups of actinides, and a sequential separation and measurement of the analytes is therefore required. The types of matrices vary from aqueous samples to radiological surveillance (urine and faeces) to environmental studies (soil, sediment and fish), but the separation procedure should be able to service all of these. The working range of the method has to cater for lower levels of the transuranium actinides in sample types containing higher levels of the natural actinides (U and Th). The first analytical problem to be discussed is how to get the different sample types into the same loading solution required by a single separation approach. This entails sample dissolution or decomposition in some cases, and pre-concentration or pre-separation in others. A separation scheme is presented for the clean separation of all the actinides in a form suitable for alpha spectrometry. The development of a single-column separation of the analytes of interest is examined, as well as observations made during the development of the separation scheme, such as concentration effects. Results for test samples and certified reference materials are presented. (author)

  6. Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics

    Science.gov (United States)

    Güntürkün, Ulaş

    2010-07-01

    This paper describes a functional analysis-based method for the estimation of driving-forces from nonlinear dynamic systems. The driving-forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system's mapping and the smoothness of the driving-force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, in which the basic idea is to predict the observables and retrieve the driving-force from the prediction error. Our realization of this idea predicts the observables one step into the future using a bank of echo state networks (ESNs) in an online fashion, then extracts the raw estimates from the prediction error and smooths these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables accurate retrieval of both slowly and rapidly varying driving-forces, as illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving-force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put to the test on a real-world application: extracting the sun's magnetic flux from the sunspot time series.
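
    The predict-then-read-the-error idea can be demonstrated on a toy version of the paper's logistic-map experiment. In this illustrative simplification the echo-state-network predictor is replaced by the known map with a fixed nominal parameter, so the prediction error isolates the slowly varying drive directly; this is not the authors' algorithm.

```python
import numpy as np

# logistic map driven by a hidden, slowly varying parameter r_true
T = 2000
r_true = 3.5 + 0.2 * np.sin(2 * np.pi * np.arange(T) / 500)  # hidden drive
x = np.empty(T + 1)
x[0] = 0.3
for t in range(T):
    x[t + 1] = r_true[t] * x[t] * (1 - x[t])

# "predictor": the map itself with a fixed nominal parameter
r_nom = 3.5
pred = r_nom * x[:-1] * (1 - x[:-1])                  # one-step predictions
err = x[1:] - pred                                    # prediction error
raw = err / np.maximum(x[:-1] * (1 - x[:-1]), 1e-9)   # raw drive estimates
est = r_nom + np.convolve(raw, np.ones(51) / 51, mode='same')  # smoothed
```

    The smoothed estimate tracks the hidden sinusoidal drive closely away from the window edges, illustrating why the prediction error is a usable proxy for the driving-force.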

  7. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
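
    A classic instance of the sequential Monte Carlo counting idea the book treats is Rosenbluth-style sequential importance sampling for counting self-avoiding walks; the sketch below is a standard textbook example, not code from the book.

```python
import random

def saw_count_estimate(n_steps, n_samples, rng):
    """Sequential importance sampling (Rosenbluth method): estimate the
    number of self-avoiding walks of a given length on the 2D square
    lattice. Each walk is grown step by step among the currently free
    neighbours, and its importance weight is the product of the number of
    choices available at each step."""
    total = 0.0
    for _ in range(n_samples):
        pos, visited, weight = (0, 0), {(0, 0)}, 1.0
        for _ in range(n_steps):
            x, y = pos
            free = [q for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                    if q not in visited]
            if not free:               # walk trapped: contributes zero
                weight = 0.0
                break
            weight *= len(free)        # importance weight update
            pos = rng.choice(free)
            visited.add(pos)
        total += weight
    return total / n_samples

estimate = saw_count_estimate(2, 200, random.Random(0))
```

    For length 2 the estimator is exact (every sample carries weight 4 x 3 = 12, the true count); for longer walks the average of the weights converges to the count as the sample size grows.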

  8. Sequential formation of subgroups in OB associations

    International Nuclear Information System (INIS)

    Elmegreen, B.G.; Lada, C.J.

    1977-01-01

    We reconsider the structure and formation of OB associations in view of recent radio and infrared observations of the adjacent molecular clouds. As a result of this reexamination, we propose that OB subgroups are formed in a step-by-step process which involves the propagation of ionization (I) and shock (S) fronts through a molecular cloud complex. OB stars formed at the edge of a molecular cloud drive these I-S fronts into the cloud. A layer of dense neutral material accumulates between the I and S fronts and eventually becomes gravitationally unstable. This process is analyzed in detail. Several arguments concerning the temperature and mass of this layer suggest that a new OB subgroup will form. After approximately one-half million years, these stars will emerge from and disrupt the star-forming layer. A new shock will be driven into the remaining molecular cloud and will initiate another cycle of star formation. Several observed properties of OB associations are shown to follow from a sequential star-forming mechanism. These include the spatial separation and systematic differences in age of OB subgroups in a given association, the regularity of subgroup masses, the alignment of subgroups along the galactic plane, and their physical expansion. Detailed observations of ionization fronts, masers, IR sources, and molecular clouds are also in agreement with this model. Finally, this mechanism provides a means of dissipating a molecular cloud and exposing less massive stars (e.g., T Tauri stars) which may have formed ahead of the shock as parts of the original cloud collapsed and fragmented.

  9. District heating in sequential energy supply

    International Nuclear Information System (INIS)

    Persson, Urban; Werner, Sven

    2012-01-01

    Highlights: ► European excess heat recovery and utilisation by district heat distribution. ► Heat recovery in district heating systems – a structural energy efficiency measure. ► Introduction of new theoretical concepts to express excess heat recovery. ► Fourfold potential for excess heat utilisation in EU27 compared to current levels. ► Large scale excess heat recovery – a collaborative challenge for future Europe. -- Abstract: Increased recovery of excess heat from thermal power generation and industrial processes has great potential to reduce primary energy demands in EU27. In this study, current excess heat utilisation levels by means of district heat distribution are assessed and expressed by concepts such as recovery efficiency, heat recovery rate, and heat utilisation rate. For two chosen excess heat activities, current average EU27 heat recovery levels are compared to currently best Member State practices, whereby future potentials of European excess heat recovery and utilisation are estimated. The principle of sequential energy supply is elaborated to capture the conceptual idea of excess heat recovery in district heating systems as a structural and organisational energy efficiency measure. The general conditions discussed concerning expansion of heat recovery into district heating systems include infrastructure investments in district heating networks, collaboration agreements, maintained value chains, policy support, world market energy prices, allocation of synergy benefits, and local initiatives. The main conclusion from this study is that a future fourfold increase of current EU27 excess heat utilisation by means of district heat distribution to residential and service sectors is conceived as plausible if applying best Member State practice. This estimation is higher than the threefold increase with respect to direct feasible distribution costs estimated by the same authors in a previous study. Hence, no direct barriers appear with

  10. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction has received considerable attention among the researchers with broad applications. The sequential pattern algorithms generally face problems when mining long sequential patterns or while using very low support threshold. One possible solution of such problems is by mining the closed sequential patterns, which is a condensed representation of sequential patterns. Recently, several researchers have utilized the sequential pattern discovery for d...
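
    The condensed representation the paper builds on can be illustrated with a small post-filter: a sequential pattern is closed when no super-sequence has the same support. The naive check below is for illustration only, not the paper's mining algorithm (real miners such as CloSpan or BIDE avoid enumerating all patterns first).

```python
def is_subseq(p, s):
    """True if pattern p is a subsequence of sequence s."""
    it = iter(s)
    return all(any(x == y for y in it) for x in p)

def support(pattern, db):
    """Number of database sequences containing the pattern."""
    return sum(is_subseq(pattern, s) for s in db)

def closed_patterns(patterns, db):
    """Keep only closed sequential patterns: those with no longer
    super-sequence of equal support (naive post-filter)."""
    out = []
    for p in patterns:
        sup = support(p, db)
        if not any(len(q) > len(p) and is_subseq(p, q) and support(q, db) == sup
                   for q in patterns):
            out.append((p, sup))
    return out

db = [list('abc'), list('abc'), list('ac')]
candidates = [['a'], ['a', 'b'], ['a', 'b', 'c'], ['a', 'c']]
closed = closed_patterns(candidates, db)
```

    Here ['a'] and ['a', 'b'] are absorbed by longer patterns with identical support, so the closed set represents the same information with fewer patterns.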

  11. Rapidity correlations test stochastic hydrodynamics

    International Nuclear Information System (INIS)

    Zin, C; Gavin, S; Moschelli, G

    2017-01-01

    We show that measurements of the rapidity dependence of transverse momentum correlations can be used to determine the characteristic time τ_π that dictates the rate of isotropization of the stress energy tensor, as well as the shear viscosity ν = η/sT. We formulate methods for computing these correlations using second order dissipative hydrodynamics with noise. Current data are consistent with τ_π/ν ∼ 10 but targeted measurements can improve this precision. (paper)

  12. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were firstly analyzed with a power projective base method. Then we were applied a decision-making model, the sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with sequential classifier, the accuracies across subjects were significantly higher than that with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
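
    Wald's SPRT, the decision rule at the heart of the method, accumulates per-sample log-likelihood ratios until one of two thresholds derived from the target error rates is crossed. A generic sketch (the thresholds follow Wald's standard approximations; the toy evidence stream is illustrative, not the paper's EEG pipeline):

```python
from math import log

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate log-likelihood
    ratios until a threshold set by the target error rates (alpha = false
    positive, beta = false negative) is crossed. Returns (decision, samples
    used), making the stopping-time/threshold/error trade-off explicit."""
    upper = log((1 - beta) / alpha)   # cross upward   -> accept H1
    lower = log(beta / (1 - alpha))   # cross downward -> accept H0
    s, n = 0.0, 0
    for n, llr in enumerate(llr_stream, 1):
        s += llr
        if s >= upper:
            return 'H1', n
        if s <= lower:
            return 'H0', n
    return 'undecided', n
```

    With consistent evidence of one log-likelihood unit per sample and 5% target error rates, the test stops after three samples, illustrating how stronger evidence shortens the decision time.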

  13. Avaliação da expansão rápida da maxila por meio da tomografia computadorizada: relato de um caso Computed tomography evaluation of rapid maxillary expansion: a case report

    Directory of Open Access Journals (Sweden)

    Daniela Gamba Garib

    2005-08-01

    Full Text Available This paper describes the dentoskeletal and periodontal effects of rapid maxillary expansion (RME), evaluated by computed tomography (CT), in an 11.6-year-old girl with Class I malocclusion and a functional unilateral posterior crossbite. During the active phase of RME, the tooth-borne expander with a Hyrax screw was activated 7 mm. The patient underwent helical CT examination before expansion and again after removal of the appliance at the end of the three-month retention period. Axial slices of one-millimeter thickness were taken parallel to the palatal plane, covering the dentoalveolar and basal regions of the maxilla up to the lower third of the nasal cavity. Using multiplanar reconstructions, the following were measured by computerized methods: the transverse maxillary dimensions, the inclination of the posterior teeth, the thickness of the buccal and lingual bone plates, and the level of the buccal alveolar bone crest. Rapid maxillary expansion produced a significant transverse increase in all regions measured, with magnitude decreasing from the dental arch toward the bony base. The posterior teeth were moved buccally, with an associated component of tipping and translation. This orthodontic effect reduced the thickness of the buccal bone plate, with a concomitant increase in the thickness of the lingual bone plate. After expansion, bone dehiscences developed on the buccal aspect of the anchorage teeth.

  14. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
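The EnKF analysis step that such a design method builds on can be sketched compactly. The toy update below (a stochastic, perturbed-observation EnKF for a single scalar parameter; the identity observation operator and all names are illustrative assumptions, not taken from the paper) shows how an ensemble is pulled toward an informative measurement:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """One stochastic EnKF analysis step (perturbed observations).

    ensemble: (n_ens, n_par) array of parameter samples
    obs:      observed value(s), shape (n_obs,)
    obs_op:   maps one parameter vector to predicted observations
    obs_var:  observation-error variance (scalar, for simplicity)
    """
    n_ens = ensemble.shape[0]
    pred = np.array([obs_op(x) for x in ensemble])          # (n_ens, n_obs)
    X = ensemble - ensemble.mean(axis=0)                    # parameter anomalies
    Y = pred - pred.mean(axis=0)                            # prediction anomalies
    C_xy = X.T @ Y / (n_ens - 1)
    C_yy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(pred.shape[1])
    K = C_xy @ np.linalg.inv(C_yy)                          # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=pred.shape)
    return ensemble + (perturbed - pred) @ K.T
```

A sampling design could then be scored by how much such an update is expected to reduce ensemble spread for each candidate measurement.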

  15. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, this proposed method will be efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.
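The three-stage idea can be illustrated with a deliberately small sketch: an upper-stage scalar search over the shared model parameter, a middle stage that sums per-profile fitting costs, and a lower stage that integrates the dynamics. The first-order decay model, the SciPy solvers, and all function names are illustrative assumptions; the paper's actual scheme uses collocation on finite elements and quasi-sequential dynamic optimization rather than this naive nesting:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Lower stage: integrate the toy model x' = -k*x for a given rate constant k.
def simulate(k, x0, t):
    sol = solve_ivp(lambda _t, x: -k * x, (t[0], t[-1]), [x0], t_eval=t)
    return sol.y[0]

# Middle stage: fitting cost of one data profile for a candidate k
# (initial condition taken from the profile's first datum).
def profile_cost(k, t, data):
    return np.sum((simulate(k, data[0], t) - data) ** 2)

# Upper stage: estimate the shared parameter k over all profiles.
def estimate(profiles, t):
    total = lambda k: sum(profile_cost(k, t, d) for d in profiles)
    return minimize_scalar(total, bounds=(0.01, 5.0), method="bounded").x
```

The upper-stage search sees only the aggregated cost, mirroring how the nested NLPs hide the per-profile reconciliation from the parameter layer.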

  16. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, this proposed method will be efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  17. Forensic Computing (Dagstuhl Seminar 13482)

    OpenAIRE

    Freiling, Felix C.; Hornung, Gerrit; Polcák, Radim

    2014-01-01

    Forensic computing (sometimes also called digital forensics, computer forensics or IT forensics) is a branch of forensic science pertaining to digital evidence, i.e., any legal evidence that is processed by digital computer systems or stored on digital storage media. Forensic computing is a new discipline evolving within the intersection of several established research areas such as computer science, computer engineering and law. Forensic computing is rapidly gaining importance since the...

  18. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
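For readers unfamiliar with the classical binary SPRT mentioned here, a minimal sketch of Wald's test (generic log-densities; the thresholds follow the standard approximations log((1-β)/α) and log(β/(1-α))):

```python
import numpy as np

def sprt(samples, logpdf0, logpdf1, alpha=0.05, beta=0.05):
    """Wald's binary SPRT: accumulate the log-likelihood ratio and stop
    as soon as it crosses either decision boundary."""
    upper = np.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = np.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for k, x in enumerate(samples, 1):
        llr += logpdf1(x) - logpdf0(x)
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return "undecided", len(samples)
```

For Gaussian hypotheses with unit variance, the constant terms of the log-densities cancel, so `logpdf(x) = -0.5*(x-mu)**2` suffices.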

  19. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  20. Relations between the simultaneous and sequential transfer of two nucleons

    International Nuclear Information System (INIS)

    Satchler, G.R.

    1982-01-01

    The author discusses the perturbative treatment of simultaneous and sequential two-nucleon transfer reactions with special regards to the DWBA. As examples the (t,p), (p,t), and (α,d) reactions are considered. (HSI)

  1. Retrieval of sea surface velocities using sequential Ocean Colour ...

    Indian Academy of Sciences (India)

    pended sediment dispersion patterns, in sequential two time lapsed images. .... face advective velocities consists essentially of identifying the ... matrix is time consuming, a significant reduction .... Chauhan, P. 2002 Personal Communication.

  2. Process tomography via sequential measurements on a single quantum system

    CSIR Research Space (South Africa)

    Bassa, H

    2015-09-01

    Full Text Available The authors utilize a discrete (sequential) measurement protocol to investigate quantum process tomography of a single two-level quantum system, with an unknown initial state, undergoing Rabi oscillations. The ignorance of the dynamical parameters...

  3. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^{1/2}BA^{1/2} was proposed as a model for sequential quantum measurements. A nice investigation of properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied
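The sequential product is easy to experiment with numerically. A small sketch (the matrix values are chosen arbitrarily for illustration) checks that A∘B = A^{1/2}BA^{1/2} is again an effect and that the operation is noncommutative:

```python
import numpy as np
from scipy.linalg import sqrtm

def seq_product(A, B):
    """Sequential product A∘B = A^(1/2) B A^(1/2) of two effects."""
    rA = sqrtm(A)
    return rA @ B @ rA

def is_effect(E, tol=1e-9):
    """An effect is a positive operator bounded above by the identity,
    i.e. all eigenvalues lie in [0, 1]."""
    E = (E + E.conj().T) / 2          # discard numerical asymmetry
    eig = np.linalg.eigvalsh(E)
    return eig.min() >= -tol and eig.max() <= 1 + tol
```

Since B ≤ I implies A^{1/2}BA^{1/2} ≤ A ≤ I, closure under the sequential product follows, and the numerics confirm it.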

  4. Sequential Low Cost Interventions Double Hand Hygiene Rates ...

    African Journals Online (AJOL)

    Sequential Low Cost Interventions Double Hand Hygiene Rates Among Medical Teams in a Resource Limited Setting. Results of a Hand Hygiene Quality Improvement Project Conducted At University Teaching Hospital of Kigali (Chuk), Kigali, Rwanda.

  5. The impact of eyewitness identifications from simultaneous and sequential lineups.

    Science.gov (United States)

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  6. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ..... whereby both types of data are collected simultaneously.

  7. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p (data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying the approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
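The tempering idea, replacing a nearly noise-free likelihood with a sequence of artificially inflated observation variances, can be sketched with a bootstrap particle filter. The toy autoregressive model, particle count, and variance schedule below are illustrative assumptions, not the paper's Wiener-Hammerstein setup:

```python
import numpy as np

def pf_loglik(y, var_obs, n_part=500, rng=None):
    """Bootstrap particle filter log-likelihood estimate for the toy model
    x_t = 0.9 x_{t-1} + N(0, 0.25),  y_t = x_t + N(0, var_obs)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_part)
    ll = 0.0
    for obs in y:
        x = 0.9 * x + rng.normal(0.0, 0.5, n_part)          # propagate
        logw = -0.5 * (obs - x) ** 2 / var_obs \
               - 0.5 * np.log(2 * np.pi * var_obs)          # weight
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                          # likelihood factor
        x = rng.choice(x, size=n_part, p=w / w.sum())       # resample
    return ll

# Tempering: start with a large artificial observation variance and
# shrink it toward the (nearly noise-free) original problem.
schedule = [1.0, 0.5, 0.2, 0.1]
```

With a small true variance the weights degenerate; the inflated variances early in the schedule keep the estimator stable while the algorithm homes in.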

  8. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    International Nuclear Information System (INIS)

    Biersack, H.J.; Knopp, R.; Dahlem, R.; Winkler, C.; Thelen, M.; Schulz, D.; Schmidt, R.

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension. (orig.) [de

  9. Sequential hepato-splenic scintigraphy for measurement of hepatic blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Biersack, H J; Knopp, R; Dahlem, R; Winkler, C [Bonn Univ. (Germany, F.R.). Inst. fuer Klinische und Experimentelle Nuklearmedizin; Thelen, M [Bonn Univ. (Germany, F.R.). Radiologische Klinik; Schulz, D; Schmidt, R [Bonn Univ. (Germany, F.R.). Chirurgische Klinik und Poliklinik

    1977-01-01

    The arterial and portal components of total liver blood flow were determined quantitatively in 31 patients by means of a new, non-invasive method. Sequential hepato-splenic scintigraphy has been employed, using a scintillation camera linked to a computer system. In normals, the proportion of portal flow was 71%, whereas in patients with portal hypertension it averaged 21%. Our experience indicates that the procedure can be of considerable value in the pre-operative diagnosis and postoperative follow-up of portal hypertension.

  10. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. In this way, increasingly accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
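A minimal one-dimensional sketch of such sequential sampling (a Gaussian RBF metamodel, a fixed kernel width, and a guard against re-sampling existing points are my assumptions; the paper's density-function criterion is not reproduced here):

```python
import numpy as np

def rbf_fit(X, y, eps=50.0):
    """Fit a Gaussian RBF interpolant through 1-D sample points X with
    values y. The width eps is an assumption tuned for the unit interval."""
    Phi = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(Phi, y)
    return lambda x: np.exp(-eps * (x - X) ** 2) @ w

def add_sample(X, y, f, grid):
    """One sequential step: evaluate the expensive function f at the
    current metamodel minimum over a candidate grid."""
    model = rbf_fit(X, y)
    # avoid re-sampling an existing point (keeps the kernel matrix nonsingular)
    cand = grid[np.all(np.abs(grid[:, None] - X[None, :]) > 1e-6, axis=1)]
    x_new = cand[np.argmin([model(g) for g in cand])]
    return np.append(X, x_new), np.append(y, f(x_new))
```

Each iteration refits the metamodel on the enlarged sample, so the surrogate sharpens exactly where the optimizer is drawn.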

  11. Further Developments on Optimum Structural Design Using MSC/Nastran and Sequential Quadratic Programming

    DEFF Research Database (Denmark)

    Holzleitner, Ludwig

    1996-01-01

    This work is closely connected to the paper: K.G. MAHMOUD, H.W. ENGL and HOLZLEITNER: "OPTIMUM STRUCTURAL DESIGN USING MSC/NASTRAN AND SEQUENTIAL QUADRATIC PROGRAMMING", Computers & Structures, Vol. 52, No. 3, pp. 437-447, (1994). In contrast to that paper, where thickness optimization is described, here the shape of two-dimensional parts with different thickness areas will be optimized. As in the previous paper, a methodology for structural optimization using the commercial finite element package MSC/NASTRAN for structural analysis is described. Three different methods for design sensitivity...

  12. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  13. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    X-ray spectrometer qualities and defects are evaluated for sequential and multichannel categories. Multichannel X-ray spectrometer has time-coherency advantage and its results could be more reproducible; on the other hand some spatial incoherency limits low percentage and traces applications, specially when backgrounds are very variable. In this last case, sequential X-ray spectrometer would find again great usefulness [fr

  14. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  15. Configural and component processing in simultaneous and sequential lineup procedures

    OpenAIRE

    Flowe, HD; Smith, HMJ; Karoğlu, N; Onwuegbusi, TO; Rai, L

    2015-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences...

  16. Sequential weak continuity of null Lagrangians at the boundary

    Czech Academy of Sciences Publication Activity Database

    Kalamajska, A.; Kraemer, S.; Kružík, Martin

    2014-01-01

    Roč. 49, 3/4 (2014), s. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords : null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf

  17. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
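As a hedged illustration of the problem class the note addresses (convex cost, linear inequality constraints, nonlinear equality constraints), one can pose such a problem to SciPy's trust-region constrained solver; this is not the authors' algorithm, only the same problem form:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Convex cost: squared distance to the point (1, 2).
cost = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# Linear inequality x0 + x1 <= 2 and nonlinear equality x0^2 + x1^2 = 2.
lin = LinearConstraint([[1, 1]], -np.inf, 2)
nonlin = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 2, 2)

res = minimize(cost, x0=[0.5, 0.5], method="trust-constr",
               constraints=[lin, nonlin])
```

Here the analytical solution is the point on the circle closest to (1, 2), namely sqrt(2/5)·(1, 2), which also satisfies the linear constraint.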

  18. Sequential contrast-enhanced MR imaging of the penis.

    Science.gov (United States)

    Kaneko, K; De Mouy, E H; Lee, B E

    1994-04-01

    To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.

  19. Quantitative analysis of bowel gas by plain abdominal radiograph combined with computer image processing

    International Nuclear Information System (INIS)

    Gao Yan; Peng Kewen; Zhang Houde; Shen Bixian; Xiao Hanxin; Cai Juan

    2003-01-01

    Objective: To establish a method for quantitative analysis of bowel gas by plain abdominal radiograph and computer graphics. Methods: Plain abdominal radiographs in supine position from 25 patients with irritable bowel syndrome (IBS) and 20 health controls were studied. A gastroenterologist and a radiologist independently conducted the following procedure on each radiograph. After the outline of bowel gas was traced by axe pen, the radiograph was digitized by a digital camera and transmitted to the computer with Histogram software. The total gas area was determined as the pixel value on images. The ratio of the bowel gas quantity to the pixel value in the region surrounded by a horizontal line tangential to the superior pubic symphysis margin, a horizontal line tangential to the tenth dorsal vertebra inferior margin, and the lateral line tangential to the right and left anteriosuperior iliac crest, was defined as the gas volume score (GVS). To examine the sequential reproducibility, a second plain abdominal radiograph was performed in 5 normal controls 1 week later, and the GVS were compared. Results: Bowel gas was easily identified on the plain abdominal radiograph. Both large and small intestine located in the selected region. Both observers could finish one radiographic measurement in less than 10 mins. The correlation coefficient between the two observers was 0.986. There was no statistical difference on GVS between the two sequential radiographs in 5 health controls. Conclusion: Quantification of bowel gas based on plain abdominal radiograph and computer is simple, rapid, and reliable
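The pixel-counting step behind the gas volume score can be sketched as follows (the threshold-based gas segmentation is a simplifying assumption; the study traced the gas outline manually before digitization):

```python
import numpy as np

def gas_volume_score(radiograph, roi_mask, threshold):
    """Gas volume score: fraction of region-of-interest pixels whose
    intensity marks bowel gas (here, above a hypothetical threshold)."""
    gas = (radiograph > threshold) & roi_mask
    return gas.sum() / roi_mask.sum()
```

The region of interest would be the rectangle bounded by the pubic symphysis, the tenth dorsal vertebra, and the iliac crests, encoded as a boolean mask.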

  20. Minimal computational-space implementation of multiround quantum protocols

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-01-01

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  1. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained, in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
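A concrete instance of SUMMA is the classical log-barrier method, where g_k(x) is a barrier term with decreasing weight. The sketch below (toy objective and constraint chosen for illustration) shows the unconstrained minimizers x^k approaching the constrained minimizer x̂ = 1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2) ** 2            # objective; constrained minimum is at x = 1
barrier = lambda x: -np.log(1 - x)    # log barrier for the feasible set {x < 1}

# Sequential unconstrained problems G_k(x) = f(x) + barrier(x)/k.
xs = []
for k in [1, 10, 100, 1000]:
    Gk = lambda x, k=k: f(x) + barrier(x) / k
    xs.append(minimize_scalar(Gk, bounds=(-5, 1 - 1e-9), method="bounded").x)
```

As the barrier weight 1/k shrinks, each unconstrained minimizer moves monotonically toward the boundary point x̂ = 1, matching the decreasing-sequence guarantee in the abstract.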

  2. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. ... assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis...
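An alpha spending function of the kind used here can be sketched with the Lan-DeMets O'Brien-Fleming-type form (the paper's exact spending function and its one-sided adaptation are not specified in the abstract, so this is only indicative):

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type alpha spending function:
    cumulative type-I error allowed at information fraction t in (0, 1]."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

# Alpha spent at the study's three post-hoc interim looks (N = 11, 23, 34
# out of 45) and at the final analysis.
looks = [11 / 45, 23 / 45, 34 / 45, 1.0]
spent = [obf_spending(t) for t in looks]
```

The function spends almost no alpha early and releases the full budget at t = 1, which is why early stopping under this scheme requires very strong interim evidence.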

  3. Program completion of a web-based tailored lifestyle intervention for adults: differences between a sequential and a simultaneous approach.

    Science.gov (United States)

    Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J

    2012-03-08

    Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P simultaneous condition: OR = 1.04; P sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout

  4. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
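The divergence between synchronous and interleaved updates is easy to demonstrate. The sketch below (a one-dimensional binary CA with an arbitrarily chosen totalistic rule) applies the same rule under parallel semantics and under sequential, already-updated-neighbour semantics, and reaches different configurations from the same start:

```python
import numpy as np

# Totalistic rule: a cell becomes 1 iff exactly one cell in its
# 3-cell neighbourhood is 1.
rule = lambda l, c, r: ((l + c + r) == 1) * 1

def parallel_step(state, rule):
    """Classical CA: every cell reads the old configuration."""
    return rule(np.roll(state, 1), state, np.roll(state, -1))

def sequential_step(state, rule, order):
    """Sequential CA: cells fire one at a time, in the given order,
    each seeing its neighbours' already-updated values."""
    s = state.copy()
    n = len(s)
    for i in order:
        s[i] = rule(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return s
```

This is exactly the interleaving-semantics gap the abstract describes: no sweep order of the sequential automaton reproduces the synchronous configuration here.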

  5. Sequential topographical portrayal of myocardial blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Richeson, J.F.; Waag, R.C.; Zwierzynski, D.; Schenk, E.A. (Univ. of Rochester School of Medicine and Dentistry, NY (USA))

    1989-08-01

    Methods to portray myocardial blood flow in a two-dimensional continuum are advantageous in that they allow blood flow history to be overlaid on histological or histochemical descriptions of the consequences of ischemia. We describe here autoradiographic methods that allow such portrayals at three separate times during the evolution of ischemic injury. A computer-based image-analysis system was used to derive such flow maps by taking advantage of the physical characteristics of radioactive isotopes.

  6. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    Science.gov (United States)

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In this scheme, neural networks, a sub-field of the larger area called soft computing, are exploited to model the equation in an unsupervised manner. The proposed approximate solutions of the higher order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by the results of statistical performance measures based on a sufficiently large number of independent runs.
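The equation being solved has the classical Lane-Emden form θ'' + (2/ξ)θ' + θⁿ = 0 with θ(0) = 1, θ'(0) = 0. As a baseline for what the "standard solutions" look like (not the paper's neural-network/genetic-algorithm scheme), a plain RK4 integrator can be checked against the known exact solution θ(ξ) = sin(ξ)/ξ for n = 1; the step size and starting offset are illustrative choices:

```python
import math

def lane_emden_rhs(xi, y, n):
    theta, dtheta = y
    # theta'' = -(2/xi) theta' - theta^n; clamp theta at 0 so non-integer
    # polytropic indices stay defined past the first zero of theta
    return [dtheta, -(2.0 / xi) * dtheta - max(theta, 0.0) ** n]

def rk4_solve(n, xi_end, h=1e-3):
    # start slightly off the singular point xi = 0 using the series
    # expansion theta ~ 1 - xi^2/6, theta' ~ -xi/3
    xi = 1e-3
    y = [1.0 - xi**2 / 6.0, -xi / 3.0]
    while xi < xi_end - 1e-12:
        step = min(h, xi_end - xi)
        k1 = lane_emden_rhs(xi, y, n)
        k2 = lane_emden_rhs(xi + step/2, [y[i] + step/2 * k1[i] for i in (0, 1)], n)
        k3 = lane_emden_rhs(xi + step/2, [y[i] + step/2 * k2[i] for i in (0, 1)], n)
        k4 = lane_emden_rhs(xi + step, [y[i] + step * k3[i] for i in (0, 1)], n)
        y = [y[i] + step/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in (0, 1)]
        xi += step
    return y[0]

# for n = 1 the exact solution is sin(xi)/xi, giving a ground truth at xi = 1
print(rk4_solve(n=1, xi_end=1.0), math.sin(1.0))
```

The two printed values agree to several decimal places, which is the sense in which any proposed solver can be validated against a standard solution.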

  7. Quantum steady computation

    Energy Technology Data Exchange (ETDEWEB)

    Castagnoli, G. (Dipt. di Informatica, Sistemistica, Telematica, Univ. di Genova, Viale Causa 13, 16145 Genova (IT))

    1991-08-10

    This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

  8. Quantum steady computation

    International Nuclear Information System (INIS)

    Castagnoli, G.

    1991-01-01

    This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

  9. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied: 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regard to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  10. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  11. Efficient Mining and Detection of Sequential Intrusion Patterns for Network Intrusion Detection Systems

    Science.gov (United States)

    Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli

    In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or any new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard the network systems and the critical information transmitted in the network systems. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and the traffic data set generated by a private LAN testbed show promising results with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion patterns.

  12. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter - the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

  13. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  14. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithms are verified. PMID:27508502
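The core "rough prediction" step described above (build single-step transition probabilities, row-normalize, then raise the matrix to the power n for a stationary chain) can be sketched with a toy chain; the location labels and transition counts below are invented for illustration:

```python
# Minimal sketch of the n-step transition-probability idea: row-normalize
# observed single-step transition counts, then take the matrix power.
import numpy as np

# single-step transition counts between 3 locations (rows: from, cols: to)
counts = np.array([[0, 8, 2],
                   [5, 0, 5],
                   [1, 9, 0]], dtype=float)

# row-normalize to obtain single-step transition probabilities
P = counts / counts.sum(axis=1, keepdims=True)

# n-step transition probabilities for a stationary Markov chain: P^n
n = 3
Pn = np.linalg.matrix_power(P, n)

# "rough prediction": probability of each target location n steps
# after starting at location 0 (each row of P^n still sums to 1)
print(Pn[0])
```

Each row of `Pn` is a probability distribution over target locations, which is exactly what the rough-prediction method reads off.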

  15. Sequential x-ray diffraction topography at 1-BM x-ray optics testing beamline at the advanced photon source

    Energy Technology Data Exchange (ETDEWEB)

    Stoupin, Stanislav, E-mail: sstoupin@aps.anl.gov; Shvyd’ko, Yuri; Trakhtenberg, Emil; Liu, Zunping; Lang, Keenan; Huang, Xianrong; Wieczorek, Michael; Kasman, Elina; Hammonds, John; Macrander, Albert; Assoufid, Lahsen [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2016-07-27

    We report progress on implementation and commissioning of sequential X-ray diffraction topography at 1-BM Optics Testing Beamline of the Advanced Photon Source to accommodate growing needs of strain characterization in diffractive crystal optics and other semiconductor single crystals. The setup enables evaluation of strain in single crystals in the nearly-nondispersive double-crystal geometry. Si asymmetric collimator crystals of different crystallographic orientations were designed, fabricated and characterized using in-house capabilities. Imaging the exit beam using digital area detectors permits rapid sequential acquisition of X-ray topographs at different angular positions on the rocking curve of a crystal under investigation. Results on sensitivity and spatial resolution are reported based on experiments with high-quality Si and diamond crystals. The new setup complements laboratory-based X-ray topography capabilities of the Optics group at the Advanced Photon Source.

  16. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  17. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture to treat sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study concerns the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is completed from lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take into account the sequentiality at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr

  18. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
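The defining feature described above, declaring a detection as soon as the data statistically justify it rather than after a fixed counting interval, can be illustrated with a bare-bones sequential Bayesian update. The rates, prior, threshold, and counts below are invented; the paper's processor additionally uses pulse energies, arrival times, and physics models:

```python
# Sketch of a sequential Bayesian detector: after each datum the posterior
# for "source present" vs. "background only" is updated, and detection is
# declared as soon as the posterior clears a threshold.
import math

def sequential_detector(counts, bkg_rate, src_rate, prior=0.5, threshold=0.99):
    """Return (decision_index, posterior) for Poisson counts per time bin."""
    log_odds = math.log(prior / (1.0 - prior))
    post = 1.0 / (1.0 + math.exp(-log_odds))
    lam0, lam1 = bkg_rate, bkg_rate + src_rate
    for i, k in enumerate(counts):
        # log Poisson likelihood ratio for this bin (the k! terms cancel)
        log_odds += k * math.log(lam1 / lam0) - (lam1 - lam0)
        post = 1.0 / (1.0 + math.exp(-log_odds))
        if post >= threshold:
            return i, post  # declare detection as soon as it is justified
    return None, post

# hypothetical counts per time bin from a portal passage
idx, post = sequential_detector([4, 6, 5, 7, 3], bkg_rate=2.0, src_rate=3.0)
print(idx, round(post, 3))  # detection declared at bin index 2
```

Note that the detector stops after the third bin here; a fixed-interval counter would have waited for all five bins before deciding.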

  19. Configural and component processing in simultaneous and sequential lineup procedures.

    Science.gov (United States)

    Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep

    2016-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.

  20. Visual short-term memory for sequential arrays.

    Science.gov (United States)

    Kumar, Arjun; Jiang, Yuhong

    2005-04-01

    The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

  1. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides ⁹³Zr, ⁹⁴Nb, ⁹⁹Tc and ¹²⁶Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to the attestation of sequential separation of these proposed radionuclides from radioactive waste samples, a unique sequential procedure for ⁹⁰Sr, ²³⁹Pu and ²⁴¹Am separation from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four various sorbents were used in series for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on ¹²⁶Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high efficiency of the separation, while values of ¹²⁶Sn were under the minimum detectable activities (MDA). The activity of ¹²⁶Sn was determined after ingrowth of the daughter nuclide ¹²⁶ᵐSb on a HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for ⁶⁰Co and 47000 for ¹³⁷Cs. Based on the acquired experience and results of these separation procedures, a complex method of sequential separation of ⁹³Zr, ⁹⁴Nb, ⁹⁹Tc and ¹²⁶Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the sorbents TEVA (R) Resin and Anion Exchange Resin to real samples of radioactive wastes provided satisfactory results and an economical, time-sparing, efficient method. (author)

  2. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs come to their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come to their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
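The review does not say which sequential techniques the 18 trials used to determine sample size. As one classical illustration of how a sequential design can stop earlier than a fixed-sample trial, here is a sketch of Wald's sequential probability ratio test (SPRT) for a binary outcome; the hypothesized success rates, error levels, and outcome sequence are all invented:

```python
# Wald's SPRT sketch: after each patient outcome the log-likelihood ratio
# is updated, and the trial stops as soon as it crosses a decision boundary.
import math

def sprt(outcomes, p0=0.5, p1=0.7, alpha=0.05, beta=0.2):
    """Test H0: success probability p0 vs. H1: p1 on a stream of 0/1 outcomes."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, success in enumerate(outcomes, start=1):
        if success:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(outcomes)

# a hypothetical run that stops as soon as the evidence suffices
print(sprt([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]))
```

On average the SPRT needs fewer subjects than a fixed-sample test with the same error rates, which is the kind of saving the review quantifies for neonatal trials.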

  3. Exploring data with RapidMiner

    CERN Document Server

    Chisholm, Andrew

    2013-01-01

    A step-by-step, tutorial-style guide using examples, so that users at different levels will benefit from the facilities offered by RapidMiner. If you are a computer scientist or an engineer who has real data from which you want to extract value, this book is ideal for you. You will need to have at least a basic awareness of data mining techniques and some exposure to RapidMiner.

  4. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar

    2012-01-01

    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...

  5. The pursuit of balance in sequential randomized trials

    Directory of Open Access Journals (Sweden)

    Raymond P. Guiteras

    2016-06-01

    Full Text Available In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
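The mechanics of covariate-balancing sequential allocation can be sketched in a simplified, deterministic form: each arriving subject is assigned to the arm that maximizes the determinant of X'X for the model y ~ intercept + treatment + covariates, which is the D-optimality criterion underlying Atkinson's rule. This sketch omits Atkinson's randomization step and uses invented covariates; it is not the paper's implementation:

```python
# Simplified, deterministic sketch of D-optimal sequential allocation:
# assign each new subject to the arm maximizing det(X'X) of the running
# design matrix, which keeps treatment arms balanced on covariates.
import numpy as np

def next_assignment(X_so_far, covariates):
    """Pick the arm (0/1) maximizing det(X'X) after adding this subject."""
    best_arm, best_det = 0, -np.inf
    for arm in (0, 1):
        row = np.array([1.0, float(arm)] + list(covariates))
        X = np.vstack([X_so_far, row])
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best_arm, best_det = arm, d
    return best_arm

rng = np.random.default_rng(1)
X = np.empty((0, 3))                      # columns: intercept, treatment, covariate
arms = []
for _ in range(20):
    cov = rng.normal(size=1)              # one covariate per arriving subject
    arm = next_assignment(X, cov)
    arms.append(arm)
    X = np.vstack([X, [1.0, float(arm), cov[0]]])
print(sum(arms), 20 - sum(arms))          # arm counts stay close to balanced
```

Atkinson's actual DA-optimal rule converts such determinant comparisons into allocation probabilities, preserving some randomization while still favoring the balancing arm.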

  6. TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION

    Directory of Open Access Journals (Sweden)

    Laurence J. Malone

    2000-01-01

    Full Text Available This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realize the commercial possibilities of electricity provided the foundation for an industry where technical knowledge, invention and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process where a series of inventions and firms result from successful efforts to use scientific principles to create new commodities and markets.

  7. Properties of simultaneous and sequential two-nucleon transfer

    International Nuclear Information System (INIS)

    Pinkston, W.T.; Satchler, G.R.

    1982-01-01

    Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

  8. Event-shape analysis: Sequential versus simultaneous multifragment emission

    International Nuclear Information System (INIS)

    Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

    1990-01-01

    The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions

  9. Sequential approach to Colombeau's theory of generalized functions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1987-07-01

    J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of the C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense, the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to the distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory, in which recently a solution was found to the problem of multiplication of Schwartz distributions. (author). Refs
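The sequential viewpoint can be illustrated numerically: a distribution such as the Dirac delta is represented by a sequence of C∞ functions, here Gaussian mollifiers δ_ε (an illustrative choice, not the paper's construction). Each δ_ε integrates to 1, but the pointwise squares δ_ε² have integrals growing like 1/ε, which is why δ² makes sense in Colombeau's algebra (as an equivalence class of sequences) but not as a Schwartz distribution:

```python
# Represent the Dirac delta by a sequence of smooth Gaussian mollifiers and
# observe that the sequence of squares has divergent mass as eps -> 0.
import math

def integrate(f, a=-5.0, b=5.0, steps=100_000):
    """Plain midpoint rule, accurate enough for these smooth integrands."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def delta_eps(eps):
    return lambda x: math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

for eps in (0.1, 0.05, 0.025):
    d = delta_eps(eps)
    mass = integrate(d)                      # stays at 1 for every eps
    square = integrate(lambda x: d(x) ** 2)  # grows like 1/(2*eps*sqrt(pi))
    print(eps, round(mass, 6), round(square * eps, 6))
```

The printed third column is constant (≈ 1/(2√π)), confirming the 1/ε divergence of ∫δ_ε² while ∫δ_ε stays fixed at 1.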

  10. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  11. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  12. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  13. Predecessor and permutation existence problems for sequential dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Rosenkrantz, D. J. (Daniel J.); Stearns, R. E. (Richard E.)

    2002-01-01

    A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was introduced in [BMR99, BR99] as a formal model for analyzing simulation systems. An SDS S is a triple (G, F, π), where (i) G(V, E) is an undirected graph with n nodes, each node having a state, (ii) F = (f1, f2, ..., fn), with fi denoting a function associated with node vi ∈ V, and (iii) π is a permutation of (or total order on) the nodes in V. A configuration of an SDS is an n-vector (b1, b2, ..., bn), where bi is the value of the state of node vi. A single SDS transition from one configuration to another is obtained by updating the states of the nodes by evaluating the function associated with each of them in the order given by π. Here, we address the complexity of two basic problems and their generalizations for SDSs. Given an SDS S and a configuration C, the PREDECESSOR EXISTENCE (or PRE) problem is to determine whether there is a configuration C' such that S has a transition from C' to C. (If C has no predecessor, C is known as a garden of Eden configuration.) Our results provide separations between efficiently solvable and computationally intractable instances of the PRE problem. For example, we show that the PRE problem can be solved efficiently for SDSs with Boolean state values when the node functions are symmetric and the underlying graph is of bounded treewidth. In contrast, we show that allowing just one non-symmetric node function renders the problem NP-complete even when the underlying graph is a tree (which has a treewidth of 1). We also show that the PRE problem is efficiently solvable for SDSs whose state values are from a field and whose node functions are linear. Some of the polynomial algorithms also extend to the case where we want to find an ancestor configuration that precedes a given configuration by a logarithmic number of steps. Our results extend some of the earlier results by Sutner [Su95] and Green [@87] on the complexity of
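    The SDS transition rule defined above lends itself to a compact sketch. The following Python fragment is illustrative only: the path graph, the OR node functions, and the update order are invented examples, not taken from the cited paper.

```python
# Sketch of a Sequential Dynamical System (SDS) transition, as defined above:
# an undirected graph G, a local function f_i per node, and a permutation pi.
# The example system below is invented for illustration.

def sds_transition(config, neighbors, functions, order):
    """Apply one SDS transition: update node states in the order given by pi.

    config    -- dict node -> Boolean state
    neighbors -- dict node -> list of adjacent nodes
    functions -- dict node -> function of (own state, neighbor states)
    order     -- permutation of the nodes (the update schedule pi)
    """
    state = dict(config)
    for v in order:  # sequential: later nodes see already-updated neighbors
        nbr_states = [state[u] for u in neighbors[v]]
        state[v] = functions[v](state[v], nbr_states)
    return state

# Example: a 3-node path graph with the symmetric OR function at every node.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
f_or = lambda own, nbrs: own or any(nbrs)
functions = {v: f_or for v in neighbors}

result = sds_transition({0: True, 1: False, 2: False}, neighbors, functions, [0, 1, 2])
# Because the update is sequential (not parallel), updating in order 0, 1, 2
# propagates the True state across the whole path in a single transition.
print(result)
```

    Note how the sequential schedule matters: a parallel update of the same configuration would set only node 1 to True in one step.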

  14. Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.

    Science.gov (United States)

    Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W

    2016-12-01

    We have developed a rapid magnetic microparticle-based detection strategy for malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII as well as detection antibodies with distinct enzymes for each biomarker are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    Science.gov (United States)

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
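    The sequential-updating idea above can be illustrated with a minimal particle algorithm. The one-parameter Gaussian model, the prior, and all numbers below are invented for illustration; this is a generic sketch, not the paper's caffeine pharmacokinetic model.

```python
# Minimal sketch of sequential Bayesian updating with a particle algorithm,
# in the spirit of the approach described above. The one-parameter Gaussian
# model and all constants are illustrative placeholders.
import math
import random

random.seed(0)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Prior: particles for one unknown individual-level parameter (broad prior).
particles = [random.gauss(0.0, 5.0) for _ in range(2000)]
weights = [1.0] * len(particles)

def sequential_update(particles, weights, observation, noise_sd=1.0):
    """Reweight particles by the likelihood of one incoming observation,
    then resample so the particle set stays well-conditioned."""
    weights = [w * gaussian_pdf(observation, p, noise_sd)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling: draw a fresh, equally weighted particle set.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0] * len(particles)

# Data arrive one at a time, as for an incoming patient; each update
# tightens the posterior without refitting the whole model from scratch.
for y in [2.1, 1.9, 2.3, 2.0]:
    particles, weights = sequential_update(particles, weights, y)

posterior_mean = sum(particles) / len(particles)
print(round(posterior_mean, 2))  # concentrates near the data mean
```

    The appeal in a clinical context is the incremental cost: each new measurement requires only one reweight-and-resample pass rather than a full re-inference.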

  16. In-situ sequential laser transfer and laser reduction of graphene oxide films

    Science.gov (United States)

    Papazoglou, S.; Petridis, C.; Kymakis, E.; Kennou, S.; Raptis, Y. S.; Chatzandroulis, S.; Zergioti, I.

    2018-04-01

    Achieving high quality transfer of graphene on selected substrates is a priority in device fabrication, especially where drop-on-demand applications are involved. In this work, we report an in-situ, fast, simple, and one step process that resulted in the reduction, transfer, and fabrication of reduced graphene oxide-based humidity sensors, using picosecond laser pulses. By tuning the laser illumination parameters, we managed to implement the sequential printing and reduction of graphene oxide flakes. The overall process lasted only a few seconds compared to a few hours that our group has previously published. DC current measurements, X-Ray Photoelectron Spectroscopy, X-Ray Diffraction, and Raman Spectroscopy were employed in order to assess the efficiency of our approach. To demonstrate the applicability and the potential of the technique, laser printed reduced graphene oxide humidity sensors with a limit of detection of 1700 ppm are presented. The results demonstrated in this work provide a selective, rapid, and low-cost approach for sequential transfer and photochemical reduction of graphene oxide micro-patterns onto various substrates for flexible electronics and sensor applications.

  17. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularity ...

  18. Rapid Prototyping Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The ARDEC Rapid Prototyping (RP) Laboratory was established in December 1992 to provide low cost RP capabilities to the ARDEC engineering community. The Stratasys,...

  19. Sequential rituximab and omalizumab for the treatment of eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome).

    Science.gov (United States)

    Aguirre-Valencia, David; Posso-Osorio, Iván; Bravo, Juan-Carlos; Bonilla-Abadía, Fabio; Tobón, Gabriel J; Cañas, Carlos A

    2017-09-01

    Eosinophilic granulomatosis with polyangiitis (EGPA), formerly known as Churg-Strauss syndrome (CSS), is a small vessel vasculitis associated with eosinophilia and asthma. Clinical manifestations commonly seen in patients presenting with EGPA range from upper airway and lung involvement to neurological, cardiac, cutaneous, and renal manifestations. Treatment for severe presentations includes steroids, cyclophosphamide, plasmapheresis, and recently, rituximab. Rituximab is associated with a good response in the treatment of vasculitis, but a variable response for the control of allergic symptoms. Here, we report a 16-year-old female patient with severe EGPA (gastrointestinal and cutaneous vasculitis, rhinitis and asthma) refractory to conventional treatment. She was treated with rituximab, which enabled rapid control of the vasculitis component of the disease, but there was no response to rhinitis and asthma. Additionally, she developed severe bronchospasm during rituximab infusion. Sequential rituximab and omalizumab were initiated, leading to remission of all manifestations of vasculitis, rhinitis, and asthma, in addition to bronchospasm related to rituximab infusion.

  20. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case, are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter ... Results show that the method produces accurate, robust and rapid segmentations in a cross sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.
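    The coarse-to-fine, sequential-relaxation idea mentioned above can be sketched generically. The 1-D toy objective, the pyramid schedule, and the grid search below are illustrative placeholders only, not the paper's AAM implementation.

```python
# Generic coarse-to-fine optimisation with sequentially relaxed parameter
# constraints, in the spirit of the multi-resolution scheme described above.
# The toy 1-D objective and schedule are invented for illustration.

def fit(observed, initial, schedule):
    """Refine a single model parameter level by level.

    schedule -- list of (step_size, bound) pairs: coarse steps with tight
    bounds first, then sequentially relaxed constraints and finer steps.
    """
    param = initial
    for step_size, bound in schedule:
        # Local grid search at this level, constrained to within +/- bound
        # of the initial estimate (the "model parameter constraint").
        candidates = [param + k * step_size
                      for k in range(-5, 6)
                      if abs(param + k * step_size - initial) <= bound]
        param = min(candidates, key=lambda p: (p - observed) ** 2)
    return param

# Coarse level: big steps, tight constraint; later levels relax the bound.
schedule = [(1.0, 4.0), (0.25, 8.0), (0.05, 8.0)]
est = fit(observed=2.3, initial=0.0, schedule=schedule)
print(abs(est - 2.3) < 0.1)
```

    Each level only needs to correct the residual error left by the previous, coarser level, which is what makes the sequential relaxation both fast and robust to local minima.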

  1. Site competition on metal surfaces: an electron spectroscopic study of sequential adsorption on W(110)

    International Nuclear Information System (INIS)

    Steinkilberg, M.; Menzel, D.

    1977-01-01

    Using UPS and XPS, the sequential adsorption of hydrogen + carbon monoxide, and of hydrogen + oxygen, on W(110) has been studied at room temperature. Adsorption of CO on a H-covered surface is rapid and leads to total displacement of hydrogen. The resulting CO layer however, is different from that formed on the clean surface under identical conditions, in that it consists of a higher percentage of virgin CO, while considerably more β-CO forms on the clean surface. Oxygen does not adsorb on a H-covered surface, nor displace hydrogen. It is concluded that hydrogen most probably occupies the same sites utilized by dissociative adsorption of CO and oxygen, while virgin CO can also occupy different sites; its adsorption can thus lead to interactional weakening of the H-surface bond. (Auth.)

  2. Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.

    Science.gov (United States)

    Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai

    2017-08-02

    Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluid mosaic membrane of living cells. DNA toeholds with different lengths could tunably alter the release kinetics of cells, which shows rapid release in minutes with the use of a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, which include multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for self-organization of biological systems.

  3. Droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling for simpler and faster PCR assay using wire-guided manipulations.

    Science.gov (United States)

    You, David J; Yoon, Jeong-Yeol

    2012-09-04

    A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using a "wire-guided" method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting the sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 revolutions per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to attract the extracted DNA from the droplet residing on a superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and the vibrations was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets), and with a high level of

  4. Droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling for simpler and faster PCR assay using wire-guided manipulations

    Directory of Open Access Journals (Sweden)

    You David J

    2012-09-01

    Full Text Available Abstract A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using a “wire-guided” method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting the sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 revolutions per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to attract the extracted DNA from the droplet residing on a superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and the vibrations was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets)

  5. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.
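    A test of this kind can be sketched as follows: keep sampling, and reject the null hypothesis as soon as the cumulative sum of centred, bounded observations crosses the curve $a(n+k)^b$. The constants a, k, and b below are illustrative placeholders, not the paper's calibrated values.

```python
# Sketch of a power-one sequential test of the type described above.
# Under H0 the centred increments form a bounded zero-mean martingale and
# the boundary a*(n+k)**b is rarely crossed; under the alternative the sum
# drifts and crosses it eventually. Constants are illustrative only.
import random

def sequential_test(observations, null_mean, a=3.0, k=10, b=0.6):
    """Return the sample index at which H0 is rejected, or None if the
    boundary is never crossed (a power-one test simply keeps sampling)."""
    s = 0.0
    for n, x in enumerate(observations, start=1):
        s += x - null_mean                  # zero-mean increments under H0
        if abs(s) >= a * (n + k) ** b:
            return n                        # boundary crossed: reject H0
    return None

random.seed(1)
# Under the alternative (true mean 0.8, null mean 0.0) the cumulative sum
# grows roughly linearly while the boundary grows only like n**0.6.
data = [random.uniform(0, 1) + 0.3 for _ in range(2000)]
stop = sequential_test(data, null_mean=0.0)
print(stop is not None)
```

    With data exactly at the null mean, the sum hovers near zero and the function returns None, which is the defining behaviour of a power-one test: it never accepts H0, it only fails (so far) to reject.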

  6. Sequential and simultaneous revascularization in adult orthotopic piggyback liver transplantation

    NARCIS (Netherlands)

    Polak, WG; Miyamoto, S; Nemes, BA; Peeters, PMJG; de Jong, KP; Porte, RJ; Slooff, MJH

    The aim of the study was to assess whether there is a difference in outcome after sequential or simultaneous revascularization during orthotopic liver transplantation (OLT) in terms of patient and graft survival, mortality, morbidity, and liver function. The study population consisted of 102 adult

  7. A generally applicable sequential alkaline phosphatase immunohistochemical double staining

    NARCIS (Netherlands)

    van der Loos, Chris M.; Teeling, Peter

    2008-01-01

    A universal type of sequential double alkaline phosphatase immunohistochemical staining is described that can be used for formalin-fixed, paraffin-embedded and cryostat tissue sections from human and mouse origin. It consists of two alkaline phosphatase detection systems including enzymatic

  8. Excessive pressure in multichambered cuffs used for sequential compression therapy

    NARCIS (Netherlands)

    Segers, P; Belgrado, JP; Leduc, A; Leduc, O; Verdonck, P

    2002-01-01

    Background and Purpose. Pneumatic compression devices, used as part of the therapeutic strategy for lymphatic drainage, often have cuffs with multiple chambers that are inflated sequentially. The purpose of this study was to investigate (1) the relationship between cuff chamber pressure

  9. Retrieval of sea surface velocities using sequential Ocean Colour

    Indian Academy of Sciences (India)

    The Indian remote sensing satellite, IRS-P4 (Oceansat-I) launched on May 26th, 1999 carried two sensors on board, i.e., the Ocean Colour Monitor (OCM) and the Multi-frequency Scanning Microwave Radiometer (MSMR) dedicated for oceanographic research. Sequential data of IRS-P4 OCM has been analysed over parts ...

  10. Sequential and Biomechanical Factors Constrain Timing and Motion in Tapping

    NARCIS (Netherlands)

    Loehr, J.D.; Palmer, C.

    2009-01-01

    The authors examined how timing accuracy in tapping sequences is influenced by sequential effects of preceding finger movements and biomechanical interdependencies among fingers. Skilled pianists tapped sequences at 3 rates; in each sequence, a finger whose motion was more or less independent of

  11. What determines the impact of context on sequential action?

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.; Verwey, Willem B.; Abrahamse, E.L.

    2015-01-01

    In the current study we build on earlier observations that memory-based sequential action is better in the original learning context than in other contexts. We examined whether changes in the perceptual context have differential impact across distinct processing phases (preparation versus execution

  12. The Efficacy of Sequential Therapy in Eradication of Helicobacter ...

    African Journals Online (AJOL)

    2017-05-22

    May 22, 2017 ... pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including clarithromycin regimes in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled in the study. The patients were randomized to four groups of treatment protocols.

  13. The efficacy of sequential therapy in eradication of Helicobacter ...

    African Journals Online (AJOL)

    ... the Helicobacter pylori (H. pylori) eradication rates of standard triple, sequential and quadruple therapies including clarithromycin regimes in this study. Materials and Methods: A total of 160 patients with dyspeptic symptoms were enrolled in the study. The patients were randomized to four groups of treatment protocols.

  14. In Vivo Evaluation of Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Hansen, Peter Møller; Lange, Theis

    2012-01-01

    Ultrasound in vivo imaging using synthetic aperture sequential beamformation (SASB) is compared with conventional imaging in a double blinded study using side-by-side comparisons. The objective is to evaluate if the image quality in terms of penetration depth, spatial resolution, contrast...

  15. Quantum chromodynamics as the sequential fragmenting with inactivation

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors)

  16. Quantum chromodynamics as the sequential fragmenting with inactivation

    Energy Technology Data Exchange (ETDEWEB)

    Botet, R. [Paris-11 Univ., 91 - Orsay (France). Lab. de Physique des Solides]; Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]

    1996-12-31

    We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors). 15 refs.

  17. The Motivating Language of Principals: A Sequential Transformative Strategy

    Science.gov (United States)

    Holmes, William Tobias

    2012-01-01

    This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…

  18. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, here we consider games where players choose their actions sequentially. The

  19. Sequential infiltration synthesis for enhancing multiple-patterning lithography

    Science.gov (United States)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih

    2017-06-20

    Simplified methods of multiple-patterning photolithography using sequential infiltration synthesis to modify the photoresist such that it withstands plasma etching better than unmodified resist and replaces one or more hard masks and/or a freezing step in MPL processes including litho-etch-litho-etch photolithography or litho-freeze-litho-etch photolithography.

  20. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
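    The idea of comparing players by the statistical structure of state transitions can be sketched simply: estimate a transition matrix from each player's sequence of recognized states, then score how similar two matrices are. The state labels and sequences below are invented for illustration; they are not the study's actual cognitive states or analysis.

```python
# Sketch: summarize a sequence of recognized cognitive states by its
# transition structure, then compare two players' structures. The state
# labels and sequences are invented examples.
from collections import Counter

def transition_probs(states):
    """Estimate P(next state | current state) from one state sequence."""
    pair_counts = Counter(zip(states, states[1:]))
    from_counts = Counter(states[:-1])
    return {(a, b): c / from_counts[a] for (a, b), c in pair_counts.items()}

def similarity(p, q):
    """1 minus the average per-state L1 distance between transition rows;
    the score lies in [0, 1], with 1 meaning identical structure."""
    keys = set(p) | set(q)
    froms = {a for a, _ in keys}
    l1 = sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in keys)
    return 1.0 - 0.5 * l1 / len(froms)

skilled = ["plan", "act", "monitor", "act", "monitor", "act", "plan", "act"]
novice = ["plan", "plan", "act", "plan", "monitor", "plan", "act", "plan"]

p = transition_probs(skilled)
score = similarity(p, transition_probs(novice))
print(0.0 <= score <= 1.0)
```

    In the study's setting the state sequences come from multi-voxel pattern classification of scans; a summary of this kind is what lets the sequential structure, rather than activation in any single region, carry the predictive signal.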