Minimal ancilla mediated quantum computation
International Nuclear Information System (INIS)
Proctor, Timothy J.; Kendon, Viv
2014-01-01
Schemes of universal quantum computation in which the interactions between the computational elements, in a computational register, are mediated by some ancillary system are of interest due to their relevance to the physical implementation of a quantum computer. Furthermore, reducing the level of control required over both the ancillary and register systems has the potential to simplify any experimental implementation. In this paper we consider how to minimise the control needed to implement universal quantum computation in an ancilla-mediated fashion. Considering computational schemes which require no measurements and hence evolve by unitary dynamics for the global system, we show that when employing an ancilla qubit there are certain fixed-time ancilla-register interactions which, along with ancilla initialisation in the computational basis, are universal for quantum computation with no additional control of either the ancilla or the register. We develop two distinct models based on locally inequivalent interactions and we then discuss the relationship between these unitary models and the measurement-based ancilla-mediated models known as ancilla-driven quantum computation. (orig.)
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
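For binary outputs, the model described above has a concrete closed form: with constraints on the first- and second-order input/output moments, the maximum noise entropy response is a logistic function of a quadratic form in the stimulus. A minimal sketch of that form (the parameter values a, b and C below are illustrative placeholders, not quantities fitted to the macaque data):

```python
import math

def second_order_logistic(stimulus, a, b, C):
    """Second-order maximum-noise-entropy model for a binary output:
    P(spike | s) = sigmoid(a + sum_i b[i]*s[i] + sum_ij C[i][j]*s[i]*s[j])."""
    z = a
    for i, si in enumerate(stimulus):
        z += b[i] * si
        for j, sj in enumerate(stimulus):
            z += C[i][j] * si * sj
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative evaluation on a two-dimensional stimulus.
p = second_order_logistic([0.5, -1.0], a=0.1, b=[1.0, 0.5],
                          C=[[0.2, 0.0], [0.0, -0.3]])
```

Dropping the quadratic term C recovers the first-order special case; constraining the average output is what turns these into the minimum mutual information models mentioned in the abstract.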
Minimal mobile human computer interaction
el Ali, A.
2013-01-01
In the last 20 years, the widespread adoption of personal, mobile computing devices in everyday life, has allowed entry into a new technological era in Human Computer Interaction (HCI). The constant change of the physical and social context in a user's situation made possible by the portability of
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
(Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time bounds and provide (near-)minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, which exhibit several orders of magnitude improvements…
Construction schedules slack time minimizing
Krzemiński, Michał
2017-07-01
The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed with the uniform work method. Application of flow shop models is possible and useful for the realization of large objects that can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the work on the newly developed models.
emMAW: computing minimal absent words in external memory.
Héliou, Alice; Pissis, Solon P; Puglisi, Simon J
2017-09-01
The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
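The object being computed can be pinned down with a naive sketch: a word is a minimal absent word (MAW) of s when it never occurs in s but both its longest proper prefix and longest proper suffix do. The quadratic-space version below only illustrates the definition; the suffix-array algorithm and the external-memory emMAW implementation are what make this feasible for genomes:

```python
def minimal_absent_words(s, alphabet=None):
    """Naive computation of minimal absent words, straight from the
    definition: w = a + f + b is a MAW iff w is not a factor of s but
    its longest proper prefix a+f and longest proper suffix f+b are."""
    if alphabet is None:
        alphabet = sorted(set(s))
    # Collect every factor (substring) of s; quadratic space, for tiny inputs only.
    factors = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    maws = set()
    for f in factors | {""}:
        for a in alphabet:
            for b in alphabet:
                w = a + f + b
                if w not in factors and (a + f) in factors and (f + b) in factors:
                    maws.add(w)
    return sorted(maws)
```

For example, minimal_absent_words("abba") returns ['aa', 'aba', 'bab', 'bbb'].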
Taheri, Asghar; Zhalebaghi, Mohammad Hadi
2017-11-01
This paper presents a new control strategy based on finite-control-set model-predictive control (FCS-MPC) for neutral-point-clamped (NPC) three-level converters. Advantages such as fast dynamic response, easy inclusion of constraints and a simple control loop make FCS-MPC attractive as a switching strategy for converters. However, the large amount of required calculation hinders the widespread use of this method. To resolve this problem, this paper presents a modified method that effectively reduces the computational load compared with the conventional FCS-MPC method while leaving the control performance unaffected. The proposed method can be used for exchanging power between the electrical grid and DC resources by providing active and reactive power compensation. Experiments on a three-level converter in three modes, power factor correction (PFC), inductive compensation and capacitive compensation, verify the good and comparable performance. The results have been simulated using MATLAB/SIMULINK software. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
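The switching strategy described above boils down to a small loop: at each sampling instant, predict the next state for every admissible switching state and apply the one with the lowest cost; the paper's contribution is reducing how much of this enumeration must actually be evaluated. The sketch below uses a generic scalar stand-in plant, so `predict` and `cost` are placeholders, not the NPC converter equations:

```python
def fcs_mpc_step(x, x_ref, candidates, predict, cost):
    """One finite-control-set MPC step: exhaustively evaluate each
    admissible control (switching state) and return the minimizer."""
    best_u, best_J = None, float("inf")
    for u in candidates:
        x_next = predict(x, u)   # one-step model prediction
        J = cost(x_next, x_ref)  # predicted tracking cost
        if J < best_J:
            best_u, best_J = u, J
    return best_u

# Stand-in scalar plant x_{k+1} = 0.9*x_k + u with tracking-error cost.
predict = lambda x, u: 0.9 * x + u
cost = lambda x_next, x_ref: abs(x_next - x_ref)
u_applied = fcs_mpc_step(0.0, 1.0, [-1, 0, 1], predict, cost)
```

In a converter the candidate set is the finite list of valid switch combinations, and the cost typically penalizes current error plus neutral-point voltage imbalance.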
Computers and the Environment: Minimizing the Carbon Footprint
Kaestner, Rich
2009-01-01
Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…
Computer program for the determination of minimal cardiac transit times
International Nuclear Information System (INIS)
Bosiljanoff, P.; Herzog, H.; Schmid, A.; Sommer, D.; Vyska, K.; Feinendegen, L.E.
1982-10-01
An Anger-type gamma camera is used to register the first pass of a radioactive tracer of blood flow through the heart. The acquired data are processed by a suitable computer program yielding time-activity curves for sequential heart segments, which are selected by the region-of-interest technique. The program prints the minimal cardiac transit times, in terms of total transit times as well as segmental transit times, for the right atrium, right ventricle, lung, left atrium and left ventricle. The measured values are normalized to a rate of 80/min and are compared to normal mean values. The deviation from the normal mean values is characterized by a coefficient F. Moreover, these findings are rated qualitatively. (orig./MG)
Towards minimal resources of measurement-based quantum computation
International Nuclear Information System (INIS)
Perdrix, Simon
2007-01-01
We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Lett. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for the experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to reply in the negative to the open question presented by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC.
Minimal Time Problem with Impulsive Controls
Energy Technology Data Exchange (ETDEWEB)
Kunisch, Karl, E-mail: karl.kunisch@uni-graz.at [University of Graz, Institute for Mathematics and Scientific Computing (Austria); Rao, Zhiping, E-mail: zhiping.rao@ricam.oeaw.ac.at [Austrian Academy of Sciences, Radon Institute of Computational and Applied Mathematics (Austria)
2017-02-15
Time optimal control problems for systems with impulsive controls are investigated. Sufficient conditions for the existence of time optimal controls are given. A dynamical programming principle is derived and Lipschitz continuity of an appropriately defined value functional is established. The value functional satisfies a Hamilton–Jacobi–Bellman equation in the viscosity sense. A numerical example for a rider-swing system is presented and it is shown that the reachable set is enlarged by allowing for impulsive controls, when compared to nonimpulsive controls.
Minimization of Retrieval Time During Software Reuse | Salami ...
African Journals Online (AJOL)
Minimization of Retrieval Time During Software Reuse. ... Retrieval of relevant software from the repository during software reuse can be time consuming if the repository contains many ...
Controllers with Minimal Observation Power (Application to Timed Systems)
DEFF Research Database (Denmark)
Bulychev, Petr; Cassez, Franck; David, Alexandre
2012-01-01
We consider the problem of controller synthesis under imperfect information in a setting where there is a set of available observable predicates equipped with a cost function. The problem that we address is the computation of a subset of predicates sufficient for control and whose cost is minimal...
Obendorf, Hartmut
2009-01-01
The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.
Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries
Narendra Kumar; Shikha Jain
2012-01-01
The presented paper deals with how remotely managed computing and IT resources can be beneficial in developing countries like India and the Asian subcontinent. The paper not only describes the architectures and functionalities of cloud computing but also argues strongly for the current demand for cloud computing to achieve organizational and personal-level IT support at very minimal cost with high flexibility. The power of the cloud can be used to reduce the cost of IT - r...
Directory of Open Access Journals (Sweden)
Fotev Vasko G.
2017-01-01
Full Text Available This article presents an innovative method for increasing the speed of a procedure which includes complex computational fluid dynamics calculations for finding the distance between flame openings of an atmospheric gas burner that leads to minimal NO pollution. The method is based on standard features included in commercial computational fluid dynamics software and shortens computer working time roughly seven-fold in this particular case.
Radiocardiography of minimal transit times: a useful diagnostic procedure
International Nuclear Information System (INIS)
Schicha, H.; Vyska, K.; Becker, V.; Feinendegen, L.E. (Duesseldorf Univ., F.R. Germany)
1975-01-01
Contrary to mean transit times, minimal transit times are the differences between arrival times of an indicator. Arrival times in various cardiac compartments can be easily measured with radioisotopes and fast gamma cameras permitting data processing. This paper summarizes data selected from more than 1500 measurements made so far on normal individuals and patients with valvular heart disease, myocardial insufficiency, digitalis effect, atrial fibrillation, hypothyroidism, hyperthyroidism, effort-syndrome and coronary artery disease. (author)
Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael
2014-12-01
Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P < .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.
Free time minimizers for the three-body problem
Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor
2018-03-01
Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics, Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014), which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios consists of those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.
Affordable CZT SPECT with dose-time minimization (Conference Presentation)
Hugg, James W.; Harris, Brian W.; Radley, Ian
2017-03-01
PURPOSE: Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general-purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS: Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to a small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved the efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general-purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS: Compared to NaI scintillator gamma cameras, intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to …; together with improved reconstruction, this results in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general-purpose SPECT is expected to enable precision medicine applications.
Congestion relief by travel time minimization in near real time : Detroit area I-75 corridor study.
2008-12-01
"This document summarizes the activities concerning the project: Congestion Relief by Travel Time Minimization in Near Real Time -- Detroit Area I-75 Corridor Study since the inception of the project (Nov. 22, 2006 through September 30, 2008). ...
Minimal computational-space implementation of multiround quantum protocols
International Nuclear Information System (INIS)
Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio
2011-01-01
A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.
Siting Samplers to Minimize Expected Time to Detection
Energy Technology Data Exchange (ETDEWEB)
Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.
2012-05-02
We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti (1) developed a proof-of-concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal "detect to treat" architectures, which maximize the probability of detecting a release. This paper develops a more general approach, and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal "detect to warn" architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.
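A detect-to-warn design of the kind described can be scored as the expected (over release scenarios) minimum detection time across the chosen sampler sites. The toy exhaustive search below invents the site names, detection times, and scenario probabilities; the real problem uses building airflow simulations for the detection times and cannot be enumerated this way at scale:

```python
from itertools import combinations

def expected_detection_time(network, det_time, probs):
    """Expectation over scenarios of the fastest-detecting sampler in the network."""
    return sum(p * min(det_time[site][k] for site in network)
               for k, p in enumerate(probs))

def best_network(det_time, probs, k):
    """Exhaustive search over all k-sampler networks (tiny problems only)."""
    sites = list(det_time)
    return min(combinations(sites, k),
               key=lambda net: expected_detection_time(net, det_time, probs))

# Hypothetical sites A, B, C and two equally likely release scenarios;
# entries are minutes until that sampler would see the release.
det_time = {"A": [1, 10], "B": [10, 1], "C": [4, 4]}
```

With one sampler the hedged site C wins; with two, the complementary pair A and B covers both scenarios fastest.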
Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows
Directory of Open Access Journals (Sweden)
Jae-Gon Kim
2016-01-01
Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
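The single-leg structure of the problem is easy to see under the common cubic fuel-rate assumption (an assumption of this sketch, not necessarily the paper's exact consumption model): fuel per unit distance grows like v^2, so the cheapest feasible choice is the slowest speed that still meets the latest time in the arrival window:

```python
def optimal_speed(distance, depart, window, v_min, v_max, c=1.0):
    """Single-leg speed decision: with fuel rate ~ c*v**3, leg fuel is
    c*distance*v**2, so pick the slowest speed meeting the latest arrival.
    Arriving before `earliest` just means waiting at the port, which burns
    no sailing fuel in this simplified model."""
    earliest, latest = window
    v = distance / (latest - depart)        # speed arriving exactly at `latest`
    v = max(v_min, min(v, v_max))           # clip to the feasible speed range
    if depart + distance / v > latest:
        raise ValueError("window infeasible even at maximum speed")
    fuel = c * distance * v ** 2
    return v, fuel

v, fuel = optimal_speed(distance=100, depart=0, window=(0, 10), v_min=5, v_max=20)
```

The paper's algorithm handles the coupled multi-leg, multi-window case, where slowing on one leg shifts every later window constraint; that interaction is what makes the exact algorithm nontrivial.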
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.
Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler
International Nuclear Information System (INIS)
Zhou Hao; Zheng Ligang; Cen Kefa
2010-01-01
The current work presented a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work included NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function aiming at estimating NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air velocities (PA) and six levels of secondary air velocities (SA) were regulated by using particle swarm optimization (PSO) so as to achieve low NOx emissions combustion. To reduce the computational demand, a more flexible stopping condition was used to improve the computational efficiency without loss of quality in the optimization results. The results showed that the proposed approach provided an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which was much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in the earlier work. The main advantage of PSO was that the computational cost, typically less than 25 s on a PC system, is much less than that required for ACO. This means the proposed approach is more applicable to online and real-time applications for NOx emissions minimization in actual power plant boilers.
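The optimization half of the pipeline can be sketched independently of the boiler: a basic particle swarm minimizer over box-bounded settings, with the objective f standing in for the fitted SVR emissions model. All coefficients and the test function below are generic textbook choices, not values from the paper:

```python
import random

def pso(f, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimization over box bounds.
    f plays the role of the fitted NOx-emissions surrogate; here it is
    just any function of a point in the bounded search space."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # each particle's best-seen point
    gbest = min(pbest, key=f)[:]          # swarm's best-seen point
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest
```

In the paper's setting the search dimensions would be the PA and SA velocity settings, and the "more flexible stopping condition" replaces the fixed iteration count used here.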
Minimizing energy consumption for wireless computers in Moby Dick
Havinga, Paul J.M.; Smit, Gerardus Johannes Maria
1997-01-01
The Moby Dick project is a joint European project to develop and define the architecture of a new generation of mobile hand-held computers, called Pocket Companions. The Pocket Companion is a hand-held device that is resource-poor, i.e. small amount of memory, limited battery life, low processing
Minimizing Overhead for Secure Computation and Fully Homomorphic Encryption: Overhead
2015-11-01
…application for this technology is mobile devices: the preparation work can be performed while the phone is plugged into a power source, then it can later… handle large realistic security parameters. Therefore, we looked into the possibility of augmenting the SAGE system with a backend that could handle… limited mobile devices and yet have ready access to cloud-based computing resources. The techniques we propose form part of a growing line of work aimed…
Minimal features of a computer and its basic software to execute the NEPTUNIX 2 numerical step
International Nuclear Information System (INIS)
Roux, Pierre.
1982-12-01
NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non-linear algebro-differential equations. Its main features are: non-linear or time-dependent parameters, implicit form, stiff systems, and dynamic changes of equations leading to discontinuities in some variables. The mathematical model is thus built with an equation set F(x,x',t,l) = 0, where t is the independent variable, x' the derivative of x, and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The non-numerical step must be executed on an IBM 370-series computer or a compatible computer. This step generates a FORTRAN-language model image fitted to the computer carrying out the numerical step. The numerical step consists in building and running a mathematical model simulator. This execution step of NEPTUNIX 2 has been designed to be portable to many computers. The present manual describes the minimal features of such a host computer used for executing the NEPTUNIX 2 numerical step [fr]
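The residual form F(x, x', t, l) = 0 quoted above is what the numerical step must solve at every time step. A scalar toy version using implicit Euler and bisection (NEPTUNIX itself uses stiff implicit integrators with Newton-type solvers on full systems; this only illustrates the residual formulation):

```python
def solve_step(F, x_old, t, h, lo=-10.0, hi=10.0):
    """Advance one implicit-Euler step for a scalar residual model:
    find x_new with F(x_new, (x_new - x_old)/h, t) = 0, by bisection.
    Assumes the residual changes sign on [lo, hi]."""
    g = lambda x: F(x, (x - x_old) / h, t)
    a, b = lo, hi
    for _ in range(200):                 # bisection: halve the bracket
        m = (a + b) / 2
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# Toy model x' = -x written in residual form F(x, x', t) = x' + x.
F = lambda x, xdot, t: xdot + x
x1 = solve_step(F, x_old=1.0, t=0.0, h=0.1)   # analytic answer: 1/1.1
```

The implicit form is exactly why the package can handle stiff systems: each step solves the coupled algebraic-differential residual instead of evaluating an explicit update.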
Time-Predictable Computer Architecture
Directory of Open Access Journals (Sweden)
Schoeberl Martin
2009-01-01
Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET. Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.
12 CFR 1102.27 - Computing time.
2010-01-01
§ 1102.27 Computing time (for Proceedings). (a) General rule. In computing any period of time prescribed or allowed, the date of the act or event from which the designated period of time begins to run is not included. The last day so computed is included, unless it is a Saturday...
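The counting rule in these "computing time" sections can be illustrated directly: the day of the triggering act is excluded, and if the last counted day falls on a weekend the period runs to the next business day. The sketch below deliberately omits the Federal holidays that the regulations also exclude:

```python
from datetime import date, timedelta

def deadline(act_date, period_days):
    """Deadline per the regulatory counting rule: the day of the act is
    excluded, then period_days are counted; if the last day lands on a
    weekend, the period runs to the next business day.
    (Federal holidays, also excluded by the rule, are omitted here.)"""
    last = act_date + timedelta(days=period_days)  # day of act not counted
    while last.weekday() >= 5:                     # 5 = Saturday, 6 = Sunday
        last += timedelta(days=1)
    return last
```

For example, a 3-day period starting from an act on Wednesday 2010-01-06 ends Saturday 2010-01-09, which rolls forward to Monday 2010-01-11.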
Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping
2012-05-01
In this article, we consider a single-machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the processing time of a job is a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. The objective is to determine an optimal schedule that minimizes the total completion time. The problem was open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to the normal processing times of the jobs. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
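A V-shaped schedule is one whose processing times decrease down to the smallest job and then increase. One simple way to generate a candidate V-shaped sequence is to deal jobs, largest first, alternately onto the two slopes; this is only a shape constructor, while the paper's heuristics search among V-shaped schedules under the learning and deterioration effects:

```python
def v_shaped(times):
    """Build one V-shaped sequence from a list of normal processing times:
    values decrease to the minimum, then increase."""
    front, back = [], []
    # Deal jobs from largest to smallest alternately onto the two slopes.
    for i, t in enumerate(sorted(times, reverse=True)):
        (front if i % 2 == 0 else back).append(t)
    return front + back[::-1]
```

For times [3, 1, 4, 2] this yields [4, 2, 1, 3]: decreasing to the smallest job, then increasing.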
Exciting times: Towards a totally minimally invasive paediatric urology service
Lazarus, John
2011-01-01
Following on from the first paediatric laparoscopic nephrectomy in 1992, the growth of minimally invasive ablative and reconstructive procedures in paediatric urology has been dramatic. This article reviews the literature related to laparoscopic dismembered pyeloplasty, optimising posterior urethral valve ablation and intravesical laparoscopic ureteric reimplantation.
Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.
Heald, Emerson F.
1978-01-01
Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
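Even the smallest case shows the mechanics of the method: for the ideal reaction A <=> B at fixed temperature and pressure, scanning the Gibbs free energy over the extent of reaction locates the equilibrium composition, which matches the analytic condition x/(1-x) = exp(-ΔG°/RT). The reaction and numbers below are illustrative, not taken from the article:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_extent(dG0, T=298.15, steps=100000):
    """Equilibrium of A <=> B (1 mol total, ideal mixing) by direct
    minimization of G(x) = x*dG0 + RT*(x ln x + (1-x) ln(1-x)),
    where x is the mole fraction converted to B."""
    RT = R * T
    best_x, best_G = None, float("inf")
    for k in range(1, steps):           # brute-force scan of the extent
        x = k / steps
        G = x * dG0 + RT * (x * math.log(x) + (1 - x) * math.log(1 - x))
        if G < best_G:
            best_x, best_G = x, G
    return best_x
```

With ΔG° = 0 the minimum sits at x = 0.5, and with ΔG° = -RT ln 3 it sits at x = 0.75, exactly as the equilibrium-constant relation K = exp(-ΔG°/RT) predicts. A teaching program would replace the brute-force scan with a proper constrained minimizer over multiple species.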
12 CFR 622.21 - Computing time.
2010-01-01
§ 622.21 Computing time (Formal Hearings). (a) General rule. In computing any period of time prescribed or allowed, the date of the act or event from which the designated period of time begins to run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...
DEFF Research Database (Denmark)
Lauridsen, M M; Schaffalitzky de Muckadell, O B; Vilstrup, H
2015-01-01
Minimal hepatic encephalopathy (MHE) is a frequent complication of liver cirrhosis that causes poor quality of life and a great burden to caregivers, and it can be treated. For diagnosis and grading, the international guidelines recommend the use of psychometric tests of different modalities (computer-based vs. paper and pencil). Our aim was to compare results of the Continuous Reaction Time (CRT) and the Portosystemic Encephalopathy (PSE) tests in a large unselected cohort of cirrhosis patients without clinically detectable brain impairment, and to clinically characterize the patients according to their test...
Celano, Donna; Neuman, Susan B.
2010-01-01
Many low-income children do not have the opportunity to develop the computer skills necessary to succeed in our technological economy. Their only access to computers and the Internet--school, afterschool programs, and community organizations--is woefully inadequate. Educators must work to close this knowledge gap and to ensure that low-income…
12 CFR 908.27 - Computing time.
2010-01-01
§ 908.27 Computing time (Practice and Procedure in Hearings on the Record, General Rules). (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...
Implementation of generalized measurements with minimal disturbance on a quantum computer
International Nuclear Information System (INIS)
Decker, T.; Grassl, M.
2006-01-01
We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify or, respectively, of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)
Approximate k-NN delta test minimization method using genetic algorithms: Application to time series
Mateo, F; Gadea, Rafael; Sovilj, Dusan
2010-01-01
In many real-world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a globally optimal set of input variables minimizing the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbour computation with its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
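The Delta Test being minimized is simple to state: half the mean squared difference between each output and the output of its nearest neighbour in input space, a nonparametric estimate of the noise variance. A brute-force version (the paper replaces the exact nearest-neighbour search with an approximate one precisely because this step dominates the cost inside the genetic search):

```python
def delta_test(X, y):
    """Delta Test: (1/2N) * sum_i (y[nn(i)] - y[i])**2, where nn(i) is
    the nearest neighbour of X[i] among the other inputs. Variable
    selection searches for the input subset minimizing this value."""
    n = len(X)
    total = 0.0
    for i in range(n):
        # exact nearest neighbour by brute force (squared Euclidean distance)
        nn = min((j for j in range(n) if j != i),
                 key=lambda j: sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
        total += (y[nn] - y[i]) ** 2
    return total / (2 * n)
```

A subset of input variables that carries the output's structure gives near-neighbours with similar outputs, hence a small Delta Test value; irrelevant variables scramble the neighbourhoods and inflate it.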
Minimizing total weighted completion time in a proportionate flow shop
Shakhlevich, N.V.; Hoogeveen, J.A.; Pinedo, M.L.
1998-01-01
We study the special case of the m machine flow shop problem in which the processing time of each operation of job j is equal to pj; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance
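Assuming the textbook proportionate flow shop setting, in which job j requires p_j time units on each of the m machines, the completion time of the job in position k of a permutation schedule is the sum of the first k processing times plus (m - 1) times their maximum; this identity is standard for this model but is not stated in the abstract. Small instances of the total weighted completion time objective can then be brute-forced:

```python
from itertools import permutations

def completion_times(seq_p, m):
    """Completion times in a proportionate flow shop: the job in position k
    finishes at sum(p[0..k]) + (m - 1) * max(p[0..k])."""
    out, total, biggest = [], 0, 0
    for p in seq_p:
        total += p
        biggest = max(biggest, p)
        out.append(total + (m - 1) * biggest)
    return out

def min_total_weighted_completion(p, w, m):
    """Brute-force the optimal permutation (fine only for small n)."""
    jobs = range(len(p))
    best = None
    for order in permutations(jobs):
        cts = completion_times([p[j] for j in order], m)
        val = sum(w[j] * c for j, c in zip(order, cts))
        if best is None or val < best[0]:
            best = (val, order)
    return best
```

With m = 1 this reduces to the single-machine problem, where the optimum follows the weighted shortest processing time (WSPT) order.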
Teaching at the Bedside. Maximal Impact in Minimal Time.
Carlos, William G; Kritek, Patricia A; Clay, Alison S; Luks, Andrew M; Thomson, Carey C
2016-04-01
Academic physicians encounter many demands on their time, including patient care, quality and performance requirements, research, and education. In an era when patient volume is prioritized and competition for research funding is intense, there is a risk that medical education will become marginalized. Bedside teaching, a responsibility of academic physicians regardless of professional track, is challenged in particular out of concern that it generates inefficiency and distractions from direct patient care and can distort physician-patient relationships. At the same time, the bedside is a powerful location for teaching, as learners more easily engage with educational content when they can directly see its practical relevance for patient care. Also, bedside teaching enables patients and family members to engage directly in the educational process. Successful bedside teaching can be aided by consideration of four factors: climate, attention, reasoning, and evaluation. Creating a safe environment for learning and patient care is essential. We recommend that educators set expectations about use of medical jargon and engagement of the patient and family before they enter the patient room with trainees. Keep learners focused by asking relevant questions of all members of the team and by maintaining a collective leadership style. Assess and model clinical reasoning through a hypothesis-driven approach that explores the rationale for clinical decisions. Focused, specific, real-time feedback is essential for the learner to modify behaviors for future patient encounters. Together, these strategies may alleviate challenges associated with bedside teaching and ensure it remains a part of physician practice in academic medicine.
12 CFR 1780.11 - Computing time.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1780.11 Section 1780.11 Banks... time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...
Energy Technology Data Exchange (ETDEWEB)
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-03-27
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
Free energy minimization to predict RNA secondary structures and computational RNA design.
Churkin, Alexander; Weinbrand, Lina; Barash, Danny
2015-01-01
Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function, and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well, along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
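The recursive core of these dynamic programming algorithms can be illustrated with the Nussinov base-pair maximization scheme, a deliberately simplified stand-in for free energy minimization: real predictors minimize free energy with empirical nearest-neighbour parameters, whereas the toy recursion below just scores each canonical pair +1.

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pair count by dynamic programming (Nussinov-style).
    A toy stand-in for free-energy minimization: each pair scores +1 and
    at least min_loop unpaired bases must separate a pair's two ends."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:        # j pairs with k
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

For the hairpin-forming sequence GGGAAAUCCC this recursion finds the three nested G-C pairs.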
Directory of Open Access Journals (Sweden)
Amir Salehipour
2012-01-01
Full Text Available This paper presents a novel application of operations research to support decision making in blood distribution management. The rapidly and dynamically increasing demand, the criticality of the product, storage, handling, and distribution requirements, and the different geographical locations of hospitals and medical centers have made blood distribution a complex and important problem. In this study, a real blood distribution problem involving 24 hospitals was tackled by the authors, and an exact approach is presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed that optimal solutions could be obtained for medium-size instances, while for larger instances the proposed hybrid heuristic is very competitive.
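The abstract does not reproduce the exact formulation, but the flavor of the problem can be sketched as a waiting-time-weighted transportation LP; the instance data below (two blood banks, three hospitals) are entirely hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 2 blood banks, 3 hospitals.
supply = [40, 30]            # units available at each bank
demand = [20, 25, 25]        # units required at each hospital
wait = np.array([[1.0, 4.0, 6.0],   # wait[i][j]: waiting time per unit
                 [3.0, 2.0, 1.0]])  # shipped from bank i to hospital j

n_b, n_h = wait.shape
c = wait.ravel()             # objective: total waiting time

# Equality: each hospital's demand is met exactly.
A_eq = np.zeros((n_h, n_b * n_h))
for j in range(n_h):
    A_eq[j, j::n_h] = 1
# Inequality: shipments from a bank cannot exceed its supply.
A_ub = np.zeros((n_b, n_b * n_h))
for i in range(n_b):
    A_ub[i, i * n_h:(i + 1) * n_h] = 1

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
plan = res.x.reshape(n_b, n_h)   # optimal shipment plan
```

For this toy instance the optimal plan ships hospital 3's full demand from bank 2 and covers hospital 1 from bank 1, giving a total waiting time of 135.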
B. Bavishna, Mrs. M. Agalya & Dr. G. Kavitha
2018-01-01
A lot of research has been done in the field of cloud computing. For effective performance, a variety of algorithms has been proposed. The role of virtualization is significant, and its performance depends on VM migration and allocation. Clouds consume a great deal of energy; therefore, algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm has been considered with ...
Time Management in the Operating Room: An Analysis of the Dedicated Minimally Invasive Surgery Suite
Hsiao, Kenneth C.; Machaidze, Zurab
2004-01-01
Background: Dedicated minimally invasive surgery suites are available that contain specialized equipment to facilitate endoscopic surgery. Laparoscopy performed in a general operating room is hampered by the multitude of additional equipment that must be transported into the room. The objective of this study was to compare the preparation times between procedures performed in traditional operating rooms versus dedicated minimally invasive surgery suites to see whether operating room efficiency is improved in the specialized room. Methods: The records of 50 patients who underwent laparoscopic procedures between September 2000 and April 2002 were retrospectively reviewed. Twenty-three patients underwent surgery in a general operating room and 18 patients in a minimally invasive surgery suite. Nine patients were excluded because of cystoscopic procedures undergone prior to laparoscopy. Various time points were recorded, from which time intervals were derived, such as preanesthesia time, anesthesia induction time, and total preparation time. A 2-tailed, unpaired Student t test was used for statistical analysis. Results: The mean preanesthesia time was significantly faster in the minimally invasive surgery suite (12.2 minutes) compared with that in the traditional operating room (17.8 minutes) (P=0.013). Mean anesthesia induction time in the minimally invasive surgery suite (47.5 minutes) was similar to time in the traditional operating room (45.7 minutes) (P=0.734). The average total preparation time for the minimally invasive surgery suite (59.6 minutes) was not significantly faster than that in the general operating room (63.5 minutes) (P=0.481). Conclusion: The amount of time that elapses between the patient entering the room and anesthesia induction is statistically shorter in a dedicated minimally invasive surgery suite. Laparoscopic surgery is performed more efficiently in a dedicated minimally invasive surgery suite versus a traditional operating room. PMID
Real-time geometry-aware augmented reality in minimally invasive surgery.
Chen, Long; Tang, Wen; John, Nigel W
2017-10-01
The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve a fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localise tumours and vessels, and apply measurement labels with greater precision and accuracy compared with the state-of-the-art approaches.
Energy Technology Data Exchange (ETDEWEB)
Meneses, Anderson A.M. [Federal University of Western Para (Brazil); Physics Institute, Rio de Janeiro State University (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Almeida, Andre P. de, E-mail: apalmeid@gmail.com [Physics Institute, Rio de Janeiro State University (Brazil); Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Almeida, Carlos E. de [Radiological Sciences Laboratory, Rio de Janeiro State University (Brazil); Barroso, Regina C. [Physics Institute, Rio de Janeiro State University (Brazil)
2012-07-15
The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential of application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.
International Nuclear Information System (INIS)
Meneses, Anderson A.M.; Giusti, Alessandro; Almeida, André P. de; Nogueira, Liebert; Braz, Delson; Almeida, Carlos E. de; Barroso, Regina C.
2012-01-01
The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential of application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.
Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.
Saller, Maximilian A C; Habershon, Scott
2017-07-11
Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
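Matching Pursuit itself is a simple greedy procedure; a generic sketch over a unit-norm dictionary (not the authors' wavepacket-specific implementation) is:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: repeatedly pick the (unit-norm) atom
    most correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual        # correlations with all atoms
        k = np.argmax(np.abs(corr))           # best-matching atom
        coeffs[k] += corr[k]                  # atoms assumed unit-norm
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual
```

In the paper's setting the dictionary columns would be Gaussian wavepacket basis functions; here any unit-norm columns serve to illustrate the pruning idea.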
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
On the Minimization of Fluctuations in the Response Times of Autoregulatory Gene Networks
Murugan, Rajamanickam; Kreiman, Gabriel
2011-01-01
The temporal dynamics of the concentrations of several proteins are tightly regulated, particularly for critical nodes in biological networks such as transcription factors. An important mechanism to control transcription factor levels is through autoregulatory feedback loops where the protein can bind its own promoter. Here we use theoretical tools and computational simulations to further our understanding of transcription-factor autoregulatory loops. We show that the stochastic dynamics of feedback and mRNA synthesis can significantly influence the speed of response of autoregulatory genetic networks toward external stimuli. The fluctuations in the response-times associated with the accumulation of the transcription factor in the presence of negative or positive autoregulation can be minimized by confining the ratio of mRNA/protein lifetimes within 1:10. This predicted range of mRNA/protein lifetime agrees with ranges observed empirically in prokaryotes and eukaryotes. The theory can quantitatively and systematically account for the influence of regulatory element binding and unbinding dynamics on the transcription-factor concentration rise-times. The simulation results are robust against changes in several system parameters of the gene expression machinery. PMID:21943410
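A deterministic toy model of negative autoregulation illustrates the mRNA/protein lifetime interplay; all rate parameters below are illustrative, and this mean-field sketch deliberately ignores the stochastic fluctuations that are the paper's actual subject.

```python
import numpy as np

def rise_time(tau_m, tau_p, a=10.0, b=5.0, K=20.0, n=2.0,
              dt=0.001, t_end=50.0):
    """90% rise time of the protein level under negative autoregulation.
    Deterministic Euler integration of
        dm/dt = a / (1 + (p/K)^n) - m/tau_m
        dp/dt = b*m - p/tau_p
    All parameters are illustrative, not taken from the paper."""
    m = p = 0.0
    traj = []
    for _ in range(int(t_end / dt)):
        dm = a / (1.0 + (p / K) ** n) - m / tau_m
        dp = b * m - p / tau_p
        m += dt * dm
        p += dt * dp
        traj.append(p)
    traj = np.array(traj)
    p_ss = traj[-1]                          # late-time level as steady state
    idx = np.argmax(traj >= 0.9 * p_ss)      # first crossing of 90%
    return idx * dt
```

Comparing rise_time(1.0, 10.0) against rise_time(10.0, 1.0) gives a feel for how the mRNA/protein lifetime ratio shapes the response speed.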
General purpose computers in real time
International Nuclear Information System (INIS)
Biel, J.R.
1989-01-01
I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that general purpose computers will be able to meet the online needs of the SSC and LHC machines. 2 figs
Applied time series analysis and innovative computing
Ao, Sio-Iong
2010-01-01
This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.
International Nuclear Information System (INIS)
Tselios, Kostas; Simos, T.E.
2007-01-01
In this Letter a new explicit fourth-order seven-stage Runge-Kutta method, with a combination of minimal dispersion and dissipation error and maximal accuracy and stability limit along the imaginary axis, is developed. This method was produced from a general function that was constructed to satisfy all the above requirements and from which all the existing fourth-order six-stage RK methods can be produced. The new method is more efficient than the other optimized methods for acoustic computations.
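The abstract does not list the new method's coefficients, but any explicit Runge-Kutta scheme of this family is fully specified by its Butcher tableau. The generic stepper below uses the classical fourth-order tableau as a stand-in; the optimized seven-stage coefficients of the Letter would plug into the same interface.

```python
def rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (A, b, c)."""
    k = []
    for i, ci in enumerate(c):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + ci * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical fourth-order tableau (a stand-in: the Letter's optimized
# seven-stage method would supply different A, b, c of length 7).
A4 = [[0,   0,   0, 0],
      [0.5, 0,   0, 0],
      [0,   0.5, 0, 0],
      [0,   0,   1, 0]]
b4 = [1/6, 1/3, 1/3, 1/6]
c4 = [0, 0.5, 0.5, 1]
```

Integrating y' = -y over [0, 1] with 100 steps reproduces exp(-1) to roughly fourth-order accuracy.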
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identic machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identic machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solver method. We use fixed delivery time as the main constraint and different processing times per job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery time is used as a constraint.
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by machining parameters. Modern technology can be applied through CNC machining; turning is one of the machining processes that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both processing time and environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, resulting in optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved by using the OptQuest optimization software from Oracle Crystal Ball.
International Nuclear Information System (INIS)
Li Qiang; Doi Kunio
2006-01-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, existing rule-based classifiers have major disadvantages that significantly reduce their practicality and credibility. The disadvantages include manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of the above-mentioned disadvantages. In this study, we developed an 'optimal' method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results performed with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate the overtraining effect in the procedure of cutoff threshold selection, and thus can minimize the overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with a minimized overtraining effect.
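The paper's exact threshold-selection criterion is not spelled out in the abstract. As an illustration of automated, reproducible cutoff selection for a single-feature rule, one common choice (assumed here, not taken from the paper) is to maximize Youden's J statistic over candidate thresholds placed midway between sorted feature values:

```python
import numpy as np

def best_cutoff(scores, labels):
    """Pick the cutoff maximizing sensitivity + specificity - 1 (Youden's J).
    Candidate thresholds are midpoints between sorted unique scores, so the
    selection is deterministic and needs no manual tuning."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    uniq = np.unique(scores)                      # sorted unique values
    cands = (uniq[:-1] + uniq[1:]) / 2.0          # midpoint thresholds
    best_t, best_j = cands[0], -1.0
    for t in cands:
        pred = scores > t
        sens = np.mean(pred[labels]) if labels.any() else 0.0
        spec = np.mean(~pred[~labels]) if (~labels).any() else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

On perfectly separable training data the selected cutoff sits between the two classes and J reaches 1; avoiding overtraining in practice requires evaluating the chosen threshold on held-out data, as the paper stresses.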
International Nuclear Information System (INIS)
Chaux, Pierre-Yves
2013-01-01
Preventive risk assessment of a complex system relies on dynamic models which describe the link between system failure and the scenarios of failure and repair events of its components. The qualitative analysis of a binary dynamic and repairable system aims at computing and analysing the scenarios that lead to system failure. Since such models describe a large set of scenarios, only the most representative ones, called Minimal Cut Sequences (MCS), are of interest to the safety engineer. The lack of a formal definition for the MCS has generated multiple definitions either specific to a given model (and thus not generic) or informal. This work proposes i) a formal framework and definition for the MCS that stay independent of the reliability model used, ii) a methodology to compute them using properties extracted from their formal definition, iii) an extension of the formal framework to multi-state components in order to perform the qualitative analysis of Boolean logic Driven Markov Processes (BDMP) models. Under the hypothesis that the scenarios implicitly described by any reliability model can always be represented by a finite automaton, this work defines coherency for dynamic and repairable systems as the way to give a minimal representation of all scenarios leading to system failure. (author)
Directory of Open Access Journals (Sweden)
Hamidreza Haddad
2012-04-01
Full Text Available This paper tackles the single machine scheduling problem with sequence-dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic method to solve it. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.
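The objective function is easy to sketch. Below, a cheap random-restart search stands in for the paper's calibrated GA; precedence constraints are omitted for brevity and the setup matrix is illustrative, so this is only a sketch of the evaluation, not the paper's method.

```python
import random

def total_weighted_tardiness(order, p, d, w, setup):
    """Total weighted tardiness of a job sequence on one machine with
    sequence-dependent setup times. setup[i][j] is the setup before job j
    when job i precedes it; row len(p) is the setup from an empty machine."""
    t, cost = 0, 0
    prev = len(p)                     # dummy "machine empty" state
    for j in order:
        t += setup[prev][j] + p[j]    # setup, then processing
        cost += w[j] * max(0, t - d[j])
        prev = j
    return cost

def random_restart_search(p, d, w, setup, iters=2000, seed=1):
    """Repeated random sampling of permutations, keeping the best.
    A trivial baseline (not the paper's GA); fine only for tiny instances."""
    rng = random.Random(seed)
    jobs = list(range(len(p)))
    best = (total_weighted_tardiness(jobs, p, d, w, setup), jobs[:])
    for _ in range(iters):
        rng.shuffle(jobs)
        c = total_weighted_tardiness(jobs, p, d, w, setup)
        if c < best[0]:
            best = (c, jobs[:])
    return best
```

A GA would replace the random sampling with selection, crossover, and mutation over the same permutation encoding and the same evaluation function.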
Instruction timing for the CDC 7600 computer
International Nuclear Information System (INIS)
Lipps, H.
1975-01-01
This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)
Minimizing the negative effects of device mobility in cell-based ad-hoc wireless computational grids
CSIR Research Space (South Africa)
Mudali, P
2006-09-01
Full Text Available This paper provides an outline of research being conducted to minimize the disruptive effects of device mobility in wireless computational grid networks. The proposed wireless grid framework uses the existing GSM cellular architecture, with emphasis...
One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times
Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart
1997-01-01
In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some
Duijns, S.; Dijk, van J.G.B.; Spaans, B.; Jukema, J.; Boer, de W.F.; Piersma, Th.
2009-01-01
Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been
Duijns, Sjoerd; van Dijk, Jacintha G. B.; Spaans, Bernard; Jukema, Joop; de Boer, Willem F.; Piersma, Theunis
2009-01-01
Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been
Fast algorithms for computing phylogenetic divergence time.
Crosby, Ralph W; Williams, Tiffani L
2017-12-06
The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms, we open up the ability to perform divergence time inference on large phylogenetic studies.
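As a toy illustration of the likelihood computations being accelerated (not the AncestralAge algorithm itself), the Jukes-Cantor substitution model gives closed-form transition probabilities from which a pairwise sequence log-likelihood follows directly:

```python
import math

def jc69_site_probs(t):
    """Jukes-Cantor probabilities that a site is unchanged / changed to a
    specific other base after t expected substitutions per site."""
    e = math.exp(-4.0 * t / 3.0)
    p_same = 0.25 + 0.75 * e
    p_diff = 0.25 - 0.25 * e
    return p_same, p_diff

def pairwise_log_likelihood(seq_a, seq_b, t):
    """Log-likelihood of two aligned sequences separated by total time t,
    assuming uniform base frequencies and independent sites."""
    p_same, p_diff = jc69_site_probs(t)
    ll = 0.0
    for a, b in zip(seq_a, seq_b):
        ll += math.log(0.25 * (p_same if a == b else p_diff))
    return ll
```

Full phylogenetic likelihood generalizes this to a tree via Felsenstein's pruning recursion; the per-site structure above is the kernel such implementations spend their time in.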
Computer network time synchronization the network time protocol
Mills, David L
2006-01-01
What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
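At the heart of NTP sits a small, standardized on-wire calculation (documented in RFC 5905) that derives clock offset and round-trip delay from four timestamps:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP on-wire calculation.
    t0: client transmit, t1: server receive,
    t2: server transmit, t3: client receive
    (t0 and t3 on the client clock, t1 and t2 on the server clock).
    Returns (clock offset, round-trip delay)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

For a server 5 s ahead of the client with 0.1 s one-way latency each direction, the calculation recovers an offset of 5.0 s and a delay of 0.2 s; NTP's clock filter then combines many such samples.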
Meaning Making Through Minimal Linguistic Forms in Computer-Mediated Communication
Directory of Open Access Journals (Sweden)
Muhammad Shaban Rafi
2014-05-01
Full Text Available The purpose of this study was to investigate the linguistic forms which commonly constitute meanings in the digital environment. The data were sampled from 200 Bachelor of Science (BS) students (who had Urdu as their primary language of communication and English as one of their academic languages or their most prestigious second language) of five universities situated in Lahore, Pakistan. The procedure for analysis was conceived within much related theoretical work on text analysis. The study reveals that cyber-language is organized through patterns of use, which can be broadly classified into minimal linguistic forms constituting a meaning-making resource. In addition, the expression of syntactic mood and the discourse roles the participants technically assume tend to contribute to the theory of meaning in the digital environment. It is hoped that the study will make some contribution to the growing literature on multilingual computer-mediated communication (CMC).
International Nuclear Information System (INIS)
Lima da Silva, Aline; Heck, Nestor Cesar
2003-01-01
Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging effort. In this work, a simple method to predict equilibrium compositions in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of the O2, CO and CO2 constituents. The liquid iron
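The same direct Gibbs energy minimization that the paper performs with Excel's Solver can be sketched with any numerical optimizer. Below is a hedged toy example for an ideal-gas C-O system; the species list, standard chemical potentials, and element totals are invented for illustration and are not the paper's Fe-Cr-O-C-Ni data:

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1000.0                      # J/(mol K), K
mu0 = np.array([-200e3, -396e3, 0.0])     # invented standard potentials: CO, CO2, O2 (J/mol)
A = np.array([[1, 1, 0],                  # moles of C per mole of each species
              [1, 2, 2]])                 # moles of O per mole of each species
b = np.array([1.0, 1.5])                  # total C and O in the system (mol)

def gibbs(n):
    # total Gibbs energy of an ideal-gas mixture (standard term + ideal mixing term)
    n = np.clip(n, 1e-12, None)
    return float(np.sum(n * (mu0 + R * T * np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.2]),
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               bounds=[(1e-10, None)] * 3, method="SLSQP")
n_eq = res.x                               # equilibrium mole numbers
```

The equality constraint plays exactly the role of the elemental mass-balance cells in the spreadsheet formulation.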
An optimization based method for line planning to minimize travel time
DEFF Research Database (Denmark)
Bull, Simon Henry; Lusby, Richard Martin; Larsen, Jesper
2015-01-01
The line planning problem is to select a number of lines from a potential pool that provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including…
Analysis of labor employment assessment on production machine to minimize time production
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, whether in services or manufacturing, is always trying to improve the efficiency of its resource use. One resource that has an important role is labor. Labor has different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the assignment of labor to production machines that minimizes production time. The results showed that the existing labor assignment is not optimal because its completion time is longer than the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time saving of 16%.
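The Hungarian assignment step described above is available off the shelf; a minimal sketch with an invented 3×3 worker/machine time matrix (SciPy's `linear_sum_assignment` computes the same optimal assignment the Hungarian algorithm does):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# invented completion times (hours): rows are workers, columns are machines
cost = np.array([[4, 2, 8],
                 [4, 3, 7],
                 [3, 1, 6]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
total_time = int(cost[rows, cols].sum())   # minimized total production time
```

Comparing `total_time` against the time of the existing assignment quantifies the saving, as done in the study.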
Directory of Open Access Journals (Sweden)
John Reilly
2018-03-01
Full Text Available Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature–deformation–displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.
Reilly, John; Glisic, Branko
2018-03-01
Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature-deformation-displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.
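Metric (i), the range of raw temperatures on the structure, suggests a simple window-selection rule. The sketch below uses synthetic sensor readings and an arbitrary tolerance value, not the Streicker Bridge data:

```python
import numpy as np

def minimal_gradient_times(temps, tolerance):
    """temps: (n_times, n_sensors) array of simultaneous temperature readings.
    Returns indices of time steps whose sensor-to-sensor range is within tolerance."""
    spread = temps.max(axis=1) - temps.min(axis=1)   # metric (i): raw temperature range
    return np.flatnonzero(spread <= tolerance)

# synthetic day: sensor-to-sensor gradients grow around midday, shrink at night
hours = np.arange(24)
sensors = np.stack([15 + 8 * np.sin(np.pi * hours / 24),
                    15 + 5 * np.sin(np.pi * hours / 24),
                    15 + 2 * np.sin(np.pi * hours / 24)], axis=1)
calm = minimal_gradient_times(sensors, tolerance=2.0)  # early-morning / late-night hours
```

Raising or lowering `tolerance` reproduces the paper's idea of varying the acceptable thermal-gradient threshold.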
Power Minimization for Parallel Real-Time Systems with Malleable Jobs and Homogeneous Frequencies
Paolillo, Antonio; Goossens, Joël; Hettiarachchi, Pradeep M.; Fisher, Nathan
2014-01-01
In this work, we investigate the potential benefit of parallelization for both meeting real-time constraints and minimizing power consumption. We consider malleable Gang scheduling of implicit-deadline sporadic tasks upon multiprocessors. By extending schedulability criteria for malleable jobs to DVFS-enabled multiprocessor platforms, we are able to derive an offline polynomial-time optimal processor/frequency-selection algorithm. Simulations of our algorithm on randomly generated task system...
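In a much simplified form, homogeneous-frequency power minimization amounts to selecting the lowest frequency at which the task set still passes a schedulability test. The following sketch uses a crude total-utilization test and invented task parameters; it is not the paper's exact malleable-Gang criterion:

```python
def min_power_frequency(tasks, m, freqs):
    """tasks: list of (gigacycles, period_seconds); m processors; freqs in GHz.
    Returns the lowest frequency whose total utilization fits on m processors.
    With dynamic power growing roughly as f**3, the lowest feasible frequency
    also minimizes power in this toy model."""
    for f in sorted(freqs):
        utilization = sum(c / (f * p) for c, p in tasks)  # per-task utilization at speed f
        if utilization <= m:                              # crude density test
            return f
    return None                                           # infeasible even at max speed

tasks = [(2.0, 2.0), (1.0, 1.0), (3.0, 6.0)]   # invented (gigacycles, seconds) pairs
f_opt = min_power_frequency(tasks, m=2, freqs=[0.8, 1.0, 1.5, 2.0])
```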
Time series modeling, computation, and inference
Prado, Raquel
2010-01-01
"The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit
Directory of Open Access Journals (Sweden)
Usama Hamed Issa
2013-12-01
Full Text Available Construction projects involve various risk factors which have various impacts on the time objective and may lead to time-overrun. This study suggests and applies a new technique for minimizing the effect of risk factors on time using lean construction principles. Lean construction is implemented in this study through the last planner system during the execution of an industrial project in Egypt. The effect of using the new tool is evaluated in terms of two measurements: Percent Expected Time-overrun (PET) and Percent Plan Completed (PPC). The most important risk factors are identified and assessed, while PET is quantified at the project start and during project execution using a model for time-overrun quantification. The results showed that total project time was reduced by 15.57% due to decreasing PET values, while PPC values improved. This reflects the minimization and mitigation of the effect of most of the risk factors in this project through the implementation of lean construction techniques. The results proved that the quantification model is suitable for evaluating the effect of using lean construction techniques. In addition, the results showed that the average value of PET due to factors affected by lean techniques represents 67% of the PET value due to all minimized risk factors.
ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.
Directory of Open Access Journals (Sweden)
Wei Yu
Full Text Available In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure to patients, reconstructing high-quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality for reducing the radiation dose to patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is an ill-posed inverse problem. Images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changing artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changing artifacts near edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transformed the optimization problem into a few optimization sub-problems and then solved these sub-problems by alternating iteration. Numerical experiments are performed to validate the efficiency and feasibility of the developed algorithm. Statistical analysis of the performance evaluations, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), shows significant differences between the algorithms over different scanning angular ranges (p < 0.0001). The experimental results also indicate that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradually changed
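The alternating sub-problem scheme behind ℓ0 gradient minimization can be illustrated in one dimension with half-quadratic splitting: a closed-form thresholding step for an auxiliary gradient variable alternates with a linear solve for the signal. This is a toy 1-D analogue with invented parameters, not the paper's limited-angle CT reconstruction:

```python
import numpy as np

def l0_gradient_smooth(f, lam=0.05, beta0=0.02, beta_max=1e4, kappa=2.0):
    """1-D analogue of L0 gradient minimization via half-quadratic splitting.
    Minimizes ||u - f||^2 + lam * #{nonzero gradients of u} approximately."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                 # forward-difference operator, (n-1) x n
    u, beta = f.astype(float).copy(), beta0
    while beta < beta_max:
        g = D @ u
        # sub-problem 1 (closed form): keep a gradient only if it is worth its L0 cost
        h = np.where(g**2 > lam / beta, g, 0.0)
        # sub-problem 2 (quadratic): solve (I + beta * D^T D) u = f + beta * D^T h
        u = np.linalg.solve(np.eye(n) + beta * D.T @ D, f + beta * D.T @ h)
        beta *= kappa                              # continuation on the penalty weight
    return u

# noisy piecewise-constant signal: the method should flatten noise but keep the step
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = l0_gradient_smooth(f)
```

In the paper this regularizer is coupled with a CT data-fidelity term; here the fidelity term is simply a denoising one.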
Directory of Open Access Journals (Sweden)
He Cheng
2014-02-01
Full Text Available It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.
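For intuition about preemptive total-completion-time scheduling with release dates, the classical shortest-remaining-processing-time (SRPT) rule, which is optimal when there are no deadline constraints, can be simulated directly. The job data below are invented; deadlines, and hence the agreeable/disagreeable distinction studied in the paper, are not modeled:

```python
import heapq

def srpt_total_completion(jobs):
    """jobs: list of (release, processing). Simulates shortest-remaining-
    processing-time on one machine and returns the total completion time."""
    jobs = sorted(jobs)                      # by release time
    t, i, done, total = 0.0, 0, 0, 0.0
    heap = []                                # (remaining time, job id)
    while done < len(jobs):
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]                   # machine idles until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], i))
            i += 1
        rem, j = heapq.heappop(heap)         # run the job with least remaining work
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)          # run until done or the next release (preemption point)
        t += run
        rem -= run
        if rem <= 1e-12:
            done += 1
            total += t                       # completion time of job j
        else:
            heapq.heappush(heap, (rem, j))
    return total

total = srpt_total_completion([(0, 4), (1, 1), (2, 2)])
```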
Cameron, Katherine; Murray, Alan
2008-05-01
This paper investigates whether spike-timing-dependent plasticity (STDP) can minimize the effect of mismatch within the context of a depth-from-motion algorithm. To improve noise rejection, this algorithm contains a spike prediction element, whose performance is degraded by analog very large scale integration (VLSI) mismatch. The error between the actual spike arrival time and the prediction is used as the input to an STDP circuit, to improve future predictions. Before STDP adaptation, the error reflects the degree of mismatch within the prediction circuitry. After STDP adaptation, the error indicates to what extent the adaptive circuitry can minimize the effect of transistor mismatch. The circuitry is tested with static and varying prediction times and chip results are presented. The effect of noisy spikes is also investigated. Under all conditions the STDP adaptation is shown to improve performance.
Kamanu, Frederick Kinyua
2012-12-01
The mycolic acid bacteria are a distinct suprageneric group of asporogenous Gram-positive, high-GC-content bacteria, distinguished by the presence of mycolic acids in their cell envelope. They exhibit great diversity in their cell morphology; although primarily non-pathogens, this group contains three major pathogens: Mycobacterium leprae, the Mycobacterium tuberculosis complex, and Corynebacterium diphtheriae. Although the mycolic acid bacteria are a clearly defined group of bacteria, the taxonomic relationships between its constituent genera and species are less well defined. Two approaches were tested for their suitability in describing the taxonomy of the group. First, a Multilocus Sequence Typing (MLST) approach was assessed and found to be superior to single-locus analysis (of the 16S small ribosomal subunit) in delineating a total of 52 mycolic acid bacterial species. Phylogenetic inference was performed using the neighbor-joining method. To further refine phylogenetic analysis and to take advantage of the widespread availability of bacterial genome data, a computational framework that simulates DNA-DNA hybridisation was developed and validated using multiscale bootstrap resampling. The tool classifies microbial genomes based on whole-genome DNA and was deployed as a web application using PHP and Javascript. It is accessible online at http://cbrc.kaust.edu.sa/dna_hybridization/ A third study applied computational and statistical methods to the identification and analysis of a putative minimal mycolic acid bacterial genome, so as to better understand (1) the genomic requirements to encode a mycolic acid bacterial cell and (2) the role and type of genes and genetic elements that lead to the massive increase in genome size in environmental mycolic acid bacteria. Using a reciprocal comparison approach, a total of 690 orthologous gene clusters forming a putative minimal genome were identified across 24 mycolic acid bacterial species. In order to identify new potential drug
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile-time that all timing requirements will be satisfied at run-time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods like deadline monotonic scheduling.
Real-Time Thevenin Impedance Computation
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Jóhannsson, Hjörtur
2013-01-01
… operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power…
Optimal post-warranty maintenance policy with repair time threshold for minimal repair
International Nuclear Information System (INIS)
Park, Minjae; Mun Jung, Ki; Park, Dong Ho
2013-01-01
In this paper, we consider a renewable minimal repair–replacement warranty policy and propose an optimal maintenance model after the warranty is expired. Such model adopts the repair time threshold during the warranty period and follows with a certain type of system maintenance policy during the post-warranty period. As for the criteria for optimality, we utilize the expected cost rate per unit time during the life cycle of the system, which has been frequently used in many existing maintenance models. Based on the cost structure defined for each failure of the system, we formulate the expected cost rate during the life cycle of the system, assuming that a renewable minimal repair–replacement warranty policy with the repair time threshold is provided to the user during the warranty period. Once the warranty is expired, the maintenance of the system is the user's sole responsibility. The life cycle of the system is defined on the perspective of the user and the expected cost rate per unit time is derived in this context. We obtain the optimal maintenance policy during the maintenance period following the expiration of the warranty period by minimizing such a cost rate. Numerical examples using actual failure data are presented to exemplify the applicability of the methodologies proposed in this paper.
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
Directory of Open Access Journals (Sweden)
Shih-Wei Lin
2014-01-01
Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
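The destruction/reconstruction loop of an iterated greedy heuristic can be sketched for a stripped-down discrete DBAP in which each ship has only an arrival time and a handling time (berth-dependent handling times and the benchmark instances are omitted; all numbers are invented):

```python
import random

def total_service_time(assignment, ships):
    """assignment: per-berth lists of ship indices in service order.
    ships: list of (arrival, handling). Service time = finish - arrival."""
    total = 0.0
    for seq in assignment:
        t = 0.0
        for s in seq:
            a, h = ships[s]
            t = max(t, a) + h          # ship waits for the berth, then is handled
            total += t - a
    return total

def greedy_insert(assignment, ship, ships):
    # try every berth/position and keep the cheapest insertion
    best = None
    for b, seq in enumerate(assignment):
        for pos in range(len(seq) + 1):
            seq.insert(pos, ship)
            cost = total_service_time(assignment, ships)
            seq.pop(pos)
            if best is None or cost < best[0]:
                best = (cost, b, pos)
    assignment[best[1]].insert(best[2], ship)

def iterated_greedy(ships, n_berths, iters=200, d=2, seed=1):
    rng = random.Random(seed)
    assignment = [[] for _ in range(n_berths)]
    for s in sorted(range(len(ships)), key=lambda i: ships[i][0]):
        greedy_insert(assignment, s, ships)            # greedy construction
    best = [list(seq) for seq in assignment]
    for _ in range(iters):
        removed = rng.sample(range(len(ships)), d)     # destruction
        for seq in assignment:
            for s in removed:
                if s in seq:
                    seq.remove(s)
        for s in removed:                              # reconstruction
            greedy_insert(assignment, s, ships)
        if total_service_time(assignment, ships) <= total_service_time(best, ships):
            best = [list(seq) for seq in assignment]   # accept improvement
        else:
            assignment = [list(seq) for seq in best]   # revert to the incumbent
    return best, total_service_time(best, ships)

ships = [(0, 3), (1, 2), (2, 4), (3, 1), (4, 2)]       # (arrival, handling), invented
plan, cost = iterated_greedy(ships, n_berths=2)
```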
Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees
Directory of Open Access Journals (Sweden)
Anas S. M. Toma
2014-11-01
Full Text Available Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response time guarantees from the servers, to decide which tasks should be offloaded to get the results in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the error made by the algorithm against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.
Directory of Open Access Journals (Sweden)
Aiyun Gao
2017-01-01
Full Text Available A real-time optimal control of parallel hybrid electric vehicles (PHEVs) with the equivalent consumption minimization strategy (ECMS) is presented in this paper, whose purpose is to minimize the total equivalent fuel consumption while maintaining the battery state of charge (SOC) within its operating range at all times. Vehicle and assembly models of PHEVs are established, which provide the foundation for the following calculations. The ECMS is described in detail, and an instantaneous cost function including the fuel energy and the electrical energy is proposed, with emphasis on the computation of the equivalent factor. The real-time optimal control strategy is designed by taking the minimization of the total equivalent fuel consumption as the control objective and the torque split factor as the control variable. The validity of the proposed control strategy is demonstrated both in the MATLAB/Simulink/Advisor environment and under actual transportation conditions by comparing the fuel economy, the charge sustainability, and parts performance with three other control strategies under different driving cycles, including standard, actual, and real-time road conditions. Through numerical simulations and real vehicle tests, the accuracy of the approach used for the evaluation of the equivalent factor is confirmed, and the potential of the proposed control strategy in terms of fuel economy and keeping the deviations of SOC at a low level is illustrated.
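The core ECMS step, minimizing an instantaneous equivalent fuel cost over the torque split factor with an SOC-dependent equivalent factor, can be sketched with placeholder component models. The engine and motor maps and all constants below are invented, not the paper's vehicle model:

```python
import numpy as np

def ecms_split(T_req, soc, s0=1.0, soc_ref=0.6, k=2.0):
    """Choose the torque split factor u (engine fraction of requested torque)
    minimizing an instantaneous equivalent fuel cost. Toy component models."""
    s = s0 + k * (soc_ref - soc)               # equivalent factor with SOC feedback
    best_u, best_cost = None, np.inf
    for u in np.linspace(0.0, 1.0, 101):
        fuel_power = 1.2 * (u * T_req) + 0.002 * (u * T_req) ** 2   # toy engine map
        elec_power = 1.0 * ((1.0 - u) * T_req)                      # toy motor model
        cost = fuel_power + s * elec_power     # equivalent total fuel cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u_low_soc = ecms_split(T_req=100.0, soc=0.4)   # depleted battery: engine favored
u_high_soc = ecms_split(T_req=100.0, soc=0.8)  # charged battery: motor favored
```

The SOC feedback in the equivalent factor is what keeps the battery near its reference, mirroring the charge-sustainability requirement in the abstract.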
Time reversibility, computer simulation, algorithms, chaos
Hoover, William Graham
2012-01-01
A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...
Assessing and minimizing contamination in time of flight based validation data
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
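The interval-selection idea can be sketched with Gaussian travel-time models: pick a window that captures most of the gamma distribution, then estimate how much of the neutron distribution leaks in. The distribution parameters below are illustrative, not fitted to the Cf-252 data:

```python
from scipy import stats

def gamma_window(gamma_mu, gamma_sd, neutron_mu, neutron_sd, coverage=0.99):
    """Choose a gamma time-of-flight window capturing `coverage` of the gamma
    travel-time distribution, then estimate residual neutron contamination.
    Gaussian travel-time models are a simplification of the paper's approach."""
    g = stats.norm(gamma_mu, gamma_sd)
    n = stats.norm(neutron_mu, neutron_sd)
    lo = g.ppf((1 - coverage) / 2)           # symmetric central interval for gammas
    hi = g.ppf(1 - (1 - coverage) / 2)
    contamination = n.cdf(hi) - n.cdf(lo)    # neutron probability mass inside the window
    return (lo, hi), contamination

# illustrative numbers (ns): gammas arrive fast and tight, neutrons later and spread out
window, contamination = gamma_window(3.0, 0.3, 30.0, 8.0)
```

Sweeping `coverage` traces out the trade-off the paper describes between interval size and nuisance-particle contamination.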
DEFF Research Database (Denmark)
Lauridsen, Mette Enok Munk; Thiele, Maja; Kimer, N
2013-01-01
Abstract Existing tests for minimal/covert hepatic encephalopathy (m/cHE) are time- and expertise-consuming and primarily useable for research purposes. An easy-to-use, fast, and reliable diagnostic and grading tool is needed. We here report on the background, experience, and ongoing research … -10) percentile) as a parameter of reaction time variability. The index is a measure of alertness stability and is used to assess attention and cognition deficits. The CRTindex identifies half of the patients in a Danish cohort with chronic liver disease as having m/cHE; a normal value safely precludes HE; it has…
Takiyama, Ken
2015-01-01
Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this t...
Amore Bonapasta, Stefano; Checcacci, Paolo; Guerra, Francesco; Mirasolo, Vita M; Moraldi, Luca; Ferrara, Angelo; Annecchiarico, Mario; Coratti, Andrea
2016-06-01
The optimal delay in the start of chemotherapy following rectal cancer surgery has not yet been identified. However, postponed adjuvant therapy has been proven to be associated with a significant survival detriment. We aimed to investigate whether the time to initiation of adjuvant treatment can be influenced by the application of minimally invasive surgery rather than traditional open surgery. By comprehensively evaluating the available literature, several factors appear to be associated with delayed postoperative chemotherapy. Some of them are strictly related to surgical short-term outcomes. Laparoscopy results in a shortened length of hospital stay, reduced surgical morbidity and a lower rate of wound infection compared to conventional surgery. Probably due to such advantages, the application of minimally invasive surgery to treat rectal malignancies seems to favorably impact the possibility of starting adjuvant chemotherapy within an adequate timeframe following surgical resection, with potential improvement in patient survival.
Harato, Kengo; Maeno, Shinichi; Tanikawa, Hidenori; Kaneda, Kazuya; Morishige, Yutaro; Nomoto, So; Niki, Yasuo
2016-08-01
It was hypothesized that the surgical time of beginners would be much longer than that of experts. Our purpose was to investigate and clarify the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty (TKA) as a multicentre study. A total of 300 knees in 248 patients (average age 74.6 years) were enrolled. All TKAs were done using the same instruments and the same measured resection technique at 14 facilities by 25 orthopaedic surgeons. Surgeons were divided into three surgeon groups (four experts, nine medium-volume surgeons and 12 beginners). The surgical technique was divided into five phases. Detailed surgical time and the ratio of the time in each phase to overall surgical time were recorded and compared among the groups in each phase. A total of 62, 119, and 119 TKAs were done by beginners, medium-volume surgeons, and experts, respectively. Significant differences in surgical time among the groups were seen in each phase. Concerning the ratio of the time, experts and medium-volume surgeons seemed cautious in fixation of the permanent component compared to other phases. Interestingly, even in ratio, beginners and medium-volume surgeons took more time in exposure of soft tissue compared to experts (0.14 in beginners, 0.13 in medium-volume surgeons, 0.11 in experts). Beginners and medium-volume surgeons also took more time in exposure and closure of soft tissue compared to experts. Improvement in basic technique is essential to minimize surgical time among beginners. First of all, surgical instructors should teach basic techniques in primary TKA to beginners. Therapeutic studies, Level IV.
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
Computed Tomography Helps to Plan Minimally Invasive Aortic Valve Replacement Operations.
Stoliński, Jarosław; Plicner, Dariusz; Grudzień, Grzegorz; Kruszec, Paweł; Fijorek, Kamil; Musiał, Robert; Andres, Janusz
2016-05-01
This study evaluated the role of multidetector computed tomography (MDCT) in preparation for minimally invasive aortic valve replacement (MIAVR). An analysis of 187 patients scheduled for MIAVR between June 2009 and December 2014 was conducted. In the study group (n = 86), MDCT of the thorax, aorta, and femoral arteries was performed before the operation. In the control group (n = 101), patients qualified for MIAVR without receiving preoperative MDCT. The surgical strategy was changed preoperatively in 12.8% of patients from the study group and in 2.0% of patients from the control group (p = 0.010) and intraoperatively in 9.9% of patients from the control group and in none from the study group (p = 0.002). No conversion to median sternotomy was necessary in the study group; among the controls, there were 4.0% conversions. On the basis of the MDCT measurements, optimal access to the aortic valve was achieved when the angle between the aortic valve plane and the line to the second intercostal space was 91.9 ± 10.0 degrees and to the third intercostal space was 94.0 ± 1.4 degrees, with the distance to the valve being 94.8 ± 13.8 mm and 84.5 ± 9.9 mm for the second and third intercostal spaces, respectively. The right atrium covering the site of the aortotomy was present in 42.9% of cases when MIAVR had been performed through the third intercostal space and in 1.3% when through the second intercostal space (p = 0.001). Preoperative MDCT of the thorax, aorta, and femoral arteries makes it possible to plan MIAVR operations. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Computing Refined Buneman Trees in Cubic Time
DEFF Research Database (Denmark)
Brodal, G.S.; Fagerberg, R.; Östlin, A.
2003-01-01
Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence...... in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each...... proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n 5) and a space consumption of O(n 4). In this paper, we present an algorithm with running time O(n 3) and space consumption O(n 2). The improved complexity of our...
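The combinatorial support behind Buneman trees can be illustrated with the four-point condition on a distance matrix. The sketch below is not the paper's O(n^3) algorithm; it only scores a candidate split of four taxa, with a positive score indicating support for grouping i with j:

```python
def buneman_score(d, i, j, k, l):
    # Score of the split {i,j} | {k,l} under the four-point condition:
    # positive iff d(i,j)+d(k,l) is strictly smaller than both
    # alternative pairings, i.e. the split is supported by the distances.
    return 0.5 * (min(d[i][k] + d[j][l], d[i][l] + d[j][k])
                  - (d[i][j] + d[k][l]))
```

For distances generated by an additive tree ((A,B),(C,D)), the split AB|CD scores positive while the crossing split AC|BD scores negative.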
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
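The quicksort-variation top-k selection that the paper builds on can be sketched generically: partition around a pivot and recurse only into the side that still contains part of the top k. This is an illustration of the building block, not the authors' cascading procedure with its probabilistic choice of k:

```python
import random

def top_k(xs, k):
    """Return the k largest values of xs in descending order using a
    quickselect-style partial sort: only the partition containing part
    of the top k is recursed into."""
    if k <= 0 or not xs:
        return []
    if k >= len(xs):
        return sorted(xs, reverse=True)
    pivot = random.choice(xs)
    hi = [x for x in xs if x > pivot]   # strictly above the pivot
    eq = [x for x in xs if x == pivot]
    lo = [x for x in xs if x < pivot]
    if k <= len(hi):
        return top_k(hi, k)
    if k <= len(hi) + len(eq):
        return sorted(hi, reverse=True) + eq[:k - len(hi)]
    return sorted(hi, reverse=True) + eq + top_k(lo, k - len(hi) - len(eq))
```

Unlike a full sort, the expected work is linear in the input size plus the cost of ordering only the selected k elements.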
Ohnsorge, J A K; Weisskopf, M; Siebert, C H
2005-01-01
Optoelectronic navigation for computer-assisted orthopaedic surgery (CAOS) is based on a firm connection of bone with passive reflectors or active light-emitting diodes in a specific three-dimensional pattern. Even a so-called "minimally invasive" dynamic reference base (DRB) requires fixation with screws or clamps via incision of the skin. Consequently, an originally percutaneous intervention would unnecessarily be extended to an open procedure. Thus, computer-assisted navigation is rarely applied. Due to their tree-like design, most DRBs interfere with the surgeon's actions and are therefore at permanent risk of accidental dislocation. Accordingly, the optical communication between the camera and the operative site may repeatedly be interrupted. The aim of the research was the development of a less bulky, more comfortable, stable and safely trackable device that can be fixed truly percutaneously. With engineering support from the industrial partner, the radiolucent epiDRB was developed. It can be fixed with two or more pins and gains additional stability from its epicutaneous position. Its intraoperative applicability and reliability were experimentally tested. Its low centre of gravity and flat design allow the device to be located directly in the area of interest. Thanks to its epicutaneous position and its particular shape, the epiDRB may be tracked continuously by the navigation system without hindering the surgeon's actions. Hence, the risk of accidental displacement is minimised and the line of sight remains unaffected. With the newly developed epiDRB, computer-assisted navigation becomes easier and safer to handle, even in punctures and other percutaneous procedures at the spine as much as at the extremities, without a disproportionate amount of additional trauma. Due to the special design, referencing of more than one vertebral body is possible at one time, thus decreasing radiation exposure and increasing efficiency.
Real-time minimal-bit-error probability decoding of convolutional codes
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
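The fixed-decoding-delay Viterbi variant used for comparison can be sketched for a standard rate-1/2, constraint-length-3 code. The generators (7, 5 in octal) and the delay value are illustrative choices, and the minimal-bit-error-probability recursion itself is not reproduced here:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal,
    constraint length 3, terminated with two zero tail bits."""
    s1 = s0 = 0  # s1 = previous input bit, s0 = the bit before that
    out = []
    for b in bits + [0, 0]:
        out.append((b ^ s1 ^ s0, b ^ s0))
        s1, s0 = b, s1
    return out

def viterbi_fixed_delay(received, delay):
    """Hard-decision Viterbi decoding that releases, at each branch,
    the bit decided `delay` branches earlier (fixed decoding delay)."""
    INF = float("inf")
    metric = [0.0, INF, INF, INF]   # state index = (s1 << 1) | s0
    paths = [[], [], [], []]        # survivor input sequences (sketch: full copies)
    decided = []
    for t, (r0, r1) in enumerate(received):
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for b in (0, 1):
                expect = (b ^ s1 ^ s0, b ^ s0)      # branch output
                ns = (b << 1) | s1                   # next state
                m = metric[s] + (expect[0] != r0) + (expect[1] != r1)
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
        if t >= delay:  # real-time output: bit from `delay` branches ago
            best = min(range(4), key=lambda i: metric[i])
            decided.append(paths[best][t - delay])
    best = min(range(4), key=lambda i: metric[i])
    decided.extend(paths[best][len(decided):])  # flush remaining decisions
    return decided
```

On a noiseless channel the survivor through the best state is the transmitted path, so the fixed-delay decisions recover the input exactly.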
Real-time minimal bit error probability decoding of convolutional codes
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance the image quality and equalize the color profiles of the two images. Polarized projection using interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.
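Row interlacing of the two views for polarized projection can be sketched with NumPy; the function name and the even/odd row convention are assumptions, not details from the paper:

```python
import numpy as np

def interlace(left, right):
    """Row-interlace two equal-size rectified views for a polarized
    display: even rows come from the left image, odd rows from the right.
    (Which eye gets even rows is display-dependent; this is one convention.)"""
    assert left.shape == right.shape, "views must be the same size"
    out = np.empty_like(left)
    out[0::2] = left[0::2]
    out[1::2] = right[1::2]
    return out
```

A passive-3D screen then polarizes even and odd rows differently, so each eye sees only its own view.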
Majdani, Omid; Bartling, Soenke H; Leinung, Martin; Stöver, Timo; Lenarz, Minoo; Dullin, Christian; Lenarz, Thomas
2008-02-01
High-precision intraoperative navigation using high-resolution flat-panel volume computed tomography makes minimally invasive cochlear implant surgery, including cochleostomy, feasible. Conventional cochlear implant surgery is typically performed via mastoidectomy with facial recess to identify and avoid damage to vital anatomic landmarks. To accomplish this procedure via a minimally invasive approach, without performing mastoidectomy, in a precise fashion, image-guided technology is necessary. With such an approach, surgical time and expertise may be reduced, and hearing preservation may be improved. Flat-panel volume computed tomography was used to scan 4 human temporal bones. A drilling channel was planned preoperatively from the mastoid surface to the round window niche, providing a margin of safety to all functionally important structures (e.g., facial nerve, chorda tympani, incus). Postoperatively, computed tomographic imaging and conventional surgical exploration of the drilled route to the cochlea were performed. All 4 specimens showed a cochleostomy located at the scala tympani anterior-inferior to the round window. The chorda tympani was damaged in 1 specimen; this was preoperatively planned, as a narrow facial recess was encountered. Using flat-panel volume computed tomography for image-guided surgical navigation, we were able to perform minimally invasive cochlear implant surgery, defined as a narrow, single-channel mastoidotomy with cochleostomy. Although this finding is preliminary, it is technologically achievable.
Real-Time Accumulative Computation Motion Detectors
Directory of Open Access Journals (Sweden)
Saturnino Maldonado-Bascón
2009-12-01
Full Text Available The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in the past years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how to reach real-time performance after using a model described as a finite state machine. This paper introduces two steps towards that direction: (a) A simplification of the general AC method is performed by formally transforming it into a finite state machine. (b) A hardware implementation in FPGA of such a designed AC module, as well as an 8-AC motion detector, providing promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.
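A single accumulative-computation update of the general kind the AC method formalizes can be sketched per pixel: pixels where the frame-to-frame change exceeds a threshold are recharged to a maximum, and the charge decays everywhere else. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def ac_step(charge, frame, prev, thresh=16, q_max=255, q_step=32):
    """One accumulative-computation (AC) update for motion detection.
    Where |frame - prev| > thresh, motion is assumed and the charge is
    recharged to q_max; elsewhere it decays by q_step toward 0.
    thresh, q_max, q_step are illustrative values."""
    moved = np.abs(frame.astype(int) - prev.astype(int)) > thresh
    return np.where(moved, q_max, np.maximum(charge - q_step, 0))
```

Because each pixel only transitions between a finite set of charge levels, the update is naturally describable as a finite state machine, which is what makes the FPGA mapping direct.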
Dynamics of symmetry breaking during quantum real-time evolution in a minimal model system.
Heyl, Markus; Vojta, Matthias
2014-10-31
One necessary criterion for the thermalization of a nonequilibrium quantum many-particle system is ergodicity. It is, however, not sufficient in cases where the asymptotic long-time state lies in a symmetry-broken phase but the initial state of nonequilibrium time evolution is fully symmetric with respect to this symmetry. In equilibrium, one particular symmetry-broken state is chosen as a result of an infinitesimal symmetry-breaking perturbation. From a dynamical point of view the question is: Can such an infinitesimal perturbation be sufficient for the system to establish a nonvanishing order during quantum real-time evolution? We study this question analytically for a minimal model system that can be associated with symmetry breaking, the ferromagnetic Kondo model. We show that after a quantum quench from a completely symmetric state the system is able to break its symmetry dynamically and discuss how these features can be observed experimentally.
Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time
Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.
2018-03-01
A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of Machining, Assembly and Differentiation Stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
Directory of Open Access Journals (Sweden)
Sung-Yen Lin
2014-08-01
Full Text Available Total knee arthroplasty (TKA) in patients with knee arthritis and retained implants in the ipsilateral femur is a challenge for knee surgeons. Use of a conventional intramedullary femoral cutting guide is not practical because of the obstruction of the medullary canal by implants. Previous studies have shown that computer-assisted surgery (CAS) can help restore alignment in conventional TKA for patients with knee arthritis with retained femoral implants or extra-articular deformity, without the need for implant removal or osteotomy. However, little has been published regarding outcomes with the use of navigation in minimally invasive surgery (MIS-TKA) for patients with this complex knee arthritis. MIS has been proven to provide less postoperative pain and faster recovery than conventional TKA, but MIS-TKA in patients with retained femoral implants poses a greater risk of limb malalignment. The purpose of this study is to report the outcome of CAS-MIS-TKA in patients with knee arthritis and retained femoral implants. Between April 2006 and March 2008, eight patients with knee arthritis and retained femoral implants who underwent CAS-MIS-TKA were retrospectively reviewed. Three of the eight patients had extra-articular deformity on preoperative examination, involving two femurs and one tibia. The anteroposterior, lateral, and long-leg weight-bearing radiographs carried out at 3-month follow-up were used to determine the mechanical axis of the lower limb and the position of the components. The mean preoperative femorotibial angle in patients without extra-articular deformity was 3.8° of varus and was corrected to 4.6° of valgus. With the use of navigation in MIS-TKA, the two patients in this study with extra-articular femoral deformity also obtained an ideal postoperative mechanical axis within 2° of normal alignment. Overall, there was a good restoration of postoperative mechanical alignment in all cases, with a mean angle of 0.4° of
A strategy for reducing turnaround time in design optimization using a distributed computer system
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
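The fan-out of independent analysis evaluations that this strategy relies on can be sketched with a worker pool standing in for the network of smaller computers; `run_analysis` is a hypothetical stand-in for the validated analysis code, and in the paper's setting each evaluation would run on a separate machine rather than a thread:

```python
from concurrent.futures import ThreadPoolExecutor

def run_analysis(design_point):
    # Hypothetical stand-in for a validated analysis code evaluated
    # at one design point of the optimization.
    return design_point ** 2

def evaluate_in_parallel(design_points, workers=4):
    """Evaluate independent design points concurrently; results are
    returned in input order, as Executor.map guarantees."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_analysis, design_points))
```

The approach only pays off when the analyses are independent, i.e. when the design problem decomposes, which is exactly the condition the paper identifies.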
Cluster Computing for Embedded/Real-Time Systems
Katz, D.; Kepner, J.
1999-01-01
Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.
Directory of Open Access Journals (Sweden)
Stephen M Plaza
Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders-of-magnitude more time consuming than automated segmentation, often making handling large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
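One simple probabilistic measure of segmentation uncertainty, in the spirit of the paper's first contribution but not the authors' actual measure, is the entropy of each region's label distribution; regions with the highest entropy are sent for manual review first:

```python
import math

def label_entropy(probs):
    # Shannon entropy of a region's label distribution (higher = less certain)
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_review(region_probs, budget):
    """Rank regions by uncertainty and return the `budget` most uncertain
    region ids for manual proofreading.
    region_probs: dict mapping region id -> label probability list.
    (Illustrative data layout; the paper's measure differs.)"""
    ranked = sorted(region_probs,
                    key=lambda r: label_entropy(region_probs[r]),
                    reverse=True)
    return ranked[:budget]
```

Under a fixed review budget this concentrates manual effort where the automated result is least trustworthy, which is the turnaround-time argument the abstract makes.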
A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell
Directory of Open Access Journals (Sweden)
M. Muthukumaran
2012-01-01
Full Text Available Adopting a focused factory is a powerful approach for today's manufacturing enterprises. This paper introduces the basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the technique of the Nagare cell. The Nagare cell is a Japanese concept with more objectives than the cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts as cells, which gives maximum output flexibility for all kinds of low-to-medium- and medium-to-high-volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the development of a heuristic scheduling algorithm in a step-by-step manner. The algorithm states that the summation of the processing times of all products on each machine is calculated first, and then the sums of processing times are sorted by the shortest processing time rule to get the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, this algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and the comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
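The algorithm's first step, totaling processing times and ordering by the shortest-processing-time rule, can be sketched as follows; the data layout (a row per machine) is an assumption, since the paper's notation is not reproduced here:

```python
def spt_machine_order(p):
    """One reading of the heuristic's first step: total the processing
    times of all products on each machine, then sort machines by
    shortest total processing time to obtain the assignment schedule.
    p[m][j] = processing time of product j on machine m (assumed layout)."""
    totals = sorted((sum(times), m) for m, times in enumerate(p))
    return [m for _, m in totals]
```

The resulting order then drives the Nagare cell layout and the subsequent computation of ready and idle times.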
Kim, Byungjoon B J; Delbridge, Theodore R; Kendrick, Dawn B
2017-07-10
Purpose: Two different systems for streaming patients were considered to improve efficiency measures such as waiting times (WTs) and length of stay (LOS) for a current emergency department (ED). A typical fast track area (FTA) and a fast track with a wait time threshold (FTW) were designed, and effectiveness measures were compared from the perspective of the total opportunity cost of all patients' WTs in the ED. The paper aims to discuss these issues. Design/methodology/approach: This retrospective case study used computerized ED patient arrival-to-discharge time logs (between July 1, 2009 and June 30, 2010) to build computer simulation models for the FTA and FTW systems. Various wait time thresholds were applied to stream patients of different acuity levels. The national average wait time for each acuity level was considered as a threshold to stream patients. Findings: The fast track with a wait time threshold (FTW) showed a statistically significantly shorter total wait time than the current system or a typical FTA system. The patient streaming management would improve the service quality of the ED as well as patients' opportunity costs by reducing the total LOS in the ED. Research limitations/implications: The results of this study were based on computer simulation models with some assumptions, such as no transfer times between processes, an assumed arrival distribution of patients, and no deviation of flow patterns. Practical implications: When the streaming of patient flow can be managed based on the wait time before being seen by a physician, it is possible for patients to see a physician within a tolerable wait time, resulting in less crowding in the ED. Originality/value: A new streaming scheme of patient flow may improve the performance of fast track systems.
Directory of Open Access Journals (Sweden)
Kim Jaewhan
2010-04-01
Full Text Available Abstract Background: Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods: Study objectives were to conduct (1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and (2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: (1) Learning (initial use instructions), (2) Preparation (arrange device for use), (3) Administration (actual simulated manikin injection), and (4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results: Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time than Genotropin® Pen (GTP, Pfizer, Inc., New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF: 1.35 minutes; NNP: 2.48 minutes; GTP: 4.11 minutes; HTP: 8.64 minutes). Conclusions: Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs.
International Nuclear Information System (INIS)
Blanca, Ernest
1974-10-01
Alpha-numeric boolean expressions, written in the form of sums of products and/or products of sums with many brackets, may be minimized in two steps: syntactic recognition analysis using an operator precedence grammar, followed by syntactic reduction analysis. These two phases of execution and the different programs of the corresponding machine algorithm are described. Examples of the minimization of alpha-numeric boolean expressions written with the help of brackets, usage notes for the program CHOPIN, and theoretical considerations related to languages, grammars, operator precedence grammars, sequential systems, boolean sets, boolean representations and treatments of boolean expressions, and boolean matrices and their use in grammar theory are discussed. (author) [fr
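Independently of the grammar-based machinery described, any minimization of a boolean expression can be validated by brute-force truth-table comparison over all variable assignments; a minimal sketch:

```python
from itertools import product

def equivalent(f, g, nvars):
    """Check that a minimized boolean function g agrees with the original
    f on every assignment of nvars variables (brute force, O(2^n))."""
    return all(bool(f(*v)) == bool(g(*v))
               for v in product((0, 1), repeat=nvars))
```

For example, the sum of products a·b + a·b̄ minimizes to a, and the check confirms the reduction is sound.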
Energy Technology Data Exchange (ETDEWEB)
Malinowski, Jacek
2004-05-01
A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
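For small systems, the failure probability or reliability obtainable from minimal paths can be cross-checked by inclusion-exclusion over unions of path sets. This brute-force sketch is exponential in the number of paths, unlike the paper's tree/stack algorithm, and assumes independent components as the paper does:

```python
from itertools import combinations

def reliability_from_min_paths(min_paths, p):
    """System reliability from minimal path sets via inclusion-exclusion.
    min_paths: list of sets of component ids; p: dict of component
    reliabilities, components assumed independent. Brute-force check,
    not the paper's algorithm."""
    total = 0.0
    for r in range(1, len(min_paths) + 1):
        for subset in combinations(min_paths, r):
            union = set().union(*subset)
            prob = 1.0
            for c in union:  # all components in the union must work
                prob *= p[c]
            total += (-1) ** (r + 1) * prob
    return total
```

Two components in parallel (paths {1} and {2}) give 1 − (1 − p₁)(1 − p₂), and a single series path {1, 2} gives p₁·p₂, as expected.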
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-12-07
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image subject to data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise-constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness at the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view, low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width-at-half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
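The adaptive weighting idea can be sketched as an anisotropic AwTV penalty in which each neighbor difference is weighted by an exponential of the local gradient, so strong edges are penalized less than noise. The exact functional form and the scale parameter `delta` below are illustrative, not the paper's:

```python
import numpy as np

def awtv(img, delta=0.005):
    """Anisotropic adaptive-weighted TV sketch: each finite difference d
    carries a weight w = exp(-(d/delta)^2), so large intensity jumps
    (edges) contribute little to the penalty. `delta` sets the gradient
    scale that separates noise from edges (illustrative value)."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    wx = np.exp(-((dx / delta) ** 2))
    wy = np.exp(-((dy / delta) ** 2))
    return float(np.sum(np.sqrt(wx) * np.abs(dx))
                 + np.sum(np.sqrt(wy) * np.abs(dy)))
```

On an image containing a sharp step, plain TV charges the full jump magnitude while this weighted penalty charges almost nothing, which is the edge-preserving behavior the abstract describes.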
Software Accelerates Computing Time for Complex Math
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS
Directory of Open Access Journals (Sweden)
Robinson S.V. Hoto
Full Text Available ABSTRACT In this paper we optimize the preparation time of a tubes machine. Tubes are hard paper tubes made by gluing strips of paper that come packed on paper reels, and some reels may be reused between the production of one tube and another. We present a mathematical model for minimizing reel changes and movements, as well as implementations of the Nearest Neighbor heuristic, an improvement of it (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration, using the wxDev-C++ IDE (integrated development environment). The results obtained by simulation improve on the preparation procedure used by the company.
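The Nearest Neighbor heuristic for sequencing tube jobs so that consecutive jobs share reels can be sketched by greedily picking the next tube whose reel set differs least from the current one. Representing each job as a set of reel ids, and using the symmetric difference as the change cost, are assumptions for illustration:

```python
def nearest_neighbor_order(tubes, start=0):
    """Greedy Nearest Neighbor sequencing of tube jobs.
    tubes: list of sets of reel ids used by each tube (assumed encoding).
    At each step, pick the unscheduled tube requiring the fewest reel
    changes (symmetric difference) relative to the current tube."""
    order = [start]
    remaining = set(range(len(tubes))) - {start}
    while remaining:
        cur = order[-1]
        nxt = min(remaining, key=lambda j: len(tubes[cur] ^ tubes[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

The Best Nearest Neighbor variant described in the paper would additionally try each possible starting tube and keep the cheapest sequence.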
Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid
2017-04-01
Emergency department (ED) is the main unit of a hospital that provides emergency treatment. Operating 24 hours a day with a limited number of resources adds to the already chaotic situation in some hospitals in Malaysia. Delays in treatment that cause patients to wait for long periods are among the most frequent complaints against government hospitals. Therefore, the ED management needs a model that can be used to examine and understand resource capacity, which can assist hospital managers in reducing patient waiting time. A simulation model was developed based on 24 hours of data collection. The model, developed using Arena simulation software, replicates the actual ED operations of a public hospital in Selangor, Malaysia. The OptQuest optimization tool in Arena is used to find possible combinations of resource numbers that minimize patient waiting time while increasing the number of patients served. The simulation model was then modified for improvement based on the results from OptQuest. The improved model significantly increases the ED's efficiency, with a 32% reduction in average patient waiting time and a 25% increase in the total number of patients served.
7 CFR 1.603 - How are time periods computed?
2010-01-01
§ 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2...
50 CFR 221.3 - How are time periods computed?
2010-10-01
§ 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the...
6 CFR 13.27 - Computation of time.
2010-01-01
... 6 Domestic Security 1 2010-01-01 2010-01-01 false Computation of time. 13.27 Section 13.27 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued...
International Nuclear Information System (INIS)
Pinto, L.M.V.G.; Pereira, M.V.F.; Nunes, A.
1989-01-01
A computational model for determining an economical transmission expansion plan, based on decomposition techniques, is presented. The algorithm was applied to the Brazilian South System and was able to find an optimal solution with low computational cost. Some extensions of this methodology are being investigated: a probabilistic version and expansion planning under financial restrictions. (C.G.C.). 4 refs, 7 figs
Takiyama, Ken
2015-01-01
Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other, non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed while performing the trained movement as while performing non-trained movements, a recent study reported that the motor memory decays faster while performing the trained movement than while performing non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled within an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered this optimization. Here, I analytically and numerically reveal that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
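The error-minimization-plus-decay dynamics described above can be written as a one-line state-space update per trial. This minimal single-context sketch uses the classic two-process model with illustrative parameter values; it omits the paper's motor-primitive expansion and its effort-optimization term:

```python
import numpy as np

def simulate_adaptation(n_trials=100, retention=0.95, learning_rate=0.2,
                        perturbation=1.0):
    """Classic two-process state-space model of motor adaptation: the motor
    memory x is partially forgotten each trial (retention < 1) and updated
    in proportion to the movement error. Parameter values are illustrative."""
    x, history = 0.0, []
    for _ in range(n_trials):
        error = perturbation - x        # desired minus actual compensation
        x = retention * x + learning_rate * error
        history.append(x)
    return np.array(history)

curve = simulate_adaptation()
# The memory converges to lr*p / (1 - retention + lr) = 0.2/0.25 = 0.8:
# error minimization and memory decay balance at incomplete adaptation.
fixed_point = 0.2 * 1.0 / (1 - 0.95 + 0.2)
```

The fixed point below full compensation is exactly the signature of the decay term: without forgetting (retention = 1), the memory would converge to the full perturbation.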
Directory of Open Access Journals (Sweden)
Ken Takiyama
2015-02-01
Full Text Available Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other, non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed while performing the trained movement as while performing non-trained movements, a recent study reported that the motor memory decays faster while performing the trained movement than while performing non-trained movements, i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled within an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered this optimization. Here, I analytically and numerically reveal that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
Directory of Open Access Journals (Sweden)
Naci Üngür
2015-06-01
Full Text Available Purpose: The aim of this study is to retrospectively assess the contribution of minimal preparation CT to the diagnosis of colorectal cancer in patients who were referred to the department of gastroenterology with a prediagnosis of colorectal cancer and had a subsequent colonoscopically visible mass with histopathological proof. Materials and methods: 100 consecutive cases referred from the department of gastroenterology between September 2008 and December 2012 with a confirmed colonoscopic mass were included in our study (age range: 18-90; 41 females and 59 males). Radiological findings were statistically compared with pathological findings as the gold standard. Results: Of these patients with a colonoscopically visible mass, minimal preparation CT revealed asymmetric wall thickening (n=89), extracolonic mass (n=3), symmetric wall thickening (n=2), and normal wall thickness (n=6). 79 cases had enlarged lymph nodes in the pericolonic mesenteric fat tissue, while the remaining 21 had no enlarged lymph nodes. 54 cases had stranding in the pericolonic mesenteric fat tissue, and the remaining individuals showed normal fat density. The masses were located in the rectum (n=54), sigmoid colon (n=17), descending colon (n=10), transverse colon (n=2), ascending colon (n=14), and cecum (n=3). Conclusion: For the investigation of colorectal and extracolonic masses we recommend minimal preparation CT, which is highly sensitive and better accepted by patients.
COMPUTATIONAL MODELS USED FOR MINIMIZING THE NEGATIVE IMPACT OF ENERGY ON THE ENVIRONMENT
Directory of Open Access Journals (Sweden)
Oprea D.
2012-04-01
Full Text Available Optimizing an energy system is a problem that has been studied extensively for many years by scientists. The problem can be approached from different viewpoints and with different computer programs. This paper characterizes one of the calculation methods used in Europe for modelling and optimizing power systems. The method is based on reducing the impact of the energy system on the environment. The computer program used and characterized in this article is GEMIS.
International Nuclear Information System (INIS)
Nakagawa, Yuri; Matsumura, Kaname; Iwasa, Motoh; Kaito, Masahiko; Adachi, Yukihiko; Takeda, Kan
2004-01-01
The early diagnosis and treatment of cognitive impairment in cirrhotic patients is needed to improve the patients' daily living. In this study, alterations of regional cerebral blood flow (rCBF) were evaluated in cirrhotic patients using statistical parametric mapping (SPM). The relationships between rCBF and neuropsychological test, severity of disease and biochemical data were also assessed. {sup 99m}Tc-ethyl cysteinate dimer single photon emission computed tomography was performed in 20 patients with non-alcoholic liver cirrhosis without overt hepatic encephalopathy (HE) and in 20 age-matched healthy subjects. Neuropsychological tests were performed in 16 patients; of these 7 had minimal HE. Regional CBF images were also analyzed in these groups using SPM. On SPM analysis, cirrhotic patients showed regions of significant hypoperfusion in the superior and middle frontal gyri, and inferior parietal lobules compared with the control group. These areas included parts of the premotor and parietal associated areas of the cortex. Among the cirrhotic patients, those with minimal HE had regions of significant hypoperfusion in the cingulate gyri bilaterally as compared with those without minimal HE. Abnormal function in the above regions may account for the relatively selective neuropsychological deficits in the cognitive status of patients with cirrhosis. These findings may be important in the identification and management of cirrhotic patients with minimal HE. (author)
Energy Technology Data Exchange (ETDEWEB)
Huppertz, Alexander, E-mail: Alexander.Huppertz@charite.de [Imaging Science Institute Charite Berlin, Robert-Koch-Platz 7, D-10115 Berlin (Germany); Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Radmer, Sebastian, E-mail: s.radmer@immanuel.de [Department of Orthopedic Surgery and Rheumatology, Immanuel-Krankenhaus, Koenigstr. 63, D-14109, Berlin (Germany); Asbach, Patrick, E-mail: Patrick.Asbach@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Juran, Ralf, E-mail: ralf.juran@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Schwenke, Carsten, E-mail: carsten.schwenke@scossis.de [Biostatistician, Scossis Statistical Consulting, Zeltinger Str. 58G, D-13465 Berlin (Germany); Diederichs, Gerd, E-mail: gerd.diederichs@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Hamm, Bernd, E-mail: Bernd.Hamm@charite.de [Department of Radiology, Medical Physics, Charite-University Hospitals of Berlin, Chariteplatz 1, D-10117 Berlin (Germany); Sparmann, Martin, E-mail: m.sparmann@immanuel.de [Department of Orthopedic Surgery and Rheumatology, Immanuel-Krankenhaus, Koenigstr. 63, D-14109, Berlin (Germany)
2011-06-15
Computed tomography (CT) was used for preoperative planning of minimally invasive total hip arthroplasty (THA). 92 patients (50 males, 42 females, mean age 59.5 years) with a mean body-mass-index (BMI) of 26.5 kg/m{sup 2} underwent 64-slice CT to depict the pelvis, the knee and the ankle in three independent acquisitions using combined x-, y-, and z-axis tube current modulation. Arthroplasty planning was performed using 3D-Hip Plan (Symbios, Switzerland) and patient radiation dose exposure was determined. The effects of BMI, gender, and contralateral THA on the effective dose were evaluated by an analysis-of-variance. A process-cost-analysis from the hospital perspective was done. All CT examinations were of sufficient image quality for 3D-THA planning. A mean effective dose of 4.0 mSv (SD 0.9 mSv) was calculated, modeled as a function of BMI (p < 0.0001). Neither the presence of a contralateral THA (9/92 patients; p = 0.15) nor the difference between males and females (p = 0.08) was significant. Personnel involved were the radiologist (4 min), the surgeon (16 min), the radiographer (12 min), and administrative personnel (4 min). A CT operation time of 11 min and direct per-patient costs of 52.80 Euro were recorded. Preoperative CT for THA was associated with a slight and justifiable increase of radiation exposure in comparison to conventional radiographs and low per-patient costs.
International Nuclear Information System (INIS)
Sidky, Emil Y; Pan Xiaochuan
2008-01-01
An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.
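The alternating structure described above (a POCS data-consistency and non-negativity step interleaved with adaptive-step TV descent) can be sketched on a toy problem. This uses a dense random system matrix rather than a cone-beam projector, and the adaptive-step logic is heavily simplified relative to the authors' ASD-POCS; it is an illustration of the idea, not their implementation:

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # negative divergence of the normalized gradient field
    return -((gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0)))

def asd_pocs_toy(A, b, shape, n_iter=50, tv_steps=5, tv_scale=0.05):
    """Alternate (1) a data-consistency gradient step plus non-negativity
    projection (the POCS part) with (2) a few TV steepest-descent steps
    whose size adapts to the size of the data-consistency update."""
    x = np.zeros(shape)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x_prev = x.copy()
        r = A @ x.ravel() - b
        x = np.clip(x - step * (A.T @ r).reshape(shape), 0.0, None)
        dp = np.linalg.norm(x - x_prev)          # size of the data step
        for _ in range(tv_steps):                # adaptive TV descent
            g = tv_grad(x)
            gn = np.linalg.norm(g)
            if gn > 0:
                x = x - tv_scale * dp * g / gn
        # (the full ASD-POCS also adapts tv_scale between outer iterations)
    return x

# Undersampled toy problem: recover a piecewise-constant 8x8 image
rng = np.random.default_rng(0)
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
A = rng.standard_normal((40, 64))
b = A @ truth.ravel()
recon = asd_pocs_toy(A, b, (8, 8))
```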
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Real-time data-intensive computing
Energy Technology Data Exchange (ETDEWEB)
Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others
2016-07-27
Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from
minimUML: A Minimalist Approach to UML Diagramming for Early Computer Science Education
Turner, Scott A.; Perez-Quinones, Manuel A.; Edwards, Stephen H.
2005-01-01
In introductory computer science courses, the Unified Modeling Language (UML) is commonly used to teach basic object-oriented design. However, there appears to be a lack of suitable software to support this task. Many of the available programs that support UML focus on developing code and not on enhancing learning. Programs designed for…
Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems
Directory of Open Access Journals (Sweden)
Syed Imtiaz Husain
2009-01-01
Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented as a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening, or a time domain equalizer (TEQ), can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.
A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange
Douik, Ahmed S.
2014-05-11
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session as self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that can be obtained with the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
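The potential-game machinery the paper relies on can be illustrated with a deliberately simpler congestion game (the actual IDNC packet-selection game is not modeled here). The key property is the one the abstract describes: because every unilateral payoff improvement strictly decreases a global potential, asynchronous best responses must terminate in a pure Nash equilibrium:

```python
import random

def best_response_dynamics(n_players=12, n_channels=3, seed=0):
    """Each player picks a channel; its cost is that channel's load.
    Any unilateral improvement strictly decreases the global potential
    sum_c load_c*(load_c+1)/2, so repeated best responses terminate
    in a pure Nash equilibrium (here, perfectly balanced loads)."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_players)]
    load = lambda c: choice.count(c)

    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            # cost of channel c if player i switched there (load(c) already
            # includes i only when c is i's current channel)
            cost = lambda c: load(c) + 1 if c != choice[i] else load(c)
            best = min(range(n_channels), key=cost)
            if cost(best) < cost(choice[i]):
                choice[i] = best
                improved = True
    return choice

eq = best_response_dynamics()
print(sorted(eq.count(c) for c in range(3)))  # [4, 4, 4]: balanced equilibrium
```

The paper's utility function plays the same structural role: individual improvements are aligned with a shared potential, so no central controller is needed to reach a good collective operating point.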
Kotlyar, Michael; Lindgren, Bruce R; Vuchetich, John P; Le, Chap; Mills, Anne M; Amiot, Elizabeth; Hatsukami, Dorothy K
2017-08-01
Smokers are often advised to use a nicotine lozenge when craving or withdrawal symptoms occur, which may be too late to prevent lapses. This study assessed whether nicotine lozenge use prior to a common smoking trigger can minimize trigger-induced increases in craving and withdrawal symptoms. Eighty-four smokers completed two laboratory sessions in random order. At one session, a nicotine lozenge was given immediately after a stressor (to approximate currently recommended use, i.e., after craving and withdrawal symptoms occur); at the other session, subjects were randomized to receive a nicotine lozenge at time points ranging from immediately before to 30 min prior to the stressor. Withdrawal symptoms and urge to smoke were measured using the Minnesota Nicotine Withdrawal Scale and the Questionnaire of Smoking Urges (QSU). Relative to receiving the lozenge after the stressor, a smaller increase in pre-stressor to post-stressor withdrawal symptom scores occurred when the lozenge was used immediately (p=0.03) and 10 min prior (p=0.044) to the stressor. Results were similar for factors 1 and 2 of the QSU when the lozenge was used immediately prior to the stressor. Use of a nicotine lozenge prior to a smoking trigger can thus decrease trigger-induced craving and withdrawal symptoms. Future studies are needed to determine whether such use would increase cessation rates. Clinicaltrials.gov # NCT01522963. Copyright © 2017 Elsevier Ltd. All rights reserved.
Minimal cardiac transit-times in the diagnosis of heart disease
International Nuclear Information System (INIS)
Freundlieb, C.; Vyska, K.; Hoeck, A.; Schicha, H.; Becker, V.; Feinendegen, L.E.
1976-01-01
Using Indium-113m and the Gamma Retina V (Fucks-Knipping Camera), the minimal cardiac transit times (MTTs) were measured radiocardiographically from the right auricle to the aortic root. This analysis served to determine the relation between stroke volume and the segment volume of the part of circulation between the right auricle and the aortic root. In 39 patients with myocardial insufficiency of different clinical degree the effectiveness of digitalization was, up to a period of 5 years, measured by means of the volume relation mentioned above. The following conclusions can be drawn from the results: digitalization of patients with myocardial insufficiency leads to an improvement of the impaired relation of central volumes. In patients with diminished cardiac reserve the improvement is drastic and often results in a nearly complete normalization. The data remain constant during therapy even for an observation period of 5 years. Digitalization of patients with congestive heart failure only leads to a partial improvement. In contrast to patients with diminished cardiac reserve this effect is temporary. The different behaviour of the relation between stroke volume and segment volume in patients with diminished cardiac reserve and congestive heart failure under prolonged administration of digitalis points to the necessity of treatment with digitalis in the early stage of myocardial disease. (orig.) [de
A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange
Douik, Ahmed S.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim; Sorour, Sameh; Tembine, Hamidou
2014-01-01
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session as self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that can be obtained with the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
Biondi, Antonio; Grosso, Giuseppe; Mistretta, Antonio; Marventano, Stefano; Toscano, Chiara; Drago, Filippo; Gangi, Santi; Basile, Francesco
2013-01-01
In the late '80s, the successes of laparoscopic surgery for gallbladder disease laid the foundations for the modern use of this surgical technique in a variety of diseases. In the last 20 years, laparoscopic colorectal surgery has become a popular treatment option for colorectal cancer patients. Many studies emphasized its benefits, reporting significant advantages of the laparoscopic approach compared with open surgery: reduced blood loss, early return of intestinal motility, lower overall morbidity, and shorter duration of hospital stay, leading to a general agreement on laparoscopic surgery as an alternative to conventional open surgery for colon cancer. The reduced hospital stay may also decrease the cost of laparoscopic surgery for colorectal cancer, despite the higher operative cost compared with open surgery. The average reduction in total direct costs is difficult to define due to rising costs over time, which makes comparisons between studies conducted over a span of more than 10 years challenging. However, despite the theoretical advantages of laparoscopic surgery, it is still not considered the standard treatment for colorectal cancer patients due to technical limitations or patient characteristics that may affect short- and long-term outcomes. The laparoscopic approach to colectomy is slowly gaining acceptance for the management of colorectal pathology. Laparoscopic surgery for colon cancer demonstrates better short-term outcomes, oncologic safety, and long-term outcomes equivalent to open surgery. For rectal cancer, the laparoscopic technique can be more complex depending on the tumor location. The advantages of minimally invasive surgery may translate into better quality of care for oncological patients and lead to increased cost savings through the introduction of active enhanced recovery programs, which are likely cost-effective from the perspective of hospital health-care providers.
Real time computer system with distributed microprocessors
International Nuclear Information System (INIS)
Heger, D.; Steusloff, H.; Syrbe, M.
1979-01-01
The usual centralized structure of computer systems, especially of process computer systems, cannot take sufficient advantage of the progress of very-large-scale integrated semiconductor technology with respect to increasing reliability and performance and decreasing cost, especially of the external periphery. This, together with the increasing demands on process control systems, has led the authors to examine the structure of such systems in general and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem-solving with decentrally controlled bus lines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and a processor and network operating system. The necessary design principles are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by results of 2 PDV projects (modular operating systems, input/output colour screen system as control panel), and tested by applying the system to the control of 28 pit furnaces of a steelworks. (orig.) [de
Spying on real-time computers to improve performance
International Nuclear Information System (INIS)
Taff, L.M.
1975-01-01
The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
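A minimal pure-Python analogue of the sampled program-counter histogram might look like the following. It samples function names rather than raw program-counter values, and uses a timer thread rather than the real-time clock the paper exploits; both substitutions are for illustration only:

```python
import sys
import threading
import time
from collections import Counter

def sample_program_counter(target, interval=0.005, duration=0.3):
    """Sketch of a sampling profiler: a timer thread periodically records
    which function the main thread is executing. Hot functions accumulate
    the most samples, pointing at the code worth optimizing."""
    histogram = Counter()
    main_id = threading.get_ident()
    done = threading.Event()

    def sampler():
        while not done.is_set():
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                histogram[frame.f_code.co_name] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    start = time.time()
    while time.time() - start < duration:
        target()                      # the workload being profiled
    done.set()
    t.join()
    return histogram

def busy_work():
    sum(i * i for i in range(20000))

hist = sample_program_counter(busy_work)
print(hist.most_common(3))
```

With a hardware real-time clock driving the sampling, as in the paper, the same histogram can be collected with negligible perturbation of the profiled program.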
Contribution to the minimization of time for the solution of algebraic differential equations system
International Nuclear Information System (INIS)
Michael, Samir.
1982-11-01
This note deals with the solution of the large algebraic-differential systems that arise in the physical sciences, especially in electronics and nuclear physics. The theoretical aspects of the stability of multistep methods are presented in detail. The stability condition is developed, and we present our own conditions of stability. These conditions give rise to many new formulae with very small truncation errors. However, for real-time simulation it is necessary to obtain a very high computation speed. For this purpose, we have considered a multiprocessor machine and investigated the parallelization of the algorithm of the generalized GEAR method. For linear systems, the Gauss-Jordan method is used with some modifications. A new algorithm is presented for parallel matrix multiplication. This research work has been applied to the solution of a system of equations corresponding to an experiment on gamma thermometry in a nuclear reactor (four thermometers in this case) [fr
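The Gauss-Jordan step mentioned for the linear-system part can be sketched in its sequential textbook form (the note's parallel modifications are not reproduced here):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Gauss-Jordan elimination with partial pivoting: reduce the augmented
    matrix [A | b] all the way to [I | x], so the solution can be read off
    directly without a back-substitution pass."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, b.reshape(n, 1)])
    for col in range(n):
        # partial pivoting: bring the largest remaining entry to the diagonal
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, -1]

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
print(gauss_jordan_solve(A, b))  # solution: x = 0.8, y = 1.4
```

Each column elimination touches every row independently, which is why the method parallelizes well across processors, the property the note exploits.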
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total-variation-based methods in terms of preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Reconstructions from simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
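The generalized p-shrinkage mapping can be illustrated with Chartrand-style p-shrinkage, used here as an assumed stand-in for the paper's operator (the paper's exact mapping may differ in details):

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage mapping:
        S_p(x) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    For p = 1 this reduces to ordinary soft thresholding; for p < 1 it
    penalizes small entries more and large entries less, a closer proxy
    for l0-style sparsity."""
    ax = np.abs(x)
    # guard |x|**(p-1) at zero; the max(..., 0) makes that branch zero anyway
    shrunk = ax - lam ** (2 - p) * np.where(ax > 0, ax, 1.0) ** (p - 1)
    return np.sign(x) * np.maximum(shrunk, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(p_shrink(x, 1.0, 1.0))   # soft threshold: entries become -2, 0, 0, 0, 2
print(p_shrink(x, 1.0, 0.5))   # p < 1: large entries are shrunk less
```

Inside the alternating-minimization loop, this mapping is applied elementwise to the split gradient variables, which is what makes each inner subproblem explicit.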
Population dynamics of minimally cognitive individuals. Part 2: Dynamics of time-dependent knowledge
Energy Technology Data Exchange (ETDEWEB)
Schmieder, R.W.
1995-07-01
The dynamical principle for a population of interacting individuals with mutual pairwise knowledge, presented by the author in a previous paper for the case of constant knowledge, is extended to include the possibility that the knowledge is time-dependent. Several mechanisms are presented by which the mutual knowledge, represented by a matrix K, can be altered, leading to dynamical equations for K(t). The author presents various examples of the transient and long-time asymptotic behavior of K(t) for populations of relatively isolated individuals interacting infrequently in local binary collisions. Among the effects observed in the numerical experiments are knowledge diffusion, learning transients, and fluctuating equilibria. This approach is most appropriate for small populations of complex individuals such as simple animals, robots, computer networks, agent-mediated traffic, simple ecosystems, and games. Evidence of metastable states and intermittent switching leads the author to envision a spectroscopy associated with such transitions that is independent of the specific physical individuals and the population. Such spectra may serve as good lumped descriptors of the collective emergent behavior of large classes of populations in which mutual knowledge is an important part of the dynamics.
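Since the abstract does not give the specific update mechanisms for K(t), the following is an assumed illustration of the described setting: relatively isolated individuals, infrequent local binary collisions that build mutual knowledge, and slow forgetting in between, producing a fluctuating equilibrium. All rates are invented:

```python
import random

def evolve_knowledge(n=20, steps=2000, learn=0.3, decay=0.001, seed=0):
    """Toy time-dependent mutual-knowledge matrix K(t): every step, all
    entries decay slightly (forgetting), then one random pair collides and
    their mutual knowledge relaxes toward full knowledge (1.0). Knowledge
    diffusion through intermediaries is deliberately omitted."""
    rng = random.Random(seed)
    K = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        for i in range(n):                     # global slow forgetting
            for j in range(n):
                K[i][j] *= (1.0 - decay)
        i, j = rng.sample(range(n), 2)         # local binary collision
        K[i][j] += learn * (1.0 - K[i][j])
        K[j][i] += learn * (1.0 - K[j][i])
    return K

K = evolve_knowledge()
mean_k = sum(map(sum, K)) / (20 * 19)          # average off-diagonal knowledge
```

The balance between the per-collision gain and the per-step decay sets the level around which the entries of K(t) fluctuate, a simple instance of the fluctuating equilibria the paper reports.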
Ward, Erin P; Shiavazzi, Daniele; Sood, Divya; Marsden, Allison; Lane, John; Owens, Erik; Barleben, Andrew
2017-01-01
Currently, the gold standard diagnostic examination for significant aortoiliac lesions is angiography. Fractional flow reserve (FFR) has a growing body of literature in coronary artery disease as a minimally invasive diagnostic procedure. Improvements in numerical hemodynamics have allowed for an accurate and minimally invasive approach to estimating FFR, utilizing cross-sectional imaging. We aim to demonstrate a similar approach to aortoiliac occlusive disease (AIOD). A retrospective review evaluated 7 patients with claudication and cross-sectional imaging showing AIOD. FFR was subsequently measured during conventional angiogram with pull-back pressures in a retrograde fashion. To estimate computed tomography (CT) FFR, CT angiography (CTA) image data were analyzed using the SimVascular software suite to create a computational fluid dynamics model of the aortoiliac system. Inlet flow conditions were derived based on cardiac output, while 3-element Windkessel outlet boundary conditions were optimized to match the expected systolic and diastolic pressures, with outlet resistance distributed based on Murray's law. The data were evaluated with a Student's t-test and receiver operating characteristic curve. All patients had evidence of AIOD on CT and FFR was successfully measured during angiography. The modeled data were found to have high sensitivity and specificity between the measured and CT FFR (P = 0.986, area under the curve = 1). The average difference between the measured and calculated FFRs was 0.136, with a range from 0.03 to 0.30. CT FFR successfully identified aortoiliac lesions with significant pressure drops that were identified with angiographically measured FFR. CT FFR has the potential to provide a minimally invasive approach to identify flow-limiting stenosis for AIOD. Copyright © 2016 Elsevier Inc. All rights reserved.
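The 3-element Windkessel outlet boundary condition mentioned in the abstract can be illustrated with a toy forward-Euler integration. This is a sketch under assumed parameter values and arbitrary units, not the SimVascular configuration: a proximal resistance Rp in series with a parallel compliance C and distal resistance Rd.

```python
def windkessel3(q, dt, Rp, C, Rd, p_out=0.0):
    """Forward-Euler integration of a 3-element Windkessel outlet.

    q      : sequence of flow samples entering the outlet
    Rp, Rd : proximal and distal resistances; C : compliance
    Returns the outlet pressure at each sample (units are hypothetical).
    Governing equations: p = p_c + q * Rp,  C dp_c/dt = q - (p_c - p_out) / Rd
    """
    pc = 0.0          # pressure across the compliance chamber
    pressures = []
    for qi in q:
        pressures.append(pc + qi * Rp)
        pc += dt * (qi - (pc - p_out) / Rd) / C
    return pressures

# With constant inflow, pressure settles toward q * (Rp + Rd), as expected
# for a purely resistive steady state
p = windkessel3([1.0] * 20000, dt=0.001, Rp=0.1, C=0.5, Rd=0.9)
print(p[-1])
```

In a CT-FFR pipeline of the kind described, such outlet models are tuned so the simulated pressures match the expected systolic/diastolic targets; FFR is then a ratio of pressures distal and proximal to the lesion.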
An applied optimization based method for line planning to minimize travel time
DEFF Research Database (Denmark)
Bull, Simon Henry; Rezanova, Natalia Jurjevna; Lusby, Richard Martin
The line planning problem in rail is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...
29 CFR 1921.22 - Computation of time.
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Computation of time. 1921.22 Section 1921.22 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... WORKERS' COMPENSATION ACT Miscellaneous § 1921.22 Computation of time. Sundays and holidays shall be...
43 CFR 45.3 - How are time periods computed?
2010-10-01
... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false How are time periods computed? 45.3... IN FERC HYDROPOWER LICENSES General Provisions § 45.3 How are time periods computed? (a) General... run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday...
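The counting rule excerpted above (the day the period begins is not included, the last day is included, and a deadline landing on a Saturday, Sunday, or holiday rolls forward) can be sketched as below. The `HOLIDAYS` set is a hypothetical placeholder; a real implementation would load the agency-observed federal holidays.

```python
import datetime as dt

HOLIDAYS = set()  # hypothetical: fill with observed federal holidays as dt.date

def period_end(act_date, days):
    """Deadline per the excerpted counting rule: the day of the triggering
    act is excluded, the last day of the period is included, and a deadline
    falling on a weekend or holiday moves to the next business day."""
    end = act_date + dt.timedelta(days=days)   # act_date itself not counted
    while end.weekday() >= 5 or end in HOLIDAYS:  # 5 = Saturday, 6 = Sunday
        end += dt.timedelta(days=1)
    return end

# A 10-day period triggered on Monday 2010-10-04 ends Thursday 2010-10-14;
# one triggered on Wednesday 2010-10-06 would end Saturday 2010-10-16,
# so it rolls forward to Monday 2010-10-18
print(period_end(dt.date(2010, 10, 4), 10))
print(period_end(dt.date(2010, 10, 6), 10))
```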
5 CFR 890.101 - Definitions; time computations.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Definitions; time computations. 890.101....101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee... in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate...
Time computations in anuran auditory systems
Directory of Open Access Journals (Sweden)
Gary J Rose
2014-05-01
Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation, direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses. Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.
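The interval-counting behavior described above, spiking only after a threshold number of pulses arrive at optimal intervals, can be caricatured in a few lines. This is a toy phenomenological model for illustration, not a biophysical one; the reset-on-bad-interval rule is an assumption consistent with the review's description.

```python
def interval_counting_response(pulse_times, opt_min, opt_max, threshold):
    """Toy interval-counting neuron: count consecutive inter-pulse intervals
    falling in the optimal range [opt_min, opt_max]; emit a 'spike' at each
    pulse once `threshold` such intervals have accumulated. A single
    off-range interval resets the count (assumed behavior)."""
    count = 0
    spikes = []
    for a, b in zip(pulse_times, pulse_times[1:]):
        count = count + 1 if opt_min <= b - a <= opt_max else 0
        if count >= threshold:
            spikes.append(b)
    return spikes

# pulses every 20 ms: responses begin only after 3 optimal intervals
print(interval_counting_response([0, 20, 40, 60, 80, 100], 15, 25, 3))  # -> [60, 80, 100]
```

A long-interval cell would be the complementary caricature: responding to each interval longer than some cutoff, with no counting requirement.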
Advanced real time radioscopy and computed tomography
International Nuclear Information System (INIS)
Sauerwein, Ch.; Nuding, W.; Grimm, R.; Wiacker, H.
1996-01-01
The paper describes three x-ray inspection systems. One radioscopic system is designed for the inspection of castings. The next integrates a radioscopic and a tomographic mode. The radioscopy has a high resolution camera and real time image processor. Radiation sources are a 450 kV industrial and a 200 kV microfocus tube. The third system is a tomographic system with 30 scintillation detectors for the inspection of nuclear waste containers. (author)
Real time computer controlled weld skate
Wall, W. A., Jr.
1977-01-01
A real-time, adaptive control, automatic welding system was developed. This system utilizes the general-case geometrical relationships between a weldment and a weld skate to precisely maintain constant weld speed and torch angle along a contoured workpiece. The system is compatible with the gas tungsten arc weld process or can be adapted to other weld processes. Heli-arc cutting and machine tool routing operations are possible applications.
DEFF Research Database (Denmark)
Lauridsen, Mette Enok Munk; Jepsen, Peter; Vilstrup, Hendrik
2011-01-01
Abstract Minimal hepatic encephalopathy (MHE) is intermittently present in up to 2/3 of patients with chronic liver disease. It impairs their daily living and can be treated. However, there is no consensus on diagnostic criteria except that psychometric methods are required. We compared two easy...... appropriately to a sensory stimulus. The choice of test depends on the information needed in the clinical and scientific care and study of the patients....
DEFF Research Database (Denmark)
Oberscheider, Marco; Zazgornik, Jan; Henriksen, Christian Bugge
2013-01-01
Efficient transport of timber for supplying industrial conversion and biomass power plants is a crucial factor for competitiveness in the forest industry. Throughout the recent years minimizing driving times has been the main focus of optimizations in this field. In addition to this aim the objec...
Klaassen, M.R.J.; Lindstrom, A.
1996-01-01
Lindstrom & Alerstam (1992 Am. Nat. 140, 477-491) presented a model that predicts optimal departure fuel loads as a function of the rate of fuel deposition in time-minimizing migrants. The basis of the model is that the coverable distance per unit of fuel deposited, diminishes with increasing fuel
Recent achievements in real-time computational seismology in Taiwan
Lee, S.; Liang, W.; Huang, B.
2012-12-01
Real-time computational seismology is now achievable; it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and a moment tensor inversion (CMT) technique. The real-time online earthquake simulation service (ROS) is also ready to open to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point-source information; ROS completes a 3-D simulation in real time). For more information, visit the real-time computational seismology earthquake report webpage (RCS).
Directory of Open Access Journals (Sweden)
Jianbo Qian
2013-01-01
Full Text Available We consider single machine scheduling problems with learning/deterioration effects and time-dependent processing times, with due date assignment consideration, and our objective is to minimize the weighted number of tardy jobs. By reducing all versions of the problem to an assignment problem, we solve them in O(n^4) time. For some important special cases, the time complexity can be improved to O(n^2) using dynamic programming techniques.
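The reduction described above maps each scheduling decision to an assignment of jobs to sequence positions. The sketch below solves such an assignment by brute force on a tiny instance; the cost matrix is arbitrary illustrative data (in the paper it would encode each job's weighted-tardiness contribution at each position), and the O(n^4) bound would come from the Hungarian algorithm rather than enumeration.

```python
from itertools import permutations

def solve_assignment(cost):
    """Brute-force minimum-cost assignment of n jobs to n positions.

    cost[j][k] = penalty of scheduling job j in sequence position k
    (hypothetical data here; the paper derives these entries from the
    learning/deterioration and due-date model). Returns (perm, cost)
    where perm[k] is the job placed in position k.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[perm[k]][k] for k in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
perm, c = solve_assignment(cost)
print(perm, c)  # minimum total cost is 12
```

Enumeration is O(n!) and only viable for toy sizes; the point is the problem shape, i.e. one job per position with position-dependent costs.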
Time-of-Flight Cameras in Computer Graphics
DEFF Research Database (Denmark)
Kolb, Andreas; Barth, Erhardt; Koch, Reinhard
2010-01-01
Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real-time geometry...
29 CFR 4245.8 - Computation of time.
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Computation of time. 4245.8 Section 4245.8 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION INSOLVENCY, REORGANIZATION, TERMINATION, AND OTHER RULES APPLICABLE TO MULTIEMPLOYER PLANS NOTICE OF INSOLVENCY § 4245.8 Computation of...
New real-time MR image-guided surgical robotic system for minimally invasive precision surgery
Energy Technology Data Exchange (ETDEWEB)
Hashizume, M.; Yasunaga, T.; Konishi, K. [Kyushu University, Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Fukuoka (Japan); Tanoue, K.; Ieiri, S. [Kyushu University Hospital, Department of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Kishi, K. [Hitachi Ltd, Mechanical Engineering Research Laboratory, Hitachinaka-Shi, Ibaraki (Japan); Nakamoto, H. [Hitachi Medical Corporation, Application Development Office, Kashiwa-Shi, Chiba (Japan); Ikeda, D. [Mizuho Ikakogyo Co. Ltd, Tokyo (Japan); Sakuma, I. [The University of Tokyo, Graduate School of Engineering, Bunkyo-Ku, Tokyo (Japan); Fujie, M. [Waseda University, Graduate School of Science and Engineering, Shinjuku-Ku, Tokyo (Japan); Dohi, T. [The University of Tokyo, Graduate School of Information Science and Technology, Bunkyo-Ku, Tokyo (Japan)
2008-04-15
To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel and with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. A newly developed MR image-guided surgical robotic system is feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)
New real-time MR image-guided surgical robotic system for minimally invasive precision surgery
International Nuclear Information System (INIS)
Hashizume, M.; Yasunaga, T.; Konishi, K.; Tanoue, K.; Ieiri, S.; Kishi, K.; Nakamoto, H.; Ikeda, D.; Sakuma, I.; Fujie, M.; Dohi, T.
2008-01-01
To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel and with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. A newly developed MR image-guided surgical robotic system is feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)
MINIMALLY INVASIVE SURGERY FOR GASTRIC CANCER: TIME TO CHANGE THE PARADIGM.
Barchi, Leandro Cardoso; Jacob, Carlos Eduardos; Bresciani, Cláudio José Caldas; Yagi, Osmar Kenji; Mucerino, Donato Roberto; Lopasso, Fábio Pinatel; Mester, Marcelo; Ribeiro-Júnior, Ulysses; Dias, André Roncon; Ramos, Marcus Fernando Kodama Pertille; Cecconello, Ivan; Zilberstein, Bruno
2016-01-01
Minimally invasive surgery, widely used to treat benign disorders of the digestive system, has become the focus of intense study in recent years in the field of surgical oncology. Since then, the experience with this kind of approach has grown, aiming to provide the same oncological outcomes and survival as conventional surgery. Regarding gastric cancer, surgery is still considered the only curative treatment, considering the extent of resection and lymphadenectomy performed. Conventional surgery remains the main modality performed worldwide. Notwithstanding, the role of the minimally invasive approach is yet to be clarified. To evaluate and summarize the current status of minimally invasive resection of gastric cancer, a literature review was performed using Medline/PubMed, Cochrane Library and SciELO with the following headings: gastric cancer, minimally invasive surgery, robotic gastrectomy, laparoscopic gastrectomy, stomach cancer. The language used for the research was English. 28 articles were considered, including randomized controlled trials, meta-analyses, and prospective and retrospective cohort studies. Minimally invasive gastrectomy may be considered a technical option in the treatment of early gastric cancer. As for advanced cancer, recent studies have demonstrated the safety and feasibility of the laparoscopic approach. Robotic gastrectomy will probably improve outcomes obtained with laparoscopy. However, high cost is still a barrier to its use on a large scale.
Noise-constrained switching times for heteroclinic computing
Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc
2017-03-01
Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states, the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from those of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times as a function of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, as in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for
Stability control for approximate implicit time-stepping schemes with minimal residual iterations
Botchev, M.A.; Sleijpen, G.L.G.; Vorst, H.A. van der
1997-01-01
Implicit schemes for the integration of ODEs are popular when stability is more of a concern than accuracy, for instance for the computation of a steady state solution. However, in particular for very large systems the solution of the involved linear systems may be very expensive. In this
Stability control for approximate implicit timestepping schemes with minimal residual iterations
Botchev, M.A.; Sleijpen, G.L.G.; Vorst, H.A. van der
1997-01-01
Implicit schemes for the integration of ODEs are popular when stability is more of a concern than accuracy, for instance for the computation of a steady state solution. However, in particular for very large systems the solution of the involved linear systems may be very expensive. In this
Directory of Open Access Journals (Sweden)
Guanlong Deng
2016-01-01
Full Text Available This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, the solution in the algorithm is represented as a discrete job permutation to directly convert to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed bee and onlooker bee and introduce a combined local search exploring both insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances, and computations and comparisons show that the proposed algorithm is not only capable of solving the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also capable of performing better than two recently proposed discrete artificial bee colony algorithms.
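The "best insertion" scheme described above can be sketched generically: remove a job from the permutation, try every reinsertion position, and keep the best by the objective. The objective below is a hypothetical weighted-completion stand-in, since evaluating true total flow time under buffer limits requires the full shop model.

```python
def best_insertion(perm, job, objective):
    """Best-insertion move for a permutation-encoded schedule: re-insert
    `job` at every position of `perm` (with `job` removed) and return the
    permutation minimizing `objective` (any callable scoring a permutation)."""
    base = [j for j in perm if j != job]
    candidates = [base[:i] + [job] + base[i:] for i in range(len(base) + 1)]
    return min(candidates, key=objective)

# toy objective (hypothetical stand-in for buffer-constrained total flow
# time): weighted sum of 1-based positions
weights = {0: 3, 1: 1, 2: 2}
obj = lambda p: sum(weights[j] * (k + 1) for k, j in enumerate(p))
print(best_insertion([1, 0, 2], 0, obj))  # -> [0, 1, 2]
```

In the algorithm described, an employed or onlooker bee would apply this move to a chosen food source, with the swap neighborhood explored by the combined local search in the same spirit.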
Relativistic Photoionization Computations with the Time Dependent Dirac Equation
2016-10-12
Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6795--16-9698: Relativistic Photoionization Computations with the Time Dependent Dirac Equation. Daniel F. Gordon and Bahman Hafizi, Naval Research Laboratory, 4555 Overlook Avenue, SW. Keywords: tunneling, photoionization. Ionization of inner shell electrons by laser
DEFF Research Database (Denmark)
Li, Chendan; Federico, de Bosio; Chen, Fang
2017-01-01
In this paper, an economic dispatch problem for total operation cost minimization in DC microgrids is formulated. An operating cost is associated with each generator in the microgrid, including the utility grid, combining the cost-efficiency of the system with demand response requirements...... achieving higher control accuracy and faster response. The optimization problem is solved in a heuristic method. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used in the case studies. Simulation results show that under variable renewable energy generation, load...... of the utility. The power flow model is included in the optimization problem, thus the transmission losses can be considered for generation dispatch. By considering the primary (local) control of the grid-forming converters of a microgrid, optimal parameters can be directly applied to this control level, thus...
On the Minimization of Fluctuations in the Response Times of Autoregulatory Gene Networks
Murugan, Rajamanickam; Kreiman, Gabriel
2011-01-01
The temporal dynamics of the concentrations of several proteins are tightly regulated, particularly for critical nodes in biological networks such as transcription factors. An important mechanism to control transcription factor levels is through autoregulatory feedback loops where the protein can bind its own promoter. Here we use theoretical tools and computational simulations to further our understanding of transcription-factor autoregulatory loops. We show that the stochastic dynamics of f...
Ten scenarios from early radiation to late time acceleration with a minimally coupled dark energy
Energy Technology Data Exchange (ETDEWEB)
Fay, Stéphane, E-mail: steph.fay@gmail.com [Palais de la Découverte, Astronomy Department, Avenue Franklin Roosevelt, 75008 Paris (France)
2013-09-01
We consider General Relativity with matter, radiation and a minimally coupled dark energy defined by an equation of state w. Using dynamical system method, we find the equilibrium points of such a theory assuming an expanding Universe and a positive dark energy density. Two of these points correspond to classical radiation and matter dominated epochs for the Universe. For the other points, dark energy mimics matter, radiation or accelerates Universe expansion. We then look for possible sequences of epochs describing a Universe starting with some radiation dominated epoch(s) (mimicked or not by dark energy), then matter dominated epoch(s) (mimicked or not by dark energy) and ending with an accelerated expansion. We find ten sequences able to follow this Universe history without singular behaviour of w at some saddle points. Most of them are new in dark energy literature. To get more than these ten sequences, w has to be singular at some specific saddle equilibrium points. This is an unusual mathematical property of the equation of state in dark energy literature, whose physical consequences tend to be discarded by observations. This thus distinguishes the ten above sequences from an infinity of ways to describe Universe expansion.
Ten scenarios from early radiation to late time acceleration with a minimally coupled dark energy
International Nuclear Information System (INIS)
Fay, Stéphane
2013-01-01
We consider General Relativity with matter, radiation and a minimally coupled dark energy defined by an equation of state w. Using dynamical system method, we find the equilibrium points of such a theory assuming an expanding Universe and a positive dark energy density. Two of these points correspond to classical radiation and matter dominated epochs for the Universe. For the other points, dark energy mimics matter, radiation or accelerates Universe expansion. We then look for possible sequences of epochs describing a Universe starting with some radiation dominated epoch(s) (mimicked or not by dark energy), then matter dominated epoch(s) (mimicked or not by dark energy) and ending with an accelerated expansion. We find ten sequences able to follow this Universe history without singular behaviour of w at some saddle points. Most of them are new in dark energy literature. To get more than these ten sequences, w has to be singular at some specific saddle equilibrium points. This is an unusual mathematical property of the equation of state in dark energy literature, whose physical consequences tend to be discarded by observations. This thus distinguishes the ten above sequences from an infinity of ways to describe Universe expansion
A Distributed Computing Network for Real-Time Systems.
1980-11-03
Naval Underwater Systems Center, Newport, RI. Technical Document TD 5932: A Distributed Computing Network for Real-Time Systems.
Computer simulations of long-time tails: what's new?
Hoef, van der M.A.; Frenkel, D.
1995-01-01
Twenty-five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in a fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the
Computation of reactor control rod drop time under accident conditions
International Nuclear Information System (INIS)
Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan
1998-01-01
The computation of reactor control rod drop time under accident conditions consists mainly of establishing forced-vibration equations for the components of the control rod drive line under the action of external forces, and an equation of motion for the control rod moving in the vertical direction. The two kinds of equations are coupled by accounting for the impact effects between the control rod and its surrounding components. The finite difference method is adopted to discretize the vibration equations, and the Wilson-θ method is applied to solve the time-history problem. The nonlinearity caused by impact is treated iteratively with a modified Newton method. Experimental results are used to validate the reliability of the computational method. Theoretical and experimental test problems show that the computer program based on this method is applicable and reliable. The program can serve as an effective tool for design-by-analysis and for safety analysis of the relevant components.
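The vertical equation of motion for the dropping rod can be sketched with explicit time stepping. This is a deliberately reduced illustration: the resistance callable is a hypothetical stand-in for the hydraulic and friction forces of the full model, and the coupled vibration/impact equations solved by the Wilson-θ and modified Newton methods in the paper are omitted.

```python
def drop_time(m, travel, resist, g=9.81, dt=1e-4):
    """Semi-implicit Euler stepping of m * z'' = m * g - resist(v) for a rod
    falling from rest; returns the time to traverse `travel` meters.
    `resist` is a hypothetical velocity-dependent retarding force (N)."""
    t, z, v = 0.0, 0.0, 0.0
    while z < travel:
        a = g - resist(v) / m   # net downward acceleration
        v += a * dt
        z += v * dt
        t += dt
    return t

# sanity check: with no resistance, 1 m of free fall takes about
# sqrt(2 * h / g) ≈ 0.45 s
print(drop_time(50.0, 1.0, lambda v: 0.0))
# a linear drag term lengthens the drop time, as expected
print(drop_time(50.0, 1.0, lambda v: 20.0 * v))
```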
RubiShort: Reducing scan time in 82Rb heart scans to minimize movements artifacts
DEFF Research Database (Denmark)
Madsen, Jeppe; Vraa, Kaspar J.; Harms, Hans
.013x, R2=0.98; %Reversible: y=1.008x, R2=0.95; TPD: y=1.000x, R2=0.99). Conclusion:, Scan time of myocardial perfusion scans using 82Rb can be reduced from 7 min. to 5 min. without loss of quantitative accuracy. Since patient motion is frequent in the last minutes of the scans, scan time reduction...
Scalable space-time adaptive simulation tools for computational electrocardiology
Krause, Dorian; Krause, Rolf
2013-01-01
This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...
Continuous-Time Symmetric Hopfield Nets are Computationally Universal
Czech Academy of Sciences Publication Activity Database
Šíma, Jiří; Orponen, P.
2003-01-01
Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003
Heterogeneous real-time computing in radio astronomy
Ford, John M.; Demorest, Paul; Ransom, Scott
2010-07-01
Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPU's), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPU's). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGA's tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGA's are coupled to other FPGA's to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPU's, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
Highly reliable computer network for real time system
International Nuclear Information System (INIS)
Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.
1988-01-01
Many computer networks have been studied, with different trends in network architecture and in the protocols that govern data transfers and guarantee reliable communication. Here, a hierarchical network structure is proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In this architecture, all computers on the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over that channel. This level of the computer network can be considered a local area computer network (LACN), suitable for a nuclear power plant control system with its geographically dispersed subsystems. Network expansion is straightforward: each added computer (host) connects to the common channel. All nodes are designed around a microprocessor chip to provide the required intelligence. A node can be divided into two sections: a common section that interfaces with the serial data channel, and a private section that interfaces with the host computer. The private section naturally tends to have some variations in hardware details to match the requirements of individual host computers.
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.
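As a rough illustration of the kind of QP solver described (not the authors' network), a discrete-time projected-gradient iteration can handle a box-constrained QP; the cost matrix, bounds, and step size below are illustrative assumptions:

```python
import numpy as np

def projected_qp_solver(W, c, lb, ub, step=0.05, iters=2000):
    """Discrete-time projected-gradient dynamics for
    min 0.5*x'Wx + c'x subject to lb <= x <= ub.
    A simplified stand-in for the paper's recurrent-network solver."""
    x = np.clip(np.zeros_like(c), lb, ub)
    for _ in range(iters):
        # gradient step followed by projection onto the box constraints
        x = np.clip(x - step * (W @ x + c), lb, ub)
    return x

# illustrative QP: minimise x1^2 - 2*x1 + x2^2 - 4*x2 on the box [0, 1]^2
W = np.diag([2.0, 2.0])
c = np.array([-2.0, -4.0])
x = projected_qp_solver(W, c, lb=np.zeros(2), ub=np.ones(2))
```

The iteration settles on the box-constrained minimiser (1, 1); the equality constraints of the paper's formulation would require augmented (e.g. primal-dual) dynamics.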
TimeSet: A computer program that accesses five atomic time services on two continents
Petrakis, P. L.
1993-01-01
TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.
Computing return times or return periods with rare event algorithms
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
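A minimal sketch of the block-maximum estimator mentioned above, using an iid uniform sequence as a stand-in for a sampled trajectory (the block size and threshold are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def return_time_block_max(x, block, threshold, dt=1.0):
    """Block-maximum return-time estimator (sketch):
    r(a) = -block*dt / log(1 - p), where p is the fraction of
    trajectory blocks whose maximum exceeds the threshold a."""
    n_blocks = len(x) // block
    maxima = x[: n_blocks * block].reshape(n_blocks, block).max(axis=1)
    p = (maxima > threshold).mean()
    if p in (0.0, 1.0):
        raise ValueError("threshold outside the sampled range")
    return -block * dt / np.log(1.0 - p)

rng = np.random.default_rng(0)
x = rng.uniform(size=1_000_000)  # iid surrogate for an ergodic signal
# the per-step probability of exceeding 0.99 is 0.01, so the true
# return time is about 100 steps
r = return_time_block_max(x, block=50, threshold=0.99)
```

Note that the estimate stays accurate even though the threshold is exceeded within a sizeable fraction of blocks, which is the regime where a naive mean-waiting-time estimate degrades.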
The Role of Compensation Criteria to Minimize Face-Time Bias and Support Faculty Career Flexibility
Directory of Open Access Journals (Sweden)
Lydia Pleotis Howell MD
2016-02-01
Full Text Available Work-life balance is important to recruitment and retention of the younger generation of medical faculty, but medical school flexibility policies have not been fully effective. We have reported that our school’s policies are underutilized due to faculty concerns about looking uncommitted to career or team. Since policies include leaves and accommodations that reduce physical presence, faculty may fear “face-time bias,” which negatively affects evaluation of those not “seen” at work. Face-time bias is reported to negatively affect salary and career progress. We explored face-time bias on a leadership level and described development of compensation criteria intended to mitigate face-time bias, raise visibility, and reward commitment and contribution to team/group goals. Leaders from 6 partner departments participated in standardized interviews and group meetings. Ten compensation plans were analyzed, and published literature was reviewed. Leaders did not perceive face-time issues but saw team pressure and perception of availability as performance motivators. Compensation plans were multifactor productivity based with many quantifiable criteria; few addressed team contributions. Using these findings, novel compensation criteria were developed based on a published model to mitigate face-time bias associated with team perceptions. Criteria for organizational citizenship to raise visibility and reward group outcomes were included. We conclude that team pressure and perception of availability have the potential to lead to bias and may contribute to underuse of flexibility policies. Recognizing organizational citizenship and cooperative effort via specific criteria in a compensation plan may enhance a culture of flexibility. These novel criteria have been effective in one pilot department.
Development of real-time visualization system for Computational Fluid Dynamics on parallel computers
International Nuclear Information System (INIS)
Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun
1998-03-01
A real-time visualization system for computational fluid dynamics, operating over a network connecting a parallel computing server and a client terminal, was developed. Using the system, a user can visualize the results of a CFD (Computational Fluid Dynamics) simulation on the client terminal while the computation is still running on the parallel server. Through a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. The amount of data sent from the parallel computer to the client is therefore so small, compared with the uncompressed case, that images appear swiftly and comfortably for the user. Parallelization of image data generation is based on the owner-computes rule. The GUI on the client is built as a Java applet, so real-time visualization is possible on any client PC on which a Web browser is installed. (author)
Kakar, Jaber; Alameer, Alaa; Chaaban, Anas; Sezgin, Aydin; Paulraj, Arogyaswami
2017-01-01
We study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file
Minimal variation in anti-A and -B titers among healthy volunteers over time
DEFF Research Database (Denmark)
Sprogøe, Ulrik; Yazer, Mark; Rasmussen, Mads Hvidkjær
2017-01-01
BACKGROUND: Using potentially out-of-group blood components, like low titer A plasma and O whole blood, in the resuscitation of trauma patients is becoming increasingly popular. However, very little is known whether the donors’ anti-A and/or -B titers change over time and whether repeated titer m...
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
Effect of the MCNP model definition on the computation time
International Nuclear Information System (INIS)
Šunka, Michal
2017-01-01
The presented work studies the influence of the method of defining the geometry in the MCNP transport code and its impact on the computational time, including the difficulty of preparing an input file describing the given geometry. Cases using different geometric definitions, including the use of basic 2-dimensional and 3-dimensional objects and their combinations, were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (a more realistic case indicates 37%) for the same results and the same statistical uncertainty. (orig.)
TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.
Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M
2013-01-01
TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Data of 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. We found a significant positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.
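The product-of-coefficient mediation check named above can be sketched on simulated data (the coefficients, noise levels, and sample size are invented for illustration; this is not the study's dataset):

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect_effect(exposure, mediator, outcome):
    """Product-of-coefficients mediation: a = exposure -> mediator,
    b = mediator -> outcome controlling for exposure; indirect = a*b."""
    a = ols(exposure[:, None], mediator)[1]
    b = ols(np.column_stack([exposure, mediator]), outcome)[2]
    return a * b

rng = np.random.default_rng(1)
tv = rng.normal(size=5000)                          # standardised TV time
wc = 0.5 * tv + rng.normal(scale=0.5, size=5000)    # waist circumference
trig = 0.1 * tv + 0.3 * wc + rng.normal(scale=0.5, size=5000)
ab = indirect_effect(tv, wc, trig)  # true indirect effect is 0.5 * 0.3 = 0.15
```

A non-zero product a*b would indicate mediation by WC; in the study above this product was not significantly different from zero.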
Real-time computational photon-counting LiDAR
Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles
2018-03-01
The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
Energy Technology Data Exchange (ETDEWEB)
Robinson, Philip; Burnett, Hugh; Nicholson, David A
2002-05-01
AIM: To assess the place of computed tomography (CT) of the colon in frail or elderly patients with symptoms suggestive of colon cancer. METHOD: A total of 195 patients (median age 76 years) underwent CT of the abdomen and pelvis following the administration of positive oral contrast medium but no bowel preparation. All had symptoms suggestive of colon cancer. CT findings were classified as normal/diverticular disease (DD), possible colon cancer, definite colon cancer or extracolonic pathology. Accuracy of CT was assessed against patient outcome. Association between symptoms and colon cancer was assessed by chi-squared test. RESULTS: There were 47 deaths and median follow up for those alive was 16 months. Overall sensitivity of CT was 100% and specificity 87% for detection of colon cancer. One hundred and ten normal/DD CT examinations had no significant bowel lesion on follow up. Of 12 cases defined as 'definite cancers' on CT, there were nine colon cancers, two extracolonic cancers, and one normal. Of 23 'possible cancers' on CT, there were two colon cancers, three DD masses and 18 normal/DD. Fifty examinations had extracolonic findings including 33 (17%) cases of significant abdominal disease. CT findings led to a halt in investigations in 115 cases (59%), colonoscopy in 18 (9%) cases and surgery in 16 (8%) cases. None of the symptoms present showed a significant association with colon cancer (all P > 0.05). CONCLUSION: Minimal preparation CT is a non-invasive and sensitive method for investigating colon cancer in frail or elderly patients. It has a 100% negative predictive value and also detects a large number of extracolonic lesions. Robinson, P. et al. (2002)
Minimizing Experimental Setup Time and Effort at APS beamline 1-ID through Instrumentation Design
Energy Technology Data Exchange (ETDEWEB)
Benda, Erika; Almer, Jonathan; Kenesei, Peter; Mashayekhi, Ali; Okasinksi, John; Park, Jun-Sang; Ranay, Rogelio; Shastri, Sarvijt
2016-01-01
Sector 1-ID at the APS accommodates a number of different experimental techniques in the same spatial envelope of the E-hutch end station. These include high-energy small and wide angle X-ray scattering (SAXS and WAXS), high-energy diffraction microscopy (HEDM, both near and far field modes) and high-energy X-ray tomography. These techniques are frequently combined to allow the users to obtain multimodal data, often attaining 1 μm spatial resolution and <0.05º angular resolution. Furthermore, these techniques are utilized while the sample is thermo-mechanically loaded to mimic real operating conditions. The instrumentation required for each of these techniques and environments has been designed and configured in a modular way with a focus on stability and repeatability between changeovers. This approach allows the end station to be more versatile, capable of collecting multi-modal data in-situ while reducing time and effort typically required for set up and alignment, resulting in more efficient beam time use. Key instrumentation design features and layout of the end station are presented.
A time-minimizing hybrid method for fitting complex Moessbauer spectra
International Nuclear Information System (INIS)
Steiner, K.J.
2000-07-01
The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction which can be solved by an iteration method only. The iteration method is very sensitive to its input parameters. In other words, with arbitrary input parameters it is most unlikely that the iteration method will converge. Up to now a scientist has had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace the guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameters. Each parameter set is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. The process of recombination involves the successive application of genetic operators, which are selection, crossover, and mutation. These operators mimic the process of natural evolution, i.e. the concept of the survival of the fittest. Even though there is no formal proof that the genetic algorithm will eventually converge, there is an excellent chance that there will be a population with very good individuals after some generations. The hybrid method presented in the following combines a very modern version of a genetic algorithm with a conventional least-squares routine solving the combined interaction Hamiltonian, i.e. providing a physical solution with the original Moessbauer parameters by a minimum of input. (author)
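The hybrid idea can be sketched with a toy example in which a tiny genetic algorithm guesses starting parameters for a least-squares refinement. A single Lorentzian absorption line stands in for the full hyperfine Hamiltonian, and the bounds, population size, and GA operators are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def lorentzian(p, x):
    """Baseline 1 minus a Lorentzian dip with centre c, depth d, width w."""
    c, d, w = p
    return 1.0 - d * w**2 / ((x - c)**2 + w**2)

def ga_seed(x, y, bounds, pop=60, gens=80, seed=2):
    """Genetic algorithm (selection, blend crossover, mutation) that
    guesses input parameters for the iterative fit."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        sse = np.array([np.sum((lorentzian(p, x) - y) ** 2) for p in P])
        elite = P[np.argsort(sse)[: pop // 2]]                 # selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        P = parents.mean(axis=1)                               # blend crossover
        P += rng.normal(scale=0.05 * (hi - lo), size=P.shape)  # mutation
        P = np.clip(P, lo, hi)
        P[0] = elite[0]                                        # keep the best
    return P[0]

x = np.linspace(-5.0, 5.0, 200)
rng = np.random.default_rng(3)
y = lorentzian((1.2, 0.3, 0.5), x) + rng.normal(scale=0.01, size=x.size)
guess = ga_seed(x, y, bounds=[(-5, 5), (0.05, 1), (0.1, 2)])
fit = least_squares(lambda p: lorentzian(p, x) - y, guess).x  # refinement
```

The GA only needs to land in the basin of attraction of the physical minimum; the conventional least-squares step then converges quickly, which is the division of labour the abstract describes.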
Timing incorporation of different green manure crops to minimize the risk of nitrogen leaching
Directory of Open Access Journals (Sweden)
H. KÄNKÄNEN
2008-12-01
Full Text Available Seven field trials at four research sites were carried out to study the effect of incorporation time of different plant materials on soil mineral N content during two successive seasons. Annual hairy vetch (Vicia villosa Roth, red clover (Trifolium pratense L., westerwold ryegrass (Lolium multiflorum Lam. var. westerwoldicum and straw residues of N-fertilized spring barley (Hordeum vulgare) were incorporated into the soil by ploughing in early September, late October and the following May, and by reduced tillage in May. Delaying incorporation of the green manure crop in autumn lessened the risk of N leaching. The higher the crop N and soil NO3-N content, the greater the risk of leaching. Incorporation in the following spring, which lessened the risk of N leaching as compared with early autumn ploughing, often had an adverse effect on the growth of the succeeding crop. After spring barley, the NO3-N content of the soil tended to be high, but the timing of incorporation did not have a marked effect on soil N. With exceptionally high soil mineral N content, N leaching was best inhibited by growing westerwold ryegrass in the first experimental year.
Imprecise results: Utilizing partial computations in real-time systems
Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.
1987-01-01
In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
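A minimal sketch of the milestone approach described above, where a step budget stands in for the wall-clock deadline (the series being computed and the checkpoint interval are illustrative assumptions):

```python
def leibniz_pi(n_terms):
    """Yield successively better Leibniz-series approximations of pi."""
    total = 0.0
    for k in range(n_terms):
        total += (-1.0) ** k / (2 * k + 1)
        yield 4.0 * total

def run_with_deadline(iterator, deadline_steps, checkpoint_every=100):
    """Milestone strategy (sketch): record the running result every
    `checkpoint_every` steps; when the deadline falls, return the last
    recorded milestone as the imprecise result."""
    milestone = None
    for step, value in enumerate(iterator, start=1):
        if step % checkpoint_every == 0:
            milestone = value          # record a usable partial result
        if step >= deadline_steps:
            return milestone           # deadline reached: imprecise result
    return milestone                   # computation finished normally

approx = run_with_deadline(leibniz_pi(10_000), deadline_steps=2_050)
```

Here the deadline cuts the computation mid-interval, so the caller receives the milestone recorded at step 2000 rather than nothing; the sieve approach would instead skip designated code sections when time runs short.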
Evaluation of the minimal replication time of Cauliflower mosaic virus in different hosts
International Nuclear Information System (INIS)
Khelifa, Mounia; Masse, Delphine; Blanc, Stephane; Drucker, Martin
2010-01-01
Though the duration of a single round of replication is an important biological parameter, it has been determined for only few viruses. Here, this parameter was determined for Cauliflower mosaic virus (CaMV) in transfected protoplasts from different hosts: the highly susceptible Arabidopsis and turnip, and Nicotiana benthamiana, where CaMV accumulates only slowly. Four methods of differing sensitivity were employed: labelling of (1) progeny DNA and (2) capsid protein, (3) immunocapture PCR, and (4) progeny-specific PCR. The first progeny virus was detected about 21 h after transfection. This value was confirmed by all methods, indicating that our estimate was not biased by the sensitivity of the detection method, and approximated the actual time required for one round of CaMV replication. Unexpectedly, the replication kinetics were similar in the three hosts, suggesting that slow accumulation of CaMV in Nicotiana plants is determined by non-optimal interactions in other steps of the infection cycle.
Machine scheduling to minimize weighted completion times the use of the α-point
Gusmeroli, Nicoló
2018-01-01
This work reviews the most important results regarding the use of the α-point in Scheduling Theory. It provides a number of different LP-relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out the relations to the sum of the weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory. It is also inspiring for researchers wanting to learn about sophisticated techniques and open problems of the field.
Energy Technology Data Exchange (ETDEWEB)
Iftimia, I; Talmadge, M; Halvorsen, P [Lahey Clinic, Burlington, MA (United States)
2015-06-15
Purpose: To implement an efficient and robust process for AccuBoost planning and treatment delivery that can be safely performed by a single Physicist while minimizing patient’s total session time. Methods: Following a thorough commissioning and validation process, templates were created in the brachytherapy planning system for each AccuBoost applicator. Tables of individual and total nominal dwell times for each applicator as a function of separation were generated to streamline planning while an Excel-based nomogram provided by the vendor functions as a secondary verification of the treatment parameters. Tables of surface dose as a function of separation and applicator, along with concise guidance documents for applicator selection, are readily available during the planning process. The entire process is described in a set of detailed Standard Operating Procedures which, in addition to the items described above, include a verbal time-out between the primary planner and the individual performing the secondary verification as well as direct visual confirmation of applicator placement using an articulated mirror. Prior to treatment initiation, a final time-out is conducted with the Radiation Oncologist. Chart documentation is finalized after the patient is released from compression following completion of the treatment. Results: With the aforementioned procedures, it has been possible to consistently limit the time required to prepare each treatment such that the patient is typically under compression for less than 10 minutes per orientation prior to the initiation of the treatment, which is particularly important for APBI cases. This process can be overseen by a single physicist assisted by a dosimetrist and has been optimized during the past 16 months, with 180 treatment sessions safely completed to date. Conclusion: This work demonstrates the implementation of an efficient and robust process for real-time-planned AccuBoost treatments that effectively minimizes
DEFF Research Database (Denmark)
Li, Chendan; de Bosio, Federico; Chaudhary, Sanjay Kumar
2015-01-01
In this paper, an optimal power flow problem is formulated in order to minimize the total operation cost by considering real-time pricing in DC microgrids. Each generation resource in the system, including the utility grid, is modeled in terms of operation cost, which combines the cost...... problem is solved in a heuristic way by using genetic algorithms. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used as a case-study. The obtained simulation results show that under variable renewable generation, load, and electricity prices, the proposed method can...
Minimal residual HIV viremia: verification of the Abbott Real-Time HIV-1 assay sensitivity
Directory of Open Access Journals (Sweden)
Alessandra Amendola
2010-06-01
Full Text Available Introduction: In HIV-1 infection, the increase in the number of CD4 T lymphocytes and the viral load decline are the main indicators of the effectiveness of antiretroviral therapy. On average, 85% of patients receiving effective treatment have a persistent suppression of plasma viral load below the detection limit (<50 copies/mL) of clinically used viral load assays, regardless of the treatment regimen in use. It is known, however, that even when viremia is reduced below the sensitivity limit of current diagnostic assays, the virus persists in “reservoirs” and traces of free virions can be detected in plasma. There is considerable interest in investigating the clinical significance of residual viremia. Advances in molecular diagnostics nowadays allow a wide dynamic range to be coupled with high sensitivity. The Abbott RealTime HIV-1 test is linear from 40 to 10^7 copies/mL and provides, below 40 copies/mL, additional information such as “<40 cp/mL, target detected” or “target not detected”. HIV-1 detection is verified by the max-Ratio algorithm software. We assessed the test sensitivity when the qualitative response is considered as well. Methods: A ‘probit’ analysis was performed using dilutions of the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC code: 99/634), defined in IU/mL and different from that used by the manufacturer (VQA, Virology Quality Assurance Laboratory of the AIDS Clinical Trial Group) for standardization and definition of performances. The sample input volume (0.6 mL) was the same as used in clinical routine. A total of 196 replicates at concentrations decreasing from 120 to 5 copies/mL, in three different sessions, were tested. The ‘probit’ analysis (binomial dose-response model, 95% “hit rate”) was carried out on the SAS 9.1.3 software package. Results: The sensitivity of the “<40 cp/mL, target detected” response was equal to 28.76 copies/mL, with 95% confidence limits between 22.19 and 52.27 copies
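A 'probit' sensitivity analysis of this kind can be sketched on simulated dilution data (the dose-response parameters, replicate counts, and dilution panel below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_probit_95(conc, n_pos, n_tot):
    """Binomial probit dose-response fit by maximum likelihood; returns
    the concentration detected with 95% probability (95% hit rate)."""
    logc = np.log10(conc)
    def nll(theta):
        mu, sigma = theta[0], abs(theta[1])
        p = norm.cdf((logc - mu) / sigma).clip(1e-9, 1 - 1e-9)
        return -(n_pos * np.log(p) + (n_tot - n_pos) * np.log(1 - p)).sum()
    mu, sigma = minimize(nll, x0=(1.5, 0.5), method="Nelder-Mead").x
    return 10 ** (mu + norm.ppf(0.95) * abs(sigma))

rng = np.random.default_rng(4)
conc = np.array([120, 80, 60, 40, 30, 20, 10, 5], dtype=float)
true_p = norm.cdf((np.log10(conc) - np.log10(30.0)) / 0.3)  # assumed model
n_tot = np.full(conc.size, 25)          # replicates per dilution
n_pos = rng.binomial(n_tot, true_p)     # observed detections
c95 = fit_probit_95(conc, n_pos, n_tot)
```

The fitted 95% hit-rate concentration plays the role of the assay sensitivity reported in the abstract; under the assumed model its true value is about 93 copies/mL.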
Kakar, Jaber
2017-10-29
An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
Real-time brain computer interface using imaginary movements
DEFF Research Database (Denmark)
El-Madani, Ahmad; Sørensen, Helge Bjarup Dissing; Kjær, Troels W.
2015-01-01
Background: Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper...
GRAPHIC, time-sharing magnet design computer programs at Argonne
International Nuclear Information System (INIS)
Lari, R.J.
1974-01-01
This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)
Instructional Advice, Time Advice and Learning Questions in Computer Simulations
Rey, Gunter Daniel
2010-01-01
Undergraduate students (N = 97) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without instructional advice) x 2 (with or without time advice) x 2…
Neural Computations in a Dynamical System with Multiple Time Scales.
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
Reduced computational cost in the calculation of worst case response time for real time systems
Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo
2009-01-01
Modern Real Time Operating Systems require reducing computational costs even though microprocessors become more powerful each day. It is usual that Real Time Operating Systems for embedded systems have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
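The Worst Case Response Time referred to above is classically obtained by a fixed-point iteration over the interference from higher-priority tasks; the sketch below shows that standard recursion (a textbook formulation, not the reduced-cost method this paper proposes):

```python
import math

def worst_case_response_time(task, higher_priority, deadline=None):
    """Classic fixed-point iteration for WCRT under fixed-priority scheduling:
    R = C + sum over higher-priority tasks j of ceil(R / T_j) * C_j.
    task: (C, T) worst-case execution time and period;
    higher_priority: list of (C, T) for all higher-priority tasks."""
    c, _ = task
    r = c
    while True:
        nxt = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_priority)
        if nxt == r:
            return r  # fixed point reached: this is the WCRT
        if deadline is not None and nxt > deadline:
            return None  # response time exceeds the deadline: unschedulable
        r = nxt
```

For a task set (C=1, T=4), (C=2, T=6), (C=3, T=12) the iteration for the lowest-priority task converges in a handful of steps; reducing the number of such iterations is exactly the kind of computational cost the paper targets.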
Directory of Open Access Journals (Sweden)
Tarun R. Katapally, Nazeem Muhajarine
2014-06-01
Full Text Available Accelerometers are predominantly used to objectively measure the entire range of activity intensities – sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data being commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data from the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and
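The standardization step described above — controlling observed wear-time to an analyst-specified period and interpolating activity minutes to it — can be sketched as a simple linear rescaling (a minimal assumption for illustration; the study's actual interpolation scheme may be more elaborate):

```python
def standardize_wear_time(activity_minutes, observed_wear_hours, target_wear_hours):
    """Rescale per-day activity minutes (e.g. SED, LPA, MVPA) from the
    observed wear-time to a controlled, analyst-specified wear-time."""
    if observed_wear_hours <= 0:
        raise ValueError("observed wear-time must be positive")
    factor = target_wear_hours / observed_wear_hours
    return {name: minutes * factor for name, minutes in activity_minutes.items()}
```

For example, a day with 300 minutes of SED over 12 hours of wear scales to 250 minutes when standardized to a 10-hour wear-time, making days with different wear-times comparable.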
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Chen, Zhe
2016-01-01
In this paper, an efficient methodology is proposed to deal with segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in VC++ 6.0 program language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...
Real-time FPGA architectures for computer vision
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in a FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resources utilization, FPGA limitations, and real time performance are discussed. Some results are presented and discussed.
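The core operation the architecture accelerates, generic mask-image convolution, can be stated in a few lines of reference code (a plain software model with zero padding, not the FPGA pipeline itself):

```python
def convolve2d(image, mask):
    """Generic 2D convolution of a square mask over an image (zero padding).
    image: list of rows of pixel values; mask: k x k list of coefficients."""
    h, w = len(image), len(image[0])
    k = len(mask)
    r = k // 2  # mask radius
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for i in range(k):
                for j in range(k):
                    yy, xx = y + i - r, x + j - r
                    if 0 <= yy < h and 0 <= xx < w:  # zero outside the image
                        acc += image[yy][xx] * mask[i][j]
            out[y][x] = acc
    return out
```

The FPGA design described in the abstract essentially unrolls the two inner loops into registered multiply-accumulate stages and buffers image rows so each pixel is read from memory only once.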
Distributed computing for real-time petroleum reservoir monitoring
Energy Technology Data Exchange (ETDEWEB)
Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)
2004-05-01
Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.
Real-time Tsunami Inundation Prediction Using High Performance Computers
Oishi, Y.; Imamura, F.; Sugawara, D.
2014-12-01
Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational demand of solving the non-linear shallow water equations for inundation prediction is large, it has become tractable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
Soft Real-Time PID Control on a VME Computer
Karayan, Vahag; Sander, Stanley; Cageao, Richard
2007-01-01
microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating with a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
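The per-sample work inside the 8 kHz loop — error, integral, derivative, and a new command — reduces to a few arithmetic operations; the sketch below models one iteration (the gain names and state dict are illustrative, not microPID's actual interface):

```python
def pid_step(state, setpoint, measured, dt, kp, ki, kd):
    """One iteration of a discrete PID loop.
    state: dict holding the running integral and the previous error."""
    error = setpoint - measured
    integral = state["integral"] + error * dt          # accumulate I term
    derivative = (error - state["prev_error"]) / dt    # finite-difference D term
    state["integral"] = integral
    state["prev_error"] = error
    return kp * error + ki * integral + kd * derivative
```

At a 125-microsecond period, `dt = 0.000125`; the returned value would be scaled and written to the D/A converter each cycle.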
Directory of Open Access Journals (Sweden)
SOUVIK PAL
2016-09-01
Full Text Available Cloud computing is an emerging paradigm of Internet-centric business computing where Cloud Service Providers (CSPs) are providing services to the customer according to their needs. The key perception behind cloud computing is on-demand sharing of resources available in the resource pool provided by the CSP, which implies a new emerging business model. The resources are provisioned when jobs arrive. Job scheduling and minimization of waiting time are challenging issues in cloud computing. When a large number of jobs are requested, they have to wait to be allocated to servers, which in turn may increase the queue length and the waiting time. This paper includes a system design for implementation which is concerned with the Johnson Scheduling Algorithm, which provides the optimal sequence. With that sequence, service times can be obtained. The waiting time and queue length can be reduced using a queuing model with multiple servers and finite capacity, which improves the job scheduling model.
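The Johnson Scheduling Algorithm mentioned above yields a makespan-optimal sequence for jobs passing through two stages; a minimal sketch (job tuples are illustrative) is:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop.
    jobs: list of (name, time_on_machine_1, time_on_machine_2).
    Returns the jobs in a makespan-optimal processing order."""
    front, back = [], []
    # Consider jobs in increasing order of their smaller processing time.
    for job in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if job[1] <= job[2]:
            front.append(job)      # fast on machine 1: schedule as early as possible
        else:
            back.insert(0, job)    # fast on machine 2: schedule as late as possible
    return front + back
```

For jobs A=(3, 2), B=(1, 4), C=(5, 1) the rule produces the order B, A, C, whose stage-completion times then give the service times used by the queuing model.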
Directory of Open Access Journals (Sweden)
Woerner Michael
2011-08-01
Full Text Available Abstract Background Impingement can be a serious complication after total hip arthroplasty (THA), and is one of the major causes of postoperative pain, dislocation, aseptic loosening, and implant breakage. Minimally invasive THA and computer-navigated surgery were introduced several years ago. We have developed a novel, computer-assisted operation method for THA following the concept of "femur first"/"combined anteversion", which incorporates various aspects of performing a functional optimization of the cup position, and comprehensively addresses range of motion (ROM) as well as cup containment and alignment parameters. Hence, the purpose of this study is to assess whether the artificial joint's ROM can be improved by this computer-assisted operation method. Second, the clinical and radiological outcome will be evaluated. Methods/Design A registered patient- and observer-blinded randomized controlled trial will be conducted. Patients between the ages of 50 and 75 admitted for primary unilateral THA will be included. Patients will be randomly allocated to either receive minimally invasive computer-navigated "femur first" THA or the conventional minimally invasive THA procedure. Self-reported functional status and health-related quality of life (questionnaires) will be assessed both preoperatively and postoperatively. Perioperative complications will be registered. Radiographic evaluation will take place up to 6 weeks postoperatively with a computed tomography (CT) scan. Component position will be evaluated by an independent external institute on a 3D reconstruction of the femur/pelvis using image-processing software. Postoperative ROM will be calculated by an algorithm which automatically determines bony and prosthetic impingements. Discussion In the past, computer navigation has improved the accuracy of component positioning. So far, there are only a few objective data quantifying the risks and benefits of computer-navigated THA. Therefore, this
Effects of computing time delay on real-time control systems
Shin, Kang G.; Cui, Xianzhong
1988-01-01
The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed of executing control algorithms. The latter matters because of the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both self-tuning predicted (STP) control and proportional-integral-derivative (PID) control are applied to the problem of tracking robot trajectories, and the respective effects of computing time delay on their control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
Sorting on STAR. [CDC computer algorithm timing comparison
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
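Batcher-style networks are attractive on vector machines like STAR precisely because their compare-exchange pattern is data-independent, so whole passes can be executed as vector operations. A minimal bitonic-sort variant (Batcher's other well-known network; power-of-two input length assumed, and not necessarily the exact adaptation used in the paper) looks like this:

```python
def bitonic_sort(a):
    """Batcher's bitonic sorting network, iterative form.
    len(a) must be a power of two. O(n log^2 n) compare-exchanges,
    with a pattern fixed in advance (data-independent)."""
    n = len(a)
    a = list(a)
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j > 0:       # compare-exchange distance within this merge stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

Every inner `for` loop touches independent element pairs, which is what maps onto vector compare-exchange instructions despite the worse N(log N)-squared complexity noted in the abstract.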
Computational intelligence in time series forecasting theory and engineering applications
Palit, Ajoy K
2005-01-01
Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.
Spike-timing-based computation in sound localization.
Directory of Open Access Journals (Sweden)
Dan F M Goodman
2010-11-01
Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
A heterogeneous hierarchical architecture for real-time computing
Energy Technology Data Exchange (ETDEWEB)
Skroch, D.A.; Fornaro, R.J.
1988-12-01
The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.
Modern EMC analysis I time-domain computational schemes
Kantartzis, Nikolaos V
2008-01-01
The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2014-07-01
Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation, and show their good performance in the forecasting of the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated with the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
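A time-delay reservoir of the kind described — a single nonlinear node whose delayed feedback is sampled at "virtual node" positions along the delay line — can be sketched as follows (the tanh nonlinearity, the random input mask, and the eta/gamma parameters are common choices in the literature, not necessarily the authors' exact setup):

```python
import math
import random

def time_delay_reservoir(inputs, n_virtual=50, eta=0.5, gamma=0.05, seed=0):
    """Drive a single-nonlinear-node reservoir with a scalar input series.
    Each input is multiplied by a fixed random mask across n_virtual
    virtual nodes; each virtual node mixes its own delayed value with
    the masked input. Returns the reservoir state after each input."""
    rng = random.Random(seed)
    mask = [rng.uniform(-1.0, 1.0) for _ in range(n_virtual)]
    delay = [0.0] * n_virtual  # delay-line state (previous virtual node values)
    states = []
    for u in inputs:
        delay = [math.tanh(eta * delay[i] + gamma * mask[i] * u)
                 for i in range(n_virtual)]
        states.append(list(delay))
    return states
```

In a forecasting task such as the one in the abstract, the rows of `states` would be fed to a simple linear (e.g. ridge-regression) readout trained on the target series.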
Lagorce, Xavier; Benosman, Ryad
2015-11-01
There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
Turrisi, Rob; Mallett, Kimberly A.; Cleveland, Michael J.; Varvil-Weld, Lindsey; Abar, Caitlin; Scaglione, Nichole; Hultgren, Brittney
2013-01-01
Objective: The study evaluated the timing and dosage of a parent-based intervention to minimize alcohol consumption for students with varying drinking histories. Method: First-year students (N = 1,900) completed Web assessments during the summer before college (baseline) and two follow-ups (fall of first and second years). Students were randomized to one of four conditions (pre-college matriculation [PCM], pre-college matriculation plus boosters [PCM+B], after college matriculation [ACM], and control conditions). Seven indicators of drinking (drink in past month, been drunk in past month, weekday [Sunday to Wednesday] drinking, Thursday drinking, weekend [Friday, Saturday] drinking, heavy episodic drinking in past 2 weeks, and peak blood alcohol concentration students. PMID:23200148
Energy Technology Data Exchange (ETDEWEB)
Holmquist, F. (Dept. of Diagnostic Radiology, Malmoe Univ. Hospital, Univ. of Lund, Malmoe (Sweden)); Hansson, K.; Pasquariello, F. (Dept. of Internal Medicine, Lasarettet Trelleborg, Univ. of Lund, Trelleborg (Sweden)); Bjoerk, J. (Competence Center for Clinical Research, Univ. Hospital, Univ. of Lund, Lund (Sweden)); Nyman, U. (Dept. of Radiology, Lasarettet Trelleborg, Univ. of Lund, Trelleborg (Sweden))
2009-02-15
Background: In diagnosing acute pulmonary embolism (PE) in azotemic patients, scintigraphy and magnetic resonance imaging are frequently inconclusive or not available in many hospitals. Computed tomography is readily available, but relatively high doses (30-50 g I) of potentially nephrotoxic iodine contrast media (CM) are used. Purpose: To report on the diagnostic quality and possible contrast-induced nephropathy (CIN) after substantially reduced CM doses to diagnose PE in azotemic patients using 80-peak-kilovoltage (kVp) 16-row multidetector computed tomography (MDCT) combined with CM doses tailored to body weight, fixed injection duration adapted to scan time, automatic bolus tracking, and saline chaser. Material and Methods: Patients with estimated glomerular filtration rate (eGFR) <50 ml/min were scheduled to undergo 80-kVp MDCT using 200 mg I/kg, and those with eGFR ≥50 ml/min, 120-kVp MDCT with 320 mg I/kg. Both protocols used an 80-kg maximum dose weight and a fixed 15-s injection time. Pulmonary artery density and contrast-to-noise ratio were measured assuming 70 Hounsfield units (HU) for a fresh clot. CIN was defined as a plasma creatinine rise >44.2 μmol/l from baseline. Results: 89/148 patients (63/68 females) underwent the 80-/120-kVp protocols, respectively, with 95% of the examinations being subjectively excellent or adequate. Mean values in the 80-/120-kVp cohorts regarding age were 82/65 years, body weight 66/78 kg, effective mAs 277/117, CM dose 13/23 g I, pulmonary artery density 359/345 HU, image noise (1 standard deviation) 24/21 HU, contrast-to-noise ratio 13/13, and dose-length product 173/258 mGy·cm. Only 1/65 and 2/119 patients in the 80- and 120-kVp cohorts, respectively, with negative CT and no anticoagulation suffered non-fatal thromboembolism during 3-month follow-up. No patient developed CIN. Conclusion: 80-kVp 16-row MDCT with optimization of injection parameters may be performed with preserved diagnostic quality, using markedly reduced CM
Climate Data Provenance Tracking for Just-In-Time Computation
Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.
2016-12-01
The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.
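The idea of packaging provenance into a variable's metadata so an analysis can be replayed on the original data can be illustrated with a toy scheme (the dict layout, operation registry, and `replay` helper are invented for illustration; CDMS's actual serialization differs):

```python
# Named, deterministic operations so a recorded chain can be re-executed.
OPS = {
    "double": lambda xs: [2 * x for x in xs],
    "mean": lambda xs: [sum(xs) / len(xs)],
}

def apply_op(var, op_name):
    """Apply a registered operation and record it in the variable's provenance."""
    return {"data": OPS[op_name](var["data"]),
            "provenance": {"operation": op_name, "parent": var["provenance"]}}

def replay(raw_data, provenance):
    """Re-run the recorded operation chain on the original raw data (JIT recompute)."""
    chain = []
    node = provenance
    while "operation" in node:          # walk back to the raw-data record
        chain.append(node["operation"])
        node = node["parent"]
    var = {"data": raw_data, "provenance": node}
    for op in reversed(chain):          # replay oldest-first
        var = apply_op(var, op)
    return var["data"]
```

This mirrors the bandwidth trade-off in the abstract: shipping the compact provenance record and rerunning `replay` at the destination substitutes for transferring the derived data itself.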
A note on computing average state occupation times
Directory of Open Access Journals (Sweden)
Jan Beyersmann
2014-05-01
Full Text Available Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
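Product integration as described — transforming cumulative hazard increments into transition probabilities via the product P(s, t) = prod (I + dA) — admits a direct numerical approximation (a minimal sketch; in practice the dA increments would be estimated from event-time data):

```python
def transition_matrix(hazard_increments):
    """Approximate the product integral P = prod over event times of (I + dA).
    hazard_increments: list of dA matrices; off-diagonal entries are
    transition hazard increments, each diagonal entry is minus its row sum,
    so every factor (I + dA) is a stochastic (transition probability) matrix."""
    n = len(hazard_increments[0])
    # Start from the identity matrix.
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for dA in hazard_increments:
        step = [[(1.0 if i == j else 0.0) + dA[i][j] for j in range(n)]
                for i in range(n)]
        # Time-ordered product: multiply the new factor on the right.
        P = [[sum(P[i][k] * step[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return P
```

Average state occupation times then follow by summing P(0, t) over a time grid weighted by the grid spacing, exactly as the review describes.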
Computer network time synchronization the network time protocol on earth and in space
Mills, David L
2010-01-01
Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib
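At the heart of NTP's on-wire protocol are two formulas computed from the four timestamps of a request/response exchange; these are the standard definitions of clock offset and round-trip delay:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP on-wire arithmetic.
    t0: client transmit, t1: server receive,
    t2: server transmit, t3: client receive (all in seconds).
    Returns (estimated clock offset, round-trip delay)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # assumes symmetric path delay
    delay = (t3 - t0) - (t2 - t1)           # total time minus server hold time
    return offset, delay
```

With a server 5 s ahead and a symmetric 2 s one-way path, the timestamps (0, 7, 8, 5) recover an offset of 5 s and a round-trip delay of 4 s.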
Computational electrodynamics the finite-difference time-domain method
Taflove, Allen
2005-01-01
This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
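The method's leapfrog update of interleaved electric and magnetic fields is compact enough to sketch in one dimension (free space, normalized units with Courant number 0.5, a soft Gaussian source; the grid size and source position are arbitrary illustrative choices):

```python
import math

def fdtd_1d(steps, n=200, src=100):
    """1D free-space FDTD: leapfrog update of interleaved Ez and Hy fields
    on a staggered (Yee) grid, normalized so that c*dt/dx = 0.5."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for k in range(1, n):              # update E from the curl of H
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[src] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft Gaussian source
        for k in range(n - 1):             # update H from the curl of E
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez
```

The full method extends this half-step-offset pattern to three dimensions, material parameters, and absorbing boundaries, which is what the book covers in depth.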
In this issue: Time to replace doctors’ judgement with computers
Directory of Open Access Journals (Sweden)
Simon de Lusignan
2015-11-01
Full Text Available Informaticians continue to rise to the challenge, set by the English Health Minister, of trying to replace doctors' judgement with computers. This issue describes successes and where there are barriers. However, whilst there is progress, this tends to be incremental, and there are grand challenges to be overcome before computers can replace clinicians. These grand challenges include: (1) improving usability so it is possible to more readily incorporate technology into clinical workflow; (2) rigorous new analytic methods that make use of the mass of available data, 'Big data', to create real-world evidence; (3) faster ways of meeting regulatory and legal requirements, including ensuring privacy; (4) provision of reimbursement models to fund innovative technology that can substitute for clinical time; and (5) recognition that innovations that improve quality also often increase cost. Informatics is more likely to support and augment clinical decision making than to replace clinicians.
Directory of Open Access Journals (Sweden)
Yi Tang
2017-11-01
Full Text Available The inherent variability and randomness of large-scale wind power integration have brought great challenges to power flow control and dispatch. The distributed power flow controller (DPFC) has higher flexibility and capacity in power flow control in systems with wind generation. This paper proposes a multi-time-scale coordinated scheduling model with DPFC to minimize wind power spillage. The configuration of DPFCs is initially determined by a stochastic method. Afterward, two sequential procedures covering day-ahead and real-time scales are applied for determining the maximum schedulable wind sources, the optimal outputs of generating units, and the operation settings of the DPFCs. The generating plan is obtained initially in the day-ahead scheduling stage and modified in the real-time scheduling model, while considering the uncertainty of wind power and the fast operation of DPFC. Numerical simulation results on the IEEE-RTS79 system illustrate that wind power scheduling is maximized with the optimal deployment and operation of DPFC, which confirms the applicability and effectiveness of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Kim, K.S. [Samsung Techwin Co., Ltd., Seoul (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [University of Seoul, Seoul (Korea)
2002-05-01
In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time, adjusting focus after measuring the distance to an object using a passive sensor, which differs from the typical method. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using the EV (Exposure Value) calculated from the CCD luminance signal. Moreover, the system adopts an auxiliary light source for focusing in completely dark conditions, which are very hard for CCD image processing. Since this is an open-loop system that adjusts focus immediately after the distance measurement, it guarantees real-time operation. The performance of the new AF system was verified by comparing the focusing value curve obtained from the AF experiment with the one measured by MF (Manual Focusing). In both cases, an edge detector was used for various objects and backgrounds. (author). 9 refs., 11 figs., 5 tabs.
Directory of Open Access Journals (Sweden)
Antonio Costa
2014-07-01
Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS problem, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of the completion times of all jobs. A well-known benchmark of test cases, comprising problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
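The total flow time objective minimized above is easy to evaluate for a fixed job sequence. A minimal sketch for a plain permutation flow shop (ignoring the sequence-dependent group setups of the actual FSDGS model) might look like:

```python
def total_flow_time(seq, proc):
    """Total flow time (sum of job completion times) of a permutation
    flow shop.  proc[j][m] is the processing time of job j on machine m;
    seq is the order in which the jobs are processed."""
    machines = len(proc[seq[0]])
    prev = [0] * machines          # completion times of the previous job
    total = 0
    for j in seq:
        cur = [0] * machines
        for m in range(machines):
            # a job starts on machine m when both the machine is free
            # and the job has finished on the previous machine
            ready = prev[m] if m == 0 else max(prev[m], cur[m - 1])
            cur[m] = ready + proc[j][m]
        total += cur[-1]           # completion time on the last machine
        prev = cur
    return total

# three jobs on two machines
print(total_flow_time([1, 0, 2], [[2, 3], [1, 2], [3, 1]]))  # → 16
```

A metaheuristic such as the GA/BRS hybrid would call a routine like this as the fitness function for each candidate sequence.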
Directory of Open Access Journals (Sweden)
Win-Chin Lin
2018-01-01
Full Text Available Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experience, which leads to a reduction of the actual job processing times, exists in many production environments. However, the issue of the learning effect is rarely addressed in solving two-stage assembly scheduling problems. Motivated by this observation, the author studies a two-stage three-machine assembly flow-shop problem with a learning effect based on the sum of the processing times of already processed jobs, with the objective of minimizing the makespan. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with several developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performance of the adopted algorithms is subsequently compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.
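To illustrate a sum-of-processing-times learning effect, the following sketch evaluates a single-machine makespan; the exact functional form and the exponent are illustrative assumptions, not the specific model of this paper:

```python
def makespan_with_learning(seq, p, a=-0.2):
    """Single-machine makespan under a sum-of-processing-times learning
    effect: each job's actual time is its normal time p[j] scaled by
    (1 + experience)**a, where experience is the sum of the normal times
    of the jobs already processed.  The exponent a <= 0 and this exact
    form are illustrative assumptions."""
    experience = 0.0
    makespan = 0.0
    for j in seq:
        makespan += p[j] * (1.0 + experience) ** a
        experience += p[j]
    return makespan

# with a = 0 there is no learning and the makespan is just the sum
print(makespan_with_learning([0, 1, 2], [2, 3, 4], a=0.0))  # → 9.0
```

Because the actual time of a job depends on which jobs preceded it, simple interchange arguments no longer hold directly, which is why dominance propositions and a lower bound are needed in the branch-and-bound search.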
Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho
2017-10-01
This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
Real time simulation of large systems on mini-computer
International Nuclear Information System (INIS)
Nakhle, Michel; Roux, Pierre.
1979-01-01
Most simulation languages accept only an explicit formulation of differential equations, and logical variables hold no special status therein. The integration step size of the usual methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with extensive optimization of the time and memory resources of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1. [fr
NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation
Energy Technology Data Exchange (ETDEWEB)
Nikkel, D J
2009-07-21
This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3
FRANTIC: a computer code for time dependent unavailability analysis
International Nuclear Information System (INIS)
Vesely, W.E.; Goldberg, F.F.
1977-03-01
The FRANTIC computer code evaluates the time-dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time-dependent modeling of periodic testing, which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times, and periodic equations are developed for the testing and repair contributions. Human errors and common-mode failures can be included by assigning an appropriate constant probability for these contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be performed simply, and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process.
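For intuition, here is a minimal sketch of the kind of quantity such a code computes: the time-dependent unavailability of a single periodically tested component with exponentially distributed failure times, assuming perfect and instantaneous tests (test downtimes, overrides, detection inefficiencies and test-caused failures, which FRANTIC does model, are ignored here):

```python
import math

def unavailability(t, lam, test_interval):
    """Point-in-time unavailability of a periodically tested component
    with failure rate lam, assuming each test perfectly and instantly
    restores the component (a simplified sketch, not FRANTIC itself)."""
    since_test = t % test_interval   # time elapsed since the last test
    return 1.0 - math.exp(-lam * since_test)

def average_unavailability(lam, test_interval, steps=10000):
    """Time-averaged unavailability over one test interval; for
    lam * T << 1 this approaches the familiar lam * T / 2."""
    dt = test_interval / steps
    return sum(unavailability((i + 0.5) * dt, lam, test_interval)
               for i in range(steps)) / steps
```

The sawtooth shape of the point-in-time curve (reset to zero at each test, rising between tests) is the basic signature that the more detailed FRANTIC models refine.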
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Variable dead time counters: 2. A computer simulation
International Nuclear Information System (INIS)
Hooton, B.W.; Lees, E.W.
1980-09-01
A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of 240Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)
Directory of Open Access Journals (Sweden)
Wan-Yu Liu
2014-07-01
Full Text Available To respond to the reduction of greenhouse gas emissions and global warming, this paper investigates the minimal-carbon-footprint time-dependent heterogeneous-fleet vehicle routing problem with alternative paths (MTHVRPP). This finds a route with the smallest carbon footprint, instead of the shortest route distance, which is the conventional approach, to serve a number of customers with a heterogeneous fleet of vehicles in cases where there may be more than one path between each pair of customers, and the vehicle speed differs at different times of the day. Inheriting the NP-hardness of the vehicle routing problem, the MTHVRPP is also NP-hard. This paper further proposes a genetic algorithm (GA) to solve this problem. The solution represented by our GA determines the customer serving order for each vehicle type. Then, a capacity check is used to split the routes of each vehicle type, and path selection determines the detailed paths of each route. Additionally, this paper improves the energy consumption model used for calculating the carbon footprint more precisely. Compared with the results without alternative paths, our experimental results show that the alternative paths in this experiment have a significant impact on the results in terms of carbon footprint.
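The capacity-check decoding step described above, which splits a GA giant-tour chromosome (a customer ordering) into feasible routes, can be sketched as follows; names are illustrative, and the path selection and carbon-footprint evaluation stages are omitted:

```python
def split_routes(order, demand, capacity):
    """Greedily split a customer ordering into routes so that no route
    exceeds the vehicle capacity (a sketch of the decoding step, not
    the authors' full GA)."""
    routes, cur, load = [], [], 0
    for c in order:
        if load + demand[c] > capacity:   # start a new route
            routes.append(cur)
            cur, load = [], 0
        cur.append(c)
        load += demand[c]
    if cur:
        routes.append(cur)
    return routes

# four customers with demands 2, 3, 2, 4 and vehicle capacity 5
print(split_routes([0, 1, 2, 3], [2, 3, 2, 4], 5))  # → [[0, 1], [2], [3]]
```

In the full algorithm each resulting route would then be evaluated with the time-dependent energy consumption model to obtain its carbon footprint.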
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
International Nuclear Information System (INIS)
Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.
1979-01-01
A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl3-H2 or BBr3-H2 mixtures on various types of substrates (at 1000 < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)
Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective
Chen, Shyi-Ming
2013-01-01
Temporal and spatiotemporal data form an inherent fabric of the society as we are faced with streams of data coming from numerous sensors, data feeds, recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume is aimed to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...
International Nuclear Information System (INIS)
Nozari, Kourosh; Sadatian, S.D.
2008-01-01
We consider two alternative dark-energy models: a Lorentz-invariance preserving model with a non-minimally coupled scalar field and a Lorentz-invariance violating model with a minimally coupled scalar field. We study accelerated expansion and the dynamics of the equation of state parameter in these scenarios. While a minimally coupled scalar field does not have the capability to be a successful dark-energy candidate with line crossing of the cosmological constant, a non-minimally coupled scalar field in the presence of Lorentz invariance or a minimally coupled scalar field with Lorentz-invariance violation have this capability. In the latter case, accelerated expansion and phantom divide line crossing are the results of the interactive nature of this Lorentz-violating scenario. (orig.)
Vijayganapathy, Sundaramoorthy; Karthikeyan, Vilvapathy Senguttuvan; Mallya, Ashwin; Sreenivas, Jayaram
2017-06-01
Wunderlich Syndrome (WS) is an uncommon condition where acute onset of spontaneous bleeding occurs into the subcapsular and perirenal spaces. It can prove fatal if not recognized and treated aggressively at the appropriate time. A 32-year-old male diagnosed elsewhere as acute renal failure presented with tender left loin mass, fever and hypovolemic shock with serum creatinine 8.4 mg/dl. He was started on higher antibiotics and initiated on haemodialysis. Ultrasonogram (USG), Non-Contrast Computed Tomography (NCCT) and Magnetic Resonance Imaging (MRI) showed bilateral perirenal subcapsular haematomas - right 3.6 x 3.1 cm and left 10.3 x 10.3 cm compressing and displacing left kidney, fed by capsular branch of left renal artery on CT angiogram. Initial aspirate was bloody but he persisted to have febrile spikes, renal failure and urosepsis and he was managed conservatively. Repeat NCCT 10 days later revealed left perinephric abscess and Percutaneous Drainage (PCD) was done. Patient improved, serum creatinine stabilized at 2 mg/dl without haemodialysis and PCD was removed after two weeks. To conclude, bilateral idiopathic spontaneous retroperitoneal haemorrhage with renal failure is a rare presentation. This case highlights the need for high index of suspicion, the role of repeated imaging and successful minimally invasive management with timely PCD and supportive care.
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothing of the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo
2012-08-01
In this study, we employ a dynamic time warping method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronicity restriction of the Pearson correlation coefficient. In the empirical process, we first subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Secondly, we choose NZD (New Zealand dollar) as the numeraire and then analyze the topology evolution of FX markets in terms of the structural changes of the MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. But USD gradually loses the most central position, while EUR acts as a stable center in the MST passing through the crisis. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
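A minimal sketch of the two building blocks used above: a dynamic time warping distance between two series, and a Prim-style minimal spanning tree over the resulting distance matrix. The plain DTW recursion shown here (no window constraint, absolute-difference cost) and the lack of normalization are simplifying assumptions; the authors' exact distance definition may differ.

```python
def dtw(x, y):
    """Dynamic time warping distance between two series, via the
    standard O(len(x) * len(y)) recursion."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def minimal_spanning_tree(dist):
    """Prim's algorithm on a full symmetric distance matrix; returns
    the MST edges as (i, j) pairs."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n)
                    if b not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

In the study's setting, each node would be a currency, `dist[i][j]` the DTW distance between the two currencies' return series, and the MST the backbone whose structural changes are tracked across the sub-periods.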
International Nuclear Information System (INIS)
Horesh, L; Haber, E
2009-01-01
The l1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated in devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone-beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small-animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
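As a toy stand-in for the TV-minimization component (not the authors' OS-accelerated CT algorithm), a 1-D smoothed-TV gradient descent might look like the following; the smoothing constant, step size and iteration count are illustrative assumptions:

```python
def tv_denoise_1d(y, lam=0.5, step=0.1, iters=500):
    """Gradient descent on 0.5 * ||x - y||^2 + lam * sum|x[i+1] - x[i]|,
    with the absolute value smoothed so the sketch stays differentiable.
    A toy 1-D illustration of TV regularization, not a CT reconstructor."""
    eps = 1e-8
    x = list(y)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(len(x))]   # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = d / (abs(d) + eps)                 # smoothed sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(len(x))]
    return x
```

In the CT setting the fidelity term is a (penalized weighted) projection-domain least-squares fit rather than a direct pixel-wise one, but the role of the TV term, suppressing oscillations while keeping edges, is the same.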
Energy Technology Data Exchange (ETDEWEB)
Forini, V. [Institut für Physik, Humboldt-Universität zu Berlin, IRIS Adlershof,Zum Großen Windkanal 6, 12489 Berlin (Germany); Tseytlin, A.A. [Theoretical Physics Group, Blackett Laboratory, Imperial College,London, SW7 2AZ (United Kingdom); Vescovi, E. [Institut für Physik, Humboldt-Universität zu Berlin, IRIS Adlershof,Zum Großen Windkanal 6, 12489 Berlin (Germany); Institute of Physics, University of São Paulo,Rua do Matão 1371, 05508-090 São Paulo (Brazil)
2017-03-01
We revisit the computation of the 1-loop string correction to the 'latitude' minimal surface in AdS5×S5 representing the 1/4 BPS Wilson loop in planar N=4 SYM theory, previously addressed in https://arxiv.org/abs/1512.00841 and https://arxiv.org/abs/1601.04708. We resolve the problem of matching with the subleading term in the strong-coupling expansion of the exact gauge theory result (derived previously from localization) using a different method to compute determinants of 2d string fluctuation operators. We apply perturbation theory in a small parameter (the angle of the latitude) corresponding to an expansion near the AdS2 minimal surface representing the 1/2 BPS circular Wilson loop. This allows us to compute the corrections to the heat kernels and zeta-functions of the operators in terms of the known heat kernels on AdS2. We apply the same method also to two other examples of Wilson loop surfaces: the generalized cusp and the k-wound circle.
Neural Computations in a Dynamical System with Multiple Time Scales
Directory of Open Access Journals (Sweden)
Yuanyuan Mi
2016-09-01
Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
Chemistry, physics and time: the computer modelling of glassmaking.
Martlew, David
2003-01-01
A decade or so ago the remains of an early flat glass furnace were discovered in St Helens. Continuous glass production only became feasible after the Siemens Brothers demonstrated their continuous tank furnace at Dresden in 1870. One manufacturer of flat glass enthusiastically adopted the new technology and secretly explored many variations on this theme during the next fifteen years. Study of the surviving furnace remains using today's computer simulation techniques showed how, in 1887, that technology was adapted to the special demands of window glass making. Heterogeneous chemical reactions at high temperatures are required to convert the mixture of granular raw materials into the homogeneous glass needed for windows. Kinetics (and therefore the economics) of glassmaking is dominated by heat transfer and chemical diffusion as refractory grains are converted to highly viscous molten glass. Removal of gas bubbles in a sufficiently short period of time is vital for profitability, but the glassmaker must achieve this in a reaction vessel which is itself being dissolved by the molten glass. Design and operational studies of today's continuous tank furnaces need to take account of these factors, and good use is made of computer simulation techniques to shed light on the way furnaces behave and how improvements may be made. This paper seeks to show how those same techniques can be used to understand how the early Siemens continuous tank furnaces were designed and operated, and how the Victorian entrepreneurs succeeded in managing the thorny problems of what was, in effect, a vulnerable high temperature continuous chemical reactor.
Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht
2010-01-01
Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently
Time Synchronization Strategy Between On-Board Computer and FIMS on STSAT-1
Directory of Open Access Journals (Sweden)
Seong Woo Kwak
2004-06-01
Full Text Available STSAT-1 was launched in September 2003 with the main payload of the Far Ultra-violet Imaging Spectrograph (FIMS). The mission of FIMS is to observe the universe and aurora. In this paper, we present a simple and reliable strategy adopted in STSAT-1 to synchronize time between the On-board Computer (OBC) and FIMS. Given the characteristics of STSAT-1, this strategy is devised to maintain the reliability of the satellite system and to reduce implementation cost by using minimal electronic circuits. We provide two methods with different synchronization resolutions to cope with unexpected faults in space: the low-resolution backup method can be activated when the main method has problems.
Directory of Open Access Journals (Sweden)
Sonia López
2016-09-01
Full Text Available This study is part of a research project that aims to characterize the epistemological, psychological and didactic presuppositions of science teachers (Biology, Physics, Chemistry that implement Computational Modeling and Simulation (CMS activities as a part of their teaching practice. We present here a synthesis of a literature review on the subject, evidencing how in the last two decades this form of computer usage for science teaching has boomed in disciplines such as Physics and Chemistry, but in a lesser degree in Biology. Additionally, in the works that dwell on the use of CMS in Biology, we identified a lack of theoretical bases that support their epistemological, psychological and/or didactic postures. Accordingly, this generates significant considerations for the fields of research and teacher instruction in Science Education.
Directory of Open Access Journals (Sweden)
Angela Hsiang-Ling Chen
2016-09-01
Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improves outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure that can guide project management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or propose solution procedures that are not easily applicable, or even feasible, in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified by relying only on information that is available for every project, regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the lower bound obtained was equal to the best-known optimum.
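A common baseline for RCPSP-type problems (not the authors' MRCPSP/max procedure) is the serial schedule generation scheme: activities are placed, in priority order, at the earliest precedence- and resource-feasible time. A minimal single-resource sketch with illustrative data:

```python
def ssgs(durations, demands, preds, capacity, order):
    """Serial schedule generation for one renewable resource.

    durations/demands: dicts activity -> duration / resource demand
    preds: dict activity -> list of predecessors
    order: priority list of activities; returns (start times, makespan).
    """
    start, finish, usage = {}, {}, {}  # usage: time slot -> used capacity
    for j in order:
        est = max((finish[p] for p in preds[j]), default=0)
        t = est
        # shift right until every occupied slot has spare capacity
        while not all(usage.get(tt, 0) + demands[j] <= capacity
                      for tt in range(t, t + durations[j])):
            t += 1
        start[j], finish[j] = t, t + durations[j]
        for tt in range(t, finish[j]):
            usage[tt] = usage.get(tt, 0) + demands[j]
    return start, max(finish.values())

# Four activities, capacity 3: activities 1 and 2 cannot overlap (2+2 > 3)
durations = {1: 3, 2: 2, 3: 2, 4: 2}
demands = {1: 2, 2: 2, 3: 1, 4: 2}
preds = {1: [], 2: [], 3: [1], 4: [2]}
start, makespan = ssgs(durations, demands, preds, 3, [1, 2, 3, 4])
```

Real MRCPSP/max solvers must additionally handle mode choices and maximal time lags, which make even feasibility checking NP-hard; this sketch only conveys the scheduling core.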
Directory of Open Access Journals (Sweden)
Xinfeng Ruan
2013-01-01
Full Text Available We study option pricing with a risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump diffusion equation. We obtain the Radon-Nikodym derivative in the minimal martingale measure and a partial integro-differential equation (PIDE) for the European call option. In a special case, we obtain the exact solution for the European call option by Fourier transform methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.
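The paper's minimal-martingale-measure PIDE and its Fourier solution are not reproduced here, but a risk-neutral Monte Carlo sketch of a European call under a Merton-style jump diffusion (with assumed, illustrative parameters) shows the model class being priced:

```python
import math
import random

def mc_jump_diffusion_call(s0, k, r, sigma, lam, mu_j, sigma_j, t,
                           n_paths, seed=0):
    """Monte Carlo price of a European call under a Merton-style jump
    diffusion: Poisson(lam*t) jump count, lognormal jump sizes
    exp(N(mu_j, sigma_j^2)). The drift is compensated so the discounted
    asset price is a martingale under the pricing measure."""
    rng = random.Random(seed)
    kappa = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0  # E[jump size] - 1
    drift = (r - 0.5 * sigma ** 2 - lam * kappa) * t
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # sample the Poisson jump count by CDF inversion
        n_jumps, p = 0, math.exp(-lam * t)
        u, cdf = rng.random(), math.exp(-lam * t)
        while u > cdf:
            n_jumps += 1
            p *= lam * t / n_jumps
            cdf += p
        jump = sum(rng.gauss(mu_j, sigma_j) for _ in range(n_jumps))
        s_t = s0 * math.exp(drift + sigma * math.sqrt(t) * z + jump)
        payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = mc_jump_diffusion_call(100.0, 100.0, 0.05, 0.2, 0.5, -0.1, 0.15,
                               1.0, 20000)
```

The Fourier-transform route in the paper replaces this sampling with integration of the characteristic function, which is far faster and exact up to quadrature error.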
Time-Domain Terahertz Computed Axial Tomography NDE System
Zimdars, David
2012-01-01
NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf (COTS) subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (~0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D
Balancing related methods for minimal realization of periodic systems
Varga, A.
1999-01-01
We propose balancing related numerically reliable methods to compute minimal realizations of linear periodic systems with time-varying dimensions. The first method belongs to the family of square-root methods with guaranteed enhanced computational accuracy and can be used to compute balanced minimal order realizations. An alternative balancing-free square-root method has the advantage of a potentially better numerical accuracy in case of poorly scaled original systems. The key numerical co...
Region-oriented CT image representation for reducing computing time of Monte Carlo simulations
International Nuclear Information System (INIS)
Sarrut, David; Guigues, Laurent
2008-01-01
Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speed-up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but we could potentially use other Monte Carlo simulation codes. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gain. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.
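The idea of removing unnecessary steps between adjacent voxels of the same material can be illustrated with a simple run-length merge. This is a toy sketch of the principle, not the GEANT4 segmented-volume implementation:

```python
def merge_voxel_run(materials):
    """Run-length encode a row of voxel material IDs.

    Each run of identical materials becomes one homogeneous region, so a
    particle traversing the row makes one geometry step per run instead
    of one step per voxel boundary."""
    runs = []
    for m in materials:
        if runs and runs[-1][0] == m:
            runs[-1][1] += 1
        else:
            runs.append([m, 1])
    return [(m, n) for m, n in runs]

# Nine voxels collapse to four homogeneous segments: a particle crossing
# this row needs 4 steps instead of 9.
row = ["air", "air", "air", "bone", "bone", "tissue", "tissue", "tissue", "air"]
runs = merge_voxel_run(row)
```

The paper's gain factor depends on exactly this compressibility: CT images dominated by large uniform regions (air, soft tissue) merge well, while images with many material boundaries do not.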
Directory of Open Access Journals (Sweden)
Andreas Christe
Full Text Available OBJECTIVES: The aim of this phantom study was to minimize the radiation dose by finding the best combination of low tube current and low voltage that would result in accurate volume measurements when compared to standard CT imaging, without significantly decreasing the sensitivity of detecting lung nodules both with and without the assistance of CAD. METHODS: An anthropomorphic chest phantom containing artificial solid and ground glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner with three tube currents of 100, 50 and 25 mAs in combination with three tube voltages of 120, 100 and 80 kVp. This resulted in eight different protocols that were then compared to standard CT sensitivity (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and computer-aided detection (CAD) software. RESULTS: The mean sensitivity of the radiologists for identifying solid lung nodules on a standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp (80.6% ± 4.3%, p = 0.031). Compared to the standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. The CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS: Our results suggest a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
Time-of-Flight Sensors in Computer Graphics
DEFF Research Database (Denmark)
Kolb, Andreas; Barth, Erhardt; Koch, Reinhard
2009-01-01
, including Computer Graphics, Computer Vision and Man Machine Interaction (MMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real...
Cluster Computing For Real Time Seismic Array Analysis.
Martini, M.; Giudicepietro, F.
A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and the long period events which are critical for getting information on the evolution of volcanic systems. For this reason, arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favour a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 Intel Pentium-III bi-processor PCs working at 550 MHz, and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the internet and graphical applications for continuous display of the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of the volcano. A real time continuous acquisition system has been simulated by
International Nuclear Information System (INIS)
McKenney, R.A.; Gardner, T.G.
1992-01-01
Disequilibrium between slope form and hydrologic and erosion processes on reclaimed surface coal mines in the humid temperate northeastern US can result in gully erosion and sediment loads which are elevated above natural, background values. Initial sheetwash erosion is surpassed by gully erosion on reclamation sites which are not in equilibrium with post-mining hydrology. Long-term stability can be attained by designing a channel profile which is in equilibrium with the increased peak discharges found on reclaimed surface mines. The Stable Slope and Sediment transport model (SSAST) was developed to design stable longitudinal channel profiles for post-mining hydrologic and erosional processes. SSAST is an event-based computer model that calculates the stable slope for a channel segment based on the post-mine hydrology and median grain size of a reclaimed surface mine. Peak discharge, which drives post-mine erosion, is calculated from a 10-year, 24-hour storm using the Soil Conservation Service curve number method, with curve numbers calibrated for Pennsylvania surface mines. Reclamation sites are represented by the rectangle or triangle which most closely fits the shape of the site while having the same drainage area and length. Sediment transport and slope stability are calculated using a modified Bagnold's equation with a correction factor for the irregular particle shapes formed during the mining process. Data from three reclaimed Pennsylvania surface mines were used to calibrate and verify SSAST. Analysis indicates that SSAST can predict longitudinal channel profiles for stable reclamation of surface mines in the humid, temperate northeastern US.
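The Soil Conservation Service curve-number step the model uses can be sketched directly; the storm depth and curve number below are illustrative, not the calibrated Pennsylvania values from the paper:

```python
def scs_runoff_depth(p_in, cn):
    """SCS curve-number runoff depth (inches) for a storm depth p_in (inches).

    S is the potential maximum retention derived from the curve number CN;
    the initial abstraction Ia is the standard 0.2 * S. Rain below Ia
    produces no runoff."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# e.g. a 4-inch 10-year, 24-hour storm on a reclaimed surface with CN = 85
q = scs_runoff_depth(4.0, 85)
```

Higher curve numbers (less infiltration, typical of compacted mine spoil) push more of the storm into runoff, which is what drives the elevated peak discharges SSAST designs against.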
International Nuclear Information System (INIS)
Vasudevan, M.; Arumugam, R.; Paramasivam, S.
2006-01-01
Field oriented control (FOC) and direct torque control (DTC) are becoming the industrial standards for induction motor torque and flux control. This paper aims to contribute a detailed comparison of these two control techniques, emphasizing their advantages and disadvantages. The performance of the two control schemes is evaluated in terms of torque and flux ripple and of their transient response to step variations of the torque command. Moreover, a new torque and flux ripple minimization technique is proposed to improve the performance of the DTC drive. The analysis is presented on the basis of experimental results.
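The torque ripple being minimized in DTC originates in its hysteresis (bang-bang) controllers. A deliberately simplified first-order toy model, not the authors' drive or their proposed minimization technique, shows how the ripple is bounded by the hysteresis band plus one control step:

```python
def hysteresis_dtc(t_ref, band, k, dt, steps):
    """Toy bang-bang torque loop: a voltage vector is applied (torque
    rises at rate +k) until torque exceeds t_ref + band, then removed
    (torque falls at rate -k) until it drops below t_ref - band.
    Rates and band widths are arbitrary illustrative values."""
    torque, apply_vector, trace = 0.0, True, []
    for _ in range(steps):
        torque += (k if apply_vector else -k) * dt
        if torque >= t_ref + band:
            apply_vector = False
        elif torque <= t_ref - band:
            apply_vector = True
        trace.append(torque)
    return trace

trace = hysteresis_dtc(t_ref=10.0, band=0.5, k=100.0, dt=0.001, steps=2000)
# steady-state ripple after the initial rise to the reference
ripple = max(abs(x - 10.0) for x in trace[500:])
```

Shrinking the band or the sampling step reduces ripple at the cost of a higher switching frequency, which is exactly the trade-off ripple-minimization techniques for DTC address.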
Decreasing Computational Time for VBBinaryLensing by Point Source Approximation
Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio
2018-01-01
The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single object lens. This map cannot be calculated analytically and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object could be treated as a point source, or in other words, when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer for which minimal error could be made for the approximation. The determined buffer was 1.5 + 5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and different accuracies except the map with accuracy and separation equal to 10^-3 for the y1 maximum. An alternative approach has to be found in order to accommodate a wider range of parameters.
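For orientation, the point-source limit being exploited can be illustrated with the standard single point-lens magnification (used here only to show how magnification flattens far from the lens, not the binary-lens contour integral itself), together with a distance test encoding the abstract's determined 1.5 + 5q buffer:

```python
import math

def point_source_magnification(u):
    """Paczynski point-source, point-lens magnification at source-lens
    separation u (in Einstein radii). Tends to 1 as u grows, which is
    why distant sources can safely be treated as points."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def safe_point_source(u, q, buffer_coeff=1.5, slope=5.0):
    """Toy criterion in the spirit of the determined buffer 1.5 + 5q:
    treat the source as point-like once its distance u from the caustic
    region exceeds the buffer (q is the binary mass ratio)."""
    return u > buffer_coeff + slope * q
```

Inside the buffer, finite-source effects near the caustics dominate and the full contour integration of VBBinaryLensing is required; outside it, the cheap point-source evaluation suffices.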
Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration.
Directory of Open Access Journals (Sweden)
Bartlomiej Pycinski
Full Text Available A growing number of medical applications, including minimal invasive surgery, depends on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, seems to be crucial for the development of computer aided diagnosis and therapy. The advancement of surface tracking systems based on optical trackers already plays an important role in surgical procedure planning. However, new modalities, like the time-of-flight (ToF) sensors widely explored in non-medical fields, are powerful and have the potential to become a part of the computer aided surgery set-up. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed. We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology contains: the optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and the registration technique. The data pre-processing yields a surface, in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point (ICP) algorithm. The experiments validate the registration of each pair of modalities/sensors, involving phantoms of four various human organs, in terms of Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, whereas the worst for experiments involving the ToF-camera. The obtained accuracies encourage further development of multi-sensor systems. The presented substantive discussion concerning the system limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer aided surgery developers.
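The ICP technique the paper applies alternates point pairing with a closed-form rigid alignment. The alignment step alone, written in 2D for brevity as a simplified stand-in for the full 3D multi-sensor pipeline, can be sketched as:

```python
import math

def rigid_register_2d(src, dst):
    """Closed-form least-squares 2D rigid alignment (rotation + translation)
    of already-paired points: the single step that ICP repeats after
    re-pairing points by nearest neighbour."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    dot_sum = cross_sum = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy          # centered source point
        bx, by = u - cdx, v - cdy          # centered destination point
        dot_sum += ax * bx + ay * by
        cross_sum += ax * by - ay * bx
    theta = math.atan2(cross_sum, dot_sum)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)          # translation from centroids
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Recover a known 30-degree rotation plus translation (2, -1)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
th = math.radians(30)
dst = [(math.cos(th) * x - math.sin(th) * y + 2.0,
        math.sin(th) * x + math.cos(th) * y - 1.0) for x, y in src]
theta, tx, ty = rigid_register_2d(src, dst)
```

In the paper's setting the correspondences are unknown, so ICP iterates nearest-neighbour pairing and this alignment until the Hausdorff or mean absolute distance stops improving.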
12 CFR 516.10 - How does OTS compute time periods under this part?
2010-01-01
12 CFR 516.10, Banks and Banking (2010), Office of Thrift Supervision, Department of the Treasury, Application Processing Procedures, § 516.10: How does OTS compute time periods under this part? In computing...
How Many Times Should One Run a Computational Simulation?
DEFF Research Database (Denmark)
Seri, Raffaello; Secchi, Davide
2017-01-01
This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical analysis…
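One standard statistical answer to "how many runs" sizes the number of replications from a pilot estimate of the output's variability. A minimal sketch, assuming a normal approximation and a 95% confidence level (the chapter's own analysis is more elaborate):

```python
import math

def runs_needed(std_dev, half_width, confidence_z=1.96):
    """Replications needed so the normal-approximation confidence interval
    for a simulated mean has the requested half-width.

    std_dev: standard deviation of the output, estimated from pilot runs
    half_width: desired precision of the mean estimate
    confidence_z: z-quantile (1.96 for a 95% interval)."""
    return math.ceil((confidence_z * std_dev / half_width) ** 2)

# e.g. pilot runs show sd ~= 4.0 and we want the mean within +/- 0.5
n = runs_needed(4.0, 0.5)
```

Note the quadratic cost of precision: halving the target half-width quadruples the number of runs, which is why fixing the run count by habit (e.g. "always 100 runs") can badly over- or under-shoot.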
Algorithm for finding minimal cut sets in a fault tree
International Nuclear Information System (INIS)
Rosenberg, Ladislav
1996-01-01
This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets using an auxiliary dynamical structure. The presented algorithm enables the search to be restricted by defined requirements: by the order of the minimal cut sets, by the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm.
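The CARA variant itself is not spelled out in the abstract, but a top-down expansion in the spirit of classical MOCUS-style algorithms illustrates how minimal cut sets of an AND/OR fault tree are found and non-minimal supersets removed:

```python
def minimal_cut_sets(gates, top):
    """Top-down (MOCUS-style) expansion of an AND/OR fault tree.

    gates maps a gate name to ("AND"|"OR", [inputs]); any name not in
    gates is a basic event. Returns the minimal cut sets as frozensets."""
    cut_sets = [frozenset([top])]
    changed = True
    while changed:
        changed = False
        new_sets = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:               # fully expanded to basic events
                new_sets.append(cs)
                continue
            changed = True
            kind, inputs = gates[gate]
            rest = cs - {gate}
            if kind == "AND":              # AND: all inputs join the set
                new_sets.append(rest | frozenset(inputs))
            else:                          # OR: one new cut set per input
                new_sets.extend(rest | frozenset([i]) for i in inputs)
        cut_sets = new_sets
    # drop non-minimal supersets to keep only MINIMAL cut sets
    return {cs for cs in cut_sets if not any(o < cs for o in cut_sets)}

# TOP fails if A fails, or if both B and C fail
gates = {"TOP": ("OR", ["G1", "A"]),
         "G1": ("AND", ["B", "C"])}
mcs = minimal_cut_sets(gates, "TOP")
```

Restricting the search by cut-set order, as the paper's algorithm allows, would simply prune any intermediate set already exceeding the requested order, which can shrink the expansion dramatically.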
International Nuclear Information System (INIS)
Mitarai, O.; Sagara, A.; Chikaraishi, H.; Imagawa, S.; Shishkin, A.A.; Motojima, O.
2006-10-01
Minimization of the external heating power needed to access self-ignition is advantageous because it increases reactor design flexibility and reduces the capital and operating costs of the plasma heating device in a helical reactor. In this work we find that a larger density limit leads to a smaller required confinement enhancement factor, that a lower density limit margin reduces the external heating power, and that a fusion power rise-up time of over 300 s makes it possible to reach a minimized heating power. While the fusion power rise-up time in a tokamak is limited by the OH transformer flux or the current drive capability, any fusion power rise-up time can be employed in a helical reactor to reduce the thermal stresses on the blanket and shields, because the confinement field is generated by the external helical coils. (author)
A general algorithm for computing distance transforms in linear time
Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS
2000-01-01
A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
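The flavour of the two-scan idea can be seen in one dimension: a forward and a backward scan propagate distances along a row in linear time. This is a simplified city-block illustration, not the paper's full two-phase 2D algorithm (which combines per-column distances with a row-wise lower-envelope phase):

```python
def distance_transform_1d(row):
    """Two-scan distance transform of one image row.

    row holds 0 for feature (object) pixels and 1 for background; the
    result gives each pixel's distance to the nearest feature pixel."""
    inf = len(row) + 1                       # larger than any real distance
    d = [0 if v == 0 else inf for v in row]
    for i in range(1, len(d)):               # forward scan: nearest to the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(len(d) - 2, -1, -1):      # backward scan: nearest to the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d

dist = distance_transform_1d([1, 1, 0, 1, 1, 1, 0, 1])
```

Each pixel is touched exactly twice, so the cost is linear in the number of pixels, which is the property the paper's general 2D algorithm preserves for Euclidean and other metrics.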
Yu, Mengmeng; Zhao, Yonghong; Li, Wenbin; Lu, Zhigang; Wei, Meng; Zhou, Wenxiao; Zhang, Jiayin
2018-03-02
To study the diagnostic performance of the ratio between the Duke jeopardy score (DJS) and the minimal lumen diameter (MLD) (DJS/MLD CT ratio), as assessed by coronary computed tomographic angiography (CTA), for differentiating functionally significant from non-significant coronary artery stenoses, with reference to invasive fractional flow reserve (FFR). Patients who underwent both coronary CTA and FFR measurement during invasive coronary angiography (ICA) within 2 weeks were retrospectively included in the study. Invasive FFR measurement was performed in patients with intermediate to severe coronary stenoses. The DJS/MLD CT ratio and anatomical parameters were recorded. Lesions with FFR ≤0.80 were considered to be functionally significant. One hundred and sixty-one patients with 175 lesions were included in the analysis. Diameter stenosis on CT, area stenosis, plaque burden, lesion length (LL), ICA-based stenosis degree, DJS, LL/MLD^4 ratio, DJS/MLA ratio, and DJS/MLD ratio were all significantly different between hemodynamically significant and non-significant lesions (p < 0.05). The optimal cutoff value for the DJS/MLD CT ratio was determined to be 1.96 (area under curve = 0.863, 95% confidence interval = 0.803-0.910), yielding a high diagnostic accuracy (86.9%, 152/175). In coronary artery stenoses detected by coronary CTA, the DJS/MLD ratio is able to predict hemodynamic relevance. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Gheraibeh, Petra; Vaidya, Rahul; Hudson, Ian; Meehan, Robert; Tonnos, Frederick; Sethi, Anil
2018-05-01
To prevent leg length discrepancy (LLD) after locked femoral nailing in patients with comminuted femoral shaft fractures. Prospective consecutive case series aimed at quality improvement, conducted at a Level 1 Trauma Center. Ninety-eight consecutive patients with a comminuted femoral shaft fracture underwent statically locked intramedullary nailing, with a focused attempt at minimizing LLD during surgery. A computed tomography scanogram of both legs was performed on postoperative day 1 to assess for residual LLD. Patients were offered the option to have an LLD >1.5 cm corrected before discharge. The main outcome measure was an LLD >1.5 cm. Twenty-one patients (21.4%) were found to have an LLD >1.5 cm. An LLD >1.5 cm occurred in 10/55 (18%) antegrade nail patients and 11/43 (26%) retrograde nail patients (P = 0.27). No difference was noted based on the mechanism of injury, surgeon training, or OTA/AO type B versus C injury. Ninety of 98 patients left with an LLD of less than 1.5 cm after locked intramedullary nailing for a comminuted femoral shaft fracture; the remainder were informed and given the option of early correction. We recommend using a full-length computed tomography scanogram after IM nailing of comminuted femur fractures to prevent iatrogenic LLD. Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.
Directory of Open Access Journals (Sweden)
Santosh Bhattarai
2017-07-01
Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making effective and efficient decisions on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters and the construction parameter are the most influential factors involved in the process, and the relationship between these parameters is non-linear, complicated and not well understood. Some attempts have been made to understand and formulate this relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters and the construction parameter, with the help of two soft computing techniques, namely genetic programming (GP, using the software "Eureqa") and an artificial neural network (ANN). Relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The values of R for the relationship between the predicted and real cooling times from the GP and ANN models are 0.8822 and 0.9146, respectively. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory as compared to the measured data.
Directory of Open Access Journals (Sweden)
Gopi Krishna Reddy Moosani
2017-12-01
Full Text Available BACKGROUND The aim of this study was to compare the ability of MTA and Biodentine to set in the presence of human blood and minimal essential media. MATERIALS AND METHODS Eighty 1 × 3 inch plexiglass sheets were taken. In each sheet, 10 wells were created, and the wells were divided into 10 groups. Odd-numbered groups were filled with MTA and even-numbered groups were filled with Biodentine. Within these groups, 4 were control groups and the remaining 6 were experimental groups (i.e., blood, minimal essential media, and blood plus minimal essential media). Each block was submerged for 4, 5, 6, 8, 24, 36, or 48 hours in an experimental liquid at 37°C with 100% humidity. RESULTS The setting times varied for the 2 materials, with contrasting differences between the MTA and Biodentine samples. The majority of the MTA samples had not set by 24 hours, but by 36 hours all MTA samples had set, while all of the Biodentine samples had set by 6 hours. There is a significant difference in setting time between MTA and Biodentine. CONCLUSION This outcome calls into question the proposed setting time given by each respective manufacturer. Furthermore, despite Biodentine being marketed as a direct competitor to MTA with superior handling properties, MTA consistently set at a faster rate under the conditions of this study.
Quo vadis? : persuasive computing using real time queue information
Meys, Wouter; Groen, Maarten
2014-01-01
By presenting tourists with real-time information an increase in efficiency and satisfaction of their day planning can be achieved. At the same time, real-time information services can offer the municipality the opportunity to spread the tourists throughout the city centre. An important factor for
Patra, S. R.
2017-12-01
Evapotranspiration (ET0) influences water resources and is considered a vital process in arid hydrologic frameworks; it is also one of the most important measures for identifying drought conditions. Therefore, time series forecasting of evapotranspiration is very important in helping decision makers and water system managers build proper systems to sustain and manage water resources. Time series forecasting assumes that history repeats itself: by analysing past values, better choices, or forecasts, can be made for the future. Ten years of ET0 data were used in this study to ensure a satisfactory forecast of monthly values. Three models are presented: the autoregressive integrated moving average (ARIMA) mathematical model, an artificial neural network (ANN) model, and a support vector machine (SVM) model. These three models are used for forecasting monthly reference crop evapotranspiration based on ten years of past historical records (1991-2001) of measured evaporation at the Ganjam region, Odisha, India, without considering climate data. The developed models allow water resource managers to predict up to 12 months ahead, making these predictions very useful for optimizing the resources needed for effective water resources management. In this study, multistep-ahead prediction is performed, which is more complex and troublesome than one-step-ahead prediction. Our investigation suggests that nonlinear relationships may exist among the monthly indices, so that the ARIMA model might not be able to effectively extract the full relationship hidden in the historical data. Support vector machines are potentially helpful time series forecasting strategies on account of their strong nonlinear mapping capability and robustness to complexity in forecasting data. SVMs have greater learning capability in time series modelling compared to ANNs; for instance, SVMs implement the structural risk minimization principle, which allows better generalization compared to neural networks that use the empirical risk minimization principle.
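A lag-1 autoregression is about the simplest member of the ARIMA family compared in the abstract. A sketch of fitting one and producing multistep-ahead forecasts, where each prediction is fed back as the next input (synthetic noise-free data, purely illustrative):

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b, a lag-1 autoregression:
    a far simpler stand-in for the ARIMA/ANN/SVM models in the text."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    """Multistep-ahead forecast: feed each prediction back as the input
    for the next step (errors therefore compound with the horizon)."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = a * x + b
        out.append(x)
    return out

# Exact AR(1) data x[t] = 0.8 * x[t-1] + 1, so the fit should recover a, b
data = [10.0]
for _ in range(20):
    data.append(0.8 * data[-1] + 1.0)
a, b = fit_ar1(data)
preds = forecast(data, 3, a, b)
```

The compounding of fed-back predictions is precisely why 12-month-ahead ET0 forecasting is described as more troublesome than one-step-ahead, and why models with better generalization (such as SVMs) tend to degrade more slowly over the horizon.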
Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization
Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton
As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real-time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.
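The core of an event-based simulator like SIMPAR can be sketched with a priority queue of time-stamped events; the event names and latencies below are illustrative assumptions, not SIMPAR's actual API:

```python
import heapq

def simulate(events, handlers):
    """Tiny discrete-event loop: events are (time, name) pairs kept in a
    min-heap; processing an event may schedule further events, so
    computation and communication delays accumulate on the time stamps."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, name = heapq.heappop(queue)   # always the earliest event
        log.append((time, name))
        for new_event in handlers.get(name, lambda t: [])(time):
            heapq.heappush(queue, new_event)
    return log

# A "compute" task finishes 3 time units after it starts, then its result
# crosses the WAN with 1.5 units of simulated latency.
handlers = {
    "start_compute": lambda t: [(t + 3.0, "send_result")],
    "send_result": lambda t: [(t + 1.5, "receive_result")],
}
log = simulate([(0.0, "start_compute")], handlers)
```

Sweeping the simulated WAN latency against the compute time in such a loop is exactly how one identifies which parallel applications stay profitable over the WAN and which are swamped by communication overhead.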
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
Mirkin, Katelin A; Greenleaf, Erin K; Hollenbeak, Christopher S; Wong, Joyce
2018-05-01
Pancreatic surgery encompasses complex operations with significant potential morbidity. Greater experience in minimally invasive surgery (MIS) has allowed resections to be performed laparoscopically and robotically. This study evaluates the impact of surgical approach in resected pancreatic cancer. The National Cancer Data Base (2010-2012) was reviewed for patients with stages 1-3 resected pancreatic carcinoma. Open approaches were compared to MIS. A sub-analysis was then performed comparing robotic and laparoscopic approaches. Of the 9047 patients evaluated, the surgical approach was open in 7511 (83%), laparoscopic in 992 (11%), and robotic in 131 (1%). The laparoscopic and robotic conversion rates to open were 28% (n = 387) and 17% (n = 26), respectively. Compared to open, MIS was associated with more distal resections (13.5 and 24.3%, respectively) and offered significantly shorter LOS for all resection types. Multivariate analysis demonstrated no survival benefit for any MIS approach relative to open (all p > 0.05). When adjusted for patient, disease, and treatment characteristics, time to chemotherapy (TTC) was not an independent prognostic factor (HR 1.09, p = 0.084). MIS appears to offer comparable surgical oncologic benefit with improved LOS and shorter TTC. This effect, however, was not associated with improved survival.
5 CFR 831.703 - Computation of annuities for part-time service.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computation of annuities for part-time... part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...
Storm blueprints patterns for distributed real-time computation
Goetz, P Taylor
2014-01-01
A blueprints book with 10 different projects built in 10 different chapters, demonstrating the various use cases of Storm for both beginner and intermediate users, grounded in real-world example applications. Although the book focuses primarily on Java development with Storm, the patterns are more broadly applicable, and the tips, techniques, and approaches described in the book apply to architects, developers, and operations. Additionally, the book should provoke and inspire applications of distributed computing in other industries and domains. Hadoop enthusiasts will also find this book a go-to guide.
Macroprocessing is the computing design principle for the times
2001-01-01
In a keynote speech, Intel Corporation CEO Craig Barrett emphasized that "macroprocessing" provides innovative and cost effective solutions to companies that they can customize and scale to match their own data needs. Barrett showcased examples of macroprocessing implementations from business, government and the scientific community, which use the power of Intel Architecture and Oracle9i Real Application Clusters to build large complex and scalable database solutions. A testimonial from CERN explained how the need for high performance computing to perform scientific research on sub-atomic particles was accomplished by using clusters of Xeon processor-based servers.
Real-time exposure fusion on a mobile computer
CSIR Research Space (South Africa)
Bachoo, AK
2009-12-01
Full Text Available ... information in these scenarios. An image captured using a short exposure time will not saturate bright image regions, while an image captured with a long exposure time will show more detail in the dark regions. The pixel depth provided by most cameras.... The auto exposure also creates strong blown-out highlights in the foreground (the grass patch). The short shutter time (Exposure 1) correctly exposes the grass while the long shutter time (Exposure 3) is able to correctly expose the camouflaged dummy...
Computer-determined assay time based on preset precision
International Nuclear Information System (INIS)
Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.
1994-01-01
Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time, which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short time sample taken at the beginning of the assay can optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory, with the assay time for each segment determined by the counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity.
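The core of a precision-based assay time can be sketched with Poisson counting statistics: the relative standard deviation of N gross counts is 1/sqrt(N), so a short pre-count fixes the rate and the preset precision fixes the required counts. This is a simplified illustration (gross counts only, no background subtraction or dead-time correction), not the Los Alamos algorithm itself.

```python
def required_assay_time(precount_counts, precount_time, target_rel_precision):
    """Estimate the total count time needed so the Poisson relative
    standard deviation 1/sqrt(N) falls to the preset precision.
    Simplified sketch: gross counts only, no background term."""
    rate = precount_counts / precount_time        # counts per second
    n_required = 1.0 / target_rel_precision ** 2  # since rel. sigma = 1/sqrt(N)
    return n_required / rate

# 1000 counts in a 10 s pre-count, 1% target precision (assumed numbers):
t_assay = required_assay_time(1000, 10.0, 0.01)
```

This also shows why low-activity segments can be abandoned early: a very low pre-count rate yields an impractically long required time, flagging the segment as below the measurement range.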
Directory of Open Access Journals (Sweden)
Peter Schattner
2015-02-01
Full Text Available Background: Minimally disruptive medicine (MDM) is proposed as a method for more appropriately managing people with multiple chronic diseases. Much clinical management is currently single-disease focussed, with people with multimorbidity being managed according to multiple single-disease guidelines. Current initiatives to improve care include education about individual conditions and creating an environment where multiple guidelines might be simultaneously supported. The patient-centred medical home (PCMH) is an example of the latter. However, educational programmes and the PCMH may increase the burden on patients. Problem: The cumulative workload for patients in managing the impact of multiple disease-specific guidelines has only relatively recently been recognised. There is an intellectual vacuum as to how best to manage multimorbidity and how informatics might support implementing MDM. There is currently no alternative to multiple single-condition-specific guidelines, and a lack of certainty, should the treatment burden need to be reduced, as to which guideline might be ‘dropped’. Action: The best information about multimorbidity is recorded in primary care computerised medical record (CMR) systems and in an increasing number of integrated care organisations. CMR systems have the potential to flag individuals who might be in greatest need. However, CMR systems may also provide insights into whether there are ameliorating factors that might make it easier for patients to be resilient to the burden of care. Data from such CMR systems might be used to develop the evidence base about how to better manage multimorbidity. Conclusions: There is potential for these information systems to help reduce the management burden on patients and clinicians. However, substantial investment in research-driven CMR development is needed if we are to achieve this.
Cassetta, Michele; Altieri, Federica; Pandolfi, Stefano; Giansanti, Matteo
2017-03-01
The aim of this case report was to describe an innovative orthodontic treatment method that combined surgical and orthodontic techniques. The novel method was used to achieve a positive result in a case of moderate crowding by employing a computer-guided piezocision procedure followed by the use of clear aligners. A 23-year-old woman had a malocclusion with moderate crowding. Her periodontal indices, oral health-related quality of life (OHRQoL), and treatment time were evaluated. The treatment included interproximal corticotomy cuts extending through the entire thickness of the cortical layer, without a full-thickness flap reflection. This was achieved with a three-dimensionally printed surgical guide using computer-aided design and computer-aided manufacturing. Orthodontic force was applied to the teeth immediately after surgery by using clear appliances for better control of tooth movement. The total treatment time was 8 months. The periodontal indices improved after crowding correction, but the oral health impact profile showed a slight deterioration of OHRQoL during the 3 days following surgery. At the 2-year retention follow-up, the stability of treatment was excellent. The reduction in surgical time and patient discomfort, increased periodontal safety and patient acceptability, and accurate control of orthodontic movement without the risk of losing anchorage may encourage the use of this combined technique in appropriate cases.
Computer-controlled neutron time-of-flight spectrometer. Part II
International Nuclear Information System (INIS)
Merriman, S.H.
1979-12-01
A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system
DEFF Research Database (Denmark)
David, Alexandre; Håkansson, John; G. Larsen, Kim
In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone and thus requires splitting. It is of prime importance to reduce...
International Nuclear Information System (INIS)
Lee, J. I.; Lee, T. Y.; Jang, S. Y.; Lee, J. K.
2003-01-01
The Committed Effective Doses (CEDs) per measured unit of activity in the bioassay compartments at any time (t) after an acute intake by inhalation of a radionuclide with a different particle size (AMAD) were calculated and compared. The results show that the relative difference between the CEDs evaluated for different AMADs is affected by the radionuclide, the bioassay compartment, and the time (t) after intake. Therefore, a special monitoring time that excludes or reduces the effect of the AMAD was determined and is presented for the evaluation of CEDs following an acute intake by inhalation of a radionuclide. If special monitoring is performed during this special time after intake, the relative difference in the evaluated CEDs resulting from the AMAD can be excluded or reduced.
Directory of Open Access Journals (Sweden)
Hung-Keng Li
2015-01-01
Conclusion: SRF is more sensitive for postoperative follow-up than eGFR. Longer warm ischemia time is associated with poorer postoperative renal function. RPN is a safe and feasible alternative to LPN.
Beattie, A J; Oliver, I
1994-12-01
Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.
Computation and evaluation of scheduled waiting time for railway networks
DEFF Research Database (Denmark)
Landex, Alex
2010-01-01
Timetables are affected by scheduled waiting time (SWT), which prolongs the travel times for trains and thereby passengers. SWT occurs when a train hinders another train from running at the wanted speed. The SWT affects both the trains and the passengers in the trains. The passengers may be further affected due to longer transfer times to other trains. SWT can be estimated analytically for a given timetable or by simulation of timetables and/or plans of operation. The simulation of SWT has the benefit that it is possible to examine the entire network. This makes it possible to improve the future...
A computer program for the estimation of time of death
DEFF Research Database (Denmark)
Lynnerup, N
1993-01-01
In the 1960s Marshall and Hoare presented a "Standard Cooling Curve" based on their mathematical analyses of the postmortem cooling of bodies. Although fairly accurate under standard conditions, the "curve" or formula is based on the assumption that the ambience temperature is constant ... cooling of bodies is presented. It is proposed that by having a computer program that solves the equation, giving the length of the cooling period in response to a certain rectal temperature, and which allows easy comparison of multiple solutions, the uncertainties related to ambience temperature...
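The inversion this abstract describes (given a rectal temperature, solve a cooling formula for elapsed time) can be sketched with a single-exponential Newtonian cooling law. This is a deliberate simplification: Marshall and Hoare's actual model is a double exponential with a post-mortem plateau, and the cooling constant k and initial body temperature t0 below are assumed illustrative values, not forensic standards.

```python
import math

def hours_since_death(rectal_temp, ambient_temp, k=0.06, t0=37.2):
    """Invert T(t) = Ta + (T0 - Ta) * exp(-k * t) for t (hours).
    Single-exponential (Newtonian) stand-in for the Marshall-Hoare
    double-exponential model; k and t0 are assumed values."""
    ratio = (rectal_temp - ambient_temp) / (t0 - ambient_temp)
    return -math.log(ratio) / k
```

Because k is uncertain and the real ambient temperature varies, a program like the one proposed would compare multiple solutions across a range of k and ambient-temperature assumptions rather than trust a single inversion.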
42 CFR 93.509 - Computation of time.
2010-10-01
... holiday observed by the Federal government, in which case it includes the next business day. (b) When the... required or authorized under the rules in this part to be filed for good cause shown. When time permits...
Computational complexity of time-dependent density functional theory
International Nuclear Information System (INIS)
Whitfield, J D; Yung, M-H; Tempel, D G; Aspuru-Guzik, A; Boixo, S
2014-01-01
Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn–Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn–Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn-Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn–Sham potential with controllable error bounds. (paper)
Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir
2016-12-01
The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms applied to the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with an SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.
Aparecida de Oliveira, Maria; Abeid Ribeiro, Eliana Guimarães; Morato Bergamini, Alzira Maria; Pereira De Martinis, Elaine Cristina
2010-02-01
Modern lifestyle has markedly changed eating habits worldwide, with an increasing demand for ready-to-eat foods such as minimally processed fruits and leafy greens. Packaging and storage conditions of those products may favor the growth of psychrotrophic bacteria, including the pathogen Listeria monocytogenes. In this work, minimally processed leafy vegetable samples (n = 162) from the retail market of Ribeirão Preto, São Paulo, Brazil, were tested for the presence or absence of Listeria spp. by the immunoassay Listeria Rapid Test (Oxoid). Two L. monocytogenes-positive and six artificially contaminated samples of minimally processed leafy vegetables were evaluated by the Most Probable Number (MPN) technique, with detection by the classical culture method and also by the culture method combined with real-time PCR (RTi-PCR) for 16S rRNA genes of L. monocytogenes. Positive MPN enrichment tubes were analyzed by RTi-PCR with primers specific for L. monocytogenes using the commercial preparation ABSOLUTE QPCR SYBR Green Mix (ABgene, UK). The real-time PCR assay presented good exclusivity and inclusivity results, and no statistically significant difference was found in comparison with the conventional culture method (p < 0.05). Moreover, RTi-PCR was fast and easy to perform, with MPN results obtained in ca. 48 h for RTi-PCR compared to 7 days for the conventional method.
Directory of Open Access Journals (Sweden)
Rinto Yusriski
2015-09-01
Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effects of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first is developed using the Integer Composition method, and it produces an optimal solution. Since the first algorithm solves the problem with a worst-case time complexity of O(n·2^(n-1)), this research proposes the second algorithm, a heuristic based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
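The objective function in this abstract can be made concrete with a small sketch: batches run back-to-back so the last one finishes exactly at the common due date, and each part is assumed to arrive when its batch starts processing. The learning/forgetting effects on processing time and the optimization itself are omitted; this only evaluates the objective for a given batch sequence.

```python
def total_actual_flow_time(batch_sizes, unit_time, setup, due_date):
    """Total actual flow time for a batch sequence scheduled backward
    from the common due date: sum over batches of (batch size) times
    (due date minus batch start), assuming parts arrive at batch start.
    Position-dependent (learning/forgetting) processing times omitted."""
    finish = due_date
    taft = 0.0
    for q in batch_sizes:  # batches in processing order, last ends at due date
        start = finish - (setup + q * unit_time)
        taft += q * (due_date - start)
        finish = start     # previous batch must finish when this one starts
    return taft

# Two batches of sizes 2 and 1, unit time 1, no setup, due date 10 (assumed):
taft = total_actual_flow_time([2, 1], 1.0, 0.0, 10.0)
```

Note the trade-off the decision variables capture: many small batches shorten each part's wait but add setup time, while few large batches do the opposite.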
Directory of Open Access Journals (Sweden)
Qi Li
Full Text Available The vast majority of decision-making research is performed under the assumption of the value-maximizing principle. This principle implies that when making decisions, individuals try to optimize outcomes on the basis of cold mathematical equations. However, decisions are emotion-laden rather than cool and analytic when they tap into life-threatening considerations. Using functional magnetic resonance imaging (fMRI), this study investigated the neural mechanisms underlying vital loss decisions. Participants were asked to make a forced choice between two losses across three conditions: both losses are trivial (trivial-trivial), both losses are vital (vital-vital), or one loss is trivial and the other is vital (vital-trivial). Our results revealed that the amygdala was more active and correlated positively with self-reported negative emotion associated with choice during vital-vital loss decisions, when compared to trivial-trivial loss decisions. The rostral anterior cingulate cortex was also more active and correlated positively with self-reported difficulty of choice during vital-vital loss decisions. Compared to the activity observed during trivial-trivial loss decisions, the orbitofrontal cortex and ventral striatum were more active and correlated positively with self-reported positive emotion of choice during vital-trivial loss decisions. Our findings suggest that vital loss decisions involve emotions and cannot be adequately captured by cold computation of minimizing losses. This research will shed light on how people make vital loss decisions.
Newmark local time stepping on high-performance computing architectures
Rietmann, Max
2016-11-25
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
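The CFL-driven step-size disparity that motivates LTS can be sketched as follows: each element gets its stable step dt = CFL * h / c, and refined regions are assigned power-of-two sub-stepping levels relative to the global step, mirroring the multilevel structure of LTS schemes. The element sizes, wave speed, and CFL number are invented for illustration; this is not the paper's LTS-Newmark scheme.

```python
def lts_levels(element_sizes, wave_speed, cfl=0.5):
    """Assign each element a power-of-two local time-step level k:
    the coarsest elements advance with the global step dt_max, and an
    element at level k takes 2**k sub-steps of size dt_max / 2**k,
    the smallest such step that satisfies its own CFL limit."""
    dt = [cfl * h / wave_speed for h in element_sizes]
    dt_max = max(dt)
    levels = []
    for d in dt:
        k = 0
        while dt_max / (2 ** k) > d:   # halve until stable for this element
            k += 1
        levels.append(k)
    return levels

# One coarse element and two locally refined ones (sizes assumed):
levels = lts_levels([1.0, 0.5, 0.24], 1.0)
```

A single tiny element would otherwise force this smallest step on the entire mesh, which is exactly the global-step penalty the abstract says LTS avoids.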
Newmark local time stepping on high-performance computing architectures
Energy Technology Data Exchange (ETDEWEB)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)
2017-04-01
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Newmark local time stepping on high-performance computing architectures
Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf
2016-01-01
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Invariant set computation for constrained uncertain discrete-time systems
Athanasopoulos, N.; Bitsoris, G.
2010-01-01
In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the
10 CFR 13.27 - Computation of time.
2010-01-01
...; and (2) By 11:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49153, Aug. 28... the calculation of additional days when a participant is not entitled to receive an entire filing... same filing and service method, the number of days for service will be determined by the presiding...
10 CFR 2.306 - Computation of time.
2010-01-01
...:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49151, Aug. 28, 2007] ... the calculation of additional days when a participant is not entitled to receive an entire filing... filing and service method, the number of days for service will be determined by the presiding officer...
Real time operating system for a nuclear power plant computer
International Nuclear Information System (INIS)
Alger, L.S.; Lala, J.H.
1986-01-01
A quadruply redundant synchronous fault tolerant processor (FTP) is now under fabrication at the C.S. Draper Laboratory to be used initially as a trip monitor for the Experimental Breeder Reactor EBR-II operated by the Argonne National Laboratory in Idaho Falls, Idaho. The real time operating system for this processor is described
The reliable solution and computation time of variable parameters Logistic model
Pengfei, Wang; Xinnong, Pan
2016-01-01
The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
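The notion of a reliable computation time Tc can be illustrated by iterating the logistic map in double precision alongside a much higher-precision reference and recording when the two trajectories separate. This is a simplified stand-in for the paper's method: a fixed-parameter map with a 60-digit decimal reference, not the variable-parameters (VPLM) construction or its 10000-sample experiment; r, the tolerance, and the precision are assumed values.

```python
from decimal import Decimal, getcontext

def reliable_steps(tol=1e-6, max_iter=100):
    """Iterate x -> r*x*(1-x) in double precision and, in parallel,
    with 60-digit decimal arithmetic as a reference; return the step
    at which the two trajectories first differ by more than tol."""
    getcontext().prec = 60
    r_f, x_f = 3.99, 0.1
    r_d, x_d = Decimal("3.99"), Decimal("0.1")
    for n in range(max_iter):
        if abs(x_f - float(x_d)) > tol:
            return n
        x_f = r_f * x_f * (1.0 - x_f)   # double-precision trajectory
        x_d = r_d * x_d * (1 - x_d)     # high-precision reference
    return max_iter

tc = reliable_steps()
```

Because the map is chaotic, the initial rounding gap of about 1e-16 grows roughly exponentially, so the double-precision trajectory stops being reliable after a few dozen iterations regardless of how the rounding is done.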
22 CFR 1429.21 - Computation of time for filing papers.
2010-04-01
... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Computation of time for filing papers. 1429.21... MISCELLANEOUS AND GENERAL REQUIREMENTS General Requirements § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...
Time-ordered product expansions for computational stochastic system biology
International Nuclear Information System (INIS)
Mjolsness, Eric
2013-01-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie’s stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems. (paper)
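The SSA that the paper re-derives from time-ordered products is straightforward to state for the simplest network, a single decay reaction A -> B with propensity k*n: draw an exponential waiting time from the total propensity, then fire the reaction. The rate, horizon, and initial count below are illustrative assumptions.

```python
import math
import random

def ssa_decay(n0=100, k=1.0, t_end=5.0, seed=1):
    """Gillespie's direct-method SSA for the single reaction A -> B.
    Returns the number of A molecules remaining at time t_end."""
    random.seed(seed)
    t, n = 0.0, n0
    while n > 0:
        a = k * n                                  # total propensity
        tau = -math.log(1.0 - random.random()) / a # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n -= 1                                     # fire A -> B
    return n

remaining = ssa_decay()
```

With several reaction channels, the direct method adds one more step (choosing the channel with probability proportional to its propensity); each simulated trajectory corresponds to one term in the time-ordered product expansion, which is what licenses the Feynman-diagram reading in the abstract.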
Wake force computation in the time domain for long structures
International Nuclear Information System (INIS)
Bane, K.; Weiland, T.
1983-07-01
One is often interested in calculating the wake potentials for short bunches in long structures using TBCI. For ultra-relativistic particles it is sufficient to solve for the fields only over a window containing the bunch and moving along with it. This technique reduces both the memory and the running time required by a factor that equals the ratio of the structure length to the window length. For example, for a bunch with σ_z of one picosecond traversing a single SLAC cell this improvement factor is 15. It is thus possible to solve for the wakefields in very long structures: for a given problem, increasing the structure length will not change the memory required while only adding linearly to the CPU time needed
HOPE: Just-in-time Python compiler for astrophysical computations
Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre
2014-11-01
HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
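The decorator-based workflow the abstract describes can be sketched with a toy in pure Python. This is emphatically not HOPE's implementation or API: the real package translates the function body to C++ at first call, whereas here the "build" is only simulated by caching the Python callable and counting compilations.

```python
import math

def jit(fn):
    """Toy sketch of a lazy JIT decorator: the wrapped function is left
    untouched until its first call, at which point a 'compiled' version
    is produced once and cached. The C++ translation HOPE performs is
    replaced here by a counter and the original Python callable."""
    state = {"compiled": None, "builds": 0}
    def wrapper(*args, **kwargs):
        if state["compiled"] is None:
            state["builds"] += 1       # stands in for the compile step
            state["compiled"] = fn     # real JIT: load the built module
        return state["compiled"](*args, **kwargs)
    wrapper.state = state
    return wrapper

@jit
def norm2(x, y):
    return math.sqrt(x * x + y * y)
```

The design point the sketch preserves is that compilation cost is paid once, on first call, so repeated numerical calls amortize it; the user-facing change is a single decorator line.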
Computational micromagnetics: prediction of time dependent and thermal properties
International Nuclear Information System (INIS)
Schrefl, T.; Scholz, W.; Suess, Dieter; Fidler, J.
2001-01-01
Finite element modeling treats magnetization processes on a length scale of several nanometers and thus gives a quantitative correlation between the microstructure and the magnetic properties of ferromagnetic materials. This work presents a novel finite element/boundary element micromagnetics solver that combines a wavelet-based matrix compression technique for magnetostatic field calculations with a BDF/GMRES method for the time integration of the Gilbert equation of motion. The simulations show that metastable energy minima and nonuniform magnetic states within the grains are important factors in the reversal dynamics at finite temperature. The numerical solution of the Gilbert equation shows how reversed domains nucleate and expand. The switching time of submicron magnetic elements depends on the shape of the elements. Elements with slanted ends decrease the overall reversal time, as a transverse demagnetizing field suppresses oscillations of the magnetization. Thermally activated processes can be included by adding a random thermal field to the effective magnetic field. Thermally assisted reversal was studied for CoCrPtTa thin-film media
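The Gilbert equation of motion the solver integrates can be illustrated for a single macrospin in a constant field. This is a deliberately minimal sketch: explicit Euler with renormalization instead of the paper's BDF/GMRES integrator, no exchange, anisotropy, or magnetostatic terms, no thermal field, and the damping constant, field, and step size are assumed values.

```python
import numpy as np

def llg_relax(steps=5000, dt=0.01, alpha=0.5, gamma=1.0):
    """Explicit-Euler integration of the Landau-Lifshitz-Gilbert equation
    dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H))
    for a single unit spin in a constant field H along z. The precession
    term circles m around H; the damping term relaxes m toward H."""
    m = np.array([1.0, 0.0, 0.0])
    H = np.array([0.0, 0.0, 1.0])
    pref = -gamma / (1.0 + alpha ** 2)
    for _ in range(steps):
        mxH = np.cross(m, H)
        dm = pref * (mxH + alpha * np.cross(m, mxH))
        m = m + dt * dm
        m /= np.linalg.norm(m)   # keep |m| = 1 after each Euler step
    return m

m_final = llg_relax()
```

Damped precession toward the effective field is the elementary motion underlying the domain nucleation and expansion described in the abstract; the thermal extension mentioned there would add a random field term to H at each step.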
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2016-02-01
In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange Relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.
Choi, Won Jung; Moon, Jin-Hee; Min, Jae Seok; Song, Yong Keun; Lee, Seung A; Ahn, Jin Woo; Lee, Sang Hun; Jung, Ha Chul
2018-03-01
During minimally invasive surgery (MIS), it is impossible to directly detect marked clips around tumors via palpation. Therefore, we developed a novel method and device using Radio Frequency IDentification (RFID) technology to detect the position of clips during minimally invasive gastrectomy or colectomy. The feasibility of the RFID-based detection system was evaluated in an animal experiment consisting of seven swine. The primary outcome was to successfully detect the location of RFID clips in the stomach and colon. The secondary outcome measures were detection time (the time required for intracorporeal detection of the RFID clip) and accuracy (the distance between the RFID clip and the detected site). A total of 25 detection attempts (14 in the stomach and 11 in the colon) using the RFID antenna had a 100% success rate. The median detection time was 32.5 s (range, 15-119 s) for the stomach and 28.0 s (range, 8-87 s) for the colon. The median detection distance was 6.5 mm (range, 4-18 mm) for the stomach and 6.0 mm (range, 3-13 mm) for the colon. We demonstrated favorable results for an RFID system that detects the position of gastric and colon tumors in real time during MIS. © 2017 Wiley Periodicals, Inc.
Müller, Friedrich; Schenk, Henning C; Forterre, Franck
2017-04-01
To determine the effects of a minimally invasive transilial vertebral (MTV) blocking procedure on the computed tomographic (CT) appearance of the lumbosacral (L7/S1) junction of dogs with degenerative lumbosacral stenosis (DLSS). Prospective study. 59 client-owned dogs with DLSS. Lumbosacral CT images were acquired with hyperextended pelvic limbs before and after MTV in all dogs. Clinical follow-up was obtained after 1 year, including a neurologic status classified in 4 grades, and if possible, CT. Morphometric measurements (mean ± SEM) including foraminal area, endplate distance at L7/S1, and LS angle were obtained on sets of reformatted parasagittal and sagittal CT images. The mean foraminal area (ForL) increased from 32.5 ± 1.7 mm² to 59.7 ± 1.9 mm² on the left and from 31.1 ± 1.4 mm² to 59.1 ± 2.0 mm² on the right (ForR) side after MTV. The mean endplate distance (EDmd) between L7/S1 increased from 3.7 ± 0.1 mm to 6.0 ± 0.1 mm, and the mean lumbosacral angle (LSa) from 148.0 ± 1.1° to 170.0 ± 1.1° after MTV. CT measurements were available 1 year postoperatively in 12 cases: ForL: 41.2 ± 3.1 mm²; ForR: 37.9 ± 3.1 mm²; EDmd: 4.3 ± 0.4 mm; and LSa: 157.6 ± 2.1° (values are mean ± SEM). All 39 dogs with long-term follow-up improved by at least 1 neurologic grade, 9/39 improving by 3 grades, 15/39 by 2 grades, and 15/39 by 1 grade. MTV results in clinical improvement and morphometric enlargement of the foraminal area in dogs with variable degrees of foraminal stenosis. MTV may be a valuable minimally invasive option for treatment of dogs with DLSS. © 2017 The American College of Veterinary Surgeons.
Directory of Open Access Journals (Sweden)
Anderson Geoff
2009-06-01
Full Text Available Abstract Background Rigorous evaluation of an intervention requires that its allocation be unbiased with respect to confounders; this is especially difficult in complex, system-wide healthcare interventions. We developed a short survey instrument to identify factors for a minimization algorithm for the allocation of a hospital-level intervention to reduce emergency department (ED) waiting times in Ontario, Canada. Methods Potential confounders influencing the intervention's success were identified by literature review, and grouped by healthcare-setting-specific change stages. An international multi-disciplinary (clinical, administrative, decision maker, management) panel evaluated these factors in a two-stage modified-Delphi and nominal group process based on four domains: change readiness, evidence base, face validity, and clarity of definition. Results An original set of 33 factors was identified from the literature. The panel reduced the list to 12 in the first round survey. In the second survey, experts scored each factor according to the four domains; summary scores and consensus discussion resulted in the final selection and measurement of four hospital-level factors to be used in the minimization algorithm: improved patient flow as a hospital's leadership priority; physicians' receptiveness to organizational change; efficiency of bed management; and physician incentives supporting the change goal. Conclusion We developed a simple tool designed to gather data from senior hospital administrators on factors likely to affect the success of a hospital patient flow improvement intervention. A minimization algorithm will ensure balanced allocation of the intervention with respect to these factors in study hospitals.
Leaver, Chad Andrew; Guttmann, Astrid; Zwarenstein, Merrick; Rowe, Brian H; Anderson, Geoff; Stukel, Therese; Golden, Brian; Bell, Robert; Morra, Dante; Abrams, Howard; Schull, Michael J
2009-06-08
Rigorous evaluation of an intervention requires that its allocation be unbiased with respect to confounders; this is especially difficult in complex, system-wide healthcare interventions. We developed a short survey instrument to identify factors for a minimization algorithm for the allocation of a hospital-level intervention to reduce emergency department (ED) waiting times in Ontario, Canada. Potential confounders influencing the intervention's success were identified by literature review, and grouped by healthcare-setting-specific change stages. An international multi-disciplinary (clinical, administrative, decision maker, management) panel evaluated these factors in a two-stage modified-Delphi and nominal group process based on four domains: change readiness, evidence base, face validity, and clarity of definition. An original set of 33 factors was identified from the literature. The panel reduced the list to 12 in the first round survey. In the second survey, experts scored each factor according to the four domains; summary scores and consensus discussion resulted in the final selection and measurement of four hospital-level factors to be used in the minimization algorithm: improved patient flow as a hospital's leadership priority; physicians' receptiveness to organizational change; efficiency of bed management; and physician incentives supporting the change goal. We developed a simple tool designed to gather data from senior hospital administrators on factors likely to affect the success of a hospital patient flow improvement intervention. A minimization algorithm will ensure balanced allocation of the intervention with respect to these factors in study hospitals.
Applications of parallel computer architectures to the real-time simulation of nuclear power systems
International Nuclear Information System (INIS)
Doster, J.M.; Sills, E.D.
1988-01-01
In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Research efforts are currently under way in this area.
Cepeda Rubio, M. F. J.; Leija, L.
2018-01-01
Microwave ablation (MWA) by using coaxial antennas is a promising alternative for breast cancer treatment. A double short distance slot coaxial antenna, a newly optimized applicator for minimally invasive treatment of breast cancer, is proposed. To validate and analyze the feasibility of using this method in clinical treatment, a computational model, phantom experimentation, and in vivo swine breast experimentation were carried out, using four microwave powers (50 W, 30 W, 20 W, and 10 W). The finite element method (FEM) was used to develop the computational model. Phantom experimentation was carried out in breast phantom. The in vivo experimentation was carried out in a 90 kg swine sow. Tissue damage was estimated by comparing control and treated micrographs of the porcine mammary gland samples. The coaxial slot antenna was inserted in swine breast glands under image-guided ultrasound. In all cases (modeling, phantom, and in vivo experimentation), ablation temperatures (above 60°C) were reached. The in vivo experiments suggest that this new MWA applicator could be successfully used to eliminate precise and small areas of tissue (around 20–30 mm²). By modulating the power and time applied, it may be possible to increase or decrease the ablation area. PMID:29854360
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
International Nuclear Information System (INIS)
Hosomichi, Kazuo
2008-01-01
We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.
Reilly, J.; Abdel-Jaber, H.; Yarnold, M.; Glisic, B.
2017-04-01
Structural Health Monitoring aims to characterize the performance of a structure from a combination of recorded sensor data and analytic techniques. Many methods are concerned with quantifying the elastic response of the structure, treating temperature changes as noise in the analysis. While these elastic profiles do capture a portion of structural behavior, thermal loads on a structure can induce strains comparable to those from elastic loads. Understanding the relationship between the temperature of the structure and the resultant strain and displacement can provide in-depth knowledge of the structural condition. A necessary parameter for this form of analysis is the Coefficient of Thermal Expansion (CTE). The CTE of a material relates the amount of expansion or contraction the material undergoes per degree change in temperature, and can be determined from the temperature-strain relationship provided that the thermal strain can be isolated. With concrete, the actual in situ expansion with temperature often deviates from tabulated CTE values because thermally generated elastic strain is superimposed, which complicates evaluation of the CTE. To accurately characterize the relationship between temperature and strain on a structure, the actual thermal behavior of the structure needs to be analyzed. This behavior can vary for different parts of a structure, depending on boundary conditions. For unrestrained structures, the strain should be linearly related to the temperature change. Thermal gradients in a structure can affect this relationship, as they induce curvature and deplanations in the cross section. This paper proposes a method that addresses these challenges in evaluating the CTE.
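For the unrestrained case described above, where strain is linear in temperature, the CTE is simply the least-squares slope of the strain-temperature relation. A minimal sketch on synthetic data (the CTE value and sensor noise level are assumptions, not figures from the paper):

```python
import numpy as np

def estimate_cte(temperature, strain):
    """Least-squares slope of strain vs. temperature; for a thermally
    unrestrained element this slope is the effective CTE."""
    A = np.vstack([temperature, np.ones_like(temperature)]).T
    slope, intercept = np.linalg.lstsq(A, strain, rcond=None)[0]
    return slope

# synthetic unrestrained concrete element: CTE = 10e-6 / degC plus sensor noise
rng = np.random.default_rng(0)
T = np.linspace(-5, 30, 200)                       # temperature readings, degC
eps = 10e-6 * T + rng.normal(0, 1e-6, T.size)      # strain readings
print(estimate_cte(T, eps))
```

In a restrained or thermally graded structure this slope mixes thermal and thermally generated elastic strain, which is exactly the complication the paper's method is designed to address.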
Computation of a long-time evolution in a Schroedinger system
International Nuclear Information System (INIS)
Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.
1988-01-01
We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution
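The short-time-plus-squaring idea can be sketched for a toy Hamiltonian. Here a (1,1) Padé (Cayley) approximant stands in for the paper's Padé approximants, and the 2x2 matrix is purely illustrative:

```python
import numpy as np
from scipy.linalg import expm, norm

def evolve(H, t, n_squarings=20):
    """U(t) = exp(-i H t): build the propagator for the short time
    t / 2**n_squarings with a (1,1) Pade (Cayley) approximant, then
    reach time t by iterative squaring."""
    dt = t / 2**n_squarings
    A = -1j * H * (dt / 2)
    I = np.eye(len(H))
    U = np.linalg.solve(I - A, I + A)   # Pade(1,1); exactly unitary for Hermitian H
    for _ in range(n_squarings):
        U = U @ U                        # doubles the evolved time each pass
    return U

H = np.array([[1.0, 0.3], [0.3, -0.5]])  # toy 2x2 Hermitian Hamiltonian
U = evolve(H, t=50.0)
print(norm(U - expm(-1j * H * 50.0)))    # residual against a direct matrix exponential
```

Because the Cayley form is exactly unitary for Hermitian H, the long-time propagator stays unitary under repeated squaring, which is one reason this route is attractive for long-time evolution.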
Directory of Open Access Journals (Sweden)
Othman M. K. Alsmadi
2015-01-01
Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
2001-01-01
The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...
Multiscale Space-Time Computational Methods for Fluid-Structure Interactions
2015-09-13
Highlights include ST-SI thermo-fluid analysis of a ground vehicle and its tires, ST-SI computational analysis of a vertical-axis wind turbine, multiscale compressible-flow computation with particle tracking, and space-time VMS computation of wind-turbine rotor and tower aerodynamics (Tezduyar, Spenser McIntyre, Nikolay Kostov, Ryan Kolesar, Casey Habluetzel).
Energy Technology Data Exchange (ETDEWEB)
Reisch, F; Vayssier, G
1969-05-15
This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allows, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that
Real Time Animation of Trees Based on BBSC in Computer Games
Directory of Open Access Journals (Sweden)
Xuefeng Ao
2009-01-01
Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D tree models because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance in computer games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, meeting the strict real-time requirements of computer games.
Ubiquitous computing technology for just-in-time motivation of behavior change.
Intille, Stephen S
2004-01-01
This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.
Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment
He, Aiguo
2011-01-01
Computer system is useful for improving real time and interactive distance education activities. Especially in the case that a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…
2013-06-28
... exposed to various forms of cyber attack. In some cases, attacks can be thwarted through the use of...-3383-01] Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response... systems will be successfully attacked. When a successful attack occurs, the job of a Computer Security...
Minimal families of curves on surfaces
Lubbes, Niels
2014-01-01
A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the requirements of real-time performance, reliability, and safety for aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on an analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to computing-intensive aerospace experiment workloads. For I/O-intensive workloads, it is recommended to use the traditional physical machine.
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase I
National Aeronautics and Space Administration — We propose to demonstrate key elements of feasibility for a high speed automated time domain terahertz computed axial tomography (TD-THz CT) non destructive...
Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase II
National Aeronautics and Space Administration — In this Phase 2 project, we propose to develop, construct, and deliver to NASA a computed axial tomography time-domain terahertz (CT TD-THz) non destructive...
Hine, Jeffrey F.; Ardoin, Scott P.; Foster, Tori E.
2015-01-01
Research suggests that students spend a substantial amount of time transitioning between classroom activities, which may reduce time spent academically engaged. This study used an ABAB design to evaluate the effects of a computer-assisted intervention that automated intervention components previously shown to decrease transition times. We examined…
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series
International Nuclear Information System (INIS)
Antonopoulos Domis, M.
1978-03-01
The program CROSAT computes directly from two discrete time series auto- and cross-spectra, transfer and coherence functions, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith, and perhaps with minor modifications, with any other hardware system. (author)
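A modern equivalent of CROSAT's core computation can be sketched with SciPy's Welch-type estimators (CROSAT itself long predates these routines, and the test signals here are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
x = rng.standard_normal(t.size)              # first discrete time series
y = np.roll(x, 5) + 0.3 * rng.standard_normal(t.size)  # delayed, noisy copy

f, Pxx = signal.welch(x, fs=fs, nperseg=256)           # auto-spectrum of x
f, Pxy = signal.csd(x, y, fs=fs, nperseg=256)          # cross-spectrum of x, y
H = Pxy / Pxx                                          # empirical transfer function
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=256)    # magnitude-squared coherence
```

Segment averaging plays the role of CROSAT's optional statistical analysis: it trades frequency resolution for variance reduction in the spectral estimates.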
Gregersen, H; Barlow, J; Thompson, D
1999-04-01
A computer-controlled tensiometer for studying wall tension in tubular organs has been developed. The system consisted of a probe with an inflatable balloon, an impedance planimeter, pressure transducer and amplifier, a pump with RS232 interface and a PC with dedicated software. Circumferential wall tension was computed in real time from pressure and cross-sectional area measurements (tension measurement mode). Wall tension can be maintained on a preset level or be changed as a step or ramp function by a feedback control of the infusion/withdrawal pump (tension control mode). A software regulator adjusted the volume rate (low volume rate when the computed tension was close to the preset value) to minimize overshoot and oscillation. Validation tests were performed and the technique was applied in the human oesophagus. Volume- and tension-controlled balloon distensions elicited secondary peristalsis of increasing intensity that was decreased significantly by the antimuscarinic agent Hyoscine butyl bromide. In tension control mode Hyoscine butyl bromide caused oesophageal relaxation, i.e. CSA to increase and pressure to decay. Furthermore, pronounced pressure relaxation and tension relaxation were observed during volume-controlled distension after administration of Hyoscine butyl bromide.
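A common formalisation of the real-time tension computation is Laplace's law for a thin-walled cylinder, with the lumen radius recovered from the measured cross-sectional area. The paper does not state its exact formula, so this is a sketch under that assumption:

```python
import math

def wall_tension(pressure_kpa, csa_mm2):
    """Circumferential wall tension per unit length (Laplace's law for a
    thin-walled cylinder): T = P * r, in N/m."""
    radius_m = math.sqrt(csa_mm2 * 1e-6 / math.pi)   # radius from measured CSA
    return pressure_kpa * 1e3 * radius_m

def within_setpoint(t_now, t_set, tol=0.05):
    """Feedback check used to decide whether the pump should infuse or
    withdraw: is the computed tension within tolerance of the set point?"""
    return abs(t_now - t_set) <= tol * t_set
```

In tension control mode, a regulator of this kind would drive the infusion/withdrawal pump at a low rate when `within_setpoint` is nearly satisfied, which is how overshoot and oscillation are minimized.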
Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm
Directory of Open Access Journals (Sweden)
Amjad Mahmood
2017-04-01
Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
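A greedy baseline of the kind the paper compares against can be sketched as an earliest-finish-time heuristic. The exact greedy rule in the paper is not specified here, and this sketch ignores precedence constraints and VM heterogeneity for brevity:

```python
def greedy_schedule(tasks, n_vms):
    """Tasks are (id, exec_time, deadline) tuples.  Taken in deadline
    (EDF) order, each task is placed on the virtual machine that can
    finish it soonest.  Returns the schedule as (id, vm, start, finish)
    tuples plus the ids of tasks that miss their deadlines."""
    vm_free = [0.0] * n_vms            # time at which each VM becomes idle
    schedule, missed = [], []
    for tid, exec_time, deadline in sorted(tasks, key=lambda t: t[2]):
        vm = min(range(n_vms), key=lambda i: vm_free[i])
        start = vm_free[vm]
        finish = start + exec_time
        vm_free[vm] = finish
        schedule.append((tid, vm, start, finish))
        if finish > deadline:
            missed.append(tid)
    return schedule, missed

tasks = [("a", 4, 10), ("b", 2, 4), ("c", 3, 6), ("d", 5, 12)]
schedule, missed = greedy_schedule(tasks, n_vms=2)
```

A genetic algorithm improves on such a baseline by searching over task-to-VM assignments and orderings, using deadline misses and cost in its fitness function.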
Image denoising by a direct variational minimization
Directory of Open Access Journals (Sweden)
Pilipović Stevan
2011-01-01
Full Text Available Abstract In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we obtain significantly better denoising results, especially on oscillatory regions.
DEFF Research Database (Denmark)
Lauridsen, M M; Mikkelsen, S; Svensson, T
2017-01-01
Background: Minimal hepatic encephalopathy (MHE) is clinically undetectable and the diagnosis requires psychometric tests. However, a lack of clarity exists as to whether the tests are in fact able to detect changes in cognition. Aim: To examine if the continuous reaction time test (CRT) can detect...... changes in cognition with anti-HE intervention in patients with cirrhosis and without clinically manifest hepatic encephalopathy (HE). Methods: Firstly, we conducted a reproducibility analysis and secondly measured change in CRT induced by anti-HE treatment in a randomized controlled pilot study: We...... stratified 44 patients with liver cirrhosis and without clinically manifest HE according to a normal (n = 22) or abnormal (n = 22) CRT. Each stratum was then block randomized to receive multimodal anti-HE intervention (lactulose+branched-chain amino acids+rifaximin) or triple placebos for 3 months...
Time expenditure in computer aided time studies implemented for highly mechanized forest equipment
Directory of Open Access Journals (Sweden)
Elena Camelia Mușat
2016-06-01
Full Text Available Time studies represent important tools that are used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives with the general aim to predict the performance of operational behavior, choose the most adequate equipment or eliminate the useless time. There is a long tradition in collecting the needed data in a traditional fashion, but this approach has its limitations, and it is likely that in the future the use of professional software would be extended is such preoccupations as this kind of tools have been already implemented. However, little to no information is available in what concerns the performance of data analyzing tasks when using purpose-built professional time studying software in such research preoccupations, while the resources needed to conduct time studies, including here the time may be quite intensive. Our study aimed to model the relations between the variation of time needed to analyze the video-recorded time study data and the variation of some measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements which were separated within a work cycle as well as the delay-free cycle time and the software functionalities that were used during data analysis, significantly affected the time expenditure needed to analyze the data (α=0.01, p<0.01. Under the conditions of this study, where the average duration of a work cycle was of about 48 seconds and the number of separated work elements was of about 14, the speed that was usedto replay the video files significantly affected the mean time expenditure which averaged about 273 seconds for half of the real speed and about 192 seconds for an analyzing speed that equaled the real speed. We argue that different study designs as well as the parameters used within the software are likely to produce
Energy Technology Data Exchange (ETDEWEB)
Cline, M.C.
1981-08-01
VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
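The unsplit MacCormack scheme used for the interior grid points is a two-step predictor-corrector. A minimal one-dimensional sketch for linear advection (illustrating the structure only, not VNAP2's two-dimensional compressible Navier-Stokes solver):

```python
import numpy as np

def maccormack_advect(u, a, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + a*u_x = 0 on a periodic grid."""
    c = a * dt / dx                      # CFL number; stability needs |c| <= 1
    for _ in range(steps):
        # predictor: forward difference
        up = u - c * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted field, then average
        u = 0.5 * (u + up - c * (up - np.roll(up, 1)))
    return u
```

Alternating the difference directions between predictor and corrector is what gives the scheme second-order accuracy in both space and time; on a periodic grid the update also conserves the discrete integral of u exactly.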
M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
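A hedged sketch of the kind of synapse dynamics described above: each input spike steps up a conductance that decays with a synaptic time constant, so charge is injected gradually rather than instantaneously. The constants, names, and the simple integrate-and-fire membrane are illustrative, not the platform's actual SRM implementation.

```python
def simulate_lif(spike_times, tau_m=20e-3, tau_s=5e-3, dt=1e-4,
                 t_end=0.1, w=3.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron driven by an exponential synapse."""
    v, g = 0.0, 0.0
    fired = []
    spike_steps = {round(t / dt) for t in spike_times}
    for i in range(int(t_end / dt)):
        if i in spike_steps:
            g += w                       # presynaptic spike: step in conductance
        dv = (-v + g) * dt / tau_m       # membrane integrates the synaptic drive
        dg = -g * dt / tau_s             # conductance decays -> gradual charge injection
        v, g = v + dv, g + dg
        if v >= v_thresh:                # threshold crossing emits an output spike
            fired.append(i * dt)
            v = 0.0                      # reset
    return fired
```

Because every neuron must update v and g at every time step regardless of input activity, this model maps poorly onto event-driven software simulators, which is the motivation given above for the parallel hardware pipeline.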
Variation in computer time with geometry prescription in Monte Carlo code KENO-IV
International Nuclear Information System (INIS)
Gopalakrishnan, C.R.
1988-01-01
In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but an evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional saving of computing time is highly desirable. The variation in computation time with box description in KENO is studied for two different fast reactor fuel subassemblies, of FBTR and PFBR. The k-eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations for FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the k-eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)
Energy Technology Data Exchange (ETDEWEB)
Kealey, S.M.; Dodd, J.D.; MacEneaney, P.M.; Gibney, R.G.; Malone, D.E. E-mail: d.malone@st-vincents.ie
2004-01-01
AIM: To evaluate the efficacy of minimal preparation computed tomography (MPCT) in diagnosing clinically significant colonic tumours in frail, elderly patients. MATERIALS AND METHODS: A prospective study was performed in a group of consecutively referred, frail, elderly patients with symptoms or signs of anaemia, pain, rectal bleeding or weight loss. The MPCT protocol consisted of 1.5 l Gastrografin 1% diluted with sterile water administered during the 48 h before the procedure with no bowel preparation or administration of intravenous contrast medium. Eight millimetre contiguous scans through the abdomen and pelvis were performed. The scans were double-reported by two gastrointestinal radiologists as showing definite (>90% certain), probable (50-90% certain), possible (<50% certain) neoplasm or normal. Where observers disagreed, the more pessimistic of the two reports was accepted. The gold standard was clinical outcome at 1 year, with positive end-points defined as (1) histological confirmation of colorectal carcinoma (CRC), (2) clinical presentation consistent with CRC without histological confirmation if the patient was too unwell for biopsy/surgery, and (3) death directly attributable to CRC with/without post-mortem confirmation. Negative end-points were defined as patients with no clinical, radiological or post-mortem findings of CRC. Patients were followed for 1 year or until one of the above end-points was met. RESULTS: Seventy-two patients were included (mean age 81; range 62-93). One-year follow-up was completed in 94.4% (n=68). Mortality from all causes was 33% (n=24). Five histologically proven tumours were diagnosed with CT and there were two probable false-negatives. Results were analysed twice: assuming all CT lesions test positive and considering 'possible' lesions test negative [brackets] (95% confidence intervals): sensitivity 0.88 (0.47-1.0) [0.75 (0.35-0.97)], specificity 0.47 (0.34-0.6) [0.87 (0.75-0.94)], positive predictive value 0
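The accuracy figures above come from a standard 2x2 diagnostic table. A small sketch of the point-estimate calculation follows; the counts are hypothetical values chosen only to illustrate the arithmetic (the paper's exact table is not reproduced in the abstract, and its confidence intervals, likely exact binomial, are omitted here).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Point estimates for a binary diagnostic test from 2x2 counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among disease-free
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts, not taken from the study:
metrics = diagnostic_metrics(tp=7, fp=32, fn=1, tn=28)
```

Note how a low-specificity reading strategy (counting every "possible" lesion as positive) trades many false positives for fewer missed cancers, which is exactly the trade-off the two analyses in the abstract quantify.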
Computing the Maximum Detour of a Plane Graph in Subquadratic Time
DEFF Research Database (Denmark)
Wulff-Nilsen, Christian
Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm f...... for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time....
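The detour of a pair of points is the graph distance divided by the Euclidean distance. Restricting the maximum to vertex pairs (a lower bound on the definition above, which ranges over all points of G, including edge interiors) already gives a simple roughly quadratic-time baseline; this sketch is not the paper's subquadratic algorithm.

```python
import heapq
import math

def dijkstra(points, adj, src):
    """Shortest graph distances from src; each edge weighs its segment length."""
    dist = {v: math.inf for v in points}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in adj[u]:
            w = math.dist(points[u], points[v])   # edge = straight line segment
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def max_vertex_detour(points, adj):
    """Maximum ratio of graph distance to Euclidean distance over vertex pairs."""
    best = 1.0
    for s in points:
        dist = dijkstra(points, adj, s)
        for t in points:
            if t != s and dist[t] < math.inf:
                best = max(best, dist[t] / math.dist(points[s], points[t]))
    return best
```

Running Dijkstra from every vertex costs O(n^2 log n) on a plane graph (which has O(n) edges), which is why beating the quadratic barrier, as the paper does, is the interesting question.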
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
Computation of transit times using the milestoning method with applications to polymer translocation
Hawk, Alexander T.; Konda, Sai Sriharsha M.; Makarov, Dmitrii E.
2013-08-01
Milestoning is an efficient approximation for computing long-time kinetics and thermodynamics of large molecular systems, which are inaccessible to brute-force molecular dynamics simulations. A common use of milestoning is to compute the mean first passage time (MFPT) for a conformational transition of interest. However, the MFPT is not always the experimentally observed timescale. In particular, the duration of the transition path, or the mean transit time, can be measured in single-molecule experiments, such as studies of polymers translocating through pores and fluorescence resonance energy transfer studies of protein folding. Here we show how to use milestoning to compute transit times and illustrate our approach by applying it to the translocation of a polymer through a narrow pore.
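In the Markovian-milestoning picture, the MFPT mentioned above follows from the milestone-to-milestone transition probabilities and the mean milestone lifetimes. A hedged sketch of that standard estimate is below; the paper's transit-time extension requires additional machinery not shown here.

```python
import numpy as np

def milestoning_mfpt(K, t):
    """Mean first passage times from milestoning statistics.

    K[i][j]: probability that a trajectory leaving milestone i next hits j.
    t[i]: mean lifetime of milestone i (target milestone absorbing, t = 0).
    The MFPT vector tau solves (I - K) tau = t.
    """
    K = np.asarray(K, dtype=float)
    t = np.asarray(t, dtype=float)
    return np.linalg.solve(np.eye(len(t)) - K, t)

# Example: three milestones in a row. From 0 the walk always steps to 1,
# from 1 it steps to 0 or 2 with equal probability, and milestone 2 absorbs.
K = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
t = [1.0, 1.0, 0.0]
tau = milestoning_mfpt(K, t)   # tau[0] = 4, tau[1] = 3, tau[2] = 0
```

The efficiency gain comes from the fact that K and t can be estimated from many short trajectories between neighboring milestones, rather than one brute-force trajectory spanning the full transition.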
Amendola, Alessandra; Bloisi, Maria; Marsella, Patrizia; Sabatini, Rosella; Bibbò, Angela; Angeletti, Claudio; Capobianchi, Maria Rosaria
2011-09-01
Numerous studies investigating the clinical significance of HIV-1 minimal residual viremia (MRV) suggest the potential utility of assays more sensitive than those routinely used to monitor viral suppression. However, currently available methods, based on different technologies, show great variation in detection limit and input plasma volume, and generally suffer from a lack of standardization. In order to establish new tools suitable for routine quantification of minimal residual viremia in patients under virological suppression, some modifications were introduced into the standard procedure of the Abbott RealTime HIV-1 assay, leading to a "modified" and an "ultrasensitive" protocol. The following modifications were introduced: a calibration curve extended towards low HIV-1 RNA concentrations; a 4-fold increase in sample volume by concentrating starting material; a reduced volume of internal control; and adoption of "open-mode" software for quantification. Analytical performances were evaluated using the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC). Both tests were applied to clinical samples from virologically suppressed patients. The "modified" and the "ultrasensitive" configurations of the assay reached a limit of detection of 18.8 cp/mL (95% CI: 11.1-51.0 cp/mL) and 4.8 cp/mL (95% CI: 2.6-9.1 cp/mL), respectively, with high precision and accuracy. In clinical samples from virologically suppressed patients, the "modified" and "ultrasensitive" protocols allowed detection and quantification of HIV RNA in 12.7% and 46.6%, respectively, of samples reported "not detectable", and in 70.0% and 69.5%, respectively, of samples "detected laboratories for measuring MRV. Copyright © 2011 Elsevier B.V. All rights reserved.
Kim, Hee Kyung; Serai, Suraj; Merrow, Arnold C; Wang, Lily; Horn, Paul S; Laor, Tal
2014-02-01
Various skeletal muscle diseases result in fatty infiltration, making it important to develop noninvasive biomarkers to objectively measure muscular fat. We compared T2 relaxation time mapping (T2 maps) and magnetic resonance spectroscopy (MRS) with physical characteristics previously correlated with intramuscular fat, to validate T2 maps and MRS as objective measures of skeletal muscle fat. We evaluated the gluteus maximus muscles of 30 healthy boys (ages 5-19 years) at 3 T with T1-weighted images, T2-weighted images with fat saturation, T2 maps with and without fat saturation, and MR spectroscopy. We calculated body surface area (BSA), body mass index (BMI) and BMI percentile (BMI %). We performed fat and inflammation grading on T1-weighted imaging and fat-saturated T2-weighted imaging, respectively. Mean T2 values from T2 maps with fat saturation were subtracted from T2 maps without fat saturation to determine T2 fat values. We obtained lipid-to-water ratios by MR spectroscopy. Pearson correlation was used to assess relationships between BSA, BMI, BMI %, T2 fat values, and lipid-to-water ratios for each boy. Twenty-four boys completed all exams; 21 showed minimal and 3 showed no fatty infiltration. None showed muscle inflammation. There were significant correlations between BSA, BMI and BMI % and T2 fat values, and between T2 fat values and lipid-to-water ratios. T2 maps and MRS can thus detect fat in skeletal muscles, even in microscopic amounts, and validate each other. Both techniques might enable detection of minimal pathological fatty infiltration in children with skeletal muscle disorders.
Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation
International Nuclear Information System (INIS)
Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van
2012-01-01
Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track), while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Results: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained constant at 82%; idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Conclusion: Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.
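As an illustration of the kind of capacity question such a simulation answers, here is a toy discrete-event sketch, not the paper's model: a CT with a fixed number of daily slots for one patient group serves a first-come-first-served queue of requests, and access time is the number of operating days from request to scan. All rates and slot counts are hypothetical.

```python
import math
import random
from collections import deque

def poisson(rng, lam):
    """Knuth's algorithm for Poisson-distributed daily arrivals (small lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mean_access_time(arrivals_per_day, slots_per_day, days=2000, seed=7):
    rng = random.Random(seed)
    queue, waits = deque(), []
    for day in range(days):
        for _ in range(poisson(rng, arrivals_per_day)):
            queue.append(day)                # request placed today
        for _ in range(slots_per_day):
            if queue:
                waits.append(day - queue.popleft())   # scan performed today
    return sum(waits) / len(waits)
```

Even this toy model reproduces the qualitative effect exploited in the study: near-full utilization makes access times explode, so reallocating a small amount of capacity to a congested patient group can shorten the diagnostic track disproportionately.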
Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation
Energy Technology Data Exchange (ETDEWEB)
Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)
2012-11-15
Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track), while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Results: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained constant at 82%; idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Conclusion: Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.
Energy Technology Data Exchange (ETDEWEB)
Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu [Department of Chemistry, University of Washington, Seattle, Washington 98195 (United States)
2016-06-21
One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have rotatory strengths of different sign, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital-based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.
Real-time data acquisition and feedback control using Linux Intel computers
International Nuclear Information System (INIS)
Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.
2006-01-01
This paper describes the experiences of the DIII-D programming staff in adapting Linux-based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use at all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack-mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom-developed software, including recent work to utilize computers equipped with dual Intel Xeon processors within the PCS.
Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer
A, Wiedjaja; M, Handi; L, Jonathan; Christian, Benyamin; Kristofel, Luis
2014-01-01
An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automatic braking systems in cars, which need a high level of reliability, require an operating system that operates in real time. This study aims to assess the implementation of a Linux-based real-time operating system on an ARM-based single board computer (SBC), namely the Pandaboard ES, with ...
Y2K issues for real time computer systems for fast breeder test reactor
International Nuclear Information System (INIS)
Swaminathan, P.
1999-01-01
The presentation shows the classification of real-time systems related to the operation, control and monitoring of the fast breeder test reactor. The software life cycle includes software requirement specification, software design description, coding, commissioning, operation and management. A software scheme in the supervisory computer of the fast breeder test reactor is described, drawing on twenty years of experience in the design, development, installation, commissioning, operation and maintenance of computer-based supervisory control systems for nuclear installations, with particular emphasis on solving the Y2K problem.
Near real-time digital holographic microscope based on GPU parallel computing
Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan
2018-01-01
A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) is combined with digital holographic microscopy. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields at micrometer and nanometer scales. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is below 120 ms, versus 4.9 s using the traditional CPU-based reconstruction method: a roughly 40-fold speed-up. In other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved through further optimization of software and hardware.
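Each reconstruction plane is typically obtained with an FFT-based propagation such as the angular spectrum method; the GPU version would run many such planes concurrently (e.g. with cuFFT). A single-plane NumPy sketch, with illustrative names and parameters rather than the paper's implementation:

```python
import numpy as np

def angular_spectrum(hologram, wavelength, dx, z):
    """Propagate a sampled field by distance z via the angular spectrum method."""
    n, m = hologram.shape
    fx = np.fft.fftfreq(m, d=dx)                 # spatial frequencies (1/m)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function of free space; evanescent components are cut off.
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(kz * z)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

Because the two FFTs dominate the cost and each focal plane is independent, the per-plane reconstructions parallelize naturally across GPU streams, which is the source of the roughly 40-fold speed-up reported above.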
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
Energy Technology Data Exchange (ETDEWEB)
Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T& D, St. Bruno, QC (Canada)
2017-06-01
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
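The quoted 10 to 120 hour range follows directly from the length of the time series: a yearlong QSTS run at 1-second resolution is one power flow per second of simulated time. A back-of-the-envelope check, where the per-solve time is an assumed, illustrative value:

```python
# One power flow per 1-second time step, for a full simulated year.
steps = 365 * 24 * 3600                  # 31,536,000 time steps
per_flow_s = 0.012                       # assumed seconds per power-flow solve
total_hours = steps * per_flow_s / 3600  # about 105 hours at 12 ms per solve
```

At 12 ms per unbalanced-feeder solve this lands near the upper end of the quoted range, which is why reducing either the number of power flows or the per-solve cost is the focus of the challenges listed above.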
Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi
2014-08-01
This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale, VAS). Oxygen consumption (VO2), the ratio of inspiration time to total respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rate (VCO2/VE), and oxygen extraction fraction (VO2/VE) were significantly higher for the DP than for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.
Directory of Open Access Journals (Sweden)
Shunji Sako
2014-08-01
Full Text Available Objectives: This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Material and Methods: Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale – VAS). Results: Oxygen consumption (VO2), the ratio of inspiration time to respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO2/VE), and oxygen extraction fractions (VO2/VE) were significantly higher for the DP than for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than for the PP. Conclusions: Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.
Swift, Arthur; von Grote, Erika; Jonas, Brandie; Nogueira, Alessandra
2017-01-01
The appeal of hyaluronic acid fillers for facial soft tissue augmentation is attributable to both an immediate aesthetic effect and relatively short recovery time. Although recovery time is an important posttreatment variable, as it impacts comfort with appearance and perceived treatment benefit, it is not routinely evaluated. Natural-looking aesthetic outcomes are also a primary concern for many patients. A single-center, noncomparative study evaluated the time (in hours) until subjects return to social engagement (RtSE) following correction of moderate and severe nasolabial folds (NLFs) with RR (Restylane® Refyne) and RD (Restylane® Defyne), respectively. Twenty subjects (aged 35-57 years) who received bilateral NLF correction documented their RtSE and injection-related events posttreatment. Treatment efficacy was evaluated by improvements in the Wrinkle Severity Rating Scale (WSRS) and a subject satisfaction questionnaire at days 14 and 30, and by the Global Aesthetic Improvement Scale (GAIS) at day 30. Safety was evaluated by injection-related events and treatment-emergent adverse events. Fifty percent of subjects reported RtSE within 2 hours posttreatment. WSRS for the RR group improved significantly from baseline at day 14 (-1.45±0.42) and day 30 (-1.68±0.46). Subjects experienced 3 related treatment-emergent adverse events: 1 RR subject experienced severe bruising, and 1 RD subject experienced severe erythema and mild telangiectasia. Subject satisfaction was high regarding aesthetic outcomes and natural-looking results. Optimal correction of moderate NLFs with RR and severe NLFs with RD involved minimal time to RtSE for most subjects. Treatments significantly improved WSRS and GAIS, were generally well-tolerated, and provided natural-looking aesthetic outcomes.
Explicit time marching methods for the time-dependent Euler computations
International Nuclear Information System (INIS)
Tai, C.H.; Chiang, D.C.; Su, Y.P.
1997-01-01
Four explicit time marching methods, including one proposed by the authors, are examined. The TVD conditions of this method are analyzed with the linear conservation law as the model equation. The performance of these methods when applied to the Euler equations is numerically tested. Seven examples are tested; the main concern is the performance of the methods when discontinuities of different strengths are encountered. As the discontinuity becomes stronger, spurious oscillations show up for the three existing methods, while the method proposed by the authors consistently gives satisfactory results. The effect of the limiter is also investigated. To compare the methods on the same basis, the same spatial discretization is used. Roe's solver is used to evaluate the fluxes at the cell interface; spatially second-order accuracy is achieved by the MUSCL reconstruction. 19 refs., 8 figs
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
M. Kasemann
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
International Nuclear Information System (INIS)
Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei
2011-01-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)
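The master/worker pattern described above can be sketched in a few lines. The sketch below is a minimal illustration that uses Python's multiprocessing pool on a single machine in place of a remote virtual cluster and the message passing interface, and a toy tally function in place of the EGS5 transport kernel; all function names and numbers are illustrative, not taken from the paper's code:

```python
import multiprocessing as mp
import random

def run_histories(args):
    """Stand-in for one worker node's transport run: simulate n_histories
    independent 'histories' and return a partial tally (toy numbers only)."""
    n_histories, seed = args
    rng = random.Random(seed)  # independent random stream per worker, as in parallel MC
    return sum(rng.random() for _ in range(n_histories))  # toy per-history deposit

def distribute(total_histories, n_workers):
    """Master node: split the workload evenly, run workers in parallel, aggregate."""
    per_worker = total_histories // n_workers
    jobs = [(per_worker, seed) for seed in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        partials = pool.map(run_histories, jobs)  # scatter jobs, gather tallies
    return sum(partials)

if __name__ == "__main__":
    print(distribute(total_histories=100_000, n_workers=4))
```

Because MC histories are independent, the only serial steps are the initial scatter and the final aggregation, which is consistent with the reported negligible parallelization overhead and the runtime scaling inversely with the node count.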
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
Energy Technology Data Exchange (ETDEWEB)
Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
Bergström, Ida; Elfgren, Erik
2013-06-11
At the particle physics laboratory CERN in Geneva, Switzerland, the Neutron Time-of-Flight facility has recently started the construction of a second experimental line. The new neutron beam line will unavoidably induce radiation in both the experimental area and in nearby accessible areas. Computer simulations for the minimization of the background were carried out using the FLUKA Monte Carlo simulation package. The background radiation in the new experimental area needs to be kept to a minimum during measurements. This was studied with focus on the contributions from backscattering in the beam dump. The beam dump was originally designed for shielding the outside area using a block of iron covered in concrete. However, the backscattering was never studied in detail. In this thesis, the fluences (i.e. the flux integrated over time) of neutrons and photons were studied in the experimental area while the beam dump design was modified. An optimized design was obtained by stopping the fast neutrons in a high Z mat...
A computationally simple and robust method to detect determinism in a time series
DEFF Research Database (Denmark)
Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.
2006-01-01
We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than the signal, and that remains robust for time-varying signals.
SLMRACE: a noise-free RACE implementation with reduced computational time
Chauvin, Juliet; Provenzi, Edoardo
2017-05-01
We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).
Ultrasonic divergent-beam scanner for time-of-flight tomography with computer evaluation
Energy Technology Data Exchange (ETDEWEB)
Glover, G H
1978-03-02
The rotatable ultrasonic divergent-beam scanner is designed for time-of-flight tomography with computer evaluation. It can measure parameters that are important for the structure of soft tissues, e.g. time as a function of the velocity distribution along a certain path of flight (the method is analogous to transaxial X-ray tomography). Moreover, it permits quantitative measurement of two-dimensional velocity distributions and may therefore be applied to serial examinations for detecting cancer of the breast. Digital memories as well as analog-digital hybrid systems are suitable as computers.
Saieg, Mauro Ajaj; Geddie, William R; Boerner, Scott L; Bailey, Denis; Crump, Michael; da Cunha Santos, Gilda
2013-01-01
BACKGROUND: Numerous genomic abnormalities in B-cell non-Hodgkin lymphomas (NHLs) have been revealed by novel high-throughput technologies, including recurrent mutations in EZH2 (enhancer of zeste homolog 2) and CD79B (B cell antigen receptor complex-associated protein beta chain) genes. This study sought to determine the evolution of the mutational status of EZH2 and CD79B over time in different samples from the same patient in a cohort of B-cell NHLs, through use of a customized multiplex mutation assay. METHODS: DNA that was extracted from cytological material stored on FTA cards as well as from additional specimens, including archived frozen and formalin-fixed histological specimens, archived stained smears, and cytospin preparations, were submitted to a multiplex mutation assay specifically designed for the detection of point mutations involving EZH2 and CD79B, using MassARRAY spectrometry followed by Sanger sequencing. RESULTS: All 121 samples from 80 B-cell NHL cases were successfully analyzed. Mutations in EZH2 (Y646) and CD79B (Y196) were detected in 13.2% and 8% of the samples, respectively, almost exclusively in follicular lymphomas and diffuse large B-cell lymphomas. In one-third of the positive cases, a wild type was detected in a different sample from the same patient during follow-up. CONCLUSIONS: Testing multiple minimal tissue samples using a high-throughput multiplex platform exponentially increases tissue availability for molecular analysis and might facilitate future studies of tumor progression and the related molecular events. Mutational status of EZH2 and CD79B may vary in B-cell NHL samples over time and support the concept that individualized therapy should be based on molecular findings at the time of treatment, rather than on results obtained from previous specimens. Cancer (Cancer Cytopathol) 2013;121:377–386. © 2013 American Cancer Society. PMID:23361872
Computer-games for gravitational wave science outreach: Black Hole Pong and Space Time Quest
International Nuclear Information System (INIS)
Carbone, L; Bond, C; Brown, D; Brückner, F; Grover, K; Lodhia, D; Mingarelli, C M F; Fulda, P; Smith, R J E; Unwin, R; Vecchio, A; Wang, M; Whalley, L; Freise, A
2012-01-01
We have established a program aimed at developing computer applications and web applets to be used for educational purposes as well as gravitational wave outreach activities. These applications and applets teach gravitational wave physics and technology. The computer programs are generated in collaboration with undergraduates and summer students as part of our teaching activities, and are freely distributed on a dedicated website. As part of this program, we have developed two computer-games related to gravitational wave science: 'Black Hole Pong' and 'Space Time Quest'. In this article we present an overview of our computer related outreach activities and discuss the games and their educational aspects, and report on some positive feedback received.
Computational Procedures for a Class of GI/D/k Systems in Discrete Time
Directory of Open Access Journals (Sweden)
Md. Mostafizur Rahman
2009-01-01
Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
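The matrix R mentioned above is the minimal nonnegative solution of the QBD equation R = A0 + R·A1 + R²·A2. A generic fixed-point iteration for it can be sketched as follows; the 2×2 blocks are toy numbers for illustration, and the paper's contribution is the special structure that makes each iteration cheap, which this sketch does not exploit:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(*Ms):
    """Entrywise sum of square matrices."""
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def solve_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Iterate R_{k+1} = A0 + R_k A1 + R_k^2 A2 from R_0 = 0; for a stable QBD
    this converges monotonically to the minimal nonnegative solution R."""
    n = len(A0)
    R = [[0.0] * n for _ in range(n)]
    for _ in range(max_iter):
        R_next = matadd(A0, matmul(R, A1), matmul(matmul(R, R), A2))
        if max(abs(R_next[i][j] - R[i][j]) for i in range(n) for j in range(n)) < tol:
            return R_next
        R = R_next
    return R

# Toy stable QBD: A0 (level up), A1 (same level), A2 (level down);
# A0 + A1 + A2 is row-stochastic and the downward drift exceeds the upward drift.
A0 = [[0.1, 0.1], [0.1, 0.1]]
A1 = [[0.4, 0.1], [0.2, 0.3]]
A2 = [[0.2, 0.1], [0.1, 0.2]]
R = solve_R(A0, A1, A2)
```

Once R is available, the queue-length distribution follows matrix-geometrically, with successive level probabilities related by a multiplication with R.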
Brigdan, Matthew; Hill, Michael D; Jagdev, Abhijeet; Kamal, Noreen
2018-01-01
The ESCAPE (Endovascular Treatment for Small Core and Anterior Circulation Proximal Occlusion With Emphasis on Minimizing CT to Recanalization Times) randomized clinical trial collected a large, diverse data set. However, it is difficult to fully understand the effects of the study on certain patient groups and disease progression. We developed and evaluated an interactive visualization of the ESCAPE trial data. We iteratively designed an interactive visualization using Python's Bokeh software library. The design was evaluated through a user study, which quantitatively evaluated its efficiency and accuracy against the traditional modified Rankin Scale graphic. Qualitative feedback was also evaluated. The novel interactive visualization of the ESCAPE data is publicly available at http://escapevisualization.herokuapp.com/. There was no difference in efficiency and accuracy when comparing the use of the novel with the traditional visualization. However, users preferred the novel visualization because it allowed for greater exploration. Some insights obtained through exploration of the ESCAPE data are presented. Novel interactive visualizations can be applied to acute stroke trial data to allow for greater exploration of the results. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01778335. © 2017 American Heart Association, Inc.
Television viewing, computer use and total screen time in Canadian youth.
Mark, Amy E; Boyce, William F; Janssen, Ian
2006-11-01
Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. The objective of this study was to determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than in boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.
Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef
2007-01-01
Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of the parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Nevertheless, the PJ method reduces the computational time to some extent for large grid sizes.
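The contrast between the two sequential iterations can be illustrated on a small model problem. The plain-Python sketch below (not the paper's MATLAB code, and on a 2D rather than 3D grid for brevity) solves the Poisson equation with 5-point finite differences and shows why Gauss-Seidel, which reuses freshly updated neighbors within a sweep, typically needs fewer sweeps than Jacobi:

```python
def solve_poisson(n, f, method="jacobi", tol=1e-6, max_sweeps=20_000):
    """Solve -laplacian(u) = f on an n x n interior grid of the unit square
    (u = 0 on the boundary) by Jacobi or Gauss-Seidel sweeps.
    Returns (solution grid, number of sweeps used)."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]  # grid including boundary
    for sweep in range(1, max_sweeps + 1):
        # Jacobi reads only the previous sweep's values (a frozen copy);
        # Gauss-Seidel reads in place, so updated neighbors are used immediately.
        src = [row[:] for row in u] if method == "jacobi" else u
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new = 0.25 * (src[i-1][j] + src[i+1][j] + src[i][j-1] + src[i][j+1]
                              + h * h * f(i * h, j * h))
                diff = max(diff, abs(new - u[i][j]))
                u[i][j] = new
        if diff < tol:
            return u, sweep
    return u, max_sweeps

_, jacobi_sweeps = solve_poisson(16, lambda x, y: 1.0, method="jacobi")
_, gs_sweeps = solve_poisson(16, lambda x, y: 1.0, method="gauss-seidel")
```

On this problem Gauss-Seidel converges in roughly half as many sweeps as Jacobi, but each Jacobi sweep updates all cells from frozen data and is therefore trivially parallel; that is exactly the sequential-speed versus parallelism trade-off the study quantifies.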
Math modeling and computer mechanization for real time simulation of rotary-wing aircraft
Howe, R. M.
1979-01-01
Mathematical modeling and computer mechanization for real time simulation of rotary wing aircraft is discussed. Error analysis in the digital simulation of dynamic systems, such as rotary wing aircraft is described. The method for digital simulation of nonlinearities with discontinuities, such as exist in typical flight control systems and rotor blade hinges, is discussed.
Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks
Directory of Open Access Journals (Sweden)
Hui-Ping Chen
2016-11-01
Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
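As a point of reference for the searches discussed above, a plain A* shortest-path search over a road network can be sketched as follows. The branch-and-bound pruning of inaccessible links that distinguishes NTP-A* is omitted, and the graph layout, coordinates, and straight-line heuristic are illustrative assumptions rather than the paper's implementation:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* search: graph maps node -> [(neighbor, edge_length)], coords maps
    node -> (x, y). The straight-line distance heuristic is admissible when
    every edge length is at least the Euclidean distance between its endpoints."""
    def h(node):
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    dist = {start: 0.0}
    frontier = [(h(start), start)]   # priority queue ordered by f = g + h
    done = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return dist[node]
        if node in done:
            continue
        done.add(node)
        for nbr, w in graph.get(node, []):
            nd = dist[node] + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(frontier, (nd + h(nbr), nbr))
    return float("inf")
```

For example, on a unit square with corner nodes A, B, C, D and unit-length edges A-B, A-C, B-D, C-D, the A-to-D distance found is 2.0 via either intermediate corner.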
Helder, Onno K.; Mulder, Paul G. H.; van Goudoever, Johannes B.
2008-01-01
To compare effects on premature infants' weight gain of a computer-generated and a nurse-determined incubator humidity strategy. An optimal humidity protocol is thought to reduce time to regain birthweight. Prospective randomized controlled design. Level IIIC neonatal intensive care unit in the
Computing Camps for Girls : A First-Time Experience at the University of Limerick
McInerney, Clare; Lamprecht, A.L.; Margaria, Tiziana
2018-01-01
Increasing the number of females in ICT-related university courses has been a major concern for several years. In 2015, we offered a girls-only computing summer camp for the first time, as a new component in our education and outreach activities to foster students’ interest in our discipline. In
A Real-Time Plagiarism Detection Tool for Computer-Based Assessments
Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P.
2018-01-01
Aim/Purpose: The aim of this article is to develop a tool to detect plagiarism in real time amongst students being evaluated for learning in a computer-based assessment setting. Background: Cheating or copying all or part of source code of a program is a serious concern to academic institutions. Many academic institutions apply a combination of…
International Nuclear Information System (INIS)
Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.
2002-01-01
Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control and control logic for fuel handling systems. The most frequent cause of faults in safety critical real time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are popular languages used for modeling the requirement specification. A typical safety critical real time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized
Green computing: power optimisation of vfi-based real-time multiprocessor dataflow applications
Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis
2015-01-01
Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor
International Nuclear Information System (INIS)
Sapizah Rahim; Khairul Anuar Mohd Salleh; Noorhazleena Azaman; Shaharudin Sayuti; Siti Madiha Muhammad Amir; Arshad Yassin; Abdul Razak Hamzah
2010-01-01
Signal-to-noise ratio (SNR) and sensitivity study of a Computed Radiography (CR) system with reduced exposure time is presented. The purposes of this research are to determine the behavior of the SNR for three different thicknesses (step wedge; 5, 10 and 15 mm) and the ability of the CR system to recognize a hole-type penetrameter when the exposure time is decreased by up to 80 % according to the exposure chart (D7; ISOVOLT Titan E). It is shown that the SNR decreases as the exposure time is reduced, but a high-quality image is still achieved at up to 80 % reduction of exposure time. (author)
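As a hedged sketch of the SNR measure underlying such a study (assuming the common mean-over-standard-deviation definition on a region of interest; the Poisson counts below are illustrative, not the paper's data):

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of a region of interest: mean / std."""
    return float(roi.mean() / roi.std())

# Detected counts scale with exposure; Poisson noise makes SNR ~ sqrt(counts),
# so an 80 % reduction in exposure time lowers the SNR.
rng = np.random.default_rng(0)
full = rng.poisson(lam=10_000, size=100_000)   # full exposure
short = rng.poisson(lam=2_000, size=100_000)   # 80 % reduced exposure
print(snr(full) > snr(short))  # True
```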
Eckhardt, D. E., Jr.
1979-01-01
A model of a central processor (CPU) which services background applications in the presence of time critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by deterministic, time critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state of the art queueing models for studying the background processing capability of time critical computer systems is discussed and the results of a model validation study which support this application of queueing models are presented.
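The paper's full analysis derives the Laplace transform of the background service-time distribution; a much cruder capacity check, which captures only the first-order effect of the periodic time-critical interrupt, can be sketched as follows (parameter values are illustrative assumptions):

```python
def background_utilization(lam: float, mu: float,
                           burst: float, period: float) -> float:
    """Effective M/M/1 utilization of background work on a CPU that is
    preempted for `burst` seconds every `period` seconds by a
    deterministic time-critical task. Stability requires a result < 1."""
    available = 1.0 - burst / period      # CPU fraction left for background
    return lam / (mu * available)

# 2 background jobs/s, service rate 5 jobs/s,
# 20 ms time-critical burst every 100 ms:
print(background_utilization(2.0, 5.0, 0.02, 0.1))  # ≈ 0.5
```

The queueing model in the paper goes further, giving the distribution (not just the mean) of the stretched background service times.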
A computer-based time study system for timber harvesting operations
Jingxin Wang; Joe McNeel; John Baumgras
2003-01-01
A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on the MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...
van der Velden, V. H. J.; Cazzaniga, G.; Schrauder, A.; Hancock, J.; Bader, P.; Panzer-Grumayer, E. R.; Flohr, T.; Sutton, R.; Cave, H.; Madsen, H. O.; Cayuela, J. M.; Trka, J.; Eckert, C.; Foroni, L.; Zur Stadt, U.; Beldjord, K.; Raff, T.; van der Schoot, C. E.; van Dongen, J. J. M.
2007-01-01
Most modern treatment protocols for acute lymphoblastic leukaemia (ALL) include the analysis of minimal residual disease (MRD). To ensure comparable MRD results between different MRD-polymerase chain reaction (PCR) laboratories, standardization and quality control are essential. The European Study
Directory of Open Access Journals (Sweden)
Catherine Mooney
Full Text Available MicroRNAs are a class of small non-coding RNA that regulate gene expression at a post-transcriptional level. MicroRNAs have been identified in various body fluids under normal conditions and their stability as well as their dysregulation in disease opens up a new field for biomarker study. However, diurnal and day-to-day variation in plasma microRNA levels, and differential regulation between males and females, may affect biomarker stability. A QuantStudio 12K Flex Real-Time PCR System was used to profile plasma microRNA levels using OpenArray in male and female healthy volunteers, in the morning and afternoon, and at four time points over a one month period. Using this system we were able to run four OpenArray plates in a single run, the equivalent of 32 traditional 384-well qPCR plates or 12,000 data points. Up to 754 microRNAs can be identified in a single plasma sample in under two hours. 108 individual microRNAs were identified in at least 80% of all our samples which compares favourably with other reports of microRNA profiles in serum or plasma in healthy adults. Many of these microRNAs, including miR-16-5p, miR-17-5p, miR-19a-3p, miR-24-3p, miR-30c-5p, miR-191-5p, miR-223-3p and miR-451a are highly expressed and consistent with previous studies using other platforms. Overall, microRNA levels were very consistent between individuals, males and females, and time points and we did not detect significant differences in levels of microRNAs. These results suggest the suitability of this platform for microRNA profiling and biomarker discovery and suggest minimal confounding influence of sex or sample timing. However, the platform has not been subjected to rigorous validation which must be demonstrated in future biomarker studies where large differences may exist between disease and control samples.
International Nuclear Information System (INIS)
Roos, Justus E.; Paik, David; Olsen, David; Liu, Emily G.; Leung, Ann N.; Mindelzun, Robert; Choudhury, Kingshuk R.; Napel, Sandy; Rubin, Geoffrey D.; Chow, Lawrence C.; Naidich, David P.
2010-01-01
The diagnostic performance of radiologists using incremental CAD assistance for lung nodule detection on CT and their temporal variation in performance during CAD evaluation was assessed. CAD was applied to 20 chest multidetector-row computed tomography (MDCT) scans containing 190 non-calcified ≥3-mm nodules. After free search, three radiologists independently evaluated a maximum of up to 50 CAD detections/patient. Multiple free-response ROC curves were generated for free search and successive CAD evaluation, by incrementally adding CAD detections one at a time to the radiologists' performance. The sensitivity for free search was 53% (range, 44%-59%) at 1.15 false positives (FP)/patient and increased with CAD to 69% (range, 59-82%) at 1.45 FP/patient. CAD evaluation initially resulted in a sharp rise in sensitivity of 14% with a minimal increase in FP over a time period of 100 s, followed by flattening of the sensitivity increase to only 2%. This transition resulted from a greater prevalence of true positive (TP) versus FP detections at early CAD evaluation and not by a temporal change in readers' performance. The time spent for TP (9.5 s ± 4.5 s) and false negative (FN) (8.4 s ± 6.7 s) detections was similar; FP decisions took two- to three-times longer (14.4 s ± 8.7 s) than true negative (TN) decisions (4.7 s ± 1.3 s). When CAD output is ordered by CAD score, an initial period of rapid performance improvement slows significantly over time because of non-uniformity in the distribution of TP CAD output and not to a changing reader performance over time. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Si, Ming-Jue, E-mail: smjsh@hotmail.com [Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Tao, Xiao-Feng, E-mail: taoxiaofeng1963@hotmail.com [Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Du, Guang-Ye, E-mail: 715376158@qq.com [Department of Pathology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Cai, Ling-Ling, E-mail: caill_00@163.com [Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Han, Hong-Xiu, E-mail: hanhongxiu@hotmail.com [Department of Pathology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Liang, Xi-Zi, E-mail: liangxizish@hotmail.com [Department of Pathology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China); Zhao, Jiang-Min, E-mail: zhaojiangmin1962@hotmail.com [Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, No. 280, Mohe Road, Shanghai 201999 (China)
2016-10-15
Objective: To retrospectively compare focal interstitial fibrosis (FIF), atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), and minimally invasive adenocarcinoma (MIA) presenting as pure ground-glass opacity (GGO) using thin-section computed tomography (CT). Materials and methods: Sixty pathologically confirmed cases were reviewed, including 7 cases of FIF, 17 of AAH, 23 of AIS, and 13 of MIA. All nodules retained a pure ground-glass appearance before surgical resection, and the last thin-section CT imaging data obtained before operation were collected. Differences in patient demographics and CT features were compared among these four types of lesions. Results: FIF occurred more frequently in males and smokers, while the others occurred more frequently in female nonsmokers. Nodule size was significantly larger in MIA (P < 0.001, cut-off value = 7.5 mm). Nodule shape (P = 0.045), margin characteristics (P < 0.001), the presence of pleural indentation (P = 0.032), and vascular ingress (P < 0.001) were significant factors that differentiated the 4 groups. A concave margin was demonstrated only in FIF, in a high proportion of cases at 85.7% (P = 0.002). There were no significant differences (all P > 0.05) in age, malignant history, attenuation value, location, or presence of bubble-like lucency. Conclusion: A nodule size >7.5 mm increases the possibility of MIA. A concave margin could be useful for differentiating FIF from the other malignant or pre-malignant GGO nodules. The presence of spiculation or pleural indentation may preclude the diagnosis of AAH.
Minimal families of curves on surfaces
Lubbes, Niels
2014-11-01
A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface.The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence.As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.
Directory of Open Access Journals (Sweden)
Yeqing Zhang
2018-02-01
Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.
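The acquisition threshold defined in this abstract (ratio of the highest to the second-highest correlation result over the carrier-frequency/code-phase search space) can be sketched as below; the toy search space and the 2.5 decision threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def acquisition_ratio(corr: np.ndarray) -> float:
    """Ratio of the highest to the second-highest correlation value
    in the carrier-frequency x code-phase search space."""
    flat = np.sort(corr.ravel())
    return float(flat[-1] / flat[-2])

corr = np.ones((5, 8))   # flat noise floor: 5 Doppler bins x 8 code phases
corr[2, 3] = 12.0        # a clear correlation peak at the true cell
ratio = acquisition_ratio(corr)
print(ratio)             # 12.0
acquired = ratio > 2.5   # illustrative decision threshold
```

A flat search space gives a ratio near 1, so the test naturally adapts to signal strength, which is what makes the variable correlation time in the paper workable.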
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-01-01
For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
Computationally determining the salience of decision points for real-time wayfinding support
Directory of Open Access Journals (Sweden)
Makoto Takemiya
2012-06-01
Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.
In-Network Computation is a Dumb Idea Whose Time Has Come
Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos
2017-01-01
Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
In-Network Computation is a Dumb Idea Whose Time Has Come
Sapio, Amedeo
2017-11-27
Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
Real-time computer treatment of THz passive device images with the high image quality
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not restricted to one passive THz device: it can be applied to other such devices and to active THz imaging systems as well. We applied the code to process images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The current version of the code processes more than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel image-processing algorithms. We develop original spatial filters which allow one to see objects smaller than 2 cm in imagery produced by passive THz devices that captured objects hidden under opaque clothes. For images with high noise we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
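The paper's original spatial filters are not specified in the abstract; as a stand-in, a minimal mean-filter smoothing pass of the kind such pipelines typically start from might look like this (the window size `k` is an assumption):

```python
import numpy as np

def denoise(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean-filter smoothing: replace each pixel by the average of its
    k x k neighbourhood, using edge padding at the image border."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A single hot pixel (impulse noise) is spread out and attenuated.
img = np.zeros((5, 5))
img[2, 2] = 9.0
print(denoise(img)[2, 2])  # 1.0
```

A real-time implementation would vectorize the loop (or run on a GPU), and noise suppression of the kind described would use more selective filters than a plain mean.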
Cyclone Simulation via Action Minimization
Plotkin, D. A.; Weare, J.; Abbot, D. S.
2016-12-01
A postulated impact of climate change is an increase in intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a nearly identical temperature to the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
Towards the development of run times leveraging virtualization for high performance computing
International Nuclear Information System (INIS)
Diakhate, F.
2010-12-01
In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which makes it possible to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)
International Nuclear Information System (INIS)
Langner, Ulrich W.; Keall, Paul J.
2010-01-01
Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.
An Efficient Integer Coding and Computing Method for Multiscale Time Segment
Directory of Open Access Journals (Sweden)
TONG Xiaochong
2016-12-01
Full Text Available This article focuses on the problems and status of current time-segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and the size ordering that integers naturally form to reflect the relationships among multi-scale time segments: order, inclusion/containment, intersection, etc., and thereby achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for calculating the time relationships of MTSIC codes, to support efficient calculation and query based on time segments, and preliminarily discusses application methods and prospects of MTSIC. Tests indicated that MTSIC is convenient and reliable to implement, that transformation between it and the traditional method is convenient, and that it achieves very high efficiency in query and calculation.
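The abstract does not spell out the MTSIC code construction; a common way to realize "tree structure plus integer ordering" for dyadic time segments is heap-style numbering, sketched here as an assumption about, not a reproduction of, the authors' scheme:

```python
def encode(level: int, index: int) -> int:
    """Heap-style integer code for the index-th dyadic segment at a level:
    level 0 is the whole time axis, level k splits it into 2**k segments."""
    return (1 << level) + index

def contains(a: int, b: int) -> bool:
    """True if segment a contains (or equals) segment b: b's ancestor at
    a's level, reached by right-shifting, must be a itself."""
    shift = b.bit_length() - a.bit_length()
    return shift >= 0 and (b >> shift) == a

whole = encode(0, 0)       # code 1: the full time axis
first_half = encode(1, 0)  # code 2: [0, 1/2)
quarter = encode(2, 1)     # code 5: [1/4, 1/2)
print(contains(first_half, quarter))  # True
print(contains(quarter, first_half))  # False
```

The point of such a scheme is that inclusion and ordering tests reduce to bit shifts and integer comparisons, which is what makes time-segment query and calculation fast.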
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
Ousmen, Ahmad; Conroy, Thierry; Guillemin, Francis; Velten, Michel; Jolly, Damien; Mercier, Mariette; Causeret, Sylvain; Cuisenier, Jean; Graesslin, Olivier; Hamidou, Zeinab; Bonnetain, Franck; Anota, Amélie
2016-12-03
An important challenge of the longitudinal analysis of health-related quality of life (HRQOL) is the potential occurrence of a Response Shift (RS) effect. While the impact of the RS effect on the longitudinal analysis of HRQOL has already been studied, few studies have been conducted on its impact on the determination of the Minimal Important Difference (MID). This study aims to investigate the impact of the RS effect on the determination of the MID over time for each scale of both the EORTC QLQ-C30 and QLQ-BR23 questionnaires in breast cancer patients. Patients with breast cancer completed the EORTC QLQ-C30 and the EORTC QLQ-BR23 questionnaires at baseline (time of diagnosis; T0), three months (T1) and six months after surgery (T2). Four hospitals and care centers participated in this study: the cancer centers of Dijon and Nancy and the university hospitals of Reims and Strasbourg. At T1 and T2, patients were asked to evaluate their HRQOL change during the last 3 months using the Jaeschke transition question. They were also asked to assess retrospectively their HRQOL level of three months ago. The occurrence of the RS effect was explored using the then-test method and its impact on the determination of the MID by using the Anchor-based method. Between February 2006 and February 2008, 381 patients were included, with a mean age of 58 years (SD = 11). For patients who reported a deterioration of their HRQOL level at each follow-up, an increase of the RS effect was detected between T1 and T2 in 13/15 dimensions of the QLQ-C30 questionnaire and 4/7 dimensions of the QLQ-BR23 questionnaire. In contrast, a decrease of the RS effect was observed in 8/15 dimensions of the QLQ-C30 questionnaire and in 5/7 dimensions of the QLQ-BR23 questionnaire in case of improvement. At T2, the MID became ≥ 5 points when taking into account the RS effect in 10/15 dimensions of the QLQ-C30 questionnaire and in 5/7 dimensions of the QLQ-BR23 questionnaire. This study highlights that the RS effect increases over time in
An assessment of the real-time application capabilities of the SIFT computer system
Butler, R. W.
1982-01-01
The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.
Theory and computation of disturbance invariant sets for discrete-time linear systems
Directory of Open Access Journals (Sweden)
Kolmanovsky Ilya
1998-01-01
Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition, there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, and applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
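For intuition, the iterative scheme described above can be sketched in one dimension, where the Pontryagin set difference of symmetric intervals reduces to shrinking a radius. This is a minimal illustration with made-up parameters, not the paper's algorithm for general polytopes:

```python
# Maximal disturbance-invariant subset of Gamma = [-b0, b0] for the scalar
# system x(t+1) = a*x(t) + w(t) with |w| <= w_max.  Iterates
# O_{k+1} = {x in O_k : a*x + w in O_k for all |w| <= w_max},
# which for symmetric intervals is a recursion on the interval radius.
def maximal_d_invariant_radius(a, w_max, b0, tol=1e-9, max_iter=10_000):
    b = b0
    for _ in range(max_iter):
        # One-step condition |a|*b_next + w_max <= b: the Pontryagin
        # difference of [-b, b] and the disturbance set, pulled back
        # through the dynamics.
        b_next = min(b, (b - w_max) / abs(a))
        if b_next < 0:
            return None          # no d-invariant set fits inside Gamma
        if b - b_next < tol:
            return b_next        # finitely determined (up to tol)
        b = b_next
    return b

r1 = maximal_d_invariant_radius(0.5, 0.3, 1.0)   # Gamma itself is invariant
r2 = maximal_d_invariant_radius(0.95, 0.1, 1.0)  # empty: w/(1-a) = 2 > 1
```

The second call illustrates finite determination of emptiness: the minimal robust invariant set has radius w_max/(1-a) = 2, which cannot fit inside Γ = [-1, 1].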
Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui
2016-04-01
Simulated tempering (ST) is a widely used enhanced sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling in temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems show that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
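The cross-temperature acceptance rule the abstract refers to can be sketched with the generic ST Metropolis criterion. This is not the authors' EPSW code; the weight and energy values below are purely illustrative:

```python
import math

# Simulated-tempering move: keep configuration x (and hence its energy)
# fixed and propose a jump from temperature i to temperature j.
# beta = 1/(kB*T); w_i, w_j are the per-temperature weights that the
# free-energy choice tunes for near-uniform visits to each temperature.
def st_accept_prob(energy, beta_i, beta_j, w_i, w_j):
    return min(1.0, math.exp(-(beta_j - beta_i) * energy + (w_j - w_i)))

# Toy check: with weights chosen so the exponent vanishes for this
# particular energy, the temperature jump is always accepted.
p = st_accept_prob(energy=2.0, beta_i=1.0, beta_j=0.5, w_i=0.0, w_j=-1.0)
```

EPSW replaces the free-energy choice of w with weights that minimize the round-trip time between metastable states, but the acceptance rule itself keeps this form.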
Energy Technology Data Exchange (ETDEWEB)
Chandrasekara, S; Pella, S [21st Century Oncology, Boca Raton, FL (United States); Hyvarinen, M; Pinder, J [Florida Atlantic University, Boca Raton, FL (United States)
2016-06-15
Purpose: To assess the variation in dose received by the organs at risk (OARs) due to inter-fractional motion of the SAVI applicator, and thereby the importance of providing proper immobilization. Methods: Fifteen patients treated with SAVI applicators were considered for this study. The treatment planning teams did not see significant changes in the CT scout images, so the initial treatment plan was used for the entire treatment. The scans taken before each treatment were imported into the treatment planning system and fused together with respect to the applicator using landmark registration. Dosimetric evaluations were performed, and the doses received by the skin, ribs and planning target volume (PTV) were measured relative to the initial treatment plan. Results: Contours of the OARs were not similar to those in the initial image. Reductions in the volumes of the PTV and cavity, small deviations in the displacements from the applicator to the OARs, and differences in the doses received by the OARs between treatments were noticed. The maximum, minimum and average doses varied by 10% to 20%, 5% to 8% and 15% to 20% in the ribs and skin. The 0.1 cc doses to the OARs showed an average change of 10% of the prescribed dose, and the PTV received a different dose than estimated. Conclusion: The variation in volumes and isodoses related to the OARs, and the PTV receiving a lower dose than prescribed, indicate that the estimated doses differ from the delivered doses. This study reveals the urgent need to improve immobilization methods. Taking a CT scan before each treatment and replanning helps to minimize the risk of delivering undesired high doses to the OARs. Patient positioning, motion, respiration, observer differences and the time lapse between planning and treatment can introduce further complications. VacLock, positioning cushions, image-guided brachytherapy and adjustable registration should be used for further improvements.
Energy Technology Data Exchange (ETDEWEB)
Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao [Department of Chemistry, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); Huang, Xuhui, E-mail: xuhuihuang@ust.hk [Department of Chemistry, Division of Biomedical Engineering, Center of Systems Biology and Human Health, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); The HKUST Shenzhen Research Institute, Shenzhen (China)
2016-04-21
The minimally tuned minimal supersymmetric standard model
International Nuclear Information System (INIS)
Essig, Rouven; Fortin, Jean-Francois
2008-01-01
The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry-breaking parameters are assumed, and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive, with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV.
Hardware architecture design of image restoration based on time-frequency domain computation
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the time-consuming modules, including the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results prove that the TFDC architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability and high efficiency.
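As a rough software model of the computation pattern such an architecture accelerates, the sketch below blurs and restores an image entirely in the frequency domain with NumPy. The regularized inverse filter and the constant `eps` are assumptions for illustration, not details from the paper:

```python
import numpy as np

# Frequency-domain restoration: forward 2-D FFT, pointwise complex
# arithmetic, inverse 2-D FFT -- exactly the FFT/IFFT and complex-number
# modules the hardware architecture targets.
def restore(blurred, kernel, eps=1e-3):
    H = np.fft.fft2(kernel, s=blurred.shape)          # transfer function
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0                        # simple box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(kernel, s=img.shape)))
restored = restore(blurred, kernel)
```

The same dataflow (FFT, elementwise complex multiply/divide, IFFT) repeats across iterations in iterative restoration schemes, which is what the iteration control module exploits.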
Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application
Directory of Open Access Journals (Sweden)
Molina Gary N Garcia
2003-01-01
Full Text Available Brain-computer interfacing is a growing field of interest in human-computer interaction, with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments on two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as training with and without feedback, were also developed. The results obtained show that it is possible to achieve good classification of simple mental tasks, with a view to command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.
Stieber, Michael E.
1989-01-01
A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation is essentially an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL microcomputer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.
International Nuclear Information System (INIS)
Raza, K.S.M.
2004-01-01
This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then the design objectives can be achieved via a controller that is quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). A further aim is to implement this controller via computer in a real-time environment. Therefore, a nonlinear mathematical model of the system is first derived. The multivariable system is then carefully decoupled, and linearization and stability analysis techniques are employed to develop a linearized and mathematically sound control law. Nonlinearities such as saturation in the actuators are also catered for. The controller is then discretized using Runge-Kutta integration. Finally, the discretized control law is programmed on a computer in a real-time environment. The program is written in RT-Linux using GNU C for the real-time realization of the control scheme. Real-time processes, such as sampling and controlled actuation, and non-real-time processes, such as the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real-time and non-real-time tasks is addressed quite carefully. The results of this research pursuit are presented graphically. (author)
Copyright and Computer Generated Materials – Is it Time to Reboot the Discussion About Authorship?
Directory of Open Access Journals (Sweden)
Anne Fitzgerald
2013-12-01
Full Text Available Computer-generated materials are ubiquitous and we encounter them on a daily basis, even though most people are unaware that this is the case. Blockbuster movies, television weather reports and telephone directories all include material that is produced by utilising computer technologies. Copyright protection for materials generated by a programmed computer was considered by the Federal Court and the Full Court of the Federal Court in Telstra Corporation Limited v Phone Directories Company Pty Ltd. The court held that the White and Yellow Pages telephone directories produced by Telstra and its subsidiary, Sensis, were not protected by copyright because they were computer-generated works which lacked the requisite human authorship. The Copyright Act 1968 (Cth) does not contain specific provisions on the subsistence of copyright in computer-generated materials. Although the issue of copyright protection for computer-generated materials has been examined in Australia on two separate occasions by independently constituted Copyright Law Review Committees over a period of 10 years (1988 to 1998), the Committees' recommendations for legislative clarification by the enactment of specific amendments to the Copyright Act have not yet been implemented and the legal position remains unclear. In the light of the decision of the Full Federal Court in Telstra v Phone Directories it is timely to consider whether specific provisions should be enacted to clarify the position of computer-generated works under copyright law and, in particular, whether the requirement of human authorship for original works protected under Part III of the Copyright Act should now be reconceptualised to align with the realities of how copyright materials are created in the digital era.
Billings, Seth; Kang, Hyun Jae; Cheng, Alexis; Boctor, Emad; Kazanzides, Peter; Taylor, Russell
2015-06-01
We present a registration method for computer-assisted total hip replacement (THR) surgery, which we demonstrate to improve the state of the art by both reducing the invasiveness of current methods and increasing registration accuracy. A critical element of computer-guided procedures is the determination of the spatial correspondence between the patient and a computational model of patient anatomy. The current method for establishing this correspondence in robot-assisted THR is to register points intraoperatively sampled by a tracked pointer from the exposed proximal femur and, via auxiliary incisions, from the distal femur. In this paper, we demonstrate a noninvasive technique for sampling points on the distal femur using tracked B-mode ultrasound imaging and present a new algorithm for registering these data called Projected Iterative Most-Likely Oriented Point (P-IMLOP). Points and normal orientations of the distal bone surface are segmented from ultrasound images and registered to the patient model along with points sampled from the exposed proximal femur via a tracked pointer. The proposed approach is evaluated using a bone- and tissue-mimicking leg phantom constructed to enable accurate assessment of experimental registration accuracy with respect to a CT-image-based model of the phantom. These experiments demonstrate that localization of the femur shaft is greatly improved by tracked ultrasound. The experiments further demonstrate that, for ultrasound-based data, the P-IMLOP algorithm significantly improves registration accuracy compared to the standard ICP algorithm. Registration via tracked ultrasound and the P-IMLOP algorithm has high potential to reduce the invasiveness and improve the registration accuracy of computer-assisted orthopedic procedures.
Real-time field programmable gate array architecture for computer vision
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2001-01-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory, and it is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
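The idea of minimizing image-memory accesses with registers can be modeled in software. The sketch below is a behavioral model, not the paper's actual design: it streams pixels in raster order through two line buffers and a 3x3 shift-register window, so each pixel is read from memory exactly once:

```python
def convolve3x3_stream(image, mask):
    """Streaming 3x3 convolution: raster-order input, two line buffers
    (previous two rows), and a 3x3 window of shift registers."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    line1 = [0] * w                  # buffer for row r-1
    line2 = [0] * w                  # buffer for row r-2
    for r in range(h):
        win = [[0] * 3 for _ in range(3)]   # 3x3 window registers
        for c in range(w):
            pix = image[r][c]               # one memory read per pixel
            for i in range(3):              # shift the window left
                win[i][0], win[i][1] = win[i][1], win[i][2]
            # push the new column: rows r-2, r-1, r at column c
            win[0][2], win[1][2], win[2][2] = line2[c], line1[c], pix
            line2[c], line1[c] = line1[c], pix   # update line buffers
            if r >= 2 and c >= 2:           # full window available
                out[r - 1][c - 1] = sum(mask[i][j] * win[i][j]
                                        for i in range(3) for j in range(3))
    return out

ones = [[1] * 5 for _ in range(5)]
box = [[1] * 3 for _ in range(3)]
result = convolve3x3_stream(ones, box)   # interior outputs sum 9 ones
```

In hardware the nine window registers feed nine parallel multipliers and an adder tree, which is where the pipelined parallelism of the architecture comes from.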
Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than
2016-08-01
The new VLTI (Very Large Telescope Interferometer) instrument GRAVITY is equipped with a fringe tracker able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve, for average seeing conditions, a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real-time computer system comprising: a sensor, where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller, receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator, which combines the OPD commands with the real-time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman parameter estimator. This last stage monitors current measurements over a window of a few seconds and estimates new values for the main parameters of the Kalman control loop. The hardware and software implementation of this design runs asynchronously, and the four computers communicate for data transfer via the Reflective Memory Network. With the purpose of improving the performance of the GRAVITY fringe-tracking control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real-time computers involved in the tracking loop synchronously, using the Reflective Memory Interrupts as the coordination signal. This synchronous mechanism reduced the total pure delay of the loop from 3.5 ms to 2.0 ms, which translates into a better stabilization of the fringes as the bandwidth of the system is substantially improved. This paper explains in detail the real-time architecture of the fringe tracker in both its asynchronous and synchronous implementations. The achieved improvements on reducing the delay via this mechanism will be
Bound on quantum computation time: Quantum error correction in a critical environment
International Nuclear Information System (INIS)
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-01-01
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Real-time computing in environmental monitoring of a nuclear power plant
International Nuclear Information System (INIS)
Deme, S.; Lang, E.; Nagy, Gy.
1987-06-01
A real-time computing method is described for calculating the environmental radiation exposure due to a nuclear power plant, both during normal operation and in accident conditions. The effects of the Gaussian plume are recalculated every ten minutes based on meteorological parameters measured at heights of 20 and 120 m as well as on emission data. During normal operation the quantity of radioactive materials released through the stacks is measured and registered, while in an accident the source strength is unknown and the calculated relative data are normalized to the values measured at the eight environmental monitoring stations. The doses due to noble gases and to dry and wet deposition, as well as the time integral of the ¹³¹I concentration, are calculated and stored by a professional personal computer for 720 points in the environment within an 11 km radius. (author)
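The dispersion step described above presumably follows the standard Gaussian plume form with ground reflection; a minimal sketch, with all parameter values illustrative rather than taken from the plant's actual model:

```python
import math

# Textbook Gaussian plume: activity concentration at crosswind offset y and
# height z, for release rate Q [Bq/s], wind speed u [m/s], effective stack
# height H [m], and dispersion parameters sigma_y, sigma_z [m] (which in
# practice depend on downwind distance and atmospheric stability class).
def plume_concentration(Q, u, H, y, sigma_y, sigma_z, z=0.0):
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level, on-axis concentration for an illustrative release.
c0 = plume_concentration(Q=1e9, u=5.0, H=120.0, y=0.0,
                         sigma_y=80.0, sigma_z=40.0)
```

Re-evaluating this expression for each of the 720 grid points every ten minutes, with fresh wind and stability inputs, is exactly the kind of load a personal computer of the era could sustain in real time.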
The Educator's Approach to Media Training and Computer Games within Leisure Time of School-children
MORAVCOVÁ, Dagmar
2009-01-01
The paper describes possible approaches to computer game playing as part of school-children's leisure time and deals with the significance of media training in leisure time. It first specifies the concept of leisure time and its functions, then shows some positive and negative effects of the media. It further describes classical computer games, the problem of excessive computer game playing, and means of prevention. The paper deals with the educator's personality and the importance of ...
Computation of the Short-Time Linear Canonical Transform with Dual Window
Directory of Open Access Journals (Sweden)
Lei Huang
2017-01-01
Full Text Available The short-time linear canonical transform (STLCT), which maps the time-domain signal into the joint time-frequency domain, has recently attracted some attention in the area of signal processing. However, its applications are still limited by the fact that the selection of the coefficients of the short-time linear canonical series (STLCS) is not unique, because the time and frequency elementary functions (together known as the basis functions of the STLCS) do not constitute an orthogonal basis. To solve this problem, this paper investigates a dual-window solution. First, the nonorthogonality of the original window is resolved by imposing an orthogonality condition on a dual window. Then, based on the obtained condition, a dual-window computation approach for the GT is extended to the STLCS. In addition, simulations verify the validity of the proposed condition and solutions. Furthermore, some possible directions for application are discussed.
2015-05-28
[Fragment of a report documentation page on a network for real-time speech-emotion recognition (in-house effort, program element 62788F); the surviving text notes that speech-based recognition is simpler and requires fewer computational resources than other inputs such as facial expressions, and cites the Berlin database of emotional speech and references on vocal expression of emotion.]
Wilson loops in minimal surfaces
International Nuclear Information System (INIS)
Drukker, Nadav; Gross, David J.; Ooguri, Hirosi
1999-01-01
The AdS/CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in 4 dimensions is described by a minimal surface in AdS5 × S5. The authors examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which the authors call BPS loops, whose expectation values are free from ultraviolet divergence. They formulate the loop equation for such loops. To the extent that they have checked, the minimal surface in AdS5 × S5 gives a solution of the equation. The authors also discuss the zig-zag symmetry of the loop operator. In the N = 4 gauge theory, they expect the zig-zag symmetry to hold when the loop does not couple to the scalar fields in the supermultiplet. They show how this is realized for the minimal surface
Wilson loops and minimal surfaces
International Nuclear Information System (INIS)
Drukker, Nadav; Gross, David J.; Ooguri, Hirosi
1999-01-01
The AdS-CFT correspondence suggests that the Wilson loop of the large N gauge theory with N = 4 supersymmetry in four dimensions is described by a minimal surface in AdS5 × S5. We examine various aspects of this proposal, comparing gauge theory expectations with computations of minimal surfaces. There is a distinguished class of loops, which we call BPS loops, whose expectation values are free from ultraviolet divergence. We formulate the loop equation for such loops. To the extent that we have checked, the minimal surface in AdS5 × S5 gives a solution of the equation. We also discuss the zigzag symmetry of the loop operator. In the N = 4 gauge theory, we expect the zigzag symmetry to hold when the loop does not couple to the scalar fields in the supermultiplet. We will show how this is realized for the minimal surface. (c) 1999 The American Physical Society
van der Velden, V. H. J.; Willemse, M. J.; van der Schoot, C. E.; Hählen, K.; van Wering, E. R.; van Dongen, J. J. M.
2002-01-01
Immunoglobulin gene rearrangements are used as PCR targets for detection of minimal residual disease (MRD) in acute lymphoblastic leukemia (ALL). We investigated the occurrence of monoclonal immunoglobulin kappa-deleting element (IGK-Kde) rearrangements by Southern blotting and PCR/heteroduplex
Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N. J.
2014-02-01
Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited to injectable volumes between 0.6 and 2.4 ml, and injection was delayed by a required syringe-filling step. An improved MR-compatible injector system has been developed that measures the pH of the injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by eliminating the syringe-filling step through use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s after the injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.
Computational model for real-time determination of tritium inventory in a detritiation installation
International Nuclear Information System (INIS)
Bornea, Anisia; Stefanescu, Ioan; Zamfirache, Marius; Stefan, Iuliana; Sofalca, Nicolae; Bidica, Nicolae
2008-01-01
Full text: At ICIT Rm.Valcea an experimental pilot plant was built whose main objective is the development of a technology for detritiation of heavy water processed in the CANDU-type reactors of the nuclear power plant at Cernavoda, Romania. Since the aspects related to safeguards and safety are of great importance for such a detritiation installation, a complex computational model has been developed. The model allows real-time calculation of the tritium inventory in a working installation. The applied detritiation technology is catalyzed isotopic exchange coupled with cryogenic distillation. Computational models for non-steady working conditions have been developed for each isotopic exchange process. By coupling these processes the tritium inventory can be determined in real time. The computational model was developed based on the experience gained with the pilot installation. The model uses a set of parameters specific to the isotopic exchange processes; these parameters were experimentally determined in the pilot installation. The model is included in the monitoring system and uses as input data the parameters acquired in real time from the automation system of the pilot installation. A friendly interface has been created to visualize the final results as data or graphs. (authors)
A real-time computational model for estimating kinematics of ankle ligaments.
Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan
2016-01-01
An accurate assessment of ankle ligament kinematics is crucial to understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments in real time as the foot moves. The model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. It takes as inputs three position variables that can be measured by sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and outputs the kinematics of the ankle ligaments. The model was validated in terms of ligament length and strain by comparison with published data from cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model concur with those from the published studies but are sensitive to the ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results quantify the kinematics of the ankle ligaments in relation to disability level and can be used to optimize the robotic training trajectory.
Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula
Directory of Open Access Journals (Sweden)
Marlowe Thomas J.
1998-01-01
Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time, and non-polynomial, reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
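The quantity in question can be made concrete with a small brute-force sketch (not Gilbert's recursion): the all-terminal edge reliability of a graph whose edges fail independently, evaluated by enumerating surviving edge subsets. This is exponential in the number of edges and so practical only for tiny graphs, which is precisely the limitation the invariant-based approach addresses. The graph and survival probability below are illustrative.

```python
from itertools import combinations

def connected(nodes, edges):
    """Check connectivity of the subgraph via union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in nodes}) == 1

def edge_reliability(nodes, edges, p):
    """All-terminal reliability: probability the graph stays connected
    when each edge independently survives with probability p."""
    total = 0.0
    m = len(edges)
    for k in range(m + 1):
        for surviving in combinations(edges, k):
            if connected(nodes, surviving):
                total += p**k * (1 - p)**(m - k)
    return total

# Triangle: connected iff at least two of the three edges survive.
r = edge_reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9)
```

For the triangle with edge survival probability 0.9 this gives 0.9³ + 3·0.9²·0.1 ≈ 0.972.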
Extending the length and time scales of Gram–Schmidt Lyapunov vector computations
Energy Technology Data Exchange (ETDEWEB)
Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)
2013-08-01
Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N{sup 2} (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
A neuro-fuzzy computing technique for modeling hydrological time series
Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.
2004-05-01
Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining the two approaches, and as a result neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to the river flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results show that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series, and the model performed well in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, and peak flow estimation. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model-building process.
The minimal non-minimal standard model
International Nuclear Information System (INIS)
Bij, J.J. van der
2006-01-01
In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed
A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.
Luo, An; Sullivan, Thomas J
2010-04-01
We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed such that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by different selection of parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.
Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos
2015-02-01
The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest-quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor the battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices, and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to a TV audience with various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
Reliability of real-time computing with radiation data feedback at accidental release
International Nuclear Information System (INIS)
Deme, S.; Feher, I.; Lang, E.
1990-01-01
At the first workshop in 1985 we reported on the real-time dose computing method used at the Paks Nuclear Power Plant and on the telemetric system developed for the normalization of the computed data. At present, the computing method normalized to the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. In this connection we analyzed the reliability of the results obtained in this manner. The points of the analysis were: how the results are influenced by the choice of certain parameters that cannot be determined by direct methods, and how improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (¹³¹I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. A further source of errors may be that, when determining the level of gamma radiation, the radionuclide doses in the cloud and on the ground surface are measured together by the environmental monitoring stations, whereas these doses appear separately in the computations. At the Paks NPP it is the time integral of the airborne activity concentration of vapour-form ¹³¹I which is determined. This quantity includes neither the other physical and chemical forms of ¹³¹I nor the other isotopes of radioiodine. We gave numerical examples of the uncertainties due to the above factors. We concluded that, when accident-related measures must be decided on the basis of the computing method, the dose uncertainties may reach one order of magnitude for points lying far from the monitoring stations. Different measures to make the uncertainties significantly lower are discussed
Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.
Sanchez, Yerly; Pinzon, David; Zheng, Bin
2017-10-01
To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking. Visual stimuli consisted of eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment (CAREN)) and a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program with a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments; reaction time changes between direct vision and virtual environments.
Computer simulation of the time evolution of a quenched model alloy in the nucleation region
International Nuclear Information System (INIS)
Marro, J.; Lebowitz, J.L.; Kalos, M.H.
1979-01-01
The time evolution of the structure function and of the cluster (or grain) distribution following quenching in a model binary alloy with a small concentration of minority atoms is obtained from computer simulations. The structure function S̄(k,t) obeys a simple scaling relation, S̄(k,t) = K^(-3) F(k/K) with K(t) ∝ t^(-a), a ≈ 0.25, during the latter and larger part of the evolution. During the same period, the mean cluster size grows approximately linearly with time
Manual cross check of computed dose times for motorised wedged fields
International Nuclear Information System (INIS)
Porte, J.
2001-01-01
If a mass of tissue equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields, to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
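The abstract's point, that splitting exposure time equally between open and wedged fields does not produce half the wedge angle, can be illustrated with a simple tangent-weighting approximation. This weighting rule is a hypothetical simplification for illustration only; the programs described apportion time using actual dose distributions.

```python
import math

def effective_wedge_angle(wedge_angle_deg, fraction_wedged):
    """Estimate the effective wedge angle when the total exposure time is
    split between open and wedged fields.  Uses a simple tangent-weighting
    approximation (an illustrative assumption, not the paper's method)."""
    tan_eff = fraction_wedged * math.tan(math.radians(wedge_angle_deg))
    return math.degrees(math.atan(tan_eff))

# Splitting time equally with a 60-degree wedge gives roughly 41 degrees,
# not 30 -- the point made in the abstract.
angle = effective_wedge_angle(60.0, 0.5)
```

Under this model, half the exposure time through a 60° wedge yields an effective angle near 41°, noticeably steeper than the naive 30° guess.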
Energy Technology Data Exchange (ETDEWEB)
Machaj, B. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)
1996-12-31
This research aims to develop a device for continuous monitoring of radon in air by measuring the alpha activity of radon and its short-lived decay products. The variation of this alpha activity with time influences the measured results, so knowledge of how the concentrations of radon and its daughters vary with time is required. A computer program in the Turbo Pascal language was therefore developed to perform the computations using the known relations involved, the program being adapted for IBM PC computers. The program enables computation of the activity of {sup 222}Rn and its daughter products {sup 218}Po, {sup 214}Pb, {sup 214}Bi and {sup 214}Po every 1 min within the period of 0-255 min, for any state of radioactive equilibrium between the radon and its daughter products. The program also permits computation of the alpha activity of {sup 222}Rn + {sup 218}Po + {sup 214}Po against time, and of the total alpha activity over a selected interval of time. The results of the computations are stored on the computer hard disk in ASCII format and are used by a graphics program, e.g. DrawPerfect, to make diagrams. Equations employed for the computation of the alpha activity of radon and its decay products, as well as a description of the program functions, are given. (author). 2 refs, 4 figs.
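The activity relations such a program evaluates are the Bateman equations for the {sup 222}Rn decay chain. Below is a minimal Python sketch (an illustration, not the original Turbo Pascal program) for a sample starting as pure radon; {sup 214}Po is omitted because its 164 μs half-life keeps it in equilibrium with {sup 214}Bi.

```python
import math

# Half-lives of 222Rn and its short-lived daughters (minutes);
# 214Po (164 us half-life) is treated as in equilibrium with 214Bi.
HALF_LIVES_MIN = {
    "Rn-222": 3.8235 * 24 * 60,
    "Po-218": 3.05,
    "Pb-214": 26.8,
    "Bi-214": 19.7,
}
LAMBDAS = [math.log(2) / t for t in HALF_LIVES_MIN.values()]

def bateman_activity(lams, n, t, a1_0=1.0):
    """Activity of the n-th chain member (1-indexed) at time t (minutes),
    for a sample starting as pure parent with activity a1_0 (Bateman)."""
    lam = lams[:n]
    n1_0 = a1_0 / lam[0]                      # initial parent atom count
    coeff = n1_0 * math.prod(lam[:-1])
    s = sum(
        math.exp(-lam[i] * t)
        / math.prod(lam[j] - lam[i] for j in range(n) if j != i)
        for i in range(n)
    )
    return lam[-1] * coeff * s

# Relative activities 60 min after sealing a pure-radon sample:
profile = {name: bateman_activity(LAMBDAS, i + 1, 60.0)
           for i, name in enumerate(HALF_LIVES_MIN)}
```

After roughly five hours the daughter activities are within about 1% of the parent activity, the radioactive-equilibrium regime the program's equilibrium options cover.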
M. Kasemann
CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-times increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Television Viewing, Computer Use, Time Driving and All‐Cause Mortality: The SUN Cohort
Basterra‐Gortari, Francisco Javier; Bes‐Rastrollo, Maira; Gea, Alfredo; Núñez‐Córdoba, Jorge María; Toledo, Estefanía; Martínez‐González, Miguel Ángel
2014-01-01
Background Sedentary behaviors have been directly associated with all‐cause mortality. However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all‐cause mortality. Methods and Results In this prospective, dynamic cohort study (the SUN Project) 13 284 Spanish university graduates with a mean age of 37 years were followed‐up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All‐cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥3 h/day of television viewing than for those in the lowest viewing category. Television viewing was directly associated with all‐cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation. PMID:24965030
International Nuclear Information System (INIS)
Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide
2007-01-01
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic ''lateral'' force from the walls
International Nuclear Information System (INIS)
Dubois, Daniel M.
2000-01-01
This paper is a continuation of our preceding paper dealing with the computational derivation of the Klein-Gordon quantum relativist equation and the Schroedinger quantum equation with forward and backward space-time shifts. The first part introduces forward and backward derivatives for discrete and continuous systems. Generalized complex discrete and continuous derivatives are deduced. The second part deduces the Klein-Gordon equation from the space-time complex continuous derivatives. These derivatives take into account forward-backward space-time shifts related to an internal phase velocity u. The internal group velocity v is related to the speed of light by u·v = c² and to the external group and phase velocities by u·v = v_g·v_p. Without time shift, the Schroedinger equation is deduced, with a supplementary term which could represent a reference potential. The third part deduces the Quantum Relativist Klein-Gordon equation for a particle in an electromagnetic field
Optimal design and use of retry in fault tolerant real-time computer systems
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy, and to use retry for fault characterization, is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, is derived. Combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, is then carried out. Two solution approaches are developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions; these are standard distributions commonly used in fault modeling and analysis.
International Nuclear Information System (INIS)
Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru
1975-01-01
Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode, in which image data were stored in a core memory as digital images and the images were then transferred to magnetic disk or tape on the frame timing signal. Firstly, the counting-rates stored in the buffer memory were measured as a function of the display event-rates of the scintillation camera for the two modes. For both modes, the stored counting-rates (M) were expressed by the following formula: M = N(1 − Nτ), where N was the display event-rate of the camera and τ was the resolving time, including the analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μsec for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the ''LIST'' mode, the maximum value of the stored counting-rates from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the ''HISTOGRAM'' mode, the minimum frame time was determined by the size of the buffer memory and the access time and transfer rate of the external memory. In our system, the maximum value of the stored counting-rates was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems becomes possible for quantitative dynamic imaging, and future improvements are suggested. (author)
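The dead-time relation quoted in the abstract is simple to sketch; the rates below are illustrative, with τ = 10 μs as reported for the TOSBAC system.

```python
def stored_rate(display_rate, tau=10e-6):
    """Stored counting rate M = N(1 - N*tau) for display event rate N
    (counts/s) and resolving time tau (s); tau is about 10 microseconds
    in the system described."""
    return display_rate * (1.0 - display_rate * tau)

# At N = 20,000 events/s about 20% of events are lost to dead time;
# the formula peaks at N = 1/(2*tau) = 50,000 events/s.
m = stored_rate(20000)   # roughly 16,000 counts/s stored
```

Beyond the peak at N = 1/(2τ), pushing the display rate higher actually reduces the stored rate.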
Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.
2016-01-01
In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
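A minimal sketch of the mechanism whose overhead the paper measures (authenticating a control message with HMAC-SHA256), using only the Python standard library; the key and payload are hypothetical, and the paper's implementation is a kernel-level Linux one rather than Python.

```python
import hmac
import hashlib

KEY = b"shared-secret-key"   # hypothetical pre-shared key

def sign(payload: bytes) -> bytes:
    """Append a 32-byte HMAC-SHA256 tag to a control message."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(message: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

frame = sign(b"steer:+0.5")   # payload is an invented example command
ok = verify(frame)            # True for an untampered frame
```

The 32-byte tag per frame is the communication overhead, and the two `hmac.new` computations per round trip are the computational overhead that must fit inside the time-triggered schedule.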
Cloud computing platform for real-time measurement and verification of energy performance
International Nuclear Information System (INIS)
Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie
2017-01-01
Highlights: • Application of the PSO algorithm can improve the accuracy of the baseline model. • The M&V cloud platform automatically calculates energy performance. • The M&V cloud platform can be applied to all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • The M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations. This requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, because the M&V period typically lasts as long as several months or up to a year, failure to immediately detect abnormal energy performance not only decreases energy savings but also prevents timely correction and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculation of the energy performance, and access to long-term cumulative information about the energy performance, are provided via a feature that allows direct uploads of the energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested in a case study involving improvements to a cold storage system in a hypermarket. The cloud computing platform for real-time energy performance M&V is applicable to any industry and energy conservation measure. With the M&V cloud platform, real-time
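The core M&V arithmetic on such a platform — fit a baseline model from pre-retrofit data, then report savings as the baseline prediction minus metered use — can be sketched as below. Plain one-variable least squares stands in for the paper's PSO-tuned multivariate regression, and all numbers are invented for illustration.

```python
def fit_baseline(x, y):
    """Ordinary least-squares fit of y = a + b*x over the baseline period.
    (The platform in the paper tunes its model with particle swarm
    optimisation; plain OLS keeps this sketch short.)"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Baseline period: daily kWh vs cooling degree-days (invented numbers).
cdd = [2.0, 4.0, 6.0, 8.0]
kwh = [110.0, 130.0, 150.0, 170.0]
a, b = fit_baseline(cdd, kwh)

# Post-retrofit day: 5 CDD, metered 120 kWh -> avoided energy vs baseline.
savings = (a + b * 5.0) - 120.0   # 20 kWh saved on this day
```

Running this calculation automatically on each uploaded meter reading is what turns month-scale M&V into the real-time monitoring the abstract describes.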
Automated selection of brain regions for real-time fMRI brain-computer interfaces
Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio
2017-02-01
Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.
Scheduling with time-dependent execution times
Woeginger, G.J.
1995-01-01
We consider systems of tasks where the task execution times are time-dependent and where all tasks have some common deadline. We describe how to compute in polynomial time a schedule that minimizes the number of late tasks. This answers a question raised in a recent paper by Ho, Leung and Wei.
A sub-cubic time algorithm for computing the quartet distance between two general trees
DEFF Research Database (Denmark)
Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas
2011-01-01
Background When inferring phylogenetic trees, different algorithms may give different trees. To study such effects, a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...
Computer vision system in real-time for color determination on flat surface food
Directory of Open Access Journals (Sweden)
Erick Saldaña
2013-03-01
Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat surface foods. For this purpose, a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), with which the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure an adequate and efficient automated application in industrial quality-control processes in the food industry sector.
A stable computational scheme for stiff time-dependent constitutive equations
International Nuclear Information System (INIS)
Shih, C.F.; Delorenzi, H.G.; Miller, A.K.
1977-01-01
Viscoplasticity and creep type constitutive equations are increasingly being employed in finite element codes for evaluating the deformation of high temperature structural members. These constitutive equations frequently exhibit stiff regimes which make an analytical assessment of the structure very costly. A computational scheme for handling deformation in stiff regimes is proposed in this paper. Through the finite element discretization, the governing partial differential equations in the spatial (x) and time (t) variables are reduced to a system of nonlinear ordinary differential equations in the independent variable t. The constitutive equations are expanded in a Taylor's series about selected values of t. The resulting system of differential equations is then integrated by an implicit scheme which employs a predictor technique to initiate the Newton-Raphson procedure. To examine the stability and accuracy of the computational scheme, a series of calculations was carried out for uniaxial specimens and thick wall tubes subjected to mechanical and thermal loading. (Auth.)
Real-time dynamics of lattice gauge theories with a few-qubit quantum computer
Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer
2016-06-01
Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.
Minimal Poems Written in 1979
Directory of Open Access Journals (Sweden)
Sandra Sirangelo Maggio
2008-04-01
Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I had seen a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity in this new literary trend known as Minimalism.
Fault tolerant distributed real time computer systems for I and C of prototype fast breeder reactor
Energy Technology Data Exchange (ETDEWEB)
Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.
2014-03-15
Highlights: • The architecture of the distributed real time computer system (DRTCS) used in I and C of PFBR is explained. • The fault tolerant (hot standby) architecture, fault detection and switch over are detailed. • A scaled down model was used to study functional and performance requirements of the DRTCS. • Quality of service parameters for the scaled down model were critically studied. - Abstract: The prototype fast breeder reactor (PFBR) is in an advanced stage of construction at Kalpakkam, India. A three-tier architecture is adopted for instrumentation and control (I and C) of PFBR, wherein the bottom tier consists of real time computer (RTC) systems, the middle tier consists of process computers and the top tier consists of display stations. These RTC systems are geographically distributed and networked together with the process computers and display stations. A hot standby architecture comprising dual redundant RTC systems with a switch over logic system is deployed in order to achieve fault tolerance. Fault tolerant dual redundant network connectivity is provided in each RTC system, and TCP/IP is selected as the network communication protocol. In order to assess the performance of the distributed RTC systems, a scaled down model was developed with 9 representative systems, in which nearly 15% of the I and C signals of PFBR were connected and monitored. Functional and performance testing were carried out for each RTC system, and the fault tolerant characteristics were studied by injecting various faults into the system and observing the performance. Various quality of service parameters, such as connection establishment delay, priority parameter, transit delay, throughput and residual error ratio, were critically studied for the network.
Minimizing inner product data dependencies in conjugate gradient iteration
Vanrosendale, J.
1983-01-01
The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start up, the new algorithm can perform a conjugate gradient iteration in time c*log(log(N)).
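The serializing effect of the inner products is visible in a textbook conjugate gradient loop (a generic sketch, not the restructured algorithm of the paper): each iteration contains two global reductions, r·r and p·Ap, that all processors must wait on before proceeding.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Standard CG for a symmetric positive-definite A. The two
    inner products per iteration (r.r and p.Ap) are the global
    reductions that limit concurrency."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r                  # reduction 1: residual norm
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # reduction 2: step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```

Each reduction costs on the order of log(N) parallel time, which is the dependency the paper's algebraic restructuring targets.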
IMPORTANCE, Minimal Cut Sets and System Availability from Fault Tree Analysis
International Nuclear Information System (INIS)
Lambert, H. W.
1987-01-01
1 - Description of problem or function: IMPORTANCE computes various measures of probabilistic importance of basic events and minimal cut sets to a fault tree or reliability network diagram. The minimal cut sets, the failure rates and the fault duration times (i.e., the repair times) of all basic events contained in the minimal cut sets are supplied as input data. The failure and repair distributions are assumed to be exponential. IMPORTANCE, a quantitative evaluation code, then determines the probability of the top event and computes the importance of minimal cut sets and basic events by a numerical ranking. Two measures are computed. The first describes system behavior at one point in time; the second describes sequences of failures that cause the system to fail in time. All measures are computed assuming statistical independence of basic events. In addition, system unavailability and expected number of system failures are computed by the code. 2 - Method of solution: Seven measures of basic event importance and two measures of cut set importance can be computed. Birnbaum's measure of importance (i.e., the partial derivative) and the probability of the top event are computed using the min cut upper bound. If there are no replicated events in the minimal cut sets, then the min cut upper bound is exact. If basic events are replicated in the minimal cut sets, then based on experience the min cut upper bound is accurate if the probability of the top event is less than 0.1. Simpson's rule is used in computing the time-integrated measures of importance. Newton's method for approximating the roots of an equation is employed in the options where the importance measures are computed as a function of the probability of the top event, and a shell sort puts the output in descending order of importance
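The min cut upper bound and Birnbaum's measure described above can be sketched in a few lines (helper names are illustrative, not from the IMPORTANCE code; basic events are assumed statistically independent, as in the abstract).

```python
def cut_prob(cut, p):
    """Probability that every basic event in a minimal cut set occurs."""
    prob = 1.0
    for event in cut:
        prob *= p[event]
    return prob

def min_cut_upper_bound(cuts, p):
    """Min cut upper bound on the top-event probability:
    1 - prod_i (1 - P(C_i)); exact when no event is replicated."""
    survive = 1.0
    for c in cuts:
        survive *= 1.0 - cut_prob(c, p)
    return 1.0 - survive

def birnbaum(cuts, p, event):
    """Birnbaum importance: the partial derivative of the top-event
    probability w.r.t. the event, via its two extreme states."""
    p_up = dict(p, **{event: 1.0})
    p_dn = dict(p, **{event: 0.0})
    return min_cut_upper_bound(cuts, p_up) - min_cut_upper_bound(cuts, p_dn)
```

For example, with cut sets {a, b} and {c} and event probabilities 0.1, 0.2 and 0.05, the bound is 1 − (1 − 0.02)(1 − 0.05) = 0.069.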
2010-04-01
... Management; computation of time; availability for public disclosure. 10.20 Section 10.20 Food and Drugs FOOD... Management; computation of time; availability for public disclosure. (a) A submission to the Division of Dockets Management of a petition, comment, objection, notice, compilation of information, or any other...
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
Directory of Open Access Journals (Sweden)
Traykov Alexander
2015-01-01
Full Text Available Numerical studies are performed on computer models taking into account the stages of construction and time dependent material properties defined in two forms. A 2D model of a three storey, two span frame is created. The first form deals with material defined in the usual design practice way, without taking into account the time dependent properties of the concrete. In the second form, creep and shrinkage of the concrete are taken into account. Displacements and internal forces in specific elements and sections are reported. The influence of the time dependent material properties on the displacements and the internal forces in the main structural elements is tracked down. The results corresponding to the two forms of material definition are compared with each other as well as with the results obtained by the usual design calculations. Conclusions are made on the influence of concrete creep and shrinkage during construction on the structural behaviour.
Computational imaging with multi-camera time-of-flight systems
Shrestha, Shikhar
2016-07-11
Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Suryono, T. J.; Gofuku, A.
2018-02-01
One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Furthermore, such a feature would increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of the action time for each step is acquired based on similar accident cases and Support Vector Regression. The derived time is processed and then displayed on a CBP user interface.
de Jong, Joost J A; Lataster, Arno; van Rietbergen, Bert; Arts, Jacobus J; Geusens, Piet P; van den Bergh, Joop P W; Willems, Paul C
2017-02-27
Carbon-fiber-reinforced poly-ether-ether-ketone (CFR-PEEK) has superior radiolucency compared to other orthopedic implant materials, e.g. titanium or stainless steel, thus allowing metal-artifact-free postoperative monitoring by computed tomography (CT). Recently, high-resolution peripheral quantitative CT (HRpQCT) proved to be a promising technique to monitor the recovery of volumetric bone mineral density (vBMD), micro-architecture and biomechanical parameters in stable conservatively treated distal radius fractures. When using HRpQCT to monitor unstable distal radius fractures that require volar distal radius plating for fixation, radiolucent CFR-PEEK plates may be a better alternative to currently used titanium plates to allow for reliable assessment. In this pilot study, we assessed the effect of a volar distal radius plate made from CFR-PEEK on bone parameters obtained from HRpQCT in comparison to two titanium plates. Plates were instrumented in separate cadaveric human forearms (n = 3). After instrumentation and after removal of the plates, duplicate HRpQCT scans were made of the region covered by the plate. HRpQCT images were visually checked for artifacts. vBMD, micro-architectural and biomechanical parameters were calculated and compared between the uninstrumented and instrumented radii. No visible image artifacts were observed in the CFR-PEEK plate instrumented radius, and errors in bone parameters ranged from -3.2 to 2.6%. In the radii instrumented with the titanium plates, severe image artifacts were observed and errors in bone parameters ranged between -30.2 and 67.0%. We recommend using CFR-PEEK plates in longitudinal in vivo studies that monitor the healing process of unstable distal radius fractures treated operatively by plating or bone graft ingrowth.
The reliable solution and computation time of variable parameters logistic model
Wang, Pengfei; Pan, Xinnong
2018-05-01
The study investigates the reliable computation time (RCT, termed T_c) obtained by applying double-precision computation to a variable parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of T_c are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the T_c of the fixed parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
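The notion of a reliable computation time can be illustrated for the fixed-parameter logistic map by comparing a double-precision iteration against a higher-precision reference (a toy sketch using Python's `decimal`, not the paper's method; function names and tolerances are illustrative).

```python
from decimal import Decimal, getcontext

def logistic_double(x0, r, n):
    """Iterate x -> r*x*(1-x) in ordinary double precision."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def logistic_decimal(x0, r, n, digits=50):
    """Same iteration with 50 significant digits as a reference."""
    getcontext().prec = digits
    x, r = Decimal(str(x0)), Decimal(str(r))
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(float(x))
    return out

def reliable_steps(x0=0.4, r=3.99, n=200, tol=1e-3):
    """Number of iterations before the double-precision orbit
    diverges from the reference beyond tol -- a proxy for T_c."""
    d, ref = logistic_double(x0, r, n), logistic_decimal(x0, r, n)
    for i, (a, b) in enumerate(zip(d, ref)):
        if abs(a - b) > tol:
            return i
    return n
```

Because round-off errors grow exponentially in the chaotic regime, the double-precision orbit is only trustworthy for a finite number of steps, which is exactly the quantity the paper averages over many samples.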
Computing moment to moment BOLD activation for real-time neurofeedback
Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.
2013-01-01
Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
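The idea of scaling the residual between the measured and model-predicted intensity can be sketched as follows (a simplified batch refit at each acquisition rather than the paper's incremental update; names and the noise model are illustrative).

```python
import numpy as np

def neurofeedback_stat(y, X):
    """For each new acquisition, fit a GLM to the samples seen so far,
    predict the newest intensity, and scale the residual by the fit's
    standard deviation to obtain a statistic usable as feedback."""
    n_params = X.shape[1]
    stats = []
    for t in range(n_params + 2, len(y)):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        resid = y[:t] - X[:t] @ beta
        sigma = resid.std(ddof=n_params)        # estimator spread
        stats.append((y[t - 1] - X[t - 1] @ beta) / sigma)
    return np.array(stats)
```

The design matrix X would carry the nuisance regressors (drift, motion, etc.) that the paper separates from neural sources; here it is left generic.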
Real-time computation of parameter fitting and image reconstruction using graphical processing units
Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin
2017-06-01
In recent years, graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single GPU systems to show that real time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40 times those of a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.
P. McBride
The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...
M. Kasemann
Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Present and future aspects of PROSA - A computer program for near real time accountancy
International Nuclear Information System (INIS)
Beedgen, R.
1987-01-01
The methods of near real time accountancy (NRTA) for safeguarding nuclear material have received much attention in recent years. PROSA 1.0 was developed as a computer program to evaluate a sequence of material balance data based on three statistical tests for a selected false alarm probability. A new NRTA test procedure will be included, and an option for the calculation of detection probabilities of hypothetical loss patterns will be made available, in future releases of PROSA. Under a non-loss assumption, PROSA may also be used for the analysis of facility measurement models
Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula
Directory of Open Access Journals (Sweden)
Thomas J. Marlowe
1998-01-01
Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.
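The exponential baseline that motivates such work can be illustrated by brute-force all-terminal reliability, which enumerates every subset of surviving edges (a sketch assuming independent edge failures with a common survival probability p; this is the naive alternative, not Gilbert's recursion).

```python
from itertools import combinations

def connected(nodes, edges):
    """Depth-first search over an undirected edge list."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        for a, b in edges:
            if a == u and b not in seen:
                stack.append(b)
            elif b == u and a not in seen:
                stack.append(a)
    return seen == set(nodes)

def all_terminal_reliability(nodes, edges, p):
    """Probability the graph stays connected: sum over all 2^|E|
    edge subsets of P(exactly those edges survive). Exponential,
    which is why smarter recursions are needed."""
    total = 0.0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            s = set(subset)
            prob = 1.0
            for e in edges:
                prob *= p if e in s else (1 - p)
            if connected(nodes, list(subset)):
                total += prob
    return total
```

For a triangle with p = 0.9 this evaluates p³ + 3p²(1 − p) = 0.972; the subset enumeration doubles for every added edge.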
Using real-time fMRI brain-computer interfacing to treat eating disorders.
Sokunbi, Moses O
2018-05-15
Real-time functional magnetic resonance imaging based brain-computer interfacing (fMRI neurofeedback) has shown encouraging outcomes in the treatment of psychiatric and behavioural disorders. However, its use in the treatment of eating disorders is very limited. Here, we give a brief overview of how to design and implement fMRI neurofeedback intervention for the treatment of eating disorders, considering the basic and essential components. We also attempt to develop potential adaptations of fMRI neurofeedback intervention for the treatment of anorexia nervosa, bulimia nervosa and binge eating disorder. Copyright © 2018 Elsevier B.V. All rights reserved.