WorldWideScience

Sample records for rapid computing times

  1. Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion

    Science.gov (United States)

    Erickson, W. D.; Prabhu, R. K.

    1986-01-01

    A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.
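
    A minimal sketch of the kind of Newton iteration such a single-equation reduction enables; the scalar residual below is a toy stand-in, not the paper's actual reduced equation:

```python
# Newton's method applied to a single scalar residual f(x) = 0, as used once
# the ten governing equations have been reduced to one. The residual here is
# a toy equilibrium-like expression, NOT the paper's actual reduced equation.

def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton iteration starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy residual: equilibrium constant K balanced against a mole-fraction ratio.
K = 0.5
f = lambda x: K - x**2 / (1.0 - x)
dfdx = lambda x: -(2 * x * (1.0 - x) + x**2) / (1.0 - x)**2
print(newton(f, dfdx, x0=0.4))   # converges to x = 0.5
```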

  2. Rapid prototyping of an EEG-based brain-computer interface (BCI).

    Science.gov (United States)

    Guger, C; Schlögl, A; Neuper, C; Walterspacher, D; Strein, T; Pfurtscheller, G

    2001-03-01

    The electroencephalogram (EEG) is modified by motor imagery and can be used by patients with severe motor impairments (e.g., late stage of amyotrophic lateral sclerosis) to communicate with their environment. Such a direct connection between the brain and the computer is known as an EEG-based brain-computer interface (BCI). This paper describes a new type of BCI system that uses rapid prototyping to enable a fast transition of various types of parameter estimation and classification algorithms to real-time implementation and testing. Rapid prototyping is possible by using Matlab, Simulink, and the Real-Time Workshop. It is shown how to automate real-time experiments and perform the interplay between on-line experiments and offline analysis. The system is able to process multiple EEG channels on-line and operates under Windows 95 in real-time on a standard PC without an additional digital signal processor (DSP) board. The BCI can be controlled over the Internet, LAN or modem. This BCI was tested on 3 subjects whose task it was to imagine either left or right hand movement. A classification accuracy between 70% and 95% could be achieved with two EEG channels after some sessions with feedback using an adaptive autoregressive (AAR) model and linear discriminant analysis (LDA).
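
    The feature extraction (AAR model fitting) is beyond a short sketch, but the final LDA stage is simple; a minimal illustration on synthetic feature vectors standing in for AAR coefficients from two EEG channels (shapes and values are assumptions):

```python
# Sketch of the LDA classification stage of a two-class motor-imagery BCI.
# Synthetic Gaussian features stand in for real AAR coefficients; only the
# classifier step mirrors the pipeline described in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 100, 12            # e.g., 6 AAR coefficients x 2 channels
X_left = rng.normal(0.0, 1.0, (n_trials, n_features))    # "left imagery" trials
X_right = rng.normal(0.5, 1.0, (n_trials, n_features))   # "right imagery" trials
X = np.vstack([X_left, X_right])
y = np.array([0] * n_trials + [1] * n_trials)            # 0 = left, 1 = right

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```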

  3. Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity

    International Nuclear Information System (INIS)

    Schouman, Thomas; Courvoisier, Delphine S.; Imholz, Benoit; Van Issum, Christopher; Scolozzi, Paolo

    2012-01-01

    Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9%, according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; (2) this method has the potential of being routinely used to standardize the post-traumatic evaluation of orbital fractures.

  4. Computation of a long-time evolution in a Schroedinger system

    International Nuclear Information System (INIS)

    Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.

    1988-01-01

    We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution.
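
    As a sketch of the scheme described here, assuming a small Hermitian matrix in place of the actual two-nucleon Hamiltonian: approximate the very-short-time propagator with a (1,1) Padé (Cayley) form, then reach long times by iterative squaring, since U(2^n τ) = [U(τ)]^(2^n):

```python
# Short-time propagation via a (1,1) Pade (Cayley) approximant, followed by
# iterative squaring. A random Hermitian matrix stands in for the actual
# Yamaguchi-potential Hamiltonian (hbar = 1).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                       # toy Hermitian Hamiltonian

tau = 1e-3                              # very short initial time step
n_squarings = 10                        # final time T = 2**10 * tau

I = np.eye(6)
# Cayley form: U(tau) = (I + i*H*tau/2)^(-1) (I - i*H*tau/2) ~ exp(-i*H*tau)
U = np.linalg.solve(I + 0.5j * H * tau, I - 0.5j * H * tau)
for _ in range(n_squarings):
    U = U @ U                           # each squaring doubles the evolved time

# The Cayley form is exactly unitary, so U†U should stay at the identity
print(np.max(np.abs(U.conj().T @ U - I)))
```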

  5. Development of a rapid multi-line detector for industrial computed tomography

    International Nuclear Information System (INIS)

    Nachtrab, Frank; Firsching, Markus; Hofmann, Thomas; Uhlmann, Norman; Neubauer, Harald; Nowak, Arne

    2015-01-01

    In this paper we present the development of a rapid multi-row detector optimized for industrial computed tomography. With a high frame rate, high spatial resolution and the ability to use up to 450 kVp, it is particularly suitable for applications such as fast acquisition of large objects, inline CT or time-resolved 4D CT. (Contains PowerPoint slides).

  6. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and third-party software upgrades is enabled, as well as rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants, enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.

  7. Computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226Ra concentration in soil samples using a 6 x 9-in. NaI(Tl) crystal containing a 3.25-in. deep by 3.5-in. diameter well. This gamma detection system is controlled by a mini-computer with a dual floppy disk storage medium. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC language is used for data processing.

  8. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    Science.gov (United States)

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  9. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system also helps form the basis for the study of computer science in secondary school.

  10. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    Science.gov (United States)

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  11. 12 CFR 1102.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1102.27 Section 1102.27 Banks... for Proceedings § 1102.27 Computing time. (a) General rule. In computing any period of time prescribed... time begins to run is not included. The last day so computed is included, unless it is a Saturday...
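
    The rule itself is mechanical; a sketch of the generic computation (weekend roll-forward only — the legal holidays such rules also exclude would need an extra table):

```python
# Generic CFR-style deadline arithmetic: the day of the act is excluded, the
# last day is included unless it falls on a Saturday or Sunday (federal legal
# holidays, also excluded by such rules, are omitted here for brevity).
import datetime

def deadline(act_date: datetime.date, period_days: int) -> datetime.date:
    day = act_date + datetime.timedelta(days=period_days)   # day 0 excluded
    while day.weekday() >= 5:           # 5 = Saturday, 6 = Sunday
        day += datetime.timedelta(days=1)
    return day

# An 8-day period starting Friday 2010-01-01 would end on Saturday 2010-01-09,
# so the deadline rolls forward to Monday 2010-01-11.
print(deadline(datetime.date(2010, 1, 1), 8))
```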

  12. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T& D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers 10 to 120 hours of computational time when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
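
    The scale of the burden is easy to verify; a back-of-envelope sketch (the per-solve cost is an assumed illustrative figure, not taken from the report):

```python
# Why yearlong 1-second QSTS is expensive: one power flow per time step.
# The 12 ms per-solve figure is illustrative only.
seconds_per_year = 365 * 24 * 3600
print(seconds_per_year)                      # 31,536,000 power flow solutions

ms_per_solve = 12                            # assumed per-solve cost
hours = seconds_per_year * ms_per_solve / 1000 / 3600
print(f"{hours:.0f} hours")                  # ~105 h, inside the 10-120 h range
```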

  13. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  14. 12 CFR 908.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 908.27 Section 908.27 Banks and... PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.27 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...

  15. Effectiveness Analysis of a Part-Time Rapid Response System During Operation Versus Nonoperation.

    Science.gov (United States)

    Kim, Youlim; Lee, Dong Seon; Min, Hyunju; Choi, Yun Young; Lee, Eun Young; Song, Inae; Park, Jong Sun; Cho, Young-Jae; Jo, You Hwan; Yoon, Ho Il; Lee, Jae Ho; Lee, Choon-Taek; Do, Sang Hwan; Lee, Yeon Joo

    2017-06-01

    To evaluate the effect of a part-time rapid response system on the occurrence rate of cardiopulmonary arrest by comparing the times of rapid response system operation versus nonoperation. Retrospective cohort study. A 1,360-bed tertiary care hospital. Adult patients admitted to the general ward were screened. Data were collected over 36 months from rapid response system implementation (October 2012 to September 2015) and more than 45 months before rapid response system implementation (January 2009 to September 2012). None. The rapid response system operates from 7 AM to 10 PM on weekdays and from 7 AM to 12 PM on Saturdays. Primary outcomes were the difference of cardiopulmonary arrest incidence between pre-rapid response system and post-rapid response system periods and whether the rapid response system operating time affects the cardiopulmonary arrest incidence. The overall cardiopulmonary arrest incidence (per 1,000 admissions) was 1.43. Although the number of admissions per month and case-mix index were increased (3,555.18 vs 4,564.72), the cardiopulmonary arrest incidence decreased significantly during rapid response system operating times (0.82 vs 0.49/1,000 admissions; p = 0.001) but remained similar during rapid response system nonoperating times (0.77 vs 0.73/1,000 admissions; p = 0.729). The implementation of a part-time rapid response system reduced the cardiopulmonary arrest incidence based on the reduction of cardiopulmonary arrest during rapid response system operating times. Further analysis of the cost effectiveness of a part-time rapid response system is needed.

  16. A computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities (RASA) Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226Ra concentration in soil samples using a 6 x 9 inch NaI(Tl) crystal containing a 3.25 inch deep by 3.5 inch diameter well. This gamma detection system is controlled by a minicomputer with a dual floppy disk storage medium, line printer, and optional X-Y plotter. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC language is used for data processing. The computer system is a Commodore Business Machines (CBM) Model 8032 personal computer with CBM peripherals. Control and data signals are passed via the parallel user's port to the interface unit. The analog-to-digital converter (ADC) is controlled in machine language, bootstrapped to high memory, and is addressed through the BASIC program. The BASIC program is designed to be ''user friendly'' and provides the operator with several modes of operation such as background and analysis acquisition. Any number of energy regions-of-interest (ROI) may be analyzed with automatic background subtraction. Also employed in the BASIC program are the 226Ra algorithms which utilize linear and polynomial regression equations for data conversion and look-up tables for radon equilibrating coefficients. The optional X-Y plotter may be used with two- or three-dimensional curve programs to enhance data analysis and presentation. A description of the system is presented and typical applications are discussed.

  17. 12 CFR 1780.11 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1780.11 Section 1780.11 Banks... time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...

  18. Quantitative evaluation of the disintegration of orally rapid disintegrating tablets by X-ray computed tomography.

    Science.gov (United States)

    Otsuka, Makoto; Yamanaka, Azusa; Uchino, Tomohiro; Otsuka, Kuniko; Sadamoto, Kiyomi; Ohshima, Hiroyuki

    2012-01-01

    To measure the rapid disintegration of Oral Disintegrating Tablets (ODT), a new test (XCT) was developed using X-ray computed tomography (X-ray CT). Placebo ODT, rapid disintegration candy (RDC) and Gaster®-D-Tablets (GAS) were used as model samples. All these ODTs were used to measure oral disintegration time (DT) in distilled water at 37±2°C by XCT. DTs were affected by the width of the mesh screens and by the degree to which the tablet holder vibrated from air bubbles. An in-vivo tablet disintegration test was performed for RDC using 11 volunteers. DT by the in-vivo method was significantly longer than that using the conventional tester. The experimental conditions for XCT, such as the width of the mesh screen and degree of vibration, were adjusted to be consistent with human DT values. Since DTs by the XCT method were almost the same as the human data, this method was able to quantitatively evaluate the rapid disintegration of ODT under the same conditions as inside the oral cavity. The DTs of four commercially available ODTs were comparatively evaluated by the XCT method, the conventional tablet disintegration test and the in-vivo method.

  19. 6 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Computation of time. 13.27 Section 13.27 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued...

  20. Rapid Time Response: A solution for Manufacturing Issue

    Directory of Open Access Journals (Sweden)

    Norazlin N.

    2017-01-01

    Response time in manufacturing has a major impact and contributes to many manufacturing issues. Two worst-case scenarios illustrate this. In 2009, Toyota issued a massive recall, driven by the complexity of 11 major models, covering over 9 million vehicles; the recall cost at least $2 billion in repairs and lost deals and resulted in the loss of 5% of its market share in the United States of America. The A380 was reported to miss its new-production targets, leading to delayed market entry, due to weak product life cycle management (PLM). These cases signal to all industries the need to acquire and optimize facilities for better traceability within the shortest possible time. In Industry 4.0, traceability and response time become the factors for high-performance manufacturing, and a rapid response time can expedite the traceability process and strengthen the level of communication between man, machine and management. The round-trip time (RTT) experiment yields different response times between two different operating systems for intra- and inter-platform signals. If such a rapid response time is adopted in a manufacturing process, delays in tracing every issue that leads to losses can be avoided.

  1. Noise-constrained switching times for heteroclinic computing

    Science.gov (United States)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for

  2. Computed tomographic demonstration of rapid changes in fatty infiltration of the liver

    International Nuclear Information System (INIS)

    Bashist, B.; Hecht, H.L.; Harely, W.D.

    1982-01-01

    Two alcoholic patients in whom computed tomography (CT) demonstrated reversal of fatty infiltration of the liver are described. The rapid reversibility of fatty infiltration can be useful in monitoring alcoholics with fatty livers. Focal fatty infiltration can mimic focal hepatic lesions, and repeat scans can be utilized to assess changes in CT attenuation values when this condition is suspected.

  3. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    Science.gov (United States)

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereo lithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  4. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
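
    A sketch of the triggering idea, assuming horizontal displacement gradients have already been estimated across a subnetwork (all numbers below are made up): the eigenvalues of the symmetric strain tensor are the principal strains, which can be thresholded to start modeling:

```python
# Principal-strain trigger sketch: form the 2D infinitesimal strain tensor from
# assumed horizontal displacement gradients and threshold its principal values.
import numpy as np

# Assumed displacement-gradient components (dimensionless strain)
dux_dx, dux_dy = 3e-6, 1e-6
duy_dx, duy_dy = -2e-6, 4e-6

shear = 0.5 * (dux_dy + duy_dx)
E = np.array([[dux_dx, shear],
              [shear, duy_dy]])          # symmetric strain tensor

principal = np.linalg.eigvalsh(E)        # principal strains, ascending order
THRESHOLD = 2e-6                         # illustrative trigger level
if np.max(np.abs(principal)) > THRESHOLD:
    print("trigger: start earthquake modeling", principal)
```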

  5. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
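
    The "delay problem" is easy to reproduce in a toy simulation: the same proportional controller applied to a discrete first-order plant, once acting on the current measurement and once on the previous sample's (plant and gain values below are illustrative, not from the paper):

```python
# Toy illustration of the delay problem: a proportional controller on the
# plant x[k+1] = a*x[k] + b*u[k], with and without a one-sample computing
# delay. All constants are illustrative.
def simulate(delayed: bool, steps: int = 40) -> float:
    a, b, kp, ref = 0.9, 0.5, 1.2, 1.0
    x, prev_error, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = ref - x
        u = kp * (prev_error if delayed else error)   # delayed: act on old data
        prev_error = error
        x = a * x + b * u
        cost += abs(error)                            # accumulated tracking error
    return cost

print("no delay      :", simulate(False))
print("1-sample delay:", simulate(True))   # larger cost: slower, oscillatory
```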

  6. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report is Volume 2 of the three volume documentation of the Seismic Module of CARES and represents the User's Manual. 14 refs

  7. Rapid genetic algorithm optimization of a mouse computational model: Benefits for anthropomorphization of neonatal mouse cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Corina Teodora Bot

    2012-11-01

    While the mouse presents an invaluable experimental model organism in biology, its usefulness in cardiac arrhythmia research is limited in some aspects due to major electrophysiological differences between murine and human action potentials (APs). As previously described, these species-specific traits can be partly overcome by application of a cell-type transforming clamp (CTC) to anthropomorphize the murine cardiac AP. CTC is a hybrid experimental-computational dynamic clamp technique, in which a computationally calculated time-dependent current is inserted into a cell in real time, to compensate for the differences between sarcolemmal currents of that cell (e.g., murine) and the desired species (e.g., human). For effective CTC performance, mismatch between the measured cell and a mathematical model used to mimic the measured AP must be minimal. We have developed a genetic algorithm (GA) approach that rapidly tunes a mathematical model to reproduce the AP of the murine cardiac myocyte under study. Compared to a prior implementation that used a template-based model selection approach, we show that GA optimization to a cell-specific model results in a much better recapitulation of the desired AP morphology with CTC. This improvement was more pronounced when anthropomorphizing neonatal mouse cardiomyocytes to human-like APs than to guinea pig APs. CTC may be useful for a wide range of applications, from screening effects of pharmaceutical compounds on ion channel activity, to exploring variations in the mouse or human genome. Rapid GA optimization of a cell-specific mathematical model improves CTC performance and may therefore expand the applicability and usage of the CTC technique.
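
    A minimal sketch of the optimization idea, with a toy decaying exponential standing in for the cardiac AP model (population size, mutation scale, and the model itself are assumptions):

```python
# GA parameter tuning in miniature: evolve (amplitude, rate) of a toy
# exponential so its trace matches a target trace. The toy model stands in
# for the actual cardiac AP model tuned in the paper.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)
target = 1.8 * np.exp(-3.0 * t)             # "recorded" trace, true params (1.8, 3.0)

def fitness(params):
    amp, rate = params
    return -np.sum((amp * np.exp(-rate * t) - target) ** 2)

pop = rng.uniform([0.1, 0.1], [5.0, 10.0], size=(50, 2))   # initial population
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-10:]                # keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.05, (40, 2))
    pop = np.vstack([parents, children])                   # elitism + mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print("recovered params:", best)            # approximately [1.8, 3.0]
```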

  8. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.

  9. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  10. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  11. Preparing printed circuit boards for rapid turn-around time on a protomat plotter

    International Nuclear Information System (INIS)

    Hawtree, J.

    1998-01-01

    This document describes the use of the LPKF ProtoMat mill/drill unit circuit board Plotter, with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use here at Fermilab's Particle Physics Department is for rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It has the capability of making traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high density surface mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turn-around time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait of usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers or rubouts. The process of creating a PCB (printed circuit board) begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining), where they are edited and converted to put them into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCam BASIS and BoardMaster for the CAM software. These programs run on a Windows 95 platform to drive an LPKF ProtoMat 93s plotter.

  12. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 3 of the three volume documentation of the Seismic Module of CARES. It presents three sample problems typically encountered in Soil-Structure Interaction analyses. 14 refs., 36 figs., 2 tabs

  13. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
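
    At the heart of NTP is a four-timestamp exchange; the standard offset and round-trip-delay estimates are compact enough to state directly (timestamps below are made up):

```python
# Standard NTP clock-offset and round-trip-delay estimates from the four
# timestamps: t1 client send, t2 server receive, t3 server send, t4 client
# receive. The values here are invented for illustration.
t1, t2, t3, t4 = 100.000, 100.062, 100.063, 100.025

offset = ((t2 - t1) + (t3 - t4)) / 2    # estimated client clock error (+50 ms)
delay = (t4 - t1) - (t3 - t2)           # round-trip network delay (24 ms)
print(f"offset = {offset:+.3f} s, delay = {delay:.3f} s")
```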

  14. 5 CFR 890.101 - Definitions; time computations.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Definitions; time computations. 890.101....101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee... in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate...

  15. Evaluation of RAPID for a UNF cask benchmark problem

    Science.gov (United States)

    Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.

    2017-09-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on the analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitation, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower, 35 seconds on 1 core versus 9.5 days on 16 cores.

  16. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics was developed on a network connecting a parallel computing server and a client terminal. Using the system, a user can visualize the results of a CFD (Computational Fluid Dynamics) simulation on the client terminal while the computation is in progress on the parallel server. Using a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. The amount of data sent from the parallel computer to the client is therefore so much smaller than without compression that the user can enjoy swift image display. Parallelization of image data generation is based on the Owner Computation Rule. The GUI on the client is built on a Java applet. Real-time visualization is thus possible on the client PC provided only that a Web browser is installed on it. (author)

  17. 29 CFR 1921.22 - Computation of time.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Computation of time. 1921.22 Section 1921.22 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... WORKERS' COMPENSATION ACT Miscellaneous § 1921.22 Computation of time. Sundays and holidays shall be...

  18. CARES (Computer Analysis for Rapid Evaluation of Structures) Version 1.0, seismic module

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulas, A.J.; Miller, C.A.; Costantino, C.J.

    1990-07-01

    During FY's 1988 and 1989, Brookhaven National Laboratory (BNL) developed the CARES system (Computer Analysis for Rapid Evaluation of Structures) for the US Nuclear Regulatory Commission (NRC). CARES is a PC software system which has been designed to perform structural response computations similar to those encountered in licensing reviews of nuclear power plant structures. The documentation of the Seismic Module of CARES consists of three volumes. This report represents Volume 1 of the three volume documentation of the Seismic Module of CARES. It concentrates on the theoretical basis of the system and presents modeling assumptions and limitations as well as solution schemes and algorithms of CARES. 31 refs., 6 figs

  19. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that the online need for general purpose computers should be available for the SCC and LHC machines. 2 figs

  20. Evaluation of RAPID for a UNF cask benchmark problem

    Directory of Open Access Journals (Sweden)

    Mascolino Valerio

    2017-01-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on the analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitation, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower, 35 seconds on 1 core versus 9.5 days on 16 cores.

  1. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    Science.gov (United States)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both the imaging concepts and the sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide an unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from the previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. Extreme learning machine is cited here as an example for demonstrative purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed under the context of tomographic absorption spectroscopy, we expect it to be useful also to other high speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence which have been demonstrated for combustion diagnostics.
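
    To make the surrogate idea concrete, a minimal extreme learning machine on a toy regression problem standing in for the projection-to-reconstruction mapping (dimensions and data are assumptions):

```python
# Minimal extreme learning machine (ELM): a fixed random hidden layer plus a
# least-squares output layer. A toy 1D regression stands in for the mapping
# from projection measurements to tomographic reconstructions.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (500, 4))                 # toy "projection" features
y = np.sin(X.sum(axis=1))                        # toy target "reconstruction"

n_hidden = 200
W = rng.normal(size=(4, n_hidden))               # random input weights, never trained
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W + b)                           # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # one-shot least-squares readout

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```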

  2. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  3. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  4. Time sequential single photon emission computed tomography studies in brain tumour using thallium-201

    International Nuclear Information System (INIS)

    Ueda, Takashi; Kaji, Yasuhiro; Wakisaka, Shinichiro; Watanabe, Katsushi; Hoshi, Hiroaki; Jinnouchi, Seishi; Futami, Shigemi

    1993-01-01

    Time sequential single photon emission computed tomography (SPECT) studies using thallium-201 were performed in 25 patients with brain tumours to evaluate the kinetics of thallium in the tumour and the biological malignancy grade preoperatively. After acquisition and reconstruction of SPECT data from 1 min post injection to 48 h (1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 15-20 min, followed by 4-6, 24 and 48 h), the thallium uptake ratio in the tumour versus the homologous contralateral area of the brain was calculated and compared with findings of X-ray CT, magnetic resonance imaging, cerebral angiography and histological investigations. Early uptake of thallium in tumours was related to tumour vascularity and the disruption of the blood-brain barrier. High and rapid uptake and slow reduction of thallium indicated a hypervascular malignant tumour; however, high and rapid uptake but rapid reduction of thallium indicated a hypervascular benign tumour, such as meningioma. Hypovascular and benign tumours tended to show low uptake and slow reduction of thallium. Long-lasting retention or uptake of thallium indicates tumour malignancy. (orig.)

  5. Recent achievements in real-time computational seismology in Taiwan

    Science.gov (United States)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also open to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point source information; ROS completes a 3D simulation in real time). For more information, welcome to visit the real-time computational seismology earthquake report webpage (RCS).

  6. 7 CFR 1.603 - How are time periods computed?

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false How are time periods computed? 1.603 Section 1.603... Licenses General Provisions § 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2...

  7. The application of digital computers to near-real-time processing of flutter test data

    Science.gov (United States)

    Hurley, S. R.

    1976-01-01

    Procedures used in monitoring, analyzing, and displaying flight and ground flutter test data are presented. These procedures include three digital computer programs developed to process structural response data in near real time. Qualitative and quantitative modal stability data are derived from time history response data resulting from rapid sinusoidal frequency sweep forcing functions, tuned-mode quick stops, and pilot induced control pulses. The techniques have been applied to both fixed and rotary wing aircraft, during flight, whirl tower rotor systems tests, and wind tunnel flutter model tests. An hydraulically driven oscillatory aerodynamic vane excitation system utilized during the flight flutter test programs accomplished during Lockheed L-1011 and S-3A development is described.

  8. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  9. Computer-facilitated rapid HIV testing in emergency care settings: provider and patient usability and acceptability.

    Science.gov (United States)

    Spielberg, Freya; Kurth, Ann E; Severynen, Anneleen; Hsieh, Yu-Hsiang; Moring-Parris, Daniel; Mackenzie, Sara; Rothman, Richard

    2011-06-01

    Providers in emergency care settings (ECSs) often face barriers to expanded HIV testing. We undertook formative research to understand the potential utility of a computer tool, "CARE," to facilitate rapid HIV testing in ECSs. Computer tool usability and acceptability were assessed among 35 adult patients, and provider focus groups were held, in two ECSs in Washington State and Maryland. The computer tool was usable by patients of varying computer literacy. Patients appreciated the tool's privacy and lack of judgment and their ability to reflect on HIV risks and create risk reduction plans. Staff voiced concerns regarding ECS-based HIV testing generally, including resources for follow-up of newly diagnosed people. Computer-delivered HIV testing support was acceptable and usable among low-literacy populations in two ECSs. Such tools may help circumvent some practical barriers associated with routine HIV testing in busy settings though linkages to care will still be needed.

  10. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  11. 50 CFR 221.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false How are time periods computed? 221.3... Provisions § 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the...

  12. [Introduction and some problems of the rapid time series laboratory reporting system].

    Science.gov (United States)

    Kanao, M; Yamashita, K; Kuwajima, M

    1999-09-01

    We introduced an on-line system for biochemical, hematological, serological, urinary, bacteriological, and emergency examinations and the associated office work, using a client-server system (NEC PC-LACS) built around centralized outpatient blood collection, centralized outpatient reception, and outpatient examination by reservation. With this on-line system, results for 71 items in chemical, serological, hematological, and urinary examinations are rapidly reported within 1 hour. Since the ordering system at our hospital has not yet been completed, we constructed a rapid time-series reporting system in which time-series data from 5 serial occasions are printed on 2 sheets of A4 paper at the time of the final report. In each consultation room of the medical outpatient clinic, at the neuromedical outpatient clinic, and at the kidney center, where examinations are frequently performed, terminal equipment and a printer for inquiry were installed for real-time output of time-series reports. Results are reported by FAX to the other outpatient clinics and wards, and time-series reports are subsequently output at the clinical laboratory department. This system allowed rapid examination, especially preconsultation examination, and was also useful for reducing office work and effectively utilizing examination data.

  13. Status of IGS Ultra-Rapid Products for Real-Time Applications

    Science.gov (United States)

    Ray, J.; Griffiths, J.

    2008-12-01

    Since November 2000 the International GNSS Service (IGS) has produced Ultra-rapid (IGU) products for near real-time and real-time applications. They include GPS orbits, satellite clocks, and Earth rotation parameters for a sliding 48-hr period. The first day of each update is based on the most recent GPS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions, so the predictions between about 3 and 9 hr into the second half are relevant for true real-time uses. Originally updated twice daily, the IGU products have been issued four times per day since April 2004, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations: Astronomical Institute of the University of Berne (AIUB), European Space Operations Center (ESOC), Geodetic Observatory Pecny (GOP), GeoForschungsZentrum (GFZ) Potsdam, Natural Resources Canada (NRC), Scripps Institution of Oceanography (SIO), and U.S. Naval Observatory (USNO). This redundancy affords a high measure of reliability and enhanced orbit accuracy. IGU orbit precision has improved markedly since late 2007. This is due to a combination of factors: decommissioning of the old, poorly behaved PRN29 in October 2007; upgraded procedures implemented by GOP around the same time, by SIO in spring 2008, and by USNO in June 2008; better handling of maneuvered satellites at the combination level starting June 2008; and stricter AC rejection criteria since July 2008. As a consequence, the weighted 1D RMS residual of the IGU orbit predictions over their first 6 hr is currently about 20 to 30 mm (after a Helmert transformation) compared to the IGS Rapid orbits, averaged over the constellation. The median residual is about 15 to 20 mm. When extended to the full 24-hr prediction period, the IGU orbit errors approximately double. Systematic rotational offsets are probably more important than…

  14. Real-time patient survey data during routine clinical activities for rapid-cycle quality improvement.

    Science.gov (United States)

    Wofford, James Lucius; Campos, Claudia L; Jones, Robert E; Stevens, Sheila F

    2015-03-12

    Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. For the four 3-question surveys, the overall average time per survey was 40.4 seconds, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple-choice questions (average 9.6 seconds versus 14.0). Average response time showed no clear pattern by order of questions or by proctor strategy, but monotonically increased with the number of words in the question (8.0, 11.8, and 16.8 seconds, respectively). This technology-enabled data management system helped capture patient opinions and accelerate turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system.

  15. Real-time computational photon-counting LiDAR

    Science.gov (United States)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications with high-performance detectors for which focal plane arrays are expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system based on direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost, real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.
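
    The single-pixel sampling strategy underlying such systems can be illustrated with a toy reconstruction using a complete Hadamard pattern basis. This is a generic sketch, not the authors' LiDAR pipeline; a real photon-counting system would add a per-pattern timing histogram to recover range.

    ```python
    # Toy single-pixel imaging: measure inner products of a scene with Hadamard
    # patterns, then invert. Only the spatial-sampling principle is shown here.
    import numpy as np
    from scipy.linalg import hadamard

    n = 16                                   # scene is n*n pixels, n*n patterns
    scene = np.zeros((n, n)); scene[4:12, 6:10] = 1.0
    H = hadamard(n * n)                      # +/-1 measurement patterns
    y = H @ scene.ravel()                    # one detector value per pattern
    recon = (H.T @ y) / (n * n)              # H is orthogonal up to scaling
    assert np.allclose(recon.reshape(n, n), scene)
    ```

    Compressed sensing variants measure far fewer than n*n patterns and replace the direct inverse with a sparsity-regularized solver, which is what makes real-time operation with a single detector attractive.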

  16. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  17. TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.

    Science.gov (United States)

    Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M

    2013-01-01

    TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults, and additionally to study the mediating role of waist circumference (WC). Data from 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides, and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC mediating the association of TV time or computer time with cardiometabolic biomarkers. In summary, we found a significant positive association of TV time with cardiometabolic biomarkers, and no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.

  18. The use of real-time polymerase chain reaction for rapid diagnosis of skeletal tuberculosis.

    Science.gov (United States)

    Kobayashi, Naomi; Fraser, Thomas G; Bauer, Thomas W; Joyce, Michael J; Hall, Gerri S; Tuohy, Marion J; Procop, Gary W

    2006-07-01

    We identified Mycobacterium tuberculosis DNA using real-time polymerase chain reaction on a specimen from an osteolytic lesion of a femoral condyle, in which the frozen section demonstrated granulomas. The process was much more rapid than is possible with culture. The rapid detection of M tuberculosis and the concomitant exclusion of granulomatous disease caused by nontuberculous mycobacteria or systemic fungi are necessary to appropriately treat skeletal tuberculosis. The detection and identification of M tuberculosis by culture may require several weeks using traditional methods. The real-time polymerase chain reaction method used has been shown to be rapid and reliable, and is able to detect and differentiate both tuberculous and nontuberculous mycobacteria. Real-time polymerase chain reaction may become a diagnostic standard for the evaluation of clinical specimens for the presence of mycobacteria; this case demonstrates the potential utility of this assay for the rapid diagnosis of skeletal tuberculosis.

  19. Time-of-Flight Cameras in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2010-01-01

    Time-of-flight (ToF) cameras are of growing importance in Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become "ubiquitous real-time geometry…

  20. 29 CFR 4245.8 - Computation of time.

    Science.gov (United States)

    2010-07-01

    29 CFR 4245.8 (Labor, 2010 edition; Pension Benefit Guaranty Corporation, Insolvency, Reorganization, Termination, and Other Rules Applicable to Multiemployer Plans, Notice of Insolvency): Computation of…

  1. A user's manual for a computer program which calculates time optimal geocentric transfers using solar or nuclear electric and high thrust propulsion

    Science.gov (United States)

    Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.

    1974-01-01

    This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. A single impulse of specified delta-V may also be included after the low-thrust stage. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities, and Krylov-Bogoliubov averaging to help ensure more rapid computation. The program is written in double-precision FORTRAN IV for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
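
    The equinoctial elements mentioned here remove the singularities of the classical set at zero eccentricity and zero inclination. Below is a generic conversion sketch using the standard direct equinoctial definitions; it is not code from the 1974 program, and the sample numbers are illustrative.

    ```python
    # Classical -> equinoctial orbital elements (standard direct set).
    # Equinoctial elements stay well defined as e -> 0 or i -> 0, which is
    # why low-thrust averaging codes like the one described prefer them.
    import numpy as np

    def classical_to_equinoctial(a, e, i, raan, argp, M):
        """a: semi-major axis; e: eccentricity; i: inclination;
        raan: right ascension of ascending node; argp: argument of perigee;
        M: mean anomaly (all angles in radians)."""
        h = e * np.sin(argp + raan)
        k = e * np.cos(argp + raan)
        p = np.tan(i / 2) * np.sin(raan)
        q = np.tan(i / 2) * np.cos(raan)
        lam = M + argp + raan              # mean longitude
        return a, h, k, p, q, lam

    # Near-circular, near-equatorial orbit: no singularity in the outputs.
    print(classical_to_equinoctial(7000e3, 1e-6, 1e-6, 0.3, 0.2, 0.1))
    ```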

  2. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response time guarantees from the servers, to decide which tasks should be offloaded so that the results arrive in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time; however, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the algorithm's error against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.
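
    The paper's actual algorithms are not given in the abstract; the sketch below checks feasibility under a drastically simplified model (one local processor, all offload requests dispatched at time zero, results returned within the guaranteed response time, common deadline D). Under these invented assumptions the best policy is simply to offload every task the server can guarantee in time.

    ```python
    # Toy offloading feasibility check (illustrative, much simpler than the
    # paper's model): offloaded tasks all start at t=0 and their results
    # arrive within a guaranteed response time r[i]; local tasks run
    # sequentially on one processor; everything must finish by deadline D.
    def feasible(local, response, D):
        stay = [l for l, r in zip(local, response) if r > D]  # must run locally
        return sum(stay) <= D    # offloaded tasks meet D by the guarantee

    local    = [3, 5, 2, 4]      # local execution times
    response = [4, 9, 20, 6]     # server response-time guarantees
    print(feasible(local, response, D=10))   # True: offload tasks 0, 1, 3
    ```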

  3. Comparative use of the computer-aided angiography and rapid prototyping technology versus conventional imaging in the management of the Tile C pelvic fractures.

    Science.gov (United States)

    Li, Baofeng; Chen, Bei; Zhang, Ying; Wang, Xinyu; Wang, Fei; Xia, Hong; Yin, Qingshui

    2016-01-01

    Computed tomography (CT) scans with three-dimensional (3D) reconstruction have been used to evaluate complex fractures in pre-operative planning. In this study, rapid prototyping of a life-size model based on 3D reconstructions, including bone and vessels, was applied to evaluate the feasibility and prospects of these new technologies in the surgical treatment of Tile C pelvic fractures by observing intra- and perioperative outcomes. The authors conducted a retrospective study on a group of 157 consecutive patients with Tile C pelvic fractures. Seventy-six patients were treated with conventional pre-operative preparation (group A) and 81 patients were treated with the help of computer-aided angiography and rapid prototyping technology (group B). Assessment of the two groups considered the following perioperative parameters: length of surgical procedure, intra-operative complications, intra- and postoperative blood loss, postoperative pain, postoperative nausea and vomiting (PONV), length of stay, and type of discharge. The two groups were homogeneous in mean age, sex, body weight, injury severity score, associated injuries, and pelvic fracture severity score. Surgery in group B was performed in less time (105 ± 19 minutes vs. 122 ± 23 minutes) and with less blood loss (31.0 ± 8.2 g/L vs. 36.2 ± 7.4 g/L) than in group A. Patients in group B experienced less pain (2.5 ± 2.3 NRS score vs. 2.8 ± 2.0 NRS score), and PONV affected only 8% versus 10% of cases. Times to discharge were shorter (7.8 ± 2.0 days vs. 10.2 ± 3.1 days) in group B, and most patients were discharged home. In our study, patients with Tile C pelvic fractures treated with computer-aided angiography and rapid prototyping technology had a better perioperative outcome than patients treated with conventional pre-operative preparation. Further studies are necessary to investigate the advantages in terms of clinical results in the short and long term.

  4. Rapid expansion and pseudo spectral implementation for reverse time migration in VTI media

    KAUST Repository

    Pestana, Reynam C

    2012-04-24

    In isotropic media, we use the scalar acoustic wave equation to perform reverse time migration (RTM) of the recorded pressure wavefield data. In anisotropic media, P- and SV-waves are coupled, and the elastic wave equation should be used for RTM. For computational efficiency, a pseudo-acoustic wave equation is often used. This may be solved using a coupled system of second-order partial differential equations. We solve these using a pseudo spectral method and the rapid expansion method (REM) for the explicit time marching. This method generates a degenerate SV-wave in addition to the P-wave arrivals of interest. To avoid this problem, the elastic wave equation for vertical transversely isotropic (VTI) media can be split into separate wave equations for P- and SV-waves. These separate wave equations are stable, and they can be effectively used to model and migrate seismic data in VTI media where |ε − δ| is small. The artifact for the SV-wave has also been removed. The independent pseudo-differential wave equations can be solved one for each mode using the pseudo spectral method for the spatial derivatives and the REM for the explicit time advance of the wavefield. We show numerically stable and high-resolution modeling and RTM results for the pure P-wave mode in VTI media. © 2012 Sinopec Geophysical Research Institute.
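
    As a minimal one-dimensional illustration of the pseudo spectral approach to spatial derivatives, the sketch below pairs an FFT-based second derivative with a conventional second-order time step rather than the REM. Grid size, velocity, and time step are invented.

    ```python
    # 1-D acoustic propagation: FFT-based (pseudo spectral) second x-derivative
    # plus second-order explicit time stepping. The papers above replace this
    # time integrator with the rapid expansion method for larger stable steps.
    import numpy as np

    nx, dx, dt, c = 512, 5.0, 0.0005, 2000.0
    x = np.arange(nx) * dx
    k = 2j * np.pi * np.fft.fftfreq(nx, dx)              # spectral wavenumbers
    p_prev = p = np.exp(-((x - nx * dx / 2) / 50.0) ** 2)  # Gaussian source

    for _ in range(2000):
        d2p = np.fft.ifft(k**2 * np.fft.fft(p)).real     # pseudo spectral d2/dx2
        p, p_prev = 2 * p - p_prev + (c * dt) ** 2 * d2p, p
    print(abs(p).max())
    ```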

  5. Rapid expansion and pseudo spectral implementation for reverse time migration in VTI media

    KAUST Repository

    Pestana, Reynam C; Ursin, Bjørn; Stoffa, Paul L

    2012-01-01

    In isotropic media, we use the scalar acoustic wave equation to perform reverse time migration (RTM) of the recorded pressure wavefield data. In anisotropic media, P- and SV-waves are coupled, and the elastic wave equation should be used for RTM. For computational efficiency, a pseudo-acoustic wave equation is often used. This may be solved using a coupled system of second-order partial differential equations. We solve these using a pseudo spectral method and the rapid expansion method (REM) for the explicit time marching. This method generates a degenerate SV-wave in addition to the P-wave arrivals of interest. To avoid this problem, the elastic wave equation for vertical transversely isotropic (VTI) media can be split into separate wave equations for P- and SV-waves. These separate wave equations are stable, and they can be effectively used to model and migrate seismic data in VTI media where |ε − δ| is small. The artifact for the SV-wave has also been removed. The independent pseudo-differential wave equations can be solved one for each mode using the pseudo spectral method for the spatial derivatives and the REM for the explicit time advance of the wavefield. We show numerically stable and high-resolution modeling and RTM results for the pure P-wave mode in VTI media. © 2012 Sinopec Geophysical Research Institute.

  6. Late-time dynamics of rapidly rotating black holes

    International Nuclear Information System (INIS)

    Glampedakis, K.; Andersson, N.

    2001-01-01

    We study the late-time behaviour of a dynamically perturbed rapidly rotating black hole. Considering an extreme Kerr black hole, we show that the large number of virtually undamped quasinormal modes (that exist for nonzero values of the azimuthal eigenvalue m) combine in such a way that the field (as observed at infinity) oscillates with an amplitude that decays as 1/t at late times. For a near extreme black hole, these modes, collectively, give rise to an exponentially decaying field which, however, is considerably 'long-lived'. Our analytic results are verified using numerical time-evolutions of the Teukolsky equation. Moreover, we argue that the physical mechanism behind the observed behaviour is the presence of a 'superradiance resonance cavity' immediately outside the black hole. We present this new feature in detail, and discuss whether it may be relevant for astrophysical black holes. (author)

  7. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computer called time-delay reservoirs, which are constructed by sampling the solution of a time-delay differential equation, and show their good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting factual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We tackle some problems associated with the lack of task-universality of individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
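
    Real time-delay reservoirs sample a delay-differential equation; the discrete caricature below keeps only the essentials (random input mask, nonlinear node with delayed feedback, linear ridge readout) on an invented one-step-ahead prediction task. All constants are arbitrary, and nothing here reproduces the paper's VEC-GARCH experiments.

    ```python
    # Discrete-time caricature of a time-delay reservoir: N "virtual" nodes
    # created by an input mask, driven through a nonlinearity with feedback,
    # read out by ridge regression. Task: predict a noisy sine one step ahead.
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 2000, 50
    u = np.sin(0.1 * np.arange(T)) + 0.05 * rng.standard_normal(T)
    mask = rng.uniform(-1, 1, N)                     # random input mask
    X = np.zeros((T, N))
    for t in range(1, T):
        X[t] = np.tanh(0.5 * X[t - 1] + 0.9 * mask * u[t])  # delayed feedback

    A, y = X[:-1], u[1:]                             # states -> next input
    w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)  # ridge readout
    print("train MSE:", np.mean((A @ w - y) ** 2))
    ```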

  8. Rapid and real-time detection technologies for emerging viruses of ...

    Indian Academy of Sciences (India)

    2008-10-17

    The development of technologies with rapid and sensitive detection capabilities and increased throughput has become crucial for responding to the greater number of threats posed by emerging and re-emerging viruses in the recent past. The conventional identification methods require time-consuming culturing…

  9. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy

    Science.gov (United States)

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    A rapid method for determining the storage time of Fuji apples using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflectance spectra of samples were measured over 6 days. The spectroscopy data were processed by stochastic resonance (SR), and principal component analysis (PCA) was applied to both the original spectroscopy data and the signal-to-noise ratio (SNR) eigenvalues. Results demonstrated that PCA could not fully discriminate the Fuji apples using the original spectroscopy data, whereas the SNR spectrum clearly classified all apple samples, and PCA on the SNR spectrum successfully discriminated them. Therefore, Vis/NIR spectroscopy is effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories. PMID:25874818

  10. ADVANCEMENT OF RAPID PROTOTYPING IN AEROSPACE INDUSTRY -A REVIEW

    OpenAIRE

    Vineet Kumar Vashishtha,; Rahul Makade,; Neeraj Mehla

    2011-01-01

    Rapid prototyping technology has emerged as an innovation that reduces the time and cost of mould fabrication by creating 3D products directly from computer-aided design, so the designer is able to perform design validation and accuracy analysis easily in a virtual environment, as if using a physical model. The primary aim of this paper is to give the reader an overview of the current state of the art in rapid prototyping technology. The paper also deals with the features of rapid prototyping in Aeros…

  11. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making through better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets present a bottleneck: it has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  12. Real-time risk assessment in seismic early warning and rapid response: a feasibility study in Bishkek (Kyrgyzstan)

    Science.gov (United States)

    Picozzi, M.; Bindi, D.; Pittore, M.; Kieling, K.; Parolai, S.

    2013-04-01

    Earthquake early warning systems (EEWS) are considered to be an effective, pragmatic, and viable tool for seismic risk reduction in cities. While standard EEWS approaches focus on the real-time estimation of an earthquake's location and magnitude, innovative developments in EEWS include the capacity for the rapid assessment of damage. Clearly, for all public authorities that are engaged in coordinating emergency activities during and soon after earthquakes, real-time information about the potential damage distribution within a city is invaluable. In this work, we present a first attempt to design an early warning and rapid response procedure for real-time risk assessment. In particular, the procedure uses typical real-time information (i.e., P-wave arrival times and early waveforms) derived from a regional seismic network for locating and evaluating the size of an earthquake, information which in turn is exploited for extracting a risk map representing the potential distribution of damage from a dataset of predicted scenarios compiled for the target city. A feasibility study of the procedure is presented for the city of Bishkek, the capital of Kyrgyzstan, which is surrounded by the Kyrgyz seismic network, by mimicking the ground motion associated with two historical events that occurred close to Bishkek, namely the 1911 Kemin (M = 8.2 ± 0.2) and the 1885 Belovodsk (M = 6.9 ± 0.5) earthquakes. Various methodologies from previous studies were considered when planning the implementation of the early warning and rapid response procedure for real-time risk assessment: the Satriano et al. (Bull Seismol Soc Am 98(3):1482-1494, 2008) approach to real-time earthquake location; the Caprio et al. (Geophys Res Lett 38:L02301, 2011) approach for estimating moment magnitude in real time; the EXSIM method for ground motion simulation (Motazedian and Atkinson, Bull Seismol Soc Am 95:995-1010, 2005); the Sokolov (Earthquake Spectra 161: 679-694, 2002) approach for estimating…

  13. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computation up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
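
    A minimal sketch of the milestone idea: the function records a partial result every iteration and returns the last recording when its time budget expires. The pi series is an invented stand-in workload, not anything from the Concord project.

    ```python
    # Milestone-style imprecise computation: an iterative refinement task
    # records its current approximation each iteration; at the deadline the
    # caller takes the last milestone instead of waiting for convergence.
    import time

    def estimate_pi(budget_s):
        """Leibniz series; stops at the deadline, returns the milestone."""
        deadline = time.monotonic() + budget_s
        milestone, s, k = 0.0, 0.0, 0
        while time.monotonic() < deadline:
            s += (-1) ** k / (2 * k + 1)
            milestone = 4 * s          # periodic recording of partial result
            k += 1
        return milestone               # imprecise but on time

    print(estimate_pi(0.05))           # approximate pi within 50 ms
    ```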

  14. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    Stability assessment requires knowledge of the grid's operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power…
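
    The factor-and-solve idea can be sketched generically: factor the bus admittance matrix once, then obtain each bus's Thevenin impedance with one solve per bus. The 3-bus admittance values below are invented, and real systems use complex admittances; this is not the authors' algorithm, which adds graph reduction and parallelization.

    ```python
    # Factor-once / solve-per-bus Thevenin impedances from the bus admittance
    # matrix (toy 3-bus system; values invented). Reusing the factorization
    # across buses is what makes this kind of computation fast.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    Y = csc_matrix(np.array([[ 5.0, -2.0, -1.0],
                             [-2.0,  6.0, -3.0],
                             [-1.0, -3.0,  5.5]]))   # includes shunt terms
    lu = splu(Y)                                     # factor once
    for k in range(3):
        e = np.zeros(3); e[k] = 1.0                  # unit current injection
        z_th = lu.solve(e)[k]                        # k-th diagonal of Y^-1
        print(f"bus {k}: Z_th = {z_th:.4f}")
    ```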

  15. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
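
    The same idea is easy to reproduce on a modern system: a profiling timer periodically samples which source line was interrupted, and hot spots appear as peaks in the histogram. The sketch below is illustrative only and Unix-only (it relies on SIGPROF), not a reconstruction of the 1975 implementation.

    ```python
    # Minimal sampled "program-counter" histogram: a timer signal samples the
    # currently executing source line at ~1 kHz of CPU time.
    import collections, signal

    hist = collections.Counter()

    def sample(signum, frame):
        hist[frame.f_lineno] += 1            # record the interrupted line

    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)

    total = 0
    for i in range(5_000_000):               # the program under test
        total += i * i

    signal.setitimer(signal.ITIMER_PROF, 0, 0)   # stop sampling
    print(hist.most_common(3))               # hottest source lines
    ```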

  16. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite improvement in acquisition speed, through line-sharing, for example, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly, equispaced points. The essence of the method is that, unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of a two-dimensional analysis of the phase correlation matrix, a low-complexity subspace identification of the phase…

  17. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Gilani, Syed Irtiza Ali

    2008-09-01

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite improvement in acquisition speed, through line-sharing, for example, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly, equispaced points. The essence of the method is that, unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of a two-dimensional analysis of the phase correlation matrix, a low-complexity subspace identification of the phase…
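
    The phase correlation method named in this record is generic enough to sketch. The NumPy implementation below is illustrative, not thesis code: it recovers an integer-pixel translation from the peak of the normalized cross-power spectrum.

    ```python
    # Phase correlation for translation estimation between two images.
    import numpy as np

    def phase_correlation(a, b):
        """Return the (dy, dx) integer shift that maps image b onto image a."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # cross-power peak
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        wrap = lambda d, n: d - n if d > n // 2 else d     # signed shifts
        return wrap(dy, a.shape[0]), wrap(dx, a.shape[1])

    img = np.random.default_rng(1).random((64, 64))
    shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
    print(phase_correlation(shifted, img))    # -> (5, -3)
    ```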

  18. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [OCR residue from a report cover page; recoverable details: Naval Underwater Systems Center, Newport, RI; technical document TD 5932, "A Distributed Computing Network for Real-Time Systems", November 1980.]

  19. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-06-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles.

  20. Radiographic and computed tomographic demonstration of pseudotumor cerebri due to rapid weight gain in a child with pelvic rhabdomyosarcoma

    International Nuclear Information System (INIS)

    Berdon, W.E.; Barker, D.H.; Barash, F.S.

    1982-01-01

    Rapid weight gain in a malnourished child can be associated with suture diastasis in the pattern of pseudotumor cerebri; this has been previously reported in deprivational dwarfism and cystic fibrosis. In a child with pelvic rhabdomyosarcoma, skull radiographs and cranial computed tomographic (CT) scans were available prior to a period of rapid weight gain induced by hyperalimentation. Suture diastasis developed and repeat CT scans showed this to be accompanied by smaller ventricles

  1. Rapid diagnosis of sepsis with TaqMan-Based multiplex real-time PCR.

    Science.gov (United States)

    Liu, Chang-Feng; Shi, Xin-Ping; Chen, Yun; Jin, Ye; Zhang, Bing

    2018-02-01

    The survival rate of septic patients mainly depends on a rapid and reliable diagnosis, so a rapid, broad-range, specific, and sensitive quantitative diagnostic test is urgently needed. We therefore developed TaqMan-based multiplex real-time PCR assays to identify bloodstream pathogens within a few hours. Primers and TaqMan probes were designed to be complementary to conserved regions in the 16S rDNA gene of different kinds of bacteria. To evaluate accuracy, sensitivity, and specificity, known bacterial samples (standard strains and whole blood samples) were tested by TaqMan-based multiplex real-time PCR. In addition, 30 blood samples taken from patients with clinical symptoms of sepsis were tested by TaqMan-based multiplex real-time PCR and blood culture. The mean frequency of positives for multiplex real-time PCR was 96% at a concentration of 100 CFU/mL, and 100% at concentrations greater than 1000 CFU/mL. All the known blood samples and standard strains were detected as positive by TaqMan-based multiplex PCR, and no PCR products were detected when DNAs from other bacteria were used in the multiplex assay. Among the 30 patients with clinical symptoms of sepsis, 18 were confirmed positive by multiplex real-time PCR and seven by blood culture. The TaqMan-based multiplex real-time PCR assay, with its high sensitivity, specificity, and broad detection range, is a rapid and accurate method for detecting the bacterial pathogens of sepsis and should have promising use in the diagnosis of sepsis. © 2017 Wiley Periodicals, Inc.

  2. Addressing unmet need for HIV testing in emergency care settings: a role for computer-facilitated rapid HIV testing?

    Science.gov (United States)

    Kurth, Ann E; Severynen, Anneleen; Spielberg, Freya

    2013-08-01

    HIV testing in emergency departments (EDs) remains underutilized. The authors evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Nonacute adult ED patients were randomly assigned to a computer tool (CARE) and rapid HIV testing before a standard visit (n = 258) or to a standard visit (n = 259) with chart access. The authors assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-White and 58% male; median age was 37 years. In the CARE arm, nearly all (251/258) of the patients completed the session and received HIV results; four declined to consent to the test. HIV risks were reported by 54% of users; one participant was confirmed HIV-positive, and two were confirmed false-positive (seroprevalence 0.4%, 95% CI [0.01, 2.2]). Half (55%) of the patients preferred computerized rather than face-to-face counseling for future HIV testing. In the standard arm, one HIV test and two referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches.

  3. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
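
    The neuron model described here (input-driven synaptic conductances with a synaptic time constant, so each spike injects charge gradually) can be caricatured in a few lines. The generic leaky integrate-and-fire sketch below is not the RT-Spike hardware model, and all constants are invented with loosely normalized units.

    ```python
    # Software caricature of a conductance-driven neuron update loop: a leaky
    # membrane driven by an exponentially decaying input conductance, so each
    # presynaptic spike injects charge gradually rather than instantaneously.
    dt, tau_m, tau_s = 1e-4, 20e-3, 5e-3         # step, membrane & synapse tau
    v_rest, v_thresh, e_syn = -70e-3, -54e-3, 0.0
    g, v, spikes = 0.0, v_rest, []
    input_spikes = {200, 210, 215, 900}           # presynaptic spike steps

    for step in range(2000):
        if step in input_spikes:
            g += 5.0                              # spike increments conductance
        g -= dt * g / tau_s                       # exponential decay of g
        v += dt * ((v_rest - v) / tau_m + g * (e_syn - v) / tau_m)
        if v >= v_thresh:
            spikes.append(step * dt)              # fire...
            v = v_rest                            # ...and reset
    print(spikes)
    ```

    The hardware platform evaluates many such per-neuron updates in parallel processing units, which is what makes the conductance model tractable in real time.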

  4. 43 CFR 45.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    43 CFR 45.3 (Public Lands: Interior, 2010 edition; conditions in FERC hydropower licenses, General Provisions): How are time periods computed? (a) General… run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday…
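
    The rule as quoted (skip the day of the act, include the last day, roll a weekend ending forward) translates directly into code. The sketch below is illustrative; the holiday clause is omitted because the quoted text is truncated before it.

    ```python
    # Deadline computation following the quoted rule: the day of the act is
    # excluded, the last day is included, and a deadline landing on a
    # Saturday or Sunday rolls forward to the next business day.
    from datetime import date, timedelta

    def deadline(act_day: date, period_days: int) -> date:
        d = act_day + timedelta(days=period_days)   # day of act not counted
        while d.weekday() >= 5:                     # 5 = Saturday, 6 = Sunday
            d += timedelta(days=1)
        return d

    # A 30-day period starting 2010-10-01 ends on a Sunday, so it rolls to
    # Monday, November 1.
    print(deadline(date(2010, 10, 1), 30))
    ```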

  5. Rapidity correlations test stochastic hydrodynamics

    International Nuclear Information System (INIS)

    Zin, C; Gavin, S; Moschelli, G

    2017-01-01

    We show that measurements of the rapidity dependence of transverse momentum correlations can be used to determine the characteristic time τ_π that dictates the rate of isotropization of the stress energy tensor, as well as the shear viscosity ν = η/sT. We formulate methods for computing these correlations using second order dissipative hydrodynamics with noise. Current data are consistent with τ_π/ν ∼ 10, but targeted measurements can improve this precision. (paper)

  6. Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation

    Science.gov (United States)

    Busler, Jessica N.; Lazarte, Alejandro A.

    2017-01-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…

  7. Accuracy of using computer-aided rapid prototyping templates for mandible reconstruction with an iliac crest graft

    Science.gov (United States)

    2014-01-01

    Background This study aimed to evaluate the accuracy of surgical outcomes in free iliac crest mandibular reconstructions that were carried out with virtual surgical plans and rapid prototyping templates. Methods This study evaluated eight patients who underwent mandibular osteotomy and reconstruction with free iliac crest grafts using virtual surgical planning and designed guiding templates. Operations were performed using the prefabricated guiding templates. Postoperative three-dimensional computer models were overlaid and compared with the preoperatively designed models in the same coordinate system. Results Compared to the virtual osteotomy, the mean distance error of the actual mandibular osteotomy was 2.06 ± 0.86 mm. Compared to the virtual harvested grafts, the mean volume error of the actual harvested grafts was 1412.22 ± 439.24 mm3 (9.12% ± 2.84%). The mean volume error between the actual harvested grafts and the shaped grafts was 2094.35 ± 929.12 mm3 (12.40% ± 5.50%). Conclusions The use of computer-aided rapid prototyping templates for virtual surgical planning appears to positively influence the accuracy of mandibular reconstruction. PMID:24957053

  8. Relativistic Photoionization Computations with the Time Dependent Dirac Equation

    Science.gov (United States)

    2016-10-12

    [Report-cover residue; recoverable details: Naval Research Laboratory, Washington, DC, memorandum report NRL/MR/6795--16-9698, "Relativistic Photoionization Computations with the Time Dependent Dirac Equation", by Daniel F. Gordon and Bahman Hafizi; subject terms: tunneling, photoionization, ionization of inner-shell electrons by laser.]

  9. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    Science.gov (United States)

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  10. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite-difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function, where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method, we obtain the second-order time finite-difference scheme that is frequently used in more conventional finite-difference implementations. We then show that if we use more terms from the REM, we obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite-difference schemes, and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with pre- and post-stack migration results.
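
    The scalar identity behind the REM can be checked numerically: cos(a z) expands in Chebyshev polynomials with Bessel-function coefficients, and adding terms converges rapidly, while a short truncation corresponds to low-order time stepping. The sketch below is a generic demonstration of this expansion, not the authors' code; the value of a and the term counts are arbitrary.

    ```python
    # Scalar core of the rapid expansion method:
    #   cos(a z) = J0(a) + 2 * sum_k (-1)^k * J_{2k}(a) * T_{2k}(z),  |z| <= 1.
    # In RTM, z stands for the scaled square root of the spatial operator and
    # a = R * dt; truncating early recovers low-order finite-difference stepping.
    import numpy as np
    from scipy.special import jv
    from numpy.polynomial.chebyshev import chebval

    def rem_cos(a, z, nterms):
        coeffs = np.zeros(2 * nterms + 1)
        coeffs[0] = jv(0, a)
        for k in range(1, nterms + 1):
            coeffs[2 * k] = 2 * (-1) ** k * jv(2 * k, a)
        return chebval(z, coeffs)           # Chebyshev-Bessel partial sum

    z = np.linspace(-1, 1, 101)
    for m in (1, 2, 4, 6):                  # more Chebyshev terms -> accuracy
        err = np.abs(rem_cos(5.0, z, m) - np.cos(5.0 * z)).max()
        print(f"{m} terms: max error {err:.2e}")
    ```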

  11. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  12. Computer-controlled neutron time-of-flight spectrometer. Part II

    International Nuclear Information System (INIS)

    Merriman, S.H.

    1979-12-01

    A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system

  13. Mapping land cover through time with the Rapid Land Cover Mapper—Documentation and user manual

    Science.gov (United States)

    Cotillon, Suzanne E.; Mathis, Melissa L.

    2017-02-15

    The Rapid Land Cover Mapper is an Esri ArcGIS® Desktop add-in, which was created as an alternative to automated or semiautomated mapping methods. Based on a manual photo interpretation technique, the tool facilitates mapping over large areas and through time, and produces time-series raster maps and associated statistics that characterize the changing landscapes. The Rapid Land Cover Mapper add-in can be used with any imagery source to map various themes (for instance, land cover, soils, or forest) at any chosen mapping resolution. The user manual contains all essential information for the user to make full use of the Rapid Land Cover Mapper add-in. This manual includes a description of the add-in functions and capabilities, and step-by-step procedures for using the add-in. The Rapid Land Cover Mapper add-in was successfully used by the U.S. Geological Survey West Africa Land Use Dynamics team to accurately map land use and land cover in 17 West African countries through time (1975, 2000, and 2013).

  14. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book. -Hsun-Hsien Chang, Computing Reviews, March 2012. My favorite chapters were on dynamic linear models and vector AR and vector ARMA models. -William Seaver, Technometrics, August 2011. … a very modern entry to the field of time-series modelling, with a rich reference list of the current lit…

  15. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general-purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not a single architecture, but a combination that takes advantage of the best characteristics of different computer architectures. This paper examines the tradeoffs between computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing in the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data are then packaged up and shipped over fast networks to a cluster of general-purpose computers equipped with GPUs, which are used for floating-point-intensive computation. Finally, the data are handled by the CPUs and written to disk, or processed further. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  16. Ubiquitous computing technology for just-in-time motivation of behavior change.

    Science.gov (United States)

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  17. Development Of A Data Assimilation Capability For RAPID

    Science.gov (United States)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations, combined with the increasing ability to monitor surface water from space, motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. The model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. Its relatively simple physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early development of such a data assimilation approach in RAPID. Given the linear, matrix-based structure of the model, we chose to apply a direct Kalman filter, allowing the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the limited coverage of in situ gauges at continental scales motivates applying our new data assimilation scheme to altimetry measurements from existing (e.g., EnviSat, Jason-2) and upcoming satellite missions (e.g., SWOT), and ultimately applying the scheme globally.
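
    The abstract does not spell out the filter equations; below is a generic direct Kalman update for a small vector of reach streamflows with gauges on a subset of reaches. All matrices and numbers are invented for illustration, and a real RAPID implementation would derive the background covariance from the routing model.

    ```python
    # Generic Kalman update of modeled reach streamflows with gauge
    # observations (illustrative of the direct-filter idea; numbers fake).
    import numpy as np

    x_b = np.array([120.0, 80.0, 45.0])      # background streamflows, m3/s
    P   = np.diag([100.0, 64.0, 25.0])       # background error covariance
    H   = np.array([[1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0]])        # gauges on reaches 0 and 2
    R   = np.diag([4.0, 4.0])                # observation error covariance
    y   = np.array([131.0, 40.0])            # gauge observations

    K   = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x_a = x_b + K @ (y - H @ x_b)                    # analysis streamflows
    print(x_a)
    ```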

  18. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  19. Continuous-Time Symmetric Hopfield Nets are Computationally Universal

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  20. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    Science.gov (United States)

    2015-09-13

    Report topics include: thermo-fluid analysis of a ground vehicle and its tires; ST-SI computational analysis of a vertical-axis wind turbine; multiscale compressible-flow computation with particle tracking; and space–time VMS computation of wind-turbine rotor and tower aerodynamics. Contributors include Tezduyar, Spenser McIntyre, Nikolay Kostov, Ryan Kolesar, and Casey Habluetzel.

  1. Development of a high-speed real-time PCR system for rapid and precise nucleotide recognition

    Science.gov (United States)

    Terazono, Hideyuki; Takei, Hiroyuki; Hattori, Akihiro; Yasuda, Kenji

    2010-04-01

    Polymerase chain reaction (PCR) is a common method used to create copies of a specific target region of a DNA sequence and to produce large quantities of DNA. A few DNA molecules, which act as templates, are rapidly amplified by PCR into many billions of copies. PCR is a key technology in genome-based biological analysis, revolutionizing many life science fields such as medical diagnostics, food safety monitoring, and countermeasures against bioterrorism. Many applications have therefore been built around thermal cycling. For these PCR applications, one of the most important factors is reducing the data acquisition time, which requires decreasing the temperature transition time between the high and low ends as much as possible. We have developed a novel rapid real-time PCR system based on rapid exchange of media maintained at different temperatures. The system consists of two thermal reservoirs and a reaction chamber for PCR observation. The temperature transition was achieved within 0.3 s, and good thermal stability was maintained during thermal cycling with rapid exchange of the circulating media. The system allows rigorous optimization of the temperatures required for each stage of the PCR process. The resulting amplicons were confirmed by electrophoresis. Using the system, rapid DNA amplification was accomplished within 3.5 min, including initial heating and 50 complete PCR cycles. This clearly shows that the device allows faster temperature switching than conventional conduction-based systems that rely on Peltier heating and cooling.
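
    As a rough plausibility check on the quoted 3.5 min run, the total time can be decomposed into the initial heating plus 50 two-temperature cycles; the hold times below are illustrative assumptions, not values from the paper.

        # Back-of-the-envelope run-time check (hold times are assumed).
        transition = 0.3          # s, temperature switch by media exchange
        denaturation_hold = 1.0   # s, assumed
        anneal_extend_hold = 2.0  # s, assumed combined annealing/extension
        initial_heating = 30.0    # s, assumed
        cycles = 50

        per_cycle = 2 * transition + denaturation_hold + anneal_extend_hold
        total = initial_heating + cycles * per_cycle
        print(f"{total / 60:.1f} min")  # 3.5 min with these assumptions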

  2. Computational materials design

    International Nuclear Information System (INIS)

    Snyder, R.L.

    1999-01-01

    Full text: Trial-and-error experimentation is an extremely expensive route to the development of new materials. The coming age of reduced defense funding will dramatically alter the way in which advanced materials are developed. In the absence of large funding we must concentrate on reducing the time and expense that the R and D of a new material consumes. This may be accomplished through the development of computational materials science. Materials are selected today by comparing the technical requirements to the materials databases. When existing materials cannot meet the requirements, we explore new systems to develop a new material using experimental databases like the PDF. After proof of concept, scaling the new material up to manufacture requires evaluating millions of parameter combinations to optimize the performance of the new device. Historically this process takes 10 to 20 years and requires hundreds of millions of dollars. The development of a focused set of computational tools to predict the final properties of new materials will permit the exploration of new materials systems with only a limited amount of materials characterization. However, to bound computational extrapolations, the experimental formulations and characterization will need to be tightly coupled to the computational tasks. The required experimental data must be obtained by dynamic, in-situ, very rapid characterization. Finally, to evaluate the optimization matrix required to manufacture the new material, very rapid in-situ analysis techniques will be essential to intelligently monitor and optimize the formation of a desired microstructure. Techniques and examples for the rapid real-time application of XRPD and optical microscopy will be shown. Recent developments in the cross-linking of the world's structural and diffraction databases will be presented as the basis for the future Total Pattern Analysis by XRPD. Copyright (1999) Australian X-ray Analytical Association Inc

  3. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  4. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    22 Foreign Relations, Part 1429 (Miscellaneous and General Requirements), § 1429.21, Computation of time for filing papers. Where this subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  5. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer network architectures, and the various protocols that govern data transfers and guarantee reliable communication, have been studied. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers on the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over that channel. This level of the network can be considered a local area computer network (LACN), suitable for a nuclear power plant control system since such a system has geographically dispersed subsystems. Network expansion is straightforward: each added computer (host) simply taps the common channel. All the nodes are designed around a microprocessor chip to provide the required intelligence. Each node can be divided into two sections: a common section that interfaces with the serial data channel, and a private section that interfaces with the host computer. The latter naturally tends to vary in hardware details to match the requirements of individual host computers.

  6. RapidRMSD: Rapid determination of RMSDs corresponding to motions of flexible molecules.

    Science.gov (United States)

    Neveu, Emilie; Popov, Petr; Hoffmann, Alexandre; Migliosi, Angelo; Besseron, Xavier; Danoy, Grégoire; Bouvry, Pascal; Grudinin, Sergei

    2018-03-15

    The root mean square deviation (RMSD) is one of the most used similarity criteria in structural biology and bioinformatics. Standard computation of the RMSD has a linear complexity with respect to the number of atoms in a molecule, making RMSD calculations time-consuming for large-scale modeling applications, such as assessment of molecular docking predictions or clustering of spatially proximate molecular conformations. Previously we introduced the RigidRMSD algorithm to compute the RMSD corresponding to the rigid-body motion of a molecule. In this study we go beyond the limits of the rigid-body approximation by taking into account conformational flexibility of the molecule. We model the flexibility with a reduced set of collective motions computed with, e.g., normal modes or principal component analysis. The initialization of our algorithm is linear in the number of atoms, and all subsequent evaluations of RMSD values between flexible molecular conformations depend only on the number of collective motions that are selected to model the flexibility. Therefore, our algorithm is much faster than the standard RMSD computation for large-scale modeling applications. We demonstrate the efficiency of our method on several clustering examples, including clustering of flexible docking results and molecular dynamics (MD) trajectories. We also demonstrate how to use the presented formalism to generate pseudo-random constant-RMSD structural molecular ensembles and how to use these in cross-docking. We provide the algorithm written in C++ as the open-source RapidRMSD library governed by the BSD-compatible license, which is available at http://team.inria.fr/nano-d/software/RapidRMSD/. The constant-RMSD structural ensemble application and clustering of MD trajectories are available at http://team.inria.fr/nano-d/software/nolb-normal-modes/. sergei.grudinin@inria.fr. Supplementary data are available at Bioinformatics.
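
    For contrast with the constant-time evaluations described above, the standard O(N) computation looks like the sketch below (plain RMSD, plus the Kabsch-superposed variant commonly used as a baseline). This is generic reference code, not part of the RapidRMSD library.

        import numpy as np

        def rmsd(A, B):
            """Plain O(N) RMSD between two (N, 3) conformations."""
            d = A - B
            return np.sqrt((d * d).sum() / len(A))

        def kabsch_rmsd(A, B):
            """RMSD after optimal rigid-body superposition (Kabsch)."""
            A0, B0 = A - A.mean(0), B - B.mean(0)
            V, S, Wt = np.linalg.svd(A0.T @ B0)
            S[-1] *= np.sign(np.linalg.det(V @ Wt))  # undo reflections
            msd = ((A0**2).sum() + (B0**2).sum() - 2.0 * S.sum()) / len(A)
            return np.sqrt(max(msd, 0.0))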

  7. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  8. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty-five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in a fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the

  9. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on the source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
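
    The model extracts location from relative spike timing rather than from an explicit delay estimate, but the classical baseline it generalizes is interaural time difference (ITD) estimation by cross-correlation, sketched below on raw waveforms. This is illustrative only and is not the authors' spiking model.

        import numpy as np

        def itd_by_crosscorrelation(left, right, fs):
            """Estimate the interaural time difference (seconds) as the lag
            of the peak of the cross-correlation of the two ear signals."""
            n = len(left)
            corr = np.correlate(left, right, mode="full")  # lags -(n-1)..n-1
            lag = int(np.argmax(corr)) - (n - 1)
            return lag / fs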

  10. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are being actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on observed tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of the linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow water equations for inundation prediction is large, it has become tractable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m, with a resolution ratio of 1/3 between nested domains. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
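
    The scheme family used here (leap-frog finite differences on staggered grids) is easy to illustrate in one dimension. The sketch below is a linear 1D toy with a uniform assumed depth, not the production 2D nonlinear nested-grid code; all numerical values are placeholders.

        import numpy as np

        g, h0 = 9.81, 4000.0           # gravity, assumed uniform depth (m)
        dx, dt, nx = 1000.0, 1.0, 400  # spacing (m), time step (s), cells
        xg = np.arange(nx)
        eta = np.exp(-((xg - nx / 2) ** 2) / 200.0)  # initial surface hump
        u = np.zeros(nx + 1)           # velocities on staggered cell faces

        for step in range(600):
            # momentum: du/dt = -g d(eta)/dx, then continuity with fresh u:
            # the staggered-in-time update typical of leap-frog tsunami codes
            u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
            eta -= dt * h0 * (u[1:] - u[:-1]) / dx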

  11. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method, we can obtain the second

  12. Computational aspects of feedback in neural circuits.

    Directory of Open Access Journals (Sweden)

    Wolfgang Maass

    2007-01-01

    It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also
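
    The effect of feeding a trained readout back into a generic circuit can be imitated with an echo-state-style toy: without the feedback term the state is a fading function of past input, while with it the loop can sustain information indefinitely. The sketch below is illustrative, not the authors' cortical microcircuit model; all sizes and weights are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
        W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # fading-memory regime
        w_fb = rng.uniform(-1.0, 1.0, N)               # feedback weights
        w_out = rng.normal(0.0, 0.1, N)                # readout (pre-trained)

        x, y = np.zeros(N), 0.0
        for t in range(100):
            # the readout output y re-enters the circuit: non-fading memory
            x = np.tanh(W @ x + w_fb * y)
            y = float(w_out @ x)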

  13. 5 CFR 831.703 - Computation of annuities for part-time service.

    Science.gov (United States)

    2010-01-01

    5 Administrative Personnel, Part 831, § 831.703, Computation of annuities for part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  14. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux-based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack-mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  15. Rapid identification of ST131 Escherichia coli by a novel multiplex real-time allelic discrimination assay.

    Science.gov (United States)

    François, Patrice; Bonetti, Eve-Julie; Fankhauser, Carolina; Baud, Damien; Cherkaoui, Abdessalam; Schrenzel, Jacques; Harbarth, Stephan

    2017-09-01

    Escherichia coli sequence type 131 is increasingly described in severe hospital infections. We developed a multiplex real-time allelic discrimination assay for the rapid identification of E. coli ST131 isolates. This rapid assay represents an affordable alternative to sequence-based strategies prior to complete characterization of potentially highly virulent E. coli isolates. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Response time distributions in rapid chess: A large-scale decision making experiment

    Directory of Open Access Journals (Sweden)

    Mariano Sigman

    2010-10-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times and position values in rapid chess games. We measured robust emergent statistical observables: (1) response time (RT) distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated, both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.

  17. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of an aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated by coupling the estimated unsteady aerodynamic model with the known linear structural model. The critical dynamic pressure is computed and used in the subsequent simulation until it converges. The proposed method is applied to a benchmark cantilevered rectangular wing.
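
    The system-identification step can be illustrated with a generic least-squares ARX fit from input/output time histories. The paper's actual identification technique and model structure may differ, and the model orders below are arbitrary.

        import numpy as np

        def fit_arx(u, y, na=2, nb=2):
            """Least-squares fit of y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j],
            a generic discrete-time model of the unsteady aerodynamics."""
            u, y = np.asarray(u, float), np.asarray(y, float)
            n0 = max(na, nb)
            Phi = np.array([np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]]
                            for k in range(n0, len(y))])
            theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
            return theta[:na], theta[na:]  # AR coeffs, input coeffs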

  18. Rapid deuterium exchange-in time for probing conformational change

    International Nuclear Information System (INIS)

    Dharmasiri, K.; Smith, D.L.

    1995-01-01

    Isotopic exchange of protein backbone amide hydrogens has been used extensively as a sensitive probe of protein structure. One of the salient features of hydrogen exchange is the vast range of exchange rates within one protein. Isotopic exchange methods have been used to study structural features including protein folding and unfolding (1), functionally different forms of proteins (2), protein-protein complexation (3), and protein stability parameters. Many backbone amide protons that are surface accessible and are not involved in hydrogen bonding undergo rapid deuterium exchange. In order to study these fast-exchanging amide protons, fast exchange-in times are necessary

  19. [Real-time PCR in rapid diagnosis of Aeromonas hydrophila necrotizing soft tissue infections].

    Science.gov (United States)

    Kohayagawa, Yoshitaka; Izumi, Yoko; Ushita, Misuzu; Niinou, Norio; Koshizaki, Masayuki; Yamamori, Yuji; Kaneko, Sakae; Fukushima, Hiroshi

    2009-11-01

    We report a case of rapidly progressive necrotizing soft tissue infection and sepsis followed by the patient's death. We suspected Vibrio vulnificus infection because the patient's underlying disease was cirrhosis and the course was extremely rapid. No microbe had been detected at the time of death. We extracted DNA from a blood culture bottle. SYBR Green I real-time PCR was conducted but could not detect the V. vulnificus vvh gene in the DNA sample. Aeromonas hydrophila was cultured and identified in blood and necrotized tissue samples. Real-time PCR was conducted to detect the A. hydrophila genes ahh1, AHCYTOEN and aerA in the DNA sample extracted from the blood culture bottle and in an isolate from necrotized tissue, but only ahh1 was positive. The high mortality of necrotizing soft tissue infections makes it crucial to quickly detect V. vulnificus and A. hydrophila. We found real-time PCR for vvh, ahh1, AHCYTOEN, and aerA useful in detecting V. vulnificus and A. hydrophila in necrotizing soft tissue infections.

  20. Time is of essence; rapid identification of veterinary pathogens using MALDI TOF

    DEFF Research Database (Denmark)

    Nonnemann, Bettina; Dalsgaard, Inger; Pedersen, Karl

    Rapid and accurate identification of microbial pathogens is a cornerstone for timely and correct treatment of diseases of livestock and fish. The utility of the MALDI-TOF technique in the diagnostic laboratory is directly related to the quality of mass spectra and quantity of different microbial...

  1. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely the Java 2 Platform, Enterprise Edition (J2EE) by Sun Microsystems and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  2. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Researchers in the field of computer games usually find it difficult to simulate the motion of realistic 3D trees because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees under wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real-time performance in computer games.

  3. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  4. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) when applying double-precision computation to a variable-parameter logistic map (VPLM) is studied. First, using the proposed method, reliable solutions for the logistic map are obtained. Second, for a time-dependent, non-stationary-parameter VPLM, 10,000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
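
    The idea of a reliable computation time can be reproduced with a toy experiment: iterate the map in double precision alongside a high-precision reference and record the step at which the two separate. The tolerance and reference precision below are assumptions, not the authors' settings.

        from decimal import Decimal, getcontext

        def reliable_steps(x0=0.1, r=4.0, tol=1e-6, nmax=200):
            """Steps until float64 iteration of x <- r*x*(1-x) departs from
            a 50-digit reference by more than tol (both values assumed)."""
            getcontext().prec = 50
            xf, xd, rd = float(x0), Decimal(str(x0)), Decimal(str(r))
            for n in range(1, nmax + 1):
                xf = r * xf * (1.0 - xf)
                xd = rd * xd * (1 - xd)
                if abs(float(xd) - xf) > tol:
                    return n
            return nmax

        print(reliable_steps())  # a few dozen steps for the chaotic r = 4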

  5. UMTS rapid response real-time seismic networks: implementation and strategies at INGV

    Science.gov (United States)

    Govoni, Aladino; Margheriti, Lucia; Moretti, Milena; Lauciani, Valentino; Sensale, Gianpaolo; Bucci, Augusto; Criscuoli, Fabio

    2015-04-01

    The benefits of portable real-time seismic networks are several and well known. During the management of a temporary experiment, real-time data make it possible to rapidly detect and fix problems with power supply, time synchronization, disk failures and, most important, seismic signal quality degradation due to unexpected noise sources or sensor misalignment and tampering. This usually minimizes field maintenance trips and maximizes both the quantity and the quality of the acquired data. When the area of the temporary experiment is not well monitored by the local permanent network, the real-time data from the temporary experiment can be fed to the permanent network monitoring system, greatly improving both the real-time hypocentral locations and the final revised bulletin. All these benefits apply also in the case of seismic crises, when rapid-response stations can significantly contribute to aftershock analysis. Nowadays, data transmission using meshed radio networks or satellite systems is not a big technological problem for a permanent seismic network, where each site is optimized for the device power consumption and is usually installed by specialized technicians who can configure transmission devices and align antennas. This is not usually practical for temporary networks, and especially for rapid-response networks, where installation time is the main concern. These difficulties are substantially lowered using the now widespread UMTS technology for data transmission. A small (but sometimes power hungry) properly configured device with an omnidirectional antenna must be added to the station assembly. All setups are usually configured before deployment, which allows easy installation even by untrained personnel. We describe here the implementation of a UMTS-based portable seismic network for both temporary experiments and rapid-response applications developed at INGV. The first field experimentation of this approach dates back to the 2009 L'Aquila

  6. Soft Real-Time PID Control on a VME Computer

    Science.gov (United States)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating at real-time priority, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
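
    The per-sample work of such a loop is small, which is what makes an 8 kHz soft real-time rate feasible. A minimal discrete PID update of the kind microPID embeds is sketched below; the gains are placeholders, since the record does not give the actual controller tuning.

        class PID:
            """Minimal discrete PID controller, one call per sampling period."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0

            def update(self, setpoint, measurement):
                err = setpoint - measurement
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        # 8 kHz loop: dt = 125 microseconds; gains are hypothetical
        pid = PID(kp=1.0, ki=100.0, kd=0.001, dt=125e-6)
        command = pid.update(setpoint=1.0, measurement=0.98)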

  7. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, that attempts to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it provide runtime or type checking. Here we present a Python-based full implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.

  8. Rapid determination of long-lived artificial alpha radionuclides using time interval analysis

    International Nuclear Information System (INIS)

    Uezu, Yasuhiro; Koarashi, Jun; Sanada, Yukihisa; Hashimoto, Tetsuo

    2003-01-01

    It is important to monitor long-lived alpha radionuclides such as plutonium (238Pu, 239+240Pu) in the working areas and environment of nuclear fuel cycle facilities, because the potential cancer risk from alpha radiation is well known to be higher than that from gamma radiation. Such monitoring requires high sensitivity, high resolution and rapid determination in order to measure very low-level concentrations of plutonium isotopes. In such highly sensitive monitoring, natural radionuclides, including radon (222Rn or 220Rn) and their progenies, should be eliminated as far as possible. For this purpose, a sophisticated method for discriminating between Pu and the progenies of 222Rn or 220Rn was designed and developed using time interval analysis (TIA), which subtracts short-lived radionuclides by calculating the time interval distributions of successive alpha and beta decay events on millisecond or microsecond scales. In this system, alpha rays from 214Po, 216Po and 212Po can be extracted. The TIA measuring system consists of a silicon surface barrier detector (SSD), an amplifier, an analog-to-digital converter (ADC), a multi-channel analyzer (MCA), a high-resolution timer (TIMER), a multi-parameter collector and a personal computer. From the ADC, incident alpha and beta pulses are sent to the MCA and the TIMER simultaneously, and the pulses from both are synthesized by the multi-parameter collector. After measurement, natural radionuclides are subtracted. Airborne particles were collected on a membrane filter for 60 minutes at 100 L/min, and small Pu particles were added to its surface. Alpha and beta rays were measured, and natural radionuclides were subtracted by TIA within 5 x 145 msec. As a result, the Pu hidden in the natural background could be recognized clearly. The lower limit of determination of 239Pu is calculated as 6x10^-9 Bq/cm^3. This level satisfies the derived air concentration (DAC) of 239Pu (8x10^-9 Bq/cm^3
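
    The core of TIA is the search for anomalously short intervals between successive events: correlated decay pairs (e.g. 214Bi -> 214Po, whose daughter has a 164-microsecond half-life) cluster at short time differences, while Pu decays arrive independently. A minimal sketch with an assumed coincidence window:

        import numpy as np

        def short_interval_pairs(timestamps, window=1e-3):
            """Return indices i where event i is followed by event i+1 within
            `window` seconds; such pairs flag Rn/Tn progeny for subtraction."""
            t = np.sort(np.asarray(timestamps, float))
            return np.where(np.diff(t) < window)[0]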

  9. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events; so far those algorithms have typically computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
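
    The block-maximum estimator described above is compact enough to sketch: if threshold crossings are rare and roughly Poisson, the probability that the maximum over a block of duration T stays below a is q = exp(-T/r(a)), so r(a) = -T/ln q. Below it is applied to a plain Euler-Maruyama simulation of the Ornstein-Uhlenbeck process; all numerical choices are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, nsteps = 1e-2, 500_000
        x = np.zeros(nsteps)            # OU: dx = -x dt + sqrt(2) dW
        for i in range(1, nsteps):
            x[i] = x[i - 1] * (1.0 - dt) + np.sqrt(2.0 * dt) * rng.standard_normal()

        def return_time(traj, a, block, dt):
            """r(a) = -T / ln q, with q the fraction of duration-T blocks whose
            maximum stays below threshold a (valid for rare crossings)."""
            m = traj[: len(traj) // block * block].reshape(-1, block).max(axis=1)
            q = np.mean(m < a)
            return -block * dt / np.log(q) if 0.0 < q < 1.0 else np.inf

        print(return_time(x, a=3.0, block=10_000, dt=dt))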

  10. Radiotherapy Monte Carlo simulation using cloud computing technology.

    Science.gov (United States)

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.

  11. Radiotherapy Monte Carlo simulation using cloud computing technology

    International Nuclear Information System (INIS)

    Poole, C.M.; Cornelius, I.; Trapp, J.V.; Langton, C.M.

    2012-01-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
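
    The cost observation follows directly from hourly billing: n machines each run total/n hours and every partial hour is billed in full, so waste vanishes exactly when n divides the total simulation time in hours. A toy model, assuming uniform hourly billing and perfect parallel scaling:

        import math

        def relative_cost(total_cpu_hours, n):
            """Billed machine-hours for n machines with hourly billing."""
            return n * math.ceil(total_cpu_hours / n)

        for n in (1, 2, 3, 4, 5, 6, 8, 12):
            print(n, relative_cost(12, n))  # minimal (12) whenever n divides 12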

  12. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing based on the compute unified device architecture (CUDA) is combined with digital holographic microscopy. Compared to other holographic microscopes, which have to perform reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields at micrometer and nanometer scales. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at micrometer scale, with an average velocity error lower than 10%. With the graphics processing unit (GPU), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, versus 4.9 s using the traditional CPU-based reconstruction method: a roughly 40-fold speedup. In other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.
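
    Per-plane reconstruction is the part that parallelizes well, since it is dominated by FFTs. The sketch below shows a single-plane angular-spectrum reconstruction in NumPy; a CUDA version would perform the same transforms on the GPU (e.g. via CuPy), and the optical parameters are placeholders.

        import numpy as np

        def angular_spectrum(hologram, wavelength, dx, z):
            """Reconstruct a recorded hologram at distance z by the
            angular-spectrum method (one focal plane)."""
            ny, nx = hologram.shape
            FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            return np.fft.ifft2(np.fft.fft2(hologram) * np.exp(1j * kz * z))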

  13. Response time distributions in rapid chess: a large-scale decision making experiment.

    Science.gov (United States)

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.

  14. RAPID TRANSFER ALIGNMENT USING FEDERATED KALMAN FILTER

    Institute of Scientific and Technical Information of China (English)

    GU Dong-qing; QIN Yong-yuan; PENG Rong; LI Xin

    2005-01-01

    The dimension number of the centralized Kalman filter (CKF) for rapid transfer alignment (TA) is as high as 21 if the aircraft wing flexure motion is considered. The 21-dimensional CKF imposes a heavy computational burden and makes it difficult to meet the high filter update rate required by rapid TA. A federated Kalman filter (FKF) for rapid TA is proposed to resolve this dilemma. The structure and the algorithm of the FKF, which can perform parallel computation and has a smaller calculation burden, are designed. The wing flexure motion is modeled, and then a 12-order velocity-matching local filter and a 15-order attitude-matching local filter are devised. Simulation results show that the proposed FKF for rapid TA has almost the same performance as the CKF, while its calculation burden is markedly decreased.
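
    The master-filter step of a federated design fuses local estimates by information weighting, which is what allows the local filters to run in parallel. A minimal fusion sketch (the 12- and 15-order local filters themselves are not reproduced here):

        import numpy as np

        def fuse(estimates, covariances):
            """Federated master filter: information-weighted combination of
            local-filter estimates of the common states."""
            infos = [np.linalg.inv(P) for P in covariances]
            P_f = np.linalg.inv(np.sum(infos, axis=0))
            x_f = P_f @ np.sum([I @ x for I, x in zip(infos, estimates)], axis=0)
            return x_f, P_f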

  15. Developments in time-resolved high pressure x-ray diffraction using rapid compression and decompression

    International Nuclear Information System (INIS)

    Smith, Jesse S.; Sinogeikin, Stanislav V.; Lin, Chuanlong; Rod, Eric; Bai, Ligang; Shen, Guoyin

    2015-01-01

    Complementary advances in high pressure research apparatus and techniques make it possible to carry out time-resolved high pressure research using what would customarily be considered static high pressure apparatus. This work specifically explores time-resolved high pressure x-ray diffraction with rapid compression and/or decompression of a sample in a diamond anvil cell. Key aspects of the synchrotron beamline and ancillary equipment are presented, including source considerations, rapid (de)compression apparatus, high frequency imaging detectors, and software suitable for processing large volumes of data. A number of examples are presented, including fast equation of state measurements, compression rate dependent synthesis of metastable states in silicon and germanium, and ultrahigh compression rates using a piezoelectric driven diamond anvil cell

  16. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing five models (equations) for prediction of oxygen consumption, together with all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors of exhaustive manual calculation. (author)
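
    The underlying arithmetic is the indirect Fick relation Q = VO2 / (arteriovenous O2 content difference). The sketch below mirrors the replicate-estimate idea by evaluating flow over several predicted VO2 values; the 1.34 mL/g binding constant is the standard value, all patient numbers are hypothetical, and dissolved oxygen is neglected.

        def o2_content(hb_g_dl, sat):
            """Oxygen content in mL O2 per liter of blood (dissolved O2
            neglected): 1.34 mL/g x Hb (g/dL) x saturation x 10 dL/L."""
            return 1.34 * hb_g_dl * sat * 10.0

        def pulmonary_flow(vo2_ml_min, hb, sat_pv, sat_pa):
            """Indirect Fick: Qp (L/min) = VO2 / (CpvO2 - CpaO2)."""
            return vo2_ml_min / (o2_content(hb, sat_pv) - o2_content(hb, sat_pa))

        # Replicate estimates from several predicted VO2 values (hypothetical)
        for vo2 in (110.0, 125.0, 140.0):   # mL/min from different models
            print(round(pulmonary_flow(vo2, hb=13.0, sat_pv=0.98, sat_pa=0.78), 2))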

  17. Time reversibility, computer simulation, algorithms, chaos

    CERN Document Server

    Hoover, William Graham

    2012-01-01

    A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...

  18. Conception and production of a time sharing system for a Mitra-15 CII mini-computer dedicated to APL

    International Nuclear Information System (INIS)

    Perrin, Rene

    1977-01-01

    The installation of a time-sharing system on a mini-computer poses several interesting problems. These technical problems are especially interesting when the goal is to equitably divide the physical resources of the machine amongst users of a high-level, conversational language like APL. Original solutions were necessary in order to retain the speed and performance of the original hardware and software. The system has been implemented in such a way that several users may simultaneously access logical resources such as the library zones; their read/write requests are managed by semaphores, which may also be directly controlled by the APL programmer. (author) [fr]
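
    The semaphore-guarded library zones translate naturally into modern terms. A minimal sketch in Python follows, obviously far removed from the original Mitra-15 implementation:

        import threading

        library_zone = []                   # shared workspace storage
        zone_sem = threading.Semaphore(1)   # one writer at a time

        def write_to_library(entry):
            """Serialize read/write requests to the shared library zone."""
            with zone_sem:
                library_zone.append(entry)  # critical section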

  19. A rapid and direct real time PCR-based method for identification of Salmonella spp

    DEFF Research Database (Denmark)

    Rodriguez-Lazaro, D.; Hernández, Marta; Esteve, T.

    2003-01-01

    The aim of this work was the validation of a rapid, real-time PCR assay based on TaqMan® technology for the unequivocal identification of Salmonella spp., to be used directly on an agar-grown colony. A real-time PCR system targeting the Salmonella spp. invA gene was optimized and validated ...

  20. Cone-beam computed tomography evaluation of dentoskeletal changes after asymmetric rapid maxillary expansion.

    Science.gov (United States)

    Baka, Zeliha Muge; Akin, Mehmet; Ucar, Faruk Izzet; Ileri, Zehra

    2015-01-01

    The aims of this study were to quantitatively evaluate the changes in arch widths and buccolingual inclinations of the posterior teeth after asymmetric rapid maxillary expansion (ARME) and to compare the measurements between the crossbite and noncrossbite sides with cone-beam computed tomography (CBCT). From our clinic archives, we selected the CBCT records of 30 patients with unilateral skeletal crossbite (13 boys, 14.2 ± 1.3 years old; 17 girls, 13.8 ± 1.3 years old) who underwent ARME treatment. A modified acrylic bonded rapid maxillary expansion appliance including an occlusal locking mechanism was used in all patients. CBCT records had been taken before ARME treatment and after a 3-month retention period. Fourteen angular and 80 linear measurements were taken for the maxilla and the mandible, and frontally clipped CBCT images were used for the evaluation. Paired-sample and independent-sample t tests were used for statistical comparisons. Comparisons of the before-treatment and after-retention measurements showed that the arch widths and buccolingual inclinations of the posterior teeth increased significantly on the crossbite side of the maxilla and on the noncrossbite side of the mandible (P < .05). After ARME treatment, the crossbite side of the maxilla and the noncrossbite side of the mandible were more affected than were the opposite sides. Copyright © 2015. Published by Elsevier Inc.

  1. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2014-02-20

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.
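
    A typical limiter of the kind discussed rescales the Lorentz force wherever the local Alfvén speed exceeds a chosen cap. The sketch below shows one common recipe; exact forms vary between codes, and the cap value is precisely the tuning parameter whose choice the paper warns about.

        import numpy as np

        MU0 = 4e-7 * np.pi

        def limited_lorentz(lorentz_force, B, rho, va_max):
            """Scale the Lorentz force where the Alfven speed exceeds va_max;
            `lorentz_force` and `B` have shape (3, ...), `rho` shape (...)."""
            va2 = (B ** 2).sum(axis=0) / (MU0 * rho)  # squared Alfven speed
            scale = np.minimum(1.0, va_max ** 2 / va2)
            return lorentz_force * scale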

  2. Computation of reactor control rod drop time under accident conditions

    International Nuclear Information System (INIS)

    Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan

    1998-01-01

    The computation of reactor control rod drop time under accident conditions rests mainly on establishing forced vibration equations for the components of the control rod drive line under the action of external forces, and a motion equation for the control rod moving in the vertical direction. The two kinds of equations are coupled by considering the impact effects between the control rod and the surrounding components. A finite difference method is adopted to discretize the vibration equations, and the Wilson-θ method is applied to the time-history problem. The nonlinearity caused by impact is treated iteratively with a modified Newton method. Experimental results are used to validate the computational method, and both theoretical and experimental test problems show that the computer program based on it is applicable and reliable. The program can act as an effective tool for design-by-analysis and safety analysis of the relevant components
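
    The Wilson-θ step referenced above is standard; for a single degree of freedom m x'' + c x' + k x = f(t), one step reads as below. This is the textbook formulation, not the authors' multi-component model; θ ≥ 1.37 gives unconditional stability for linear problems.

        def wilson_theta_step(m, c, k, x, v, a, f_t, f_next, dt, theta=1.4):
            """One Wilson-theta step for m x'' + c x' + k x = f(t) (SDOF)."""
            tau = theta * dt
            f_theta = f_t + theta * (f_next - f_t)      # extrapolated load
            k_eff = k + 6.0 * m / tau**2 + 3.0 * c / tau
            f_eff = (f_theta
                     + m * (6.0 * x / tau**2 + 6.0 * v / tau + 2.0 * a)
                     + c * (3.0 * x / tau + 2.0 * v + tau * a / 2.0))
            x_tau = f_eff / k_eff                       # displacement at t+tau
            a_tau = 6.0 * (x_tau - x) / tau**2 - 6.0 * v / tau - 2.0 * a
            a_new = a + (a_tau - a) / theta             # interpolate back to t+dt
            v_new = v + dt * (a_new + a) / 2.0
            x_new = x + dt * v + dt**2 * (a_new + 2.0 * a) / 6.0
            return x_new, v_new, a_new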

  3. Rapid-mixing studies on the time-scale of radiation damage in cells

    International Nuclear Information System (INIS)

    Adams, G.E.; Michael, B.D.; Asquith, J.C.; Shenoy, M.A.; Watts, M.E.; Whillans, D.W.

    1975-01-01

    Rapid-mixing studies were performed to determine the time scale of radiation damage in cells. There is evidence that the sensitizing effects of oxygen and other chemical dose-modifying agents on the response of cells to ionizing radiation involve fast free-radical processes. Studies with fast-response techniques in bacterial systems have shown that extremely fast processes occur when the bacteria are exposed to oxygen or other dose-modifying agents during irradiation. The time scales observed are consistent with the involvement of fast free-radical reactions in the expression of these effects

  4. A computationally simple model for determining the time dependent spectral neutron flux in a nuclear reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, E.A. [Department of Mechanical Engineering, University of Texas, Austin, TX (United States); Deinert, M.R. [Theoretical and Applied Mechanics, Cornell University, 219 Kimball Hall, Ithaca, NY 14853 (United States)]. E-mail: mrd6@cornell.edu; Cady, K.B. [Theoretical and Applied Mechanics, Cornell University, 219 Kimball Hall, Ithaca, NY 14853 (United States)

    2006-10-15

    The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time- and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup, and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy-dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions, is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel-to-moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or an adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy-dependent neutron flux, and the results of several simulations are compared with benchmarked standards.
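
    A drastically simplified spectral balance gives the flavor of the coupling between energy groups: the two-group infinite-medium multiplication factor, with all fission neutrons born fast and scattering only downward. The cross sections below are hypothetical, and this is far simpler than the paper's multigroup collision-probability model.

        def k_infinity(nu_sf1, nu_sf2, sa1, sa2, s12):
            """Two-group infinite-medium multiplication factor.
            Fast balance:    (sa1 + s12) * phi1 = production / k
            Thermal balance:  sa2 * phi2 = s12 * phi1"""
            phi2_over_phi1 = s12 / sa2
            production = nu_sf1 + nu_sf2 * phi2_over_phi1  # per unit phi1
            return production / (sa1 + s12)

        # Hypothetical macroscopic cross sections (cm^-1) for a thermal lattice
        print(round(k_infinity(0.008, 0.135, 0.010, 0.080, 0.020), 3))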

  5. Rapid prototyping and stereolithography in dentistry

    Science.gov (United States)

    Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor

    2015-01-01

    The term rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715

  6. Rapid prototyping and stereolithography in dentistry.

    Science.gov (United States)

    Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor

    2015-04-01

    The term rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.

  7. Real-Time Accumulative Computation Motion Detectors

    Directory of Open Access Journals (Sweden)

    Saturnino Maldonado-Bascón

    2009-12-01

    Full Text Available The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in the past years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how to reach real-time performance after using a model described as a finite state machine. This paper introduces two steps in that direction: (a) a simplification of the general AC method is performed by formally transforming it into a finite state machine; (b) a hardware implementation in FPGA of the AC module so designed, as well as of an 8-AC motion detector, is presented, with promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.

  8. Rapid earthquake magnitude determination for Vrancea early warning system

    International Nuclear Information System (INIS)

    Marmureanu, Alexandru

    2009-01-01

    Due to the huge amount of recorded data, an automatic procedure was developed and used to test different methods to rapidly evaluate earthquake magnitude from the first seconds of the P wave. Several tests were performed on all the algorithms involved in detection and rapid magnitude estimation in order to avoid false alarms. A special detection algorithm was developed that is based on the classical STA/LTA algorithm and tuned for early warning purposes. A method is proposed to rapidly estimate magnitude within 4 seconds of P-wave detection in the epicentral area. The method was tested on all recorded data, and the magnitude determination error is acceptable taking into account that the magnitude is computed from only 3 stations in a very short time interval. (author)
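
    The detection stage the abstract refers to can be illustrated with a minimal STA/LTA trigger. Window lengths and the threshold below are placeholder values, not the tuned parameters of the Vrancea system.

        import numpy as np

        def sta_lta_trigger(signal, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
            """Flag samples where the short-term average (STA) of signal
            energy exceeds `threshold` times the long-term average (LTA)."""
            sta_n = int(sta_win * fs)
            lta_n = int(lta_win * fs)
            energy = np.asarray(signal, dtype=float) ** 2
            csum = np.concatenate(([0.0], np.cumsum(energy)))
            sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n   # trailing windows
            lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
            n = min(len(sta), len(lta))                    # align at the tail
            ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
            return np.flatnonzero(ratio > threshold) + (len(energy) - n)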

  9. Rapid growth, early maturation and short generation time in African annual fishes

    Czech Academy of Sciences Publication Activity Database

    Blažek, Radim; Polačik, Matej; Reichard, Martin

    2013-01-01

    Roč. 4, č. 24 (2013), s. 24 ISSN 2041-9139 R&D Projects: GA ČR(CZ) GAP506/11/0112 Institutional support: RVO:68081766 Keywords : extreme life history * annual fish * explosive growth * rapid maturation * generation time * killifish * diapause * vertebrate * reaction norm * Savanna Subject RIV: EG - Zoology Impact factor: 3.104, year: 2013 http://www.evodevojournal.com/content/4/1/24

  10. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to those of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  11. Television viewing, computer use and total screen time in Canadian youth.

    Science.gov (United States)

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  12. Computational complexity of time-dependent density functional theory

    International Nuclear Information System (INIS)

    Whitfield, J D; Yung, M-H; Tempel, D G; Aspuru-Guzik, A; Boixo, S

    2014-01-01

    Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn–Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn–Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn–Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn–Sham potential with controllable error bounds. (paper)

  13. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  14. Space-Time Trellis Coded 8PSK Schemes for Rapid Rayleigh Fading Channels

    Directory of Open Access Journals (Sweden)

    Salam A. Zummo

    2002-05-01

    Full Text Available This paper presents the design of 8PSK space-time (ST) trellis codes suitable for rapid fading channels. The proposed codes utilize the design criteria of ST codes over rapid fading channels. Two different approaches have been used. The first approach maximizes the symbol-wise Hamming distance (HD) between signals leaving or entering the same encoder's state. In the second approach, set partitioning based on maximizing the sum of squared Euclidean distances (SSED) between the ST signals is performed; then, the branch-wise HD is maximized. The proposed codes were simulated over independent and correlated Rayleigh fading channels. Coding gains up to 4 dB have been observed over other ST trellis codes of the same complexity.
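
    A minimal sketch of the two distance measures that drive the code search (illustrative helper functions; the actual trellis search and set partitioning are not shown):

        import cmath

        def psk8(symbol):
            # Map a symbol index 0..7 to a unit-energy 8PSK constellation point.
            return cmath.exp(1j * cmath.pi / 4 * symbol)

        def symbolwise_hamming(seq_a, seq_b):
            """Symbol-wise Hamming distance: number of positions in which two
            equal-length symbol sequences differ (first design criterion)."""
            return sum(a != b for a, b in zip(seq_a, seq_b))

        def sum_squared_euclidean(seq_a, seq_b):
            """Sum of squared Euclidean distances (SSED) between the 8PSK
            signal sequences (second design criterion)."""
            return sum(abs(psk8(a) - psk8(b)) ** 2 for a, b in zip(seq_a, seq_b))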

  15. [COMPUTER ASSISTED DESIGN AND ELECTRON BEAM MELTING RAPID PROTOTYPING METAL THREE-DIMENSIONAL PRINTING TECHNOLOGY FOR PREPARATION OF INDIVIDUALIZED FEMORAL PROSTHESIS].

    Science.gov (United States)

    Liu, Hongwei; Weng, Yiping; Zhang, Yunkun; Xu, Nanwei; Tong, Jing; Wang, Caimei

    2015-09-01

    To study the feasibility of preparing an individualized femoral prosthesis through computer assisted design and electron beam melting rapid prototyping (EBM-RP) metal three-dimensional (3D) printing technology. One adult male left femur specimen was scanned with 64-slice spiral CT; the tomographic image data were imported into Mimics 15.0 software to reconstruct a 3D femoral model, and the 3D model of the individualized femoral prosthesis was then designed with UG 8.0 software. Finally, the 3D model data were imported into an EBM-RP metal 3D printer to print the individualized sleeve. According to the 3D model of the individualized prosthesis, the customized sleeve was successfully prepared through EBM-RP metal 3D printing technology and assembled with the standard handle component of the SR modular femoral prosthesis to make the individualized femoral prosthesis. A customized femoral prosthesis accurately matching the metaphyseal cavity can be designed through thin-slice CT scanning and computer assisted design technology. A titanium alloy personalized prosthesis with a complex 3D shape, porous surface, and good matching with the metaphyseal cavity can be manufactured by EBM-RP metal 3D printing technology, which is convenient, rapid, and accurate.

  16. Control of force during rapid visuomotor force-matching tasks can be described by discrete time PID control algorithms.

    Science.gov (United States)

    Dideriksen, Jakob Lund; Feeney, Daniel F; Almuklass, Awad M; Enoka, Roger M

    2017-08-01

    Force trajectories during rapid force-matching tasks involving isometric contractions vary substantially across individuals. In this study, we investigated whether this variability can be explained by discrete time proportional, integral, derivative (PID) control algorithms with varying model parameters. To this end, we analyzed the pinch force trajectories of 24 subjects performing two rapid force-matching tasks with visual feedback. Both tasks involved isometric contractions to a target force of 10% maximal voluntary contraction. One task involved a single action (pinch) and the other required a double action (concurrent pinch and wrist extension). A total of 50,000 force trajectories were simulated with a computational neuromuscular model whose input was determined by a PID controller with different PID gains and frequencies at which the controller adjusted muscle commands. The goal was to find the best match between each experimental force trajectory and all simulated trajectories. It was possible to identify one realization of the PID controller that matched the experimental force produced during each task for most subjects (average index of similarity: 0.87 ± 0.12; 1 = perfect similarity). The similarities for both tasks were significantly greater than would be expected by chance (single action: p = 0.01; double action: p = 0.04). Furthermore, the identified control frequencies in the simulated PID controller with the greatest similarities decreased as task difficulty increased (single action: 4.0 ± 1.8 Hz; double action: 3.1 ± 1.3 Hz). Overall, the results indicate that discrete time PID controllers are realistic models for the neural control of force in rapid force-matching tasks involving isometric contractions.
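
    As a rough illustration of the model class, a discrete-time PID force controller can be sketched as below. Gains, update rate, and the plant callable (mapping a muscle command to produced force) are illustrative assumptions, not the paper's fitted neuromuscular model; note that the identified controller update rates were only a few hertz.

        def simulate_discrete_pid(target, duration, kp, ki, kd, rate, plant):
            """Simulate a discrete-time PID controller that adjusts a muscle
            command from the force error at `rate` updates per second."""
            dt = 1.0 / rate
            integral, prev_error, force = 0.0, 0.0, 0.0
            forces = []
            for _ in range(int(duration * rate)):
                error = target - force
                integral += error * dt
                derivative = (error - prev_error) / dt
                command = kp * error + ki * integral + kd * derivative
                force = plant(command)   # plant: command -> produced force
                prev_error = error
                forces.append(force)
            return forces

        # Example with a toy static plant and the target force normalized to
        # 1.0 (i.e., 10% MVC):
        # simulate_discrete_pid(1.0, 5.0, kp=0.5, ki=1.0, kd=0.05,
        #                       rate=4.0, plant=lambda u: u)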

  17. 12 CFR 516.10 - How does OTS compute time periods under this part?

    Science.gov (United States)

    2010-01-01

    12 CFR 516.10 (Banks and Banking; Office of Thrift Supervision, Department of the Treasury; Application Processing Procedures): How does OTS compute time periods under this part? In computing...

  18. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
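
    The texture computation at the heart of the PANTEX workflow can be sketched in serial form as follows; the blade and GPU versions parallelize this over window positions. The quantization level and the single pixel offset are illustrative (the index aggregates rotation-invariant GLCM statistics over several offsets, which is omitted here), and the window is assumed to be already quantized to integer gray levels.

        import numpy as np

        def glcm(window, dx=1, dy=0, levels=16):
            """Gray-level co-occurrence matrix of one image window for a
            single pixel offset (dx, dy), normalized to probabilities."""
            g = np.zeros((levels, levels))
            h, w = window.shape
            for y in range(h - dy):
                for x in range(w - dx):
                    g[window[y, x], window[y + dy, x + dx]] += 1
            return g / max(g.sum(), 1.0)

        def glcm_contrast(g):
            # Contrast statistic: sum over (i, j) of p(i, j) * (i - j)^2.
            i, j = np.indices(g.shape)
            return float((g * (i - j) ** 2).sum())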

  19. Effect of the MCNP model definition on the computation time

    International Nuclear Information System (INIS)

    Šunka, Michal

    2017-01-01

    The presented work studies the influence of the method of defining the geometry in the MCNP transport code and its impact on the computational time, including the difficulty of preparing an input file describing the given geometry. Cases using different geometric definitions, including the use of basic 2-dimensional and 3-dimensional objects and their combinations, were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (a more realistic case indicates 37%) for the same results and the same statistical uncertainty. (orig.)

  20. Wellsite computers--their increasing role in drilling operations

    International Nuclear Information System (INIS)

    Keenan, P.G.; Dyson, P.M.

    1981-01-01

    The increasing expense and complexity of exploration drilling, coupled with rapid advances in computer and microprocessor technology, have led to the development of computer-assisted wellsite logging units from their humble beginnings as simple hot wire gas detectors. The main applications of this technology can be recognized in the following areas: (a) Safety of wellsite personnel, rig and downhole equipment. (b) Increased drilling efficiency with the resultant time and cost savings. (c) Simulation of possible events allowing comparisons between actual and expected data to assist decision making at the wellsite. (d) Storage of data on tape/disk to allow rapid retrieval of data for postwell analysis and report production. 6 refs

  1. Two schemes for rapid generation of digital video holograms using PC cluster

    Science.gov (United States)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
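
    The frame-level parallelization scheme can be illustrated with a process pool in which each worker computes entire frames; the CGH kernel below is a stand-in point-source summation with illustrative grid size, pixel pitch, and wavelength, not the authors' GPU implementation.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def frame_hologram(points):
            """Stand-in CGH kernel for one frame: accumulate spherical-wave
            contributions of (x, y, z, amplitude) object points on a small
            hologram plane and return the phase-only hologram."""
            k = 2.0 * np.pi / 532e-9                      # wavelength 532 nm
            ys, xs = np.mgrid[0:256, 0:256] * 8e-6        # 8 um pixel pitch
            field = np.zeros((256, 256), dtype=complex)
            for x, y, z, amp in points:
                r = np.sqrt((xs - x) ** 2 + (ys - y) ** 2 + z ** 2)
                field += amp * np.exp(1j * k * r) / r
            return np.angle(field)

        def video_hologram(frames, workers=8):
            # Parallelize across frames (whole frames per worker) rather
            # than within each individual frame.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(frame_hologram, frames))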

  2. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  3. Time-Resolved Fluorescent Immunochromatography of Aflatoxin B1 in Soybean Sauce: A Rapid and Sensitive Quantitative Analysis.

    Science.gov (United States)

    Wang, Du; Zhang, Zhaowei; Li, Peiwu; Zhang, Qi; Zhang, Wen

    2016-07-14

    Rapid and quantitative sensing of aflatoxin B1 with high sensitivity and specificity has drawn increasing attention in studies of soybean sauce. A sensitive and rapid quantitative immunochromatographic sensing method was developed for the detection of aflatoxin B1 based on time-resolved fluorescence. It combines the advantages of time-resolved fluorescent sensing and immunochromatography. The dynamic range of the competitive and portable immunoassay was 0.3–10.0 µg·kg⁻¹, with a limit of detection (LOD) of 0.1 µg·kg⁻¹ and recoveries of 87.2%–114.3%, within 10 min. The results showed good correlation (R² > 0.99) between the time-resolved fluorescent immunochromatographic strip test and high performance liquid chromatography (HPLC). Soybean sauce samples analyzed using the time-resolved fluorescent immunochromatographic strip test revealed that 64.2% of samples contained aflatoxin B1 at levels ranging from 0.31 to 12.5 µg·kg⁻¹. The strip test is a rapid, sensitive, quantitative, and cost-effective on-site screening technique in food safety analysis.

  4. Original Article. Evaluation of Rapid Detection of Nasopharyngeal Colonization with MRSA by Real-Time PCR

    Directory of Open Access Journals (Sweden)

    Kang Feng-feng

    2012-03-01

    Full Text Available Objective To investigate the clinical application of Real-Time PCR for rapid detection of methicillin-resistant Staphylococcus aureus (MRSA) directly from nasopharyngeal swab specimens.

  5. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D J

    2009-07-21

    This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3

  6. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  7. Real-time brain computer interface using imaginary movements

    DEFF Research Database (Denmark)

    El-Madani, Ahmad; Sørensen, Helge Bjarup Dissing; Kjær, Troels W.

    2015-01-01

    Background: Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper...

  8. Rapid lung MRI in children with pulmonary infections: Time to change our diagnostic algorithms.

    Science.gov (United States)

    Sodhi, Kushaljit Singh; Khandelwal, Niranjan; Saxena, Akshay Kumar; Singh, Meenu; Agarwal, Ritesh; Bhatia, Anmol; Lee, Edward Y

    2016-05-01

    To determine the diagnostic utility of a new rapid MRI protocol, as compared with computed tomography (CT), for the detection of various pulmonary and mediastinal abnormalities in children with suspected pulmonary infections. Seventy-five children (age range of 5 to 15 years) with clinically suspected pulmonary infections were enrolled in this prospective study, which was approved by the institutional ethics committee. All patients underwent thoracic MRI (1.5 T) and CT (64-detector) scans within 48 h of each other. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of MRI were evaluated with CT as a standard of reference. Inter-observer agreement was measured with the kappa coefficient. MRI with the new rapid MRI protocol demonstrated sensitivity, specificity, PPV, and NPV of 100% for detecting pulmonary consolidation, nodules (>3 mm), cyst/cavity, hyperinflation, pleural effusion, and lymph nodes. The kappa test showed almost perfect agreement between MRI and multidetector CT (MDCT) in detecting thoracic abnormalities (k = 0.9). No statistically significant difference was observed between MRI and MDCT for detecting thoracic abnormalities by the McNemar test (P = 0.125). Rapid lung MRI was found to be comparable to MDCT for detecting thoracic abnormalities in pediatric patients with clinically suspected pulmonary infections. It has great potential as the first-line cross-sectional imaging modality of choice in this patient population. However, further studies will be helpful for confirmation of our findings. © 2015 Wiley Periodicals, Inc.

  9. SU-F-J-102: Lower Esophagus Margin Implications Based On Rapid Computational Algorithm for SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, M; Mazur, T; Li, H; Mutic, S; Bradley, J; Tsien, C; Green, O [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: To quantify inter-fraction esophagus variation. Methods: Computed tomography and daily on-treatment 0.3-T MRI data sets for 7 patients were analyzed using a novel Matlab-based (Mathworks, Natick, MA) rapid computational method. Rigid registration was performed from the cricoid to the gastro-esophageal junction. CT and MR-based contours were compared at slice intervals of 3 mm. Variation was quantified by “expansion,” defined as additional length in any radial direction from CT contour to MR contour. Expansion computations were performed with 360° of freedom in each axial slice. We partitioned expansions into left anterior, right anterior, right posterior, and left posterior quadrants (LA, RA, RP, and LP, respectively). Sample means were compared by analysis of variance (ANOVA) and Fisher’s Protected Least Significant Difference test. Results: Fifteen fractions and 1121 axial slices from 7 patients undergoing SBRT for primary lung cancer (3) and metastatic lung disease (4) were analyzed, generating 41,970 measurements. Mean LA, RA, RP, and LP expansions were 4.30 ± 0.05 mm, 3.71 ± 0.05 mm, 3.17 ± 0.07 mm, and 3.98 ± 0.06 mm, respectively. 50.13% of all axial slices showed variation > 5 mm in one or more directions. Variation was greatest in the lower esophagus, with mean LA, RA, RP, and LP expansions of 5.98 ± 0.09 mm, 4.59 ± 0.09 mm, 4.04 ± 0.16 mm, and 5.41 ± 0.16 mm, respectively. The difference was significant compared to the mid and upper esophagus (p < .0001). The 95th percentiles of expansion for LA, RA, RP, LP were 13.36 mm, 9.97 mm, 11.29 mm, and 12.19 mm, respectively. Conclusion: Analysis of on-treatment MR imaging of the lower esophagus during thoracic SBRT suggests margin expansions of 13.36 mm LA, 9.97 mm RA, 11.29 mm RP, 12.19 mm LP would account for 95% of measurements. Our novel algorithm for rapid assessment of margin expansion for critical structures with 360° of freedom in each axial slice enables continuously adaptive patient-specific margins which may

  10. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows to extrapolate to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
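
    For reference, the Green-Kubo relation underlying this approach expresses the thermal conductivity through the equilibrium heat-flux autocorrelation function (one common convention; the notation is standard usage, not taken from the abstract):

        \kappa_{\alpha\beta} = \frac{V}{k_B T^2} \int_0^{\infty} \langle J_\alpha(t)\, J_\beta(0) \rangle \, dt

    where V is the simulation cell volume, T the temperature, and J the heat-flux density averaged over the molecular dynamics trajectory.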

  11. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.

  12. Variation in computer time with geometry prescription in monte carlo code KENO-IV

    International Nuclear Information System (INIS)

    Gopalakrishnan, C.R.

    1988-01-01

    In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional savings of computing time is highly desirable. Variation in computation time with box description in KENO for two different fast reactor fuel subassemblies of FBTR and PFBR is studied. The K_eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the K_eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)

  13. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  14. Effects of oncoming target velocities on rapid force production and accuracy of force production intensity and timing.

    Science.gov (United States)

    Ohta, Yoichi

    2017-12-01

    The present study aimed to clarify the effects of oncoming target velocities on the ability of rapid force production and the accuracy and variability of simultaneous control of both force production intensity and timing. Twenty male participants (age: 21.0 ± 1.4 years) performed rapid gripping with a handgrip dynamometer to coincide with the arrival of an oncoming target by using a horizontal electronic trackway. The oncoming target velocities were 4, 8, and 12 m·s⁻¹, which were randomly produced. The grip force required was 30% of the maximal voluntary contraction. Although the peak force (Pf) and rate of force development (RFD) increased with increasing target velocity, the value of the RFD to Pf ratio was constant across the 3 target velocities. The accuracy of both force production intensity and timing decreased at higher target velocities. Moreover, the intrapersonal variability in temporal parameters was lower in the fast target velocity condition, but constant variability across the 3 target velocities was observed in force intensity parameters. These results suggest that oncoming target velocity does not intrinsically affect the ability for rapid force production. However, the oncoming target velocity affects the accuracy and variability of force production intensity and timing during rapid force production.

  15. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    Science.gov (United States)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes with potential for a better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from CDDIS + IGN + BKG archives for the realization of hourly five-constellation ultra-rapid orbit updates. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy can be obviously improved for all five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding accuracy improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions, respectively. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed by using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions can achieve the shortest time to first fix (TTFF) and highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to the GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min with a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively. As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions

  16. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
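
    The straightforward baseline the paper improves upon can be sketched as two one-to-all shortest-path searches; a node belongs to the prism if a route through it fits the time budget. The NTP-A* pruning itself is not reproduced here.

        import heapq

        def dijkstra(adj, src):
            """One-to-all shortest travel times; adj maps node ->
            [(neighbor, travel_time), ...]."""
            dist = {src: 0.0}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, ()):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist

        def space_time_prism(adj, radj, origin, dest, budget):
            # radj is the network with every edge reversed, so the second
            # search yields travel times from each node to the destination.
            fwd = dijkstra(adj, origin)
            bwd = dijkstra(radj, dest)
            return {v for v in fwd if v in bwd and fwd[v] + bwd[v] <= budget}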

  17. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
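
    For context, the matrix R of a discrete-time QBD with one-step transition blocks A0 (level up), A1 (same level) and A2 (level down) is the minimal nonnegative solution of R = A0 + R·A1 + R²·A2, computable by plain successive substitution as sketched below. The paper's point is that the transformed GI/D/1 chain has structure that makes each such iteration much cheaper; this generic sketch does not exploit that structure.

        import numpy as np

        def solve_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
            """Minimal nonnegative solution of R = A0 + R A1 + R^2 A2
            (Neuts' matrix-geometric method) by successive substitution,
            starting from R = 0."""
            R = np.zeros_like(A0)
            for _ in range(max_iter):
                R_next = A0 + R @ A1 + R @ R @ A2
                if np.max(np.abs(R_next - R)) < tol:
                    return R_next
                R = R_next
            raise RuntimeError("R iteration did not converge")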

  18. Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs

    Science.gov (United States)

    Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan

    2014-01-01

    Computational Fluid Dynamic (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV), Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and air-worthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier Stokes (N-S) flow solvers (USM3D, Kestrel and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then replacing or adjusting the data as predictions from higher order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown and the methods will be evaluated on their utility during each portion of the flight envelope.

  19. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  20. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo

    2017-11-27

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  1. Rapid thermal transient in a reactor coolant channel

    International Nuclear Information System (INIS)

    Cherubini, A.

    1986-01-01

    This report deals with the problem of one-dimensional thermo-fluid-dynamics in a reactor coolant channel, with the aim of determining the evolution in time of the coolant (H2O), in one- and/or two-phase regimes, when subjected to a large and rapid increase in heat flux (accident conditions). To this aim, the following are set out: a) the physical model used; b) the equations inherent in the above model; c) the numerical methods employed to solve them by means of a computer programme called CABO (CAnale BOllente). Next, a typical rapid thermal transient problem solved by CABO is reported. The results obtained, expressed in the form of graphs, are fully discussed. Finally, comments on possible developments of CABO follow

  2. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.

  3. OVERVIEW OF DEVELOPMENT OF P-CARES: PROBABILISTIC COMPUTER ANALYSIS FOR RAPID EVALUATION OF STRUCTURES

    International Nuclear Information System (INIS)

    NIE, J.; XU, J.; COSTANTINO, C.; THOMAS, V.

    2007-01-01

    Brookhaven National Laboratory (BNL) undertook an effort to revise the CARES (Computer Analysis for Rapid Evaluation of Structures) program under the auspices of the US Nuclear Regulatory Commission (NRC). The CARES program provided the NRC staff with a capability to quickly check the validity and/or accuracy of the soil-structure interaction (SSI) models and associated data received from various applicants. The aim of the current revision was to implement various probabilistic simulation algorithms in CARES (referred to hereinafter as P-CARES [1]) for performing probabilistic site response and soil-structure interaction (SSI) analyses. This paper provides an overview of the development process of P-CARES, including the various probabilistic simulation techniques used to incorporate the effect of site soil uncertainties into the seismic site response and SSI analyses, and an improved graphical user interface (GUI)

  4. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interests (e.g., bridges) for a very large number of locations enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. A Python script evaluates these response spectra for exceedance of one or more
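
    The per-channel computation such an engine performs can be illustrated with a single-oscillator response-spectrum calculation; this is a generic Newmark average-acceleration sketch with placeholder thresholds, not the Bighorn implementation.

        import numpy as np

        def psa(ag, dt, period, zeta=0.05):
            """Pseudo-spectral acceleration PSA = wn^2 * max|u| of a damped
            SDOF oscillator driven by a ground-acceleration record `ag`,
            integrated with the Newmark average-acceleration scheme."""
            wn = 2.0 * np.pi / period
            c, k = 2.0 * zeta * wn, wn * wn
            beta, gamma = 0.25, 0.5
            keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
            u = v = 0.0
            a = -float(ag[0])        # initial acceleration with u = v = 0
            umax = 0.0
            for p in -np.asarray(ag[1:], dtype=float):
                peff = (p + u / (beta * dt * dt) + v / (beta * dt)
                        + (0.5 / beta - 1.0) * a
                        + c * (gamma * u / (beta * dt)
                               + (gamma / beta - 1.0) * v
                               + dt * (0.5 * gamma / beta - 1.0) * a))
                u_new = peff / keff
                a_new = ((u_new - u) / (beta * dt * dt)
                         - v / (beta * dt) - (0.5 / beta - 1.0) * a)
                v += dt * ((1.0 - gamma) * a + gamma * a_new)
                u, a = u_new, a_new
                umax = max(umax, abs(u))
            return wn * wn * umax

        # Alarm if any monitored period exceeds its threshold (placeholders):
        # alarm = any(psa(ag, dt, T) > thr for T, thr in [(0.3, 2.0), (1.0, 1.0)])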

  5. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require low computational costs even though microprocessors become more powerful each day. It is usual for Real Time Operating Systems for embedded systems to have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
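
    The quantity in question is classically obtained from the fixed-priority response-time recurrence R = C + Σ_j ⌈R/T_j⌉·C_j over the higher-priority tasks j, iterated to a fixed point. A generic sketch follows (integer time units assumed; the paper's cost-reducing optimization is not shown):

        def worst_case_response_time(C, higher_priority, deadline):
            """Iterate R = C + sum(ceil(R / Tj) * Cj) until a fixed point or
            a deadline miss; higher_priority is a list of (Cj, Tj) pairs of
            worst-case execution times and periods in integer ticks."""
            R = C
            while True:
                R_next = C + sum((R + Tj - 1) // Tj * Cj
                                 for Cj, Tj in higher_priority)
                if R_next == R:
                    return R        # schedulable fixed point reached
                if R_next > deadline:
                    return None     # task cannot meet its deadline
                R = R_next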

  6. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  7. Rapid, Time-Division Multiplexed, Direct Absorption- and Wavelength Modulation-Spectroscopy

    Directory of Open Access Journals (Sweden)

    Alexander Klein

    2014-11-01

    Full Text Available We present a tunable diode laser spectrometer with a novel, rapid time multiplexed direct absorption- and wavelength modulation-spectroscopy operation mode. The new technique allows enhancing the precision and dynamic range of a tunable diode laser absorption spectrometer without sacrificing accuracy. The spectroscopic technique combines the benefits of absolute concentration measurements using calibration-free direct tunable diode laser absorption spectroscopy (dTDLAS) with the enhanced noise rejection of wavelength modulation spectroscopy (WMS). In this work we demonstrate for the first time a 125 Hz time division multiplexed (TDM) dTDLAS-WMS spectroscopic scheme by alternating the modulation of a DFB-laser between a triangle-ramp (dTDLAS) and an additional 20 kHz sinusoidal modulation (WMS). The absolute concentration measurement via the dTDLAS-technique allows one to simultaneously calibrate the normalized 2f/1f-signal of the WMS-technique. A dTDLAS/WMS-spectrometer at 1.37 µm for H2O detection was built for experimental validation of the multiplexing scheme over a concentration range from 50 to 3000 ppmV (0.1 MPa, 293 K). A precision of 190 ppbV was achieved with an absorption length of 12.7 cm and an averaging time of two seconds. Our results show a five-fold improvement in precision over the entire concentration range and a significantly decreased averaging time of the spectrometer.

  8. Real time animation of space plasma phenomena

    International Nuclear Information System (INIS)

    Jordan, K.F.; Greenstadt, E.W.

    1987-01-01

    In pursuit of real time animation of computer simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bowshock which is based on actual spacecraft data and designed for three dimensional graphic output. This output consists of time slice sequences which make up the frames of the animation. With the MPP, 16384, 512 or 4 frames can be calculated simultaneously depending upon which characteristic is being computed. The run time was greatly reduced which promotes the rapid sequence of images and makes real time animation a foreseeable goal. The addition of more complex phenomenology in the constructed computer images is now possible and work proceeds to generate these images

  9. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  10. Elucidating reaction mechanisms on quantum computers

    Science.gov (United States)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  13. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  14. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  15. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  16. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
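
    For intuition, here is a brute-force Python baseline in the spirit of the quadratic algorithm mentioned above: restricting attention to vertex pairs (the true maximum detour ranges over all points of the graph, so this only gives a lower bound), it runs Dijkstra from every vertex and takes the largest ratio of graph distance to Euclidean distance. The graph encoding is an illustrative assumption.

```python
import heapq, math

def dijkstra(adj, src):
    """Single-source shortest paths on an adjacency-list graph."""
    dist = {v: math.inf for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def max_detour_vertices(pos, edges):
    # adjacency list with Euclidean edge lengths (edges are line segments)
    adj = {v: [] for v in pos}
    for u, v in edges:
        w = math.dist(pos[u], pos[v])
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = 1.0
    for s in adj:                       # Dijkstra from every vertex: ~n^2 log n
        dist = dijkstra(adj, s)
        for t in adj:
            if t != s:
                best = max(best, dist[t] / math.dist(pos[s], pos[t]))
    return best

pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3)]        # a path: detour between 0 and 3 is 3/1
print(max_detour_vertices(pos, edges))  # 3.0
```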

  17. Patient specific ankle-foot orthoses using rapid prototyping.

    Science.gov (United States)

    Mavroidis, Constantinos; Ranky, Richard G; Sivak, Mark L; Patritti, Benjamin L; DiPisa, Joseph; Caddle, Alyssa; Gilhooly, Kara; Govoni, Lauren; Sivak, Seth; Lancia, Michael; Drillio, Robert; Bonato, Paolo

    2011-01-12

    Prefabricated orthotic devices are currently designed to fit a range of patients and therefore they do not provide individualized comfort and function. Custom-fit orthoses are superior to prefabricated orthotic devices from both of the above-mentioned standpoints. However, creating a custom-fit orthosis is a laborious and time-intensive manual process performed by skilled orthotists. Besides, adjustments made to both prefabricated and custom-fit orthoses are carried out in a qualitative manner. So both comfort and function can potentially suffer considerably. A computerized technique for fabricating patient-specific orthotic devices has the potential to provide excellent comfort and allow for changes in the standard design to meet the specific needs of each patient. In this paper, 3D laser scanning is combined with rapid prototyping to create patient-specific orthoses. A novel process was engineered to utilize patient-specific surface data of the patient anatomy as a digital input, manipulate the surface data to an optimal form using Computer Aided Design (CAD) software, and then download the digital output from the CAD software to a rapid prototyping machine for fabrication. Two AFOs were rapidly prototyped to demonstrate the proposed process. Gait analysis data of a subject wearing the AFOs indicated that the rapid prototyped AFOs performed comparably to the prefabricated polypropylene design. The rapidly prototyped orthoses fabricated in this study provided good fit of the subject's anatomy compared to a prefabricated AFO while delivering comparable function (i.e. mechanical effect on the biomechanics of gait). The rapid fabrication capability is of interest because it has potential for decreasing fabrication time and cost especially when a replacement of the orthosis is required.

  18. Replacing Heavily Damaged Teeth by Third Molar Autotransplantation With the Use of Cone-Beam Computed Tomography and Rapid Prototyping.

    Science.gov (United States)

    Verweij, Jop P; Anssari Moin, David; Wismeijer, Daniel; van Merkesteyn, J P Richard

    2017-09-01

    This article describes the autotransplantation of third molars to replace heavily damaged premolars and molars. Specifically, this article reports on the use of preoperative cone-beam computed tomographic planning and 3-dimensional (3D) printed replicas of donor teeth to prepare artificial tooth sockets. In the present case, an 18-year-old patient underwent autotransplantation of 3 third molars to replace 1 premolar and 2 molars that were heavily damaged after trauma. Approximately 1 year after the traumatic incident, autotransplantation with the help of 3D planning and rapid prototyping was performed. The right maxillary third molar replaced the right maxillary first premolar. The 2 mandibular wisdom teeth replaced the left mandibular first and second molars. During the surgical procedure, artificial tooth sockets were prepared with the help of 3D printed donor tooth copies to prevent iatrogenic damage to the actual donor teeth. These replicas of the donor teeth were designed based on the preoperative cone-beam computed tomogram and manufactured with the help of 3D printing techniques. The use of a replica of the donor tooth resulted in a predictable and straightforward procedure, with extra-alveolar times shorter than 2 minutes for all transplantations. The transplanted teeth were placed in infraocclusion and fixed with a suture splint. Postoperative follow-up showed physiologic integration of the transplanted teeth and a successful outcome for all transplants. In conclusion, this technique facilitates a straightforward and predictable procedure for autotransplantation of third molars. The use of printed analogues of the donor teeth decreases the risk of iatrogenic damage and the extra-alveolar time of the transplanted tooth is minimized. This facilitates a successful outcome.

  19. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate that faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers with improved control and protection capabilities. Research efforts are currently under way in this area.

  20. Computationally determining the salience of decision points for real-time wayfinding support

    Directory of Open Access Journals (Sweden)

    Makoto Takemiya

    2012-06-01

    This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step toward linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive salience. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations of using only structural information, and motivations for future work that includes non-structural information, are discussed.

  1. Computation of transit times using the milestoning method with applications to polymer translocation

    Science.gov (United States)

    Hawk, Alexander T.; Konda, Sai Sriharsha M.; Makarov, Dmitrii E.

    2013-08-01

    Milestoning is an efficient approximation for computing long-time kinetics and thermodynamics of large molecular systems, which are inaccessible to brute-force molecular dynamics simulations. A common use of milestoning is to compute the mean first passage time (MFPT) for a conformational transition of interest. However, the MFPT is not always the experimentally observed timescale. In particular, the duration of the transition path, or the mean transit time, can be measured in single-molecule experiments, such as studies of polymers translocating through pores and fluorescence resonance energy transfer studies of protein folding. Here we show how to use milestoning to compute transit times and illustrate our approach by applying it to the translocation of a polymer through a narrow pore.

  2. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  3. Modern EMC analysis I time-domain computational schemes

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite integration technique.

  4. On-site identification of meat species in processed foods by a rapid real-time polymerase chain reaction system.

    Science.gov (United States)

    Furutani, Shunsuke; Hagihara, Yoshihisa; Nagai, Hidenori

    2017-09-01

    Correct labeling of foods is critical for consumers who wish to avoid a specific meat species for religious or cultural reasons. Therefore, gene-based point-of-care food analysis by real-time polymerase chain reaction (PCR) is expected to contribute to quality control in the food industry. In this study, we perform rapid identification of meat species by our portable rapid real-time PCR system, following a very simple DNA extraction method. Applying these techniques, we correctly identified beef, pork, chicken, rabbit, horse, and mutton in processed foods in 20 min. Our system was sensitive enough to detect the interfusion of about 0.1% chicken egg-derived DNA in a processed food sample. Our rapid real-time PCR system is expected to contribute to quality control in food industries because it can be applied for the identification of meat species, and future applications can expand its functionality to the detection of genetically modified organisms or mutations.

  5. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
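
    To make the object being computed concrete, the following Python sketch evaluates a two-time correlation function C(t) = <psi| sigma_z(t) sigma_z(0) |psi> for a single spin by classical brute force (matrix exponentials). This is emphatically not the proposed quantum algorithm, just the classical baseline that becomes intractable for many-body systems; the Hamiltonian and state are illustrative.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sx                            # illustrative Hamiltonian (hbar = 1)
psi = np.array([1, 0], dtype=complex)   # initial state |0>

def corr(t):
    """C(t) = <psi| sigma_z(t) sigma_z(0) |psi>, Heisenberg picture."""
    U = expm(-1j * H * t)
    sz_t = U.conj().T @ sz @ U          # sigma_z evolved to time t
    return psi.conj() @ sz_t @ sz @ psi

for t in (0.0, 1.0, 2.0):
    print(t, corr(t))   # here C(t) = cos(t); in general complex-valued
```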

  6. Impacts of Earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction

    Science.gov (United States)

    Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto

    2017-12-01

    Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to rely on the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on GNSS ultra-rapid orbit prediction is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted, on the basis of the observed part of the ultra-rapid orbit in ITRS, for use as reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improves the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50%.

  7. Exploring data with RapidMiner

    CERN Document Server

    Chisholm, Andrew

    2013-01-01

    A step-by-step tutorial style using examples so that users of different levels will benefit from the facilities offered by RapidMiner. If you are a computer scientist or an engineer who has real data from which you want to extract value, this book is ideal for you. You will need to have at least a basic awareness of data mining techniques and some exposure to RapidMiner.

  8. Development of a Rapid Insulin Assay by Homogenous Time-Resolved Fluorescence.

    Directory of Open Access Journals (Sweden)

    Zachary J Farino

    Direct measurement of insulin is critical for basic and clinical studies of insulin secretion. However, current methods are expensive and time-consuming. We developed an insulin assay based on homogenous time-resolved fluorescence that is significantly more rapid and cost-effective than current commonly used approaches. This assay was applied effectively to an insulin-secreting cell line, INS-1E cells, as well as pancreatic islets, allowing us to validate the assay by elucidating mechanisms by which dopamine regulates insulin release. We found that dopamine functioned as a significant negative modulator of glucose-stimulated insulin secretion. Further, we showed that bromocriptine, a known dopamine D2/D3 receptor agonist and newly approved drug used for treatment of type II diabetes mellitus, also decreased glucose-stimulated insulin secretion in islets to levels comparable to those caused by dopamine treatment.

  9. How Rapid is Rapid Prototyping? Analysis of ESPADON Programme Results

    Directory of Open Access Journals (Sweden)

    Ian D. Alston

    2003-05-01

    New methodologies, engineering processes, and support environments are beginning to emerge for embedded signal processing systems. The main objectives are to enable the defence industry to field state-of-the-art products in less time and with lower costs, including retrofits and upgrades, based predominantly on commercial off-the-shelf (COTS) components and the model-year concept. One of the cornerstones of the new methodologies is the concept of rapid prototyping. This is the ability to rapidly and seamlessly move from functional design to architectural design to implementation, through automatic code generation tools, onto real-time COTS test beds. In this paper, we try to quantify the term "rapid" and provide results, the metrics, from two independent benchmarks: a radar and a sonar beamforming application subset. The metrics show that the rapid prototyping process may be sixteen times faster than a conventional process.

  10. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: (1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; (2) geophysical inversion routines which can be used to characterize physical systems; and (3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that

  11. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  12. Climate Data Provenance Tracking for Just-In-Time Computation

    Science.gov (United States)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  13. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real-time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.
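
    The following Python sketch shows the flavor of such a time-stamped event-queue simulation; the task model and cost parameters are illustrative assumptions, not SIMPAR's. Computation-finished and result-received events are drained from a priority queue keyed by time stamp, so WAN latency shows up directly in the simulated makespan.

```python
import heapq

def simulate(n_workers, n_tasks, compute_s=1.0, wan_latency_s=0.3):
    """Schedule tasks greedily; every result crosses the WAN back to the master."""
    events = []                      # min-heap of (time_stamp, kind, worker)
    free_at = [0.0] * n_workers
    for _ in range(n_tasks):
        w = min(range(n_workers), key=lambda i: free_at[i])
        done = free_at[w] + compute_s
        heapq.heappush(events, (done, "compute_done", w))
        heapq.heappush(events, (done + wan_latency_s, "result_received", w))
        free_at[w] = done            # worker is free once computation ends
    makespan = 0.0
    while events:                    # drain events in time-stamp order
        t, kind, w = heapq.heappop(events)
        if kind == "result_received":
            makespan = max(makespan, t)
    return makespan

# communication overhead caps the speedup from adding WAN workers
for n in (1, 4, 16):
    print(n, simulate(n_workers=n, n_tasks=32))
```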

  14. Computationally designed libraries for rapid enzyme stabilization

    NARCIS (Netherlands)

    Wijma, Hein J.; Floor, Robert J.; Jekel, Peter A.; Baker, David; Marrink, Siewert J.; Janssen, Dick B.

    The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants.

  15. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    Energy Technology Data Exchange (ETDEWEB)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu [Department of Chemistry, University of Washington, Seattle, Washington 98195 (United States)

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  16. Rapid monitoring of gaseous fission products in BWRs using a portable spectrometer

    International Nuclear Information System (INIS)

    Yeh, Wei-Wen; Lee, Cheng-Jong; Chen, Chen-Yi; Chung, Chien

    1996-01-01

    Rapid, quantitative determination of gaseous radionuclides is the most difficult task in the field of environmental monitoring for radiation. Although the identification of each gaseous radionuclide is relatively straightforward using its decay gamma ray as an index, quantitative measurement is hampered by time-consuming sample collection procedures, in particular for the radioactive noble gaseous fission products of krypton and xenon. In this work, a field gamma-ray spectrometer consisting of a high-purity germanium detector, a portable multichannel analyzer, and a notebook computer was used to conduct rapid scanning of radioactive krypton and xenon in the air around a nuclear facility.

  17. A technique to detect periodic and non-periodic ultra-rapid flux time variations with standard radio-astronomical data

    Science.gov (United States)

    Borra, Ermanno F.; Romney, Jonathan D.; Trottier, Eric

    2018-06-01

    We demonstrate that extremely rapid and weak periodic and non-periodic signals can easily be detected by using the autocorrelation of intensity as a function of time. We use standard radio-astronomical observations that have artificial periodic and non-periodic signals generated by the electronics of terrestrial origin. The autocorrelation detects weak signals that have small amplitudes because it averages over long integration times. Another advantage is that it allows a direct visualization of the shape of the signals, while it is difficult to see the shape with a Fourier transform. Although Fourier transforms can also detect periodic signals, a novelty of this work is that we demonstrate another major advantage of the autocorrelation, that it can detect non-periodic signals while the Fourier transform cannot. Another major novelty of our work is that we use electric fields taken in a standard format with standard instrumentation at a radio observatory and therefore no specialized instrumentation is needed. Because the electric fields are sampled every 15.625 ns, they therefore allow detection of very rapid time variations. Notwithstanding the long integration times, the autocorrelation detects very rapid intensity variations as a function of time. The autocorrelation could also detect messages from Extraterrestrial Intelligence as non-periodic signals.
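
    A minimal numpy sketch of the detection idea (parameters illustrative): a sinusoid roughly 20 dB below the noise floor is invisible in the raw intensity trace, but long-record averaging in the intensity autocorrelation suppresses the uncorrelated noise and the hidden period stands out.

```python
import numpy as np

rng = np.random.default_rng(1)
n, period, amp = 400_000, 500, 0.15     # power SNR ~ -19.5 dB, illustrative
t = np.arange(n)
intensity = rng.normal(0.0, 1.0, n) + amp * np.sin(2 * np.pi * t / period)

x = intensity - intensity.mean()
# circular autocorrelation via FFT; averaging over the long record
# suppresses the uncorrelated noise but not the repeating signal
acf = np.fft.irfft(np.abs(np.fft.rfft(x)) ** 2)
lag = period // 2 + int(np.argmax(acf[period // 2 : 3 * period]))
print(lag)   # ~500: the hidden period, invisible in the raw trace
```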

  18. Autotransplantation of immature third molars using a computer-aided rapid prototyping model: a report of 4 cases.

    Science.gov (United States)

    Jang, Ji-Hyun; Lee, Seung-Jong; Kim, Euiseong

    2013-11-01

    Autotransplantation of immature teeth can be an option for premature tooth loss in young patients as an alternative to immediately replacing teeth with fixed or implant-supported prostheses. The present case series reports 4 successful autotransplantation cases using computer-aided rapid prototyping (CARP) models with immature third molars. The compromised upper and lower molars (n = 4) of patients aged 15-21 years were transplanted with third molars using CARP models. Postoperatively, the pulp vitality and the development of the roots were examined clinically and radiographically. The patient follow-up period was 2-7.5 years after surgery. The long-term follow-up showed that all of the transplants were asymptomatic and functional. Radiographic examination indicated that the apices developed continuously and the root length and thickness increased. The final follow-up examination revealed that all of the transplants retained their vitality, and the apices were fully developed with normal periodontal ligaments and trabecular bony patterns. Based on long-term follow-up observations, our 4 cases of autotransplantation of immature teeth using CARP models resulted in favorable prognoses. The CARP model assisted in minimizing the extraoral time and the possible Hertwig epithelial root sheath injury of the transplanted tooth.

  19. Real-Time GPS Monitoring for Earthquake Rapid Assessment in the San Francisco Bay Area

    Science.gov (United States)

    Guillemot, C.; Langbein, J. O.; Murray, J. R.

    2012-12-01

    The U.S. Geological Survey Earthquake Science Center has deployed a network of eight real-time Global Positioning System (GPS) stations in the San Francisco Bay area and is implementing software applications to continuously evaluate the status of the deformation within the network. Real-time monitoring of the station positions is expected to provide valuable information for rapidly estimating source parameters should a large earthquake occur in the San Francisco Bay area. Because earthquake response applications require robust data access, as a first step we have developed a suite of web-based applications which are now routinely used to monitor the network's operational status and data streaming performance. The web tools provide continuously updated displays of important telemetry parameters such as data latency and receive rates, as well as source voltage and temperature information within each instrument enclosure. Automated software on the backend uses the streaming performance data to mitigate the impact of outages, radio interference and bandwidth congestion on deformation monitoring operations. A separate set of software applications manages the recovery of lost data due to faulty communication links. Displacement estimates are computed in real-time for various combinations of USGS, Plate Boundary Observatory (PBO) and Bay Area Regional Deformation (BARD) network stations. We are currently comparing results from two software packages (one commercial and one open-source) used to process 1-Hz data on the fly and produce estimates of differential positions. The continuous monitoring of telemetry makes it possible to tune the network to minimize the impact of transient interruptions of the data flow, from one or more stations, on the estimated positions. Ongoing work is focused on using data streaming performance history to optimize the quality of the position, reduce drift and outliers by switching to the best set of stations within the network, and

  20. Patient specific ankle-foot orthoses using rapid prototyping

    Directory of Open Access Journals (Sweden)

    Sivak Seth

    2011-01-01

    Background: Prefabricated orthotic devices are currently designed to fit a range of patients and therefore they do not provide individualized comfort and function. Custom-fit orthoses are superior to prefabricated orthotic devices from both of the above-mentioned standpoints. However, creating a custom-fit orthosis is a laborious and time-intensive manual process performed by skilled orthotists. Besides, adjustments made to both prefabricated and custom-fit orthoses are carried out in a qualitative manner, so both comfort and function can potentially suffer considerably. A computerized technique for fabricating patient-specific orthotic devices has the potential to provide excellent comfort and allow for changes in the standard design to meet the specific needs of each patient. Methods: In this paper, 3D laser scanning is combined with rapid prototyping to create patient-specific orthoses. A novel process was engineered to utilize patient-specific surface data of the patient anatomy as a digital input, manipulate the surface data to an optimal form using Computer Aided Design (CAD) software, and then download the digital output from the CAD software to a rapid prototyping machine for fabrication. Results: Two AFOs were rapidly prototyped to demonstrate the proposed process. Gait analysis data of a subject wearing the AFOs indicated that the rapid prototyped AFOs performed comparably to the prefabricated polypropylene design. Conclusions: The rapidly prototyped orthoses fabricated in this study provided good fit of the subject's anatomy compared to a prefabricated AFO while delivering comparable function (i.e., mechanical effect on the biomechanics of gait). The rapid fabrication capability is of interest because it has potential for decreasing fabrication time and cost, especially when a replacement of the orthosis is required.

  1. Static Schedulers for Embedded Real-Time Systems

    Science.gov (United States)

    1989-12-01

    Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers have put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems, so that we can determine whether the system, as designed, meets the required

  2. Rapid reformatting of cine CT data

    International Nuclear Information System (INIS)

    Wyatt, E.D.; Reynolds, R.A.

    1989-01-01

    Cine CT scanners acquire data sufficiently rapidly to freeze cardiac motion. Display hardware with sufficient high-speed computer memory permits instantaneous reformatting of sections at any orientation. Normal or abnormal cardiac motion may be studied interactively along any axial, sagittal, coronal, or oblique plane through the beating heart. Cine CT studies, consisting of eight levels through the heart with 8-mm interlevel spacing, acquired at 10 time intervals for a total of 80 sections, were acquired on an Imatron C-100 scanner and displayed. Each entire study was loaded into the internal display processor memory, permitting instantaneous recall without loss of spatial or density resolution. Results are presented.

  3. Characteristics of products generated by selective sintering and stereolithography rapid prototyping processes

    Science.gov (United States)

    Cariapa, Vikram

    1993-01-01

    The trend in the modern global economy towards free market policies has motivated companies to use rapid prototyping technologies not only to reduce product development cycle time but also to maintain their competitive edge. A rapid prototyping technology is one which combines computer-aided design with computer-controlled tracking of a focused high-energy source (e.g., lasers, heat) on modern ceramic powders, metallic powders, plastics, or photosensitive liquid resins in order to produce prototypes or models. At present, except for the process of shape melting, most rapid prototyping processes generate products that are only dimensionally similar to those of the desired end product. There is an urgent need, therefore, to enhance the understanding of the characteristics of these processes in order to realize their potential for production. Currently, the commercial market is dominated by four rapid prototyping processes, namely selective laser sintering, stereolithography, fused deposition modelling, and laminated object manufacturing. This phase of the research has focussed on the selective laser sintering and stereolithography rapid prototyping processes. A theoretical model for these processes is under development. Different rapid prototyping sites supplied test specimens (based on ASTM 638-84, Type I) that have been measured and tested to provide a database on surface finish, dimensional variation, and ultimate tensile strength. Further plans call for developing and verifying the theoretical models by carefully designed experiments. This will be a joint effort between NASA and other prototyping centers to generate a larger database, thus encouraging more widespread usage by product designers.

  4. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role. But it is not possible for standalone clouds to handle everything with increasing user demands. For scalability and better service provisioning, at times, clouds have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its early stages, and resource management is one of the key concerns to be addressed. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model, keeping in view different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  5. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method called the complex-plane strategy is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method called the multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  6. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
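
    A minimal sketch of the adaptive-operator idea, much simplified relative to the paper (independent tasks, makespan-only fitness, no precedence constraints; all parameters invented for illustration): each crossover operator keeps a success score, operators that produce children better than both parents are rewarded, and selection probability follows the scores.

```python
import random

random.seed(0)
TASKS = [random.randint(1, 9) for _ in range(30)]   # task lengths (illustrative)
N_VM, POP, GENS = 4, 40, 200

def makespan(chrom):
    """Fitness: completion time of the most loaded virtual machine."""
    load = [0] * N_VM
    for task, vm in zip(TASKS, chrom):
        load[vm] += task
    return max(load)

def one_point(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def uniform(a, b):
    return [random.choice(p) for p in zip(a, b)]

XOVERS, scores = [one_point, uniform], [1.0, 1.0]

def pick_xover():
    """Roulette-wheel choice of crossover operator by accumulated score."""
    r, acc = random.uniform(0, sum(scores)), 0.0
    for i, s in enumerate(scores):
        acc += s
        if r <= acc:
            return i
    return len(scores) - 1

pop = [[random.randrange(N_VM) for _ in TASKS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)
    children = []
    while len(children) < POP // 2:
        a, b = random.sample(pop[: POP // 2], 2)
        i = pick_xover()
        child = XOVERS[i](a, b)
        if random.random() < 0.2:                     # point mutation
            child[random.randrange(len(child))] = random.randrange(N_VM)
        if makespan(child) < min(makespan(a), makespan(b)):
            scores[i] += 0.1                          # reward successful operator
        children.append(child)
    pop = pop[: POP // 2] + children                  # elitist replacement
print(makespan(min(pop, key=makespan)), scores)
```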

  7. A computer-based time study system for timber harvesting operations

    Science.gov (United States)

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system runs on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  8. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  9. Comparative study of manufacturing condyle implant using rapid prototyping and CNC machining

    Science.gov (United States)

    Bojanampati, S.; Karthikeyan, R.; Islam, MD; Venugopal, S.

    2018-04-01

    Injuries to the cranio-maxillofacial area caused by road traffic accidents (RTAs), falls from heights, birth defects, metabolic disorders, and tumors affect a rising number of patients in the United Arab Emirates (UAE) and require maxillofacial surgery. Mandibular reconstruction poses a specific challenge in both functionality and aesthetics, and involves replacement of the damaged bone by a custom-made implant. Due to material, design cycle time, and manufacturing process time, such implants are in many instances not affordable to patients. In this paper, the feasibility of designing and manufacturing a low-cost, custom-made condyle implant is assessed using two different approaches: rapid prototyping and three-axis computer numerically controlled (CNC) machining. Two candidate rapid prototyping techniques are considered, namely fused deposition modeling (FDM) and three-dimensional printing followed by sand casting. The feasibility of the proposed manufacturing processes is evaluated based on manufacturing time, cost, quality, and reliability.

  10. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
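
    For readers unfamiliar with the scheme, here is a minimal 1D sketch of the unsplit MacCormack predictor-corrector (which VNAP2 applies in 2D to the Navier-Stokes equations), shown on the linear advection equation u_t + c u_x = 0 with a periodic domain; the grid and CFL values are illustrative.

```python
import numpy as np

nx, c, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)             # initial Gaussian pulse

for _ in range(int(round(0.4 / (c * dt)))):   # advect the pulse by 0.4
    # predictor: forward difference in space
    up = u - c * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field, then average
    u = 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))
print(round(x[np.argmax(u)], 2))              # pulse centre near 0.7
```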

  11. A Matter of Computer Time

    Science.gov (United States)

    Celano, Donna; Neuman, Susan B.

    2010-01-01

    Many low-income children do not have the opportunity to develop the computer skills necessary to succeed in our technological economy. Their only access to computers and the Internet--school, afterschool programs, and community organizations--is woefully inadequate. Educators must work to close this knowledge gap and to ensure that low-income…

  12. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
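
    A minimal numerical sketch of the product-integration recipe (the hazard functions are invented for illustration): for an illness-death model, the transition probability matrix is accumulated as a product of (I + A(t) dt) factors, and expected state occupation times follow by integrating the occupation probabilities.

```python
import numpy as np

# states: 0 = healthy, 1 = ill, 2 = dead (illness-death model)
def hazard_matrix(t):
    """Time-varying transition hazards (illustrative); rows sum to zero."""
    a01, a02, a12 = 0.10 * t, 0.02, 0.05 + 0.05 * t
    return np.array([[-(a01 + a02), a01,  a02],
                     [0.0,          -a12, a12],
                     [0.0,          0.0,  0.0]])

tmax, dt = 10.0, 0.001
P = np.eye(3)                       # transition probability matrix P(0, t)
occupation = np.zeros(3)            # expected time spent in each state
for t in np.arange(0.0, tmax, dt):
    occupation += P[0] * dt         # occupation probs, given start in state 0
    P = P @ (np.eye(3) + hazard_matrix(t) * dt)   # product-integration step
print(occupation, occupation.sum())  # occupation times sum to tmax
```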

  13. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  14. Real-time data-intensive computing

    Energy Technology Data Exchange (ETDEWEB)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  15. GRAPHIC, time-sharing magnet design computer programs at Argonne

    International Nuclear Information System (INIS)

    Lari, R.J.

    1974-01-01

    This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time-sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)

  16. Critical capacity, travel time delays and travel time distribution of rapid mass transit systems

    Science.gov (United States)

    Legara, Erika Fille; Monterola, Christopher; Lee, Kee Khoon; Hung, Gih Guang

    2014-07-01

    We set up a mechanistic agent-based model of a rapid mass transit system (RTS). Using anonymized smart fare card data from Singapore, we validate our model by reconstructing actual travel demand and duration-of-travel statistics. We subsequently use this model to investigate two phenomena that are known to significantly affect the dynamics within the RTS: (1) overloading in trains and (2) overcrowding on the RTS platforms. We demonstrate that by varying the loading capacity of trains, a tipping point emerges at which an exponential increase in the duration of travel time delays is observed. We also probe the impact on the rail system dynamics of three types of passenger growth distribution across stations: (i) Dirac delta, (ii) uniform and (iii) geometric, which is reminiscent of the effect of land use on transport. Under the assumption of a fixed loading capacity, we demonstrate the dependence of a given origin-destination (OD) pair on the flow volume of commuters in station platforms.
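
    The tipping-point mechanism can be illustrated with a toy queueing simulation (a minimal Python sketch, not the authors' agent-based model; the station count, arrival statistics and the 30% alighting fraction are invented parameters). Below a critical capacity, platform queues grow without bound and the mean boarding delay rises sharply:

        from collections import deque
        from statistics import mean
        import random

        def mean_wait(capacity, n_stations=10, mean_arrivals=40, ticks=500, seed=1):
            """Toy single-line loop: one train of fixed `capacity` serves every
            platform once per tick; passengers queue FIFO at each platform."""
            rng = random.Random(seed)
            queues = [deque() for _ in range(n_stations)]
            waits, load = [], 0
            for t in range(ticks):
                for q in queues:
                    q.extend([t] * rng.randint(0, 2 * mean_arrivals))  # stochastic arrivals
                    load = int(0.7 * load)                 # 30% of riders alight here
                    while q and load < capacity:
                        waits.append(t - q.popleft())      # boarding delay, in ticks
                        load += 1
            return mean(waits)

        for cap in (100, 120, 140, 170, 250):
            print(f"capacity {cap:4d} -> mean wait {mean_wait(cap):8.2f} ticks")

    With these invented rates the aggregate demand is about 400 boardings per tick, so delays blow up once the train's effective throughput falls below that, mirroring the tipping point the paper reports.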

  17. Relatively Inexpensive Rapid Prototyping of Small Parts

    Science.gov (United States)

    Swan, Scott A.

    2003-01-01

    Parts with complex three-dimensional shapes and with dimensions up to 8 by 8 by 10 in. (20.3 by 20.3 by 25.4 cm) can be made as unitary pieces of a room-temperature-curing polymer, with relatively little investment in time and money, by a process now in use at Johnson Space Center. The process is one of a growing number of processes and techniques that are known collectively as the art of rapid prototyping. The main advantages of this process over other rapid-prototyping processes are greater speed and lower cost: There is no need to make paper drawings and take them to a shop for fabrication, and thus no need for the attendant paperwork and organizational delays. Instead, molds for desired parts are made automatically on a machine that is guided by data from a computer-aided design (CAD) system and can reside in an engineering office.

  18. FRANTIC: a computer code for time dependent unavailability analysis

    International Nuclear Information System (INIS)

    Vesely, W.E.; Goldberg, F.F.

    1977-03-01

    The FRANTIC computer code evaluates the time dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time dependent modeling of periodic testing which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times and periodic equations are developed for the testing and repair contributions. Human errors and common mode failures can be included by assigning an appropriate constant probability for the contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be simply performed and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process.
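
    The flavor of the periodic-testing model can be sketched in a few lines (a simplified illustration assuming exponential failure times, perfect tests and negligible repair, with invented numbers; the full code additionally models test overrides, detection inefficiencies and test-caused failures):

        import math

        def unavailability(t, lam, test_interval, test_duration=0.0):
            """Instantaneous unavailability of a periodically tested component:
            unavailable during the test itself, otherwise the probability of
            having failed since the end of the last test."""
            phase = t % test_interval
            if phase < test_duration:
                return 1.0
            return 1.0 - math.exp(-lam * (phase - test_duration))

        def average_unavailability(lam, test_interval, test_duration=0.0, n=10_000):
            dt = test_interval / n
            return sum(unavailability(i * dt, lam, test_interval, test_duration)
                       for i in range(n)) * dt / test_interval

        lam, T = 1e-4, 720.0   # failures/hour and a monthly test interval
        print(f"average q = {average_unavailability(lam, T):.3e} (~ lam*T/2 = {lam*T/2:.1e})")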

  19. Rapid improvement teams.

    Science.gov (United States)

    Alemi, F; Moore, S; Headrick, L; Neuhauser, D; Hekelman, F; Kizys, N

    1998-03-01

    Suggestions, most of which are supported by empirical studies, are provided on how total quality management (TQM) teams can be used to bring about faster organizationwide improvements. Ideas are offered on how to identify the right problem, have rapid meetings, plan rapidly, collect data rapidly, and make rapid whole-system changes. Suggestions for identifying the right problem include (1) postpone benchmarking when problems are obvious, (2) define the problem in terms of customer experience so as not to blame employees or embed a solution in the problem statement, (3) communicate with the rest of the organization from the start, (4) state the problem from different perspectives, and (5) break large problems into smaller units. Suggestions for having rapid meetings include (1) choose a nonparticipating facilitator to expedite meetings, (2) meet with each team member before the team meeting, (3) postpone evaluation of ideas, and (4) rethink conclusions of a meeting before acting on them. Suggestions for rapid planning include reducing time spent on flowcharting by focusing on the future, not the present. Suggestions for rapid data collection include (1) sample patients for surveys, (2) rely on numerical estimates by process owners, and (3) plan for rapid data collection. Suggestions for rapid organizationwide implementation include (1) change membership on cross-functional teams, (2) get outside perspectives, (3) use unfolding storyboards, and (4) go beyond self-interest to motivate lasting change in the organization. Additional empirical investigations of time saved as a consequence of the strategies provided are needed. If organizations solve their problems rapidly, fewer unresolved problems may remain.

  20. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to model the river flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation, etc. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model building process.
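
    The structure of a first-order Sugeno ANFIS can be shown compactly (a minimal sketch, not the paper's Baitarani River model: one lagged input, two fixed Gaussian membership functions, and consequent parameters fitted by least squares, i.e. the LSE half of the usual hybrid-learning rule; the synthetic series and all constants are invented):

        import numpy as np

        def gauss(x, c, s):
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        # Toy lagged "flow" series: predict x[t] from x[t-1]
        rng = np.random.default_rng(0)
        x = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
        u, y = x[:-1], x[1:]

        # Layers 1-3: two Gaussian membership functions, normalized firing strengths
        centers, sigma = (u.min(), u.max()), 0.5 * (u.max() - u.min())
        w = np.stack([gauss(u, c, sigma) for c in centers])   # firing strengths
        wbar = w / w.sum(axis=0)                              # normalization

        # Layers 4-5: first-order consequents y_i = p_i*u + r_i, fitted by LSE
        A = np.column_stack([wbar[0] * u, wbar[0], wbar[1] * u, wbar[1]])
        theta, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ theta
        print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))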

  1. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to keep up to date. Computer people do not

  2. Dosimetric comparison between RapidArc and fixed gantry intensity modulated radiation therapy in treatment of liver carcinoma

    International Nuclear Information System (INIS)

    Ma Changsheng; Yin Yong; Liu Tonghai; Chen Jinhu; Sun Tao; Lin Xiutong

    2010-01-01

    Objective: To compare the dosimetric difference of RapidArc and fixed gantry IMRT for liver carcinoma. Methods: The CT data of 10 liver cancer patients were used to design 3 groups of treatment plans: an IMRT plan, a single-arc RapidArc plan (RA1), and a dual-arc RapidArc plan (RA2). The planning target volume (PTV) dose distribution, the organs at risk (OAR) dose, the normal tissue dose, monitor units (MU) and treatment time were compared. Results: The maximum dose of the PTV in the RA1 and RA2 plans was lower than that of IMRT (Z=-2.0990, -2.666, P<0.05). The RapidArc plans were lower in the V40 of the stomach and small bowel than the IMRT plan, but higher in the mean dose of the left kidney (Z=-1.988, -2.191, P<0.05). The V5, V10 and V15 of healthy tissue in the RapidArc plan groups were higher than those in the IMRT plan, while the V20, V25 and V30 of healthy tissue in the RapidArc plan groups were lower than those in the IMRT plan. The number of computed MU per fraction of the RapidArc plans was 40% or 46% of the IMRT plan, and the treatment time was 30% and 40% of IMRT. Conclusions: RapidArc showed improvements in conformity index and healthy tissue sparing with uncompromised target coverage. RapidArc could lead to fewer MU and shorter delivery time compared to IMRT. (authors)

  3. Computer Assisted Instructional Design for Computer-Based Instruction. Final Report. Working Papers.

    Science.gov (United States)

    Russell, Daniel M.; Pirolli, Peter

    Recent advances in artificial intelligence and the cognitive sciences have made it possible to develop successful intelligent computer-aided instructional systems for technical and scientific training. In addition, computer-aided design (CAD) environments that support the rapid development of such computer-based instruction have also been recently…

  4. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is assessed in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
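
    The Jacobi/Gauss-Seidel trade-off the authors measure can be reproduced in miniature (a 2D stand-in for their 3D problem, written in Python rather than MATLAB; grid size and iteration count are arbitrary). The Jacobi sweep vectorizes, and hence parallelizes, because every update reads only old values, while Gauss-Seidel reuses fresh values within a sweep and is inherently sequential here:

        import time
        import numpy as np

        def jacobi(f, h, iters=200):
            """Vectorized Jacobi sweeps for -laplace(u) = f on the unit square,
            with u = 0 on the boundary."""
            u = np.zeros_like(f)
            for _ in range(iters):
                u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                        u[1:-1, :-2] + u[1:-1, 2:] +
                                        h * h * f[1:-1, 1:-1])
            return u

        def gauss_seidel(f, h, iters=200):
            """In-place sweeps: each update immediately reuses new values."""
            u = np.zeros_like(f)
            n = f.shape[0]
            for _ in range(iters):
                for i in range(1, n - 1):
                    for j in range(1, n - 1):
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                          u[i, j-1] + u[i, j+1] + h * h * f[i, j])
            return u

        n = 65
        h = 1.0 / (n - 1)
        f = np.ones((n, n))
        for solver in (jacobi, gauss_seidel):
            t0 = time.perf_counter()
            solver(f, h)
            print(f"{solver.__name__:13s} {time.perf_counter() - t0:.3f} s")

    Gauss-Seidel needs fewer sweeps to reach a given residual, but each sweep costs far more in an interpreted loop, which is the same implementation-versus-convergence tension the paper explores.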

  5. Rapid freeze-drying cycle optimization using computer programs developed based on heat and mass transfer models and facilitated by tunable diode laser absorption spectroscopy (TDLAS).

    Science.gov (United States)

    Kuu, Wei Y; Nail, Steven L

    2009-09-01

    Computer programs in FORTRAN were developed to rapidly determine the optimal shelf temperature, T(f), and chamber pressure, P(c), that achieve the shortest primary drying time. The constraint for the optimization is to ensure that the product temperature profile, T(b), stays below the target temperature, T(target). Five percent mannitol was chosen as the model formulation. After obtaining the optimal sets of T(f) and P(c), each cycle was assigned a rank number according to the length of its drying time. Further optimization was achieved by dividing the drying time into a series of ramping steps for T(f), in a cascading manner (termed the cascading T(f) cycle), to further shorten the cycle time. To demonstrate the validity of the optimized T(f) and P(c), four cycles with different predicted lengths of drying time, along with the cascading T(f) cycle, were chosen for experimental runs. Tunable diode laser absorption spectroscopy (TDLAS) was used to continuously measure the sublimation rate. As predicted, maximum product temperatures were controlled slightly below the target temperature of -25 degrees C, and the cascading T(f)-ramping cycle was the most efficient cycle design. In addition, the experimental cycle rank order closely matches that determined by modeling.
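
    The optimization loop can be sketched as a constrained grid search (illustrative Python only: the `surrogate` function below is an invented stand-in for the heat- and mass-transfer model in the paper, and every coefficient, unit and range is hypothetical):

        import numpy as np

        def surrogate(Tf, Pc):
            """Invented stand-in for the heat/mass-transfer model: returns
            (primary_drying_time_h, max_product_temp_C). A real cycle design
            would call the validated model instead."""
            rate = 0.02 * (Tf + 40.0) * (0.15 / Pc) ** 0.3   # sublimation-rate proxy
            return 60.0 / max(rate, 1e-6), -40.0 + 0.8 * (Tf + 40.0) + 20.0 * Pc

        T_target = -25.0                              # product temperature limit
        best = None
        for Tf in np.arange(-35.0, 20.0, 1.0):        # candidate shelf temperatures
            for Pc in np.arange(0.05, 0.30, 0.01):    # candidate chamber pressures
                t_dry, T_b = surrogate(Tf, Pc)
                if T_b < T_target and (best is None or t_dry < best[0]):
                    best = (t_dry, Tf, Pc)

        t_dry, Tf, Pc = best
        print(f"shortest drying time {t_dry:.1f} h at Tf = {Tf:.0f} C, Pc = {Pc:.2f}")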

  6. Case studies in intelligent computing achievements and trends

    CERN Document Server

    Issac, Biju

    2014-01-01

    Although the field of intelligent systems has grown rapidly in recent years, there has been a need for a book that supplies a timely and accessible understanding of this important technology. Filling this need, Case Studies in Intelligent Computing: Achievements and Trends provides an up-to-date introduction to intelligent systems.This edited book captures the state of the art in intelligent computing research through case studies that examine recent developments, developmental tools, programming, and approaches related to artificial intelligence (AI). The case studies illustrate successful ma

  7. Design and implementation of a rapid-mixer flow cell for time-resolved infrared microspectroscopy

    International Nuclear Information System (INIS)

    Marinkovic, Nebojsa S.; Adzic, Aleksandar R.; Sullivan, Michael; Kovacs, Kevin; Miller, Lisa M.; Rousseau, Denis L.; Yeh, Syun-Ru; Chance, Mark R.

    2000-01-01

    A rapid mixer for the analysis of reactions in the millisecond and submillisecond time domains by Fourier-transform infrared microspectroscopy has been constructed. The cell was tested by examination of cytochrome-c folding kinetics. The device allows collection of full infrared spectral data on millisecond and faster time scales subsequent to chemical jump reaction initiation. The data quality is sufficiently good that spectral fitting techniques could be applied to analysis of the data. Thus, this method provides an advantage over kinetic measurements at single wavelengths using infrared laser or diode sources, particularly where band overlap exists.

  8. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and with non-polynomial-time reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
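
    For contrast with the invariant-based recursion, a naive exact computation already shows the exponential blow-up (a brute-force Python sketch of all-terminal edge reliability, enumerating all 2^|E| edge subsets; it is a baseline illustration, not Gilbert's formula):

        from itertools import combinations

        def all_terminal_reliability(n, edges, p):
            """Probability that the graph on n vertices stays connected when
            each edge independently survives with probability p. Cost grows
            as 2^|E|, the blow-up the invariant-based recursion avoids."""
            def connected(alive):
                seen, stack = {0}, [0]
                while stack:
                    u = stack.pop()
                    for a, b in alive:
                        v = b if a == u else a if b == u else None
                        if v is not None and v not in seen:
                            seen.add(v)
                            stack.append(v)
                return len(seen) == n

            total = 0.0
            for k in range(len(edges) + 1):
                for subset in combinations(edges, k):
                    if connected(subset):
                        total += p ** k * (1 - p) ** (len(edges) - k)
            return total

        cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
        print(all_terminal_reliability(4, cycle4, 0.9))  # p^4 + 4 p^3 (1-p) = 0.9477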

  9. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  10. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
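
    The re-orthonormalization step that both implementations accelerate can be shown on a tiny system (a standard Benettin-style Gram-Schmidt/QR computation of the Lyapunov spectrum, here for the 2D Henon map instead of a Lennard-Jones fluid; the parameters are the classic ones):

        import numpy as np

        def henon_lyapunov(n_steps=10_000, a=1.4, b=0.3):
            """Propagate tangent vectors with the map's Jacobian, re-orthonormalize
            by QR each step (the Gram-Schmidt step), and average the logs of the
            stretching factors on the diagonal of R."""
            x, y = 0.1, 0.1
            Q = np.eye(2)
            sums = np.zeros(2)
            for _ in range(n_steps):
                J = np.array([[-2 * a * x, 1.0],   # Jacobian of (1 - a*x^2 + y, b*x)
                              [b, 0.0]])
                x, y = 1 - a * x * x + y, b * x
                Q, R = np.linalg.qr(J @ Q)
                sums += np.log(np.abs(np.diag(R)))
            return sums / n_steps

        print(henon_lyapunov())   # roughly (0.42, -1.62); they sum to log(b) = -1.204

    For an N-particle fluid the 2x2 Jacobian becomes an N²-sized matrix product and QR, which is exactly the kernel the paper offloads to Scalapack or MAGMA.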

  11. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  12. Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.

    Science.gov (United States)

    Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom

    2018-01-01

    Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens where the optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria, the further the distance from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through a dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.

  13. Computing Refined Buneman Trees in Cubic Time

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

    Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n⁵) and a space consumption of O(n⁴). In this paper, we present an algorithm with running time O(n³) and space consumption O(n²). The improved complexity of our...

  14. Computational intelligence in time series forecasting theory and engineering applications

    CERN Document Server

    Palit, Ajoy K

    2005-01-01

    Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.

  15. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Kolmanovsky Ilya

    1998-01-01

    Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
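
    The backward set recursion at the heart of such algorithms is easy to see in one dimension (a minimal sketch for the scalar system x+ = a·x + w with |w| ≤ w_max and constraint |x| ≤ c0; intervals stand in for the polytopes used in the paper, and the Pontryagin set difference of intervals reduces to shrinking the endpoints):

        def max_invariant_interval(a, w_max, c0=1.0, tol=1e-10, max_iter=1000):
            """Maximal disturbance-invariant subset of |x| <= c0 for x+ = a*x + w,
            |w| <= w_max: iterate X_{k+1} = {x in X_k : a*x + w in X_k for every
            admissible w} until it converges or becomes empty."""
            c = c0
            for _ in range(max_iter):
                c_next = min(c, (c - w_max) / abs(a))  # shrink, intersect with X_k
                if c_next < 0:
                    return None                        # no nonempty invariant set
                if abs(c_next - c) < tol:
                    return (-c_next, c_next)
                c = c_next
            return (-c, c)

        print(max_invariant_interval(0.5, 0.3))   # (-1.0, 1.0): w_max <= 1 - a
        print(max_invariant_interval(0.5, 0.6))   # None: disturbance too large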

  16. 11th International Conference on Computer and Information Science

    CERN Document Server

    Computer and Information 2012

    2012-01-01

    The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence – quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life science, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Critical to both contributors and readers are the short publication time and world-wide distribution - this permits a rapid and broad dissemination of research results.   The purpose of the 11th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2012...

  17. Ultrasonic divergent-beam scanner for time-of-flight tomography with computer evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Glover, G H

    1978-03-02

    The rotatable ultrasonic divergent-beam scanner is designed for time-of-flight tomography with computer evaluation. It measures parameters that are important for characterizing the structure of soft tissue, e.g., the time of flight as a function of the velocity distribution along a given propagation path (the method is analogous to transaxial X-ray tomography). Moreover, it permits quantitative measurement of two-dimensional velocity distributions and may therefore be applied to serial examinations for detecting cancer of the breast. As computers, digital memories as well as analog-digital hybrid systems are suitable.

  18. Computing moment to moment BOLD activation for real-time neurofeedback

    Science.gov (United States)

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
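
    The idea of the incremental fit can be sketched with recursive least squares (an illustrative Python reading of the approach, not the authors' published code; the confound model, warm-up length and toy time series are invented):

        import numpy as np

        def incremental_glm_feedback(ys, xs, lam=1e3, warmup=10):
            """After each acquisition, update a recursive least-squares fit of the
            confound model and emit the measured-minus-expected intensity scaled
            by its predictive variance (a z-like feedback statistic)."""
            p = xs.shape[1]
            P = lam * np.eye(p)              # ~ inverse of X'X, regularized at start
            beta = np.zeros(p)
            s2 = 1.0                         # running residual variance
            z = np.zeros(len(ys))
            for n, (y, x) in enumerate(zip(ys, xs)):
                resid = y - x @ beta         # measured minus expected intensity
                denom = 1.0 + x @ P @ x      # predictive variance factor
                if n >= warmup:
                    z[n] = resid / np.sqrt(s2 * denom)
                    s2 += (resid * resid / denom - s2) / (n + 1)
                K = P @ x / denom            # RLS gain: fold this volume into the fit
                beta += K * resid
                P -= np.outer(K, x @ P)
            return z

        # Toy ROI series: baseline + drift, with an "activation" bump at 60..69
        t = np.arange(200)
        xs = np.column_stack([np.ones_like(t), t / 200.0])
        ys = 100 + 3 * xs[:, 1] + np.random.default_rng(0).normal(0, 0.5, t.size)
        ys[60:70] += 2.0
        z = incremental_glm_feedback(ys, xs)
        print("peak |z| at volume", int(np.abs(z).argmax()))   # lands in the bump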

  19. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to demonstrate key elements of feasibility for a high speed automated time domain terahertz computed axial tomography (TD-THz CT) non destructive...

  20. Identifying a Computer Forensics Expert: A Study to Measure the Characteristics of Forensic Computer Examiners

    Directory of Open Access Journals (Sweden)

    Gregory H. Carlton

    2010-03-01

    Full Text Available The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.

  1. More power : Accelerating sequential Computer Vision algorithms using commodity parallel hardware

    NARCIS (Netherlands)

    Jaap van de Loosdrecht; K. Dijkstra

    2014-01-01

    The last decade has seen increasing demand from industry for computerized visual inspection. Applications are rapidly becoming more complex, often with more demanding real-time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer

  2. Impact of rapid molecular diagnostic tests on time to treatment initiation and outcomes in patients with multidrug-resistant tuberculosis, Tamil Nadu, India.

    Science.gov (United States)

    Nair, Dina; Navneethapandian, Pooranaganga D; Tripathy, Jaya Prasad; Harries, Anthony D; Klinton, Joel S; Watson, Basilea; Sivaramakrishnan, Gomathi N; Reddy, Devarajulu S; Murali, Lakshmi; Natrajan, Mohan; Swaminathan, Soumya

    2016-09-01

    India is replacing culture and drug sensitivity testing (CDST) with rapid molecular tests for diagnosing MDR-TB. We assessed the impact of rapid tests on time to initiation of treatment and outcomes in patients with MDR-TB compared with CDST. A retrospective cohort study involving MDR-TB patients from six districts in Tamil Nadu state, who underwent CDST (2010-2011) and rapid tests (2012-2013). There were 135 patients in the CDST group and 389 in the rapid diagnostic test group. Median time from sputum receipt at the laboratory to initiation of MDR-TB treatment was 130 days (IQR 75-213) in the CDST group and 22 days (IQR 14-38) in the rapid diagnostic test group (p<0.001). Unfavourable treatment outcomes exceeded 30% in both groups, and missing data were higher with CDST (13%) than with rapid tests (3%). There were significantly higher risks of unfavourable treatment outcomes in males (aRR 1.3, 95% CI 1.1-1.5) and those with treatment initiation delays >30 days (aRR 1.3, 95% CI 1.0-1.6). Rapid molecular diagnostic tests shortened the time to initiate treatment, which was associated with reduced unfavourable outcomes in MDR-TB patients. This supports the policy to scale up these tests in India. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    Full Text Available This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume spreading algorithm is established to simulate the interior water filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with the complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
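
    The cone simplification makes the interior water-filling step closed-form (a small sketch under the paper's stated geometry; the rim radius, cone depth and volume below are invented numbers, and the spill value is what the improved D8 step would route to neighbouring cells):

        import math

        def depth_and_spill(V, R, H):
            """Water depth in a cell whose interior topography is simplified to
            an inverted circular cone (vertex down) with rim radius R and depth H.
            The wetted radius grows linearly with depth, so the stored volume is
            V = pi * R**2 * h**3 / (3 * H**2), inverted in closed form for h."""
            V_full = math.pi * R * R * H / 3.0
            if V >= V_full:                        # cell full: the surplus spills
                return H, V - V_full               # onward via the D8-style step
            return (3.0 * V * H * H / (math.pi * R * R)) ** (1.0 / 3.0), 0.0

        h, spill = depth_and_spill(V=5000.0, R=50.0, H=2.0)
        print(f"depth {h:.2f} m, spill {spill:.0f} m3")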

  4. Real-time PCR-based method for the rapid detection of extended RAS mutations using bridged nucleic acids in colorectal cancer.

    Science.gov (United States)

    Iida, Takao; Mizuno, Yukie; Kaizaki, Yasuharu

    2017-10-27

    Mutations in RAS and BRAF are predictors of the efficacy of anti-epidermal growth factor receptor (EGFR) therapy in patients with metastatic colorectal cancer (mCRC). Therefore, simple, rapid, cost-effective methods to detect these mutations in the clinical setting are greatly needed. In the present study, we evaluated BNA Real-time PCR Mutation Detection Kit Extended RAS (BNA Real-time PCR), a real-time PCR method that uses bridged nucleic acid clamping technology to rapidly detect mutations in RAS exons 2-4 and BRAF exon 15. Genomic DNA was extracted from 54 formalin-fixed paraffin-embedded (FFPE) tissue samples obtained from mCRC patients. Among the 54 FFPE samples, BNA Real-time PCR detected 21 RAS mutations (38.9%) and 5 BRAF mutations (9.3%), and the reference assay (KRAS Mutation Detection Kit and MEBGEN™ RASKET KIT) detected 22 RAS mutations (40.7%). The concordance rate of detected RAS mutations between the BNA Real-time PCR assay and the reference assays was 98.2% (53/54). The BNA Real-time PCR assay proved to be a more simple, rapid, and cost-effective method for detecting KRAS and RAS mutations compared with existing assays. These findings suggest that BNA Real-time PCR is a valuable tool for predicting the efficacy of early anti-EGFR therapy in mCRC patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Rapid Prototyping Enters Mainstream Manufacturing.

    Science.gov (United States)

    Winek, Gary

    1996-01-01

    Explains rapid prototyping, a process that uses computer-assisted design files to create a three-dimensional object automatically, speeding the industrial design process. Five commercially available systems and two emerging types--the 3-D printing process and repetitive masking and depositing--are described. (SK)

  6. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this Phase 2 project, we propose to develop, construct, and deliver to NASA a computed axial tomography time-domain terahertz (CT TD-THz) non destructive...

  7. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) obtained by applying a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc of the VPLM is generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretical formula-predicted value.
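
    The notion of a reliable computation time can be demonstrated directly (a minimal sketch for the fixed-parameter logistic map rather than the paper's VPLM: iterate in float64 alongside a 100-digit mpmath reference and report the first step at which the two trajectories disagree by more than a tolerance; x0, r and the tolerance are arbitrary choices):

        from mpmath import mp, mpf

        def reliable_time(x0=0.2, r=4.0, tol=1e-3, max_steps=200):
            """First iteration at which the double-precision trajectory of
            x -> r*x*(1-x) departs from a high-precision reference by more
            than `tol`; both start from the same double-rounded x0."""
            mp.dps = 100                      # 100-digit reference trajectory
            x_double, x_ref = x0, mpf(x0)
            for n in range(1, max_steps + 1):
                x_double = r * x_double * (1.0 - x_double)
                x_ref = r * x_ref * (1 - x_ref)
                if abs(x_double - float(x_ref)) > tol:
                    return n
            return max_steps

        print("Tc ~", reliable_time(), "iterations")

    At r = 4 the map's Lyapunov exponent is ln 2, so the initial rounding error of about 1e-16 reaches 1e-3 after roughly 43 doublings, which is the order of magnitude the sketch reports.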

  8. 78 FR 38949 - Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response

    Science.gov (United States)

    2013-06-28

    ... exposed to various forms of cyber attack. In some cases, attacks can be thwarted through the use of...-3383-01] Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response... systems will be successfully attacked. When a successful attack occurs, the job of a Computer Security...

  9. CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series

    International Nuclear Information System (INIS)

    Antonopoulos Domis, M.

    1978-03-01

    The program CROSAT computes auto- and cross-spectra, transfer and coherence functions directly from two discrete time series, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use, the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith and, perhaps with minor modifications, with any other hardware system. (author)
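
    The quantities CROSAT computes map onto standard FFT-based estimators (a sketch using scipy.signal rather than the original FORTRAN; the two synthetic "detector" series and all parameters are invented):

        import numpy as np
        from scipy import signal

        fs = 100.0                                   # sampling rate, Hz
        rng = np.random.default_rng(0)
        x = rng.standard_normal(8192)                # first discrete time series
        y = np.roll(x, 5) + 0.5 * rng.standard_normal(8192)   # delayed, noisy copy

        f, Pxx = signal.welch(x, fs=fs, nperseg=512)           # auto-spectrum
        f, Pxy = signal.csd(x, y, fs=fs, nperseg=512)          # cross-spectrum
        f, Cxy = signal.coherence(x, y, fs=fs, nperseg=512)    # coherence function
        H = Pxy / Pxx                                # transfer function estimate

        # The phase slope of the cross-spectrum recovers the 5-sample delay
        tau = -np.polyfit(f[1:50], np.unwrap(np.angle(Pxy[1:50])), 1)[0] / (2 * np.pi)
        print(f"mean coherence {Cxy.mean():.2f}, estimated delay {tau * fs:.1f} samples")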

  10. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  11. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  12. It is Time to Ban Rapid Weight Loss from Combat Sports.

    Science.gov (United States)

    Artioli, Guilherme G; Saunders, Bryan; Iglesias, Rodrigo T; Franchini, Emerson

    2016-11-01

    Most competitions in combat sports are divided into weight classes, theoretically allowing for fairer and more evenly contested disputes between athletes of similar body size, strength and agility. It has been well documented that most athletes, regardless of the combat sports discipline, reduce significant amounts of body weight in the days prior to competition to qualify for lighter weight classes. Rapid weight loss is characterised by the reduction of a significant amount of body weight (typically 2-10 %, although larger reductions are often seen) in a few days prior to weigh-in (mostly in the last 2-3 days) achieved by a combination of methods that include starvation, severe restriction of fluid intake and intentional sweating. In doing so, athletes try to gain a competitive advantage against lighter, smaller and weaker opponents. Such a drastic and rapid weight reduction is only achievable via a combination of aggressive strategies that lead to hypohydration and starvation. The negative impact of these procedures on health is well described in the literature. Although the impact of rapid weight loss on performance is debated, there remains robust evidence showing that rapid weight loss may not impair performance, and translates into an actual competitive advantage. In addition to the health and performance implications, rapid weight loss clearly breaches fair play and stands against the spirit of the sport because an athlete unwilling to compete having rapidly reduced weight would face unfair contests against opponents who are 'artificially' bigger and stronger. The World Anti-Doping Agency Code states that a prohibited method must meet at least two of the following criteria: (1) enhances performance; (2) endangers an athlete's health; and (3) violates the spirit of the sport. We herein argue that rapid weight loss clearly meets all three criteria and, therefore, should be banned from the sport. To quote the World Anti-Doping Agency Code, this would "protect

  13. TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization

    Science.gov (United States)

    Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi

    2011-12-01

    We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
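
    The subspace machinery underlying the proposal can be shown in its classic narrowband DOA form (a generic MUSIC sketch for a uniform linear array, not TSaT-MUSIC itself; the sensor count, source angles, noise level and snapshot count are invented):

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
            """Classic MUSIC: project array steering vectors onto the noise
            subspace of the sample covariance; pseudospectrum peaks mark DOAs."""
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
            En = vecs[:, : X.shape[0] - n_sources]   # noise subspace
            m = np.arange(X.shape[0])
            a = np.exp(-2j * np.pi * d * np.outer(m, np.sin(np.deg2rad(grid))))
            return grid, 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

        # 8-element array, two sources at -20 and 35 degrees, 200 snapshots
        rng = np.random.default_rng(1)
        m, doas = np.arange(8), np.deg2rad([-20.0, 35.0])
        A = np.exp(-2j * np.pi * 0.5 * np.outer(m, np.sin(doas)))
        S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
        N = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
        grid, p = music_spectrum(A @ S + 0.1 * N, n_sources=2)
        est = []
        for i in np.argsort(p)[::-1]:                # greedy peak picking
            if all(abs(grid[i] - g) > 5 for g in est):
                est.append(grid[i])
            if len(est) == 2:
                break
        print("estimated DOAs:", sorted(est))        # close to [-20, 35]

    The eigendecomposition of the covariance is what dominates the cost as the number of incoming waves grows, which is the bottleneck the proposed algorithm is designed to sidestep.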

  14. Retrieving 3D Wind Field from Phased Array Radar Rapid Scans

    Directory of Open Access Journals (Sweden)

    Xiaobin Qiu

    2013-01-01

    Full Text Available The previous two-dimensional simple adjoint method for retrieving horizontal wind field from a time sequence of single-Doppler scans of reflectivity and/or radial velocity is further developed into a new method to retrieve both horizontal and vertical winds at high temporal and spatial resolutions. This new method performs two steps. First, the horizontal wind field is retrieved on the conical surface at each tilt (elevation angle) of the radar scan. Second, the vertical velocity field is retrieved in a vertical cross-section along the radar beam with the horizontal velocity given from the first step. The method is applied to phased array radar (PAR) rapid scans of the storm winds and reflectivity in a strong microburst event and is shown to be able to retrieve the three-dimensional wind field around a targeted downdraft within the storm that subsequently produced a damaging microburst. The method is computationally very efficient and can be used for real-time applications with PAR rapid scans.

  15. Rapid hyperspectral image classification to enable autonomous search systems

    Directory of Open Access Journals (Sweden)

    Raj Bridgelal

    2016-11-01

    Full Text Available The emergence of lightweight full-frame hyperspectral cameras is destined to enable autonomous search vehicles in the air, on the ground and in water. Self-contained and long-endurance systems will yield important new applications, for example, in emergency response and the timely identification of environmental hazards. One missing capability is rapid classification of hyperspectral scenes so that search vehicles can immediately take actions to verify potential targets. Onsite verifications minimise false positives and preclude the expense of repeat missions. Verifications will require enhanced image quality, which is achievable by either moving closer to the potential target or by adjusting the optical system. Such a solution, however, is currently impractical for small mobile platforms with finite energy sources. Rapid classifications with current methods demand large computing capacity that will quickly deplete the on-board battery or fuel. To develop the missing capability, the authors propose a low-complexity hyperspectral image classifier that approaches the performance of prevalent classifiers. This research determines that the new method will require at least 19-fold less computing capacity than the prevalent classifier. To assess relative performances, the authors developed a benchmark that compares a statistic of library endmember separability in their respective feature spaces.
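
    One concrete example of a classifier in the low-complexity regime the authors target is the spectral angle mapper, which needs only one dot product per pixel per class (a generic sketch, not the paper's proposed classifier; the synthetic cube and endmembers are invented):

        import numpy as np

        def sam_classify(cube, endmembers):
            """Spectral Angle Mapper: assign each pixel to the endmember with
            the smallest spectral angle."""
            H, W, B = cube.shape
            pix = cube.reshape(-1, B)
            pix = pix / np.linalg.norm(pix, axis=1, keepdims=True)
            lib = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
            angles = np.arccos(np.clip(pix @ lib.T, -1.0, 1.0))
            return angles.argmin(axis=1).reshape(H, W)

        # Toy cube: two synthetic endmember spectra plus per-band noise
        rng = np.random.default_rng(0)
        bands = np.linspace(0.0, 1.0, 64)
        ems = np.stack([np.exp(-(bands - 0.3) ** 2 / 0.02),
                        np.exp(-(bands - 0.7) ** 2 / 0.02)])
        labels = rng.integers(0, 2, size=(32, 32))
        cube = ems[labels] + 0.05 * rng.standard_normal((32, 32, 64))
        print("accuracy:", (sam_classify(cube, ems) == labels).mean())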

  16. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  17. Rapid evolution in insect pests: the importance of space and time in population genomics studies.

    Science.gov (United States)

    Pélissié, Benjamin; Crossley, Michael S; Cohen, Zachary Paul; Schoville, Sean D

    2018-04-01

    Pest species in agroecosystems often exhibit patterns of rapid evolution to environmental and human-imposed selection pressures. Although the role of adaptive processes is well accepted, few insect pests have been studied in detail and most research has focused on selection at insecticide resistance candidate genes. Emerging genomic datasets provide opportunities to detect and quantify selection in insect pest populations, and address long-standing questions about mechanisms underlying rapid evolutionary change. We examine the strengths of recent studies that stratify population samples both in space (along environmental gradients and comparing ancestral vs. derived populations) and in time (using chronological sampling, museum specimens and comparative phylogenomics), resulting in critical insights on evolutionary processes, and providing new directions for studying pests in agroecosystems. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    Science.gov (United States)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  19. Rapid detection of undesired cosmetic ingredients by matrix-assisted laser desorption ionization time-of-flight mass spectrometry.

    Science.gov (United States)

    Ouyang, Jie; An, Dongli; Chen, Tengteng; Lin, Zhiwei

    2017-10-01

    In recent years, cosmetic industry profits soared due to the widespread use of cosmetics, which resulted in illicit manufacturers and products of poor quality. Therefore, the rapid and accurate detection of the composition of cosmetics has become crucial. At present, numerous methods, such as gas chromatography and liquid chromatography-mass spectrometry, were available for the analysis of cosmetic ingredients. However, these methods present several limitations, such as failure to perform comprehensive and rapid analysis of the samples. Compared with other techniques, matrix-assisted laser desorption ionization time-of-flight mass spectrometry offered the advantages of wide detection range, fast speed and high accuracy. In this article, we briefly summarized how to select a suitable matrix and adjust the appropriate laser energy. We also discussed the rapid identification of undesired ingredients, focusing on antibiotics and hormones in cosmetics.

  20. Early Flood Detection for Rapid Humanitarian Response: Harnessing Near Real-Time Satellite and Twitter Signals

    Directory of Open Access Journals (Sweden)

    Brenden Jongman

    2015-10-01

    Full Text Available Humanitarian organizations have a crucial role in response and relief efforts after floods. The effectiveness of disaster response is contingent on accurate and timely information regarding the location, timing and impacts of the event. Here we show how two near-real-time data sources, satellite observations of water coverage and flood-related social media activity from Twitter, can be used to support rapid disaster response, using case studies in the Philippines and Pakistan. For these countries we analyze information from disaster response organizations, the Global Flood Detection System (GFDS) satellite flood signal, and flood-related Twitter activity analysis. The results demonstrate that these sources of near-real-time information can be used to gain a quicker understanding of the location, the timing, as well as the causes and impacts of floods. In terms of location, we produce daily impact maps based on both satellite information and social media, which can dynamically and rapidly outline the affected area during a disaster. In terms of timing, the results show that GFDS and/or Twitter signals flagging ongoing or upcoming flooding are regularly available one to several days before the event was reported to humanitarian organizations. In terms of event understanding, we show that both GFDS and social media can be used to detect and understand unexpected or controversial flood events, for example due to the sudden opening of hydropower dams or the breaching of flood protection. The performance of the GFDS and Twitter data for early detection and location mapping is mixed, depending on the specific hydrological circumstances (GFDS) and social media penetration (Twitter). Further research is needed to improve the interpretation of the GFDS signal in different situations, and to improve the pre-processing of social media data for operational use.

  1. Computer-related standards for the petroleum industry

    International Nuclear Information System (INIS)

    Winczewski, L.M.

    1992-01-01

    Rapid application of the computer to all areas of the petroleum industry is straining the capabilities of corporations and vendors to efficiently integrate computer tools into the work environment. Barriers to this integration arose from decades of competitive development of proprietary application formats, along with compilation of databases in isolation. Rapidly emerging industry-wide standards relating to computer applications and data management are poised to topple these barriers. This paper identifies the most active players within a rapidly evolving group of cooperative standardization activities sponsored by the petroleum industry. Summarized are their objectives, achievements, current activities and relationships to each other. The trends of these activities are assessed and projected.

  2. On some methods for improving time of reachability sets computation for the dynamic system control problem

    Science.gov (United States)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computer time needed to calculate reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms that use the data structures from the first approach. Within this framework, a parallel algorithm for approximate reachability set calculation on a computer with an SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing and also show how the computing time depends on the data structure used.

  3. Forensic Computing (Dagstuhl Seminar 13482)

    OpenAIRE

    Freiling, Felix C.; Hornung, Gerrit; Polcák, Radim

    2014-01-01

    Forensic computing (sometimes also called digital forensics, computer forensics or IT forensics) is a branch of forensic science pertaining to digital evidence, i.e., any legal evidence that is processed by digital computer systems or stored on digital storage media. Forensic computing is a new discipline evolving within the intersection of several established research areas such as computer science, computer engineering and law. Forensic computing is rapidly gaining importance since the

  4. Simulation-based assessment of thermal aware computation of a bespoke data centre

    NARCIS (Netherlands)

    Zavrel, V.; Torrens, J.Ignacio; Hensen, J.L.M.; Heiselberg, P.K.

    2016-01-01

    The role of Data Centres (DCs) as global electricity consumers is growing rapidly due to the exponential increase in computational demand that modern times require. Control strategies that minimize energy consumption while guaranteeing optimal operation conditions in DCs are essential to achieve

  5. Right-hand-side updating for fast computing of genomic breeding values

    NARCIS (Netherlands)

    Calus, M.P.L.

    2014-01-01

    Since both the number of SNPs (single nucleotide polymorphisms) used in genomic prediction and the number of individuals used in training datasets are rapidly increasing, there is an increasing need to improve the efficiency of genomic prediction models in terms of computing time and memory (RAM)

  6. A computationally simple and robust method to detect determinism in a time series

    DEFF Research Database (Denmark)

    Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.

    2006-01-01

    We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than that of the signal, and that remains robust for time-varying signals. The IS ...

  7. A multi-directional rapidly exploring random graph (mRRG) for protein folding

    KAUST Repository

    Nath, Shuvra Kanti; Thomas, Shawna; Ekenna, Chinwe; Amato, Nancy M.

    2012-01-01

    Modeling large-scale protein motions, such as those involved in folding and binding interactions, is crucial to better understanding not only how proteins move and interact with other molecules but also how proteins misfold, thus causing many devastating diseases. Robotic motion planning algorithms, such as Rapidly Exploring Random Trees (RRTs), have been successful in simulating protein folding pathways. Here, we propose a new multi-directional Rapidly Exploring Random Graph (mRRG) specifically tailored for proteins. Unlike traditional RRGs, which only expand a parent conformation in a single direction, our strategy expands the parent conformation in multiple directions to generate new samples. The resulting samples are connected to the parent conformation and its nearest neighbors. By leveraging multiple directions, mRRG can model the protein motion landscape with reduced computational time compared to several other robotics-based methods for small to moderate-sized proteins. Our results on several proteins agree with experimental hydrogen out-exchange, pulse-labeling, and Φ-value analyses. We also show that mRRG covers the conformation space better than the other computational methods. Copyright © 2012 ACM.
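
    The abstract gives enough of the expansion rule to sketch its skeleton. The following toy version works in a 2-D stand-in for conformation space; the step size, number of directions, neighbor count and the absence of any energy-based acceptance test are illustrative simplifications, not the paper's actual parameters.

```python
# Toy sketch of the mRRG expansion idea: expand a parent node in several
# random directions and connect each new sample to the parent and to its
# nearest neighbors. All parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
STEP, N_DIRS, K_NEIGHBORS = 0.1, 4, 3

nodes = [np.zeros(2)]            # conformation samples (2-D stand-in)
edges = []                       # graph edges as index pairs

def nearest(q, k):
    """Indices of the k nodes closest to q."""
    d = [np.linalg.norm(q - n) for n in nodes]
    return list(np.argsort(d)[:k])

for _ in range(200):
    parent = int(rng.integers(len(nodes)))
    for _ in range(N_DIRS):                        # multi-directional expansion
        direction = rng.normal(size=2)
        q_new = nodes[parent] + STEP * direction / np.linalg.norm(direction)
        idx = len(nodes)
        nodes.append(q_new)
        edges.append((parent, idx))                # connect to the parent
        for nb in nearest(q_new, K_NEIGHBORS):     # and to nearest neighbors
            if nb != idx:
                edges.append((int(nb), idx))

print(f"{len(nodes)} samples, {len(edges)} edges in the toy roadmap")
```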

  8. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main modules are designed by analyzing the processing steps and numerical calculations the algorithms have in common. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the most time-consuming modules, namely the two-dimensional FFT/IFFT and the complex-number arithmetic. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithmic generality, hardware realizability and high efficiency.
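
    The time-frequency pipeline named in the abstract (forward FFT, filtering, inverse FFT) has a compact software analogue. The sketch below is not the paper's hardware design; it only illustrates the FFT-centred computation pattern the proposed modules accelerate, using Wiener deconvolution with an assumed Gaussian blur and noise level.

```python
# Software analogue of FFT-based image restoration: Wiener deconvolution
# in the frequency domain. Blur kernel and SNR are illustrative assumptions.
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Restore `blurred` given a point-spread function `psf` of the same shape."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# demo on a synthetic 64x64 image blurred by a Gaussian kernel
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print("max restoration error:", np.abs(restored - img).max())
```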

  9. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    Science.gov (United States)

    He, Aiguo

    2011-01-01

    Computer systems are useful for improving real-time, interactive distance education, especially when a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions in the virtual classrooms. The problem is that within…

  10. Rapid Spontaneously Resolving Acute Subdural Hematoma

    Science.gov (United States)

    Gan, Qi; Zhao, Hexiang; Zhang, Hanmei; You, Chao

    2017-01-01

    Introduction: This study reports a rare case of a rapidly and spontaneously resolving acute subdural hematoma. In addition, an analysis of potential clues to the phenomenon is presented with a review of the literature. Patient Presentation: A 1-year-and-2-month-old boy fell from a height of approximately 2 m. The patient was in a superficial coma with a Glasgow Coma Scale score of 8 when he was transferred to the authors’ hospital. Computed tomography revealed the presence of an acute subdural hematoma with a midline shift beyond 1 cm. His guardians refused invasive interventions and chose conservative treatment. Repeat imaging after 15 hours showed evident resolution of the hematoma and midline reversion. Progressive magnetic resonance imaging demonstrated the complete resolution of the hematoma, without redistribution to a remote site. Conclusions: Even though this phenomenon has a low incidence, the probability of a rapidly and spontaneously resolving acute subdural hematoma should be considered when patients present with the following characteristics: children or elderly individuals suffering from mild to moderate head trauma; stable or rapidly recovered consciousness; and a simple acute subdural hematoma of moderate thickness with a particularly low-density band in computed tomography scans. PMID:28468224

  11. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularity

  12. A real-time computational model for estimating kinematics of ankle ligaments.

    Science.gov (United States)

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan

    2016-01-01

    An accurate assessment of ankle ligament kinematics is crucial in understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform the real-time assessment that would allow an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. The model takes as input three position variables that can be measured by sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparing it with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains from the model are in concurrence with those from the published studies but are sensitive to the ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.
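
    The abstract describes the model's interface precisely enough to sketch the core computation: three rotational DOFs in, ligament lengths and strains out. The minimal sketch below rotates one ligament insertion point and reports length and strain; the attachment coordinates, rest length and axis convention are invented placeholders, not the paper's anatomical data.

```python
# Minimal sketch of the length/strain computation such a model performs.
import numpy as np

def rot_xyz(rx, ry, rz):
    """Rotation matrix from three rotational DOFs (radians, x-y-z order)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

origin_tibia = np.array([0.0, 0.0, 0.04])      # placeholder attachment (m)
insertion_talus = np.array([0.01, 0.02, 0.0])  # placeholder attachment (m)
rest_length = np.linalg.norm(insertion_talus - origin_tibia)

# foot rotated 15 degrees about x (assumed plantarflexion axis)
R = rot_xyz(np.radians(15), 0.0, 0.0)
length = np.linalg.norm(R @ insertion_talus - origin_tibia)
strain = (length - rest_length) / rest_length
print(f"ligament length {length * 1000:.1f} mm, strain {strain:+.1%}")
```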

  13. Real time simulation of large systems on mini-computer

    International Nuclear Information System (INIS)

    Nakhle, Michel; Roux, Pierre.

    1979-01-01

    Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The integration step size of the usual methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the time and memory requirements of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1. [fr]

  14. Real-time computing in environmental monitoring of a nuclear power plant

    International Nuclear Information System (INIS)

    Deme, S.; Lang, E.; Nagy, Gy.

    1987-06-01

    A real-time computing method is described for calculating the environmental radiation exposure due to a nuclear power plant, both during normal operation and in case of accident. The effects of the Gaussian plume are recalculated every ten minutes, based on meteorological parameters measured at heights of 20 and 120 m as well as on emission data. During normal operation the quantity of radioactive material released through the stacks is measured and registered, while in an accident the source strength is unknown and the calculated relative data are normalized to the values measured at the eight environmental monitoring stations. The doses due to noble gases and to dry and wet deposition, as well as the time integral of the 131I concentration, are calculated and stored by a professional personal computer for 720 points in the environment within an 11 km radius. (author)
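
    For reference, the record's plume computation rests on the standard ground-reflected Gaussian plume formula. The sketch below is a generic textbook version, not the plant's implementation: the power-law dispersion coefficients and the example release scenario are crude assumptions.

```python
# Generic Gaussian plume concentration with ground reflection; dispersion
# laws and the example release are illustrative assumptions.
import numpy as np

def plume_concentration(Q, u, x, y, z, H):
    """Concentration (Bq/m^3) at (x, y, z) for source strength Q (Bq/s),
    wind speed u (m/s) and effective release height H (m)."""
    sigma_y = 0.08 * x ** 0.9          # assumed lateral dispersion law
    sigma_z = 0.06 * x ** 0.85         # assumed vertical dispersion law
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# example: 1e9 Bq/s from a 100 m stack, 5 m/s wind, receptor 2 km downwind
print(f"{plume_concentration(1e9, 5.0, 2000.0, 0.0, 0.0, 100.0):.3e} Bq/m^3")
```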

  15. Research on rapid agile metrology for manufacturing based on real-time multitask operating system

    Science.gov (United States)

    Chen, Jihong; Song, Zhen; Yang, Daoshan; Zhou, Ji; Buckley, Shawn

    1996-10-01

    Rapid agile metrology for manufacturing (RAMM) using multiple non-contact sensors is likely to remain a growing trend in manufacturing. High-speed inspection systems for manufacturing are characterized by multiple tasks implemented in parallel and real-time events which occur simultaneously. In this paper, we introduce a real-time operating system into RAMM research. A general task model based on class-based object-oriented technology is proposed. A general multitask framework for a typical RAMM system using OPNet is discussed. Finally, an application example of a machine which inspects parts held on a carrier strip is described. With the RTOS and OPNet, this machine can measure two dimensions of the contacts at 300 parts/second.

  16. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, and the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.

  17. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Full Text Available Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, and the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.

  18. Computations of concentration of radon and its decay products against time. Computer program; Obliczanie koncentracji radonu i jego produktow rozpadu w funkcji czasu. Program komputerowy

    Energy Technology Data Exchange (ETDEWEB)

    Machaj, B. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    This research aims to develop a device for continuous monitoring of radon in air by measuring the alpha activity of radon and its short-lived decay products. The variation of the alpha activity of radon and its daughters influences the measured results, so the measurement requires knowledge of how the concentrations of radon and its decay products vary with time. A computer program in the Turbo Pascal language was therefore developed to perform these computations using the known decay relations, the program being adapted for IBM PC computers. The program enables computation of the activity of {sup 222}Rn and its daughter products {sup 218}Po, {sup 214}Pb, {sup 214}Bi and {sup 214}Po every 1 min within the period of 0-255 min, for any state of radioactive equilibrium between the radon and its daughter products. The program also permits computing the alpha activity of {sup 222}Rn + {sup 218}Po + {sup 214}Po against time and the total alpha activity over a selected interval of time. The results of the computations are stored on the computer hard disk in ASCII format and can be used by a graphics program, e.g. DrawPerfect, to make diagrams. The equations employed for the computation of the alpha activity of radon and its decay products, as well as a description of the program functions, are given. (author). 2 refs, 4 figs.
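
    The decay-chain computation the record describes follows directly from the Bateman equations. A minimal modern sketch is shown below (in Python rather than the original Turbo Pascal), solving dN/dt = AN with a matrix exponential; the half-lives are approximate literature values, and Po-214, with its microsecond half-life, is treated as in equilibrium with Bi-214.

```python
# Activities of Rn-222 and its short-lived daughters over time, starting
# from pure radon. Half-lives are approximate; Po-214 (~164 us) is taken
# to follow Bi-214 instantaneously.
import numpy as np
from scipy.linalg import expm

half_lives_s = np.array([3.8235 * 86400,   # Rn-222
                         3.05 * 60,        # Po-218
                         26.8 * 60,        # Pb-214
                         19.7 * 60])       # Bi-214 (Po-214 follows at once)
lam = np.log(2) / half_lives_s

# chain matrix: each daughter is fed by the decay of its parent
A = np.diag(-lam) + np.diag(lam[:-1], k=-1)

N0 = np.array([1e6, 0.0, 0.0, 0.0])        # initial atoms: pure Rn-222
for t_min in (0, 30, 60, 120, 255):
    N = expm(A * t_min * 60.0) @ N0
    activity = lam * N                     # activity = lambda * N
    alpha = activity[0] + activity[1] + activity[3]  # Rn + Po-218 + Po-214
    print(f"t = {t_min:3d} min   total alpha activity = {alpha:.3e} Bq")
```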

  19. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  20. Electronic patient-reported data capture as a foundation of rapid learning cancer care.

    Science.gov (United States)

    Abernethy, Amy P; Ahmad, Asif; Zafar, S Yousuf; Wheeler, Jane L; Reese, Jennifer Barsky; Lyerly, H Kim

    2010-06-01

    "Rapid learning healthcare" presents a new infrastructure to support comparative effectiveness research. By leveraging heterogeneous datasets (eg, clinical, administrative, genomic, registry, and research), health information technology, and sophisticated iterative analyses, rapid learning healthcare provides a real-time framework in which clinical studies can evaluate the relative impact of therapeutic approaches on a diverse array of measures. This article describes an effort, at 1 academic medical center, to demonstrate what rapid learning healthcare might look like in operation. The article describes the process of developing and testing the components of this new model of integrated clinical/research function, with the pilot site being an academic oncology clinic and with electronic patient-reported outcomes (ePROs) being the foundational dataset. Steps included: feasibility study of the ePRO system; validation study of ePRO collection across 3 cancers; linking ePRO and other datasets; implementation; stakeholder alignment and buy in, and; demonstration through use cases. Two use cases are presented; participants were metastatic breast cancer (n = 65) and gastrointestinal cancer (n = 113) patients at 2 academic medical centers. (1) Patient-reported symptom data were collected with tablet computers; patients with breast and gastrointestinal cancer indicated high levels of sexual distress, which prompted multidisciplinary response, design of an intervention, and successful application for funding to study the intervention's impact. (2) The system evaluated the longitudinal impact of a psychosocial care program provided to patients with breast cancer. Participants used tablet computers to complete PRO surveys; data indicated significant impact on psychosocial outcomes, notably distress and despair, despite advanced disease. Results return to the clinic, allowing iterative update and evaluation. An ePRO-based rapid learning cancer clinic is feasible, providing

  1. In vitro ceramic scaffold mineralization: comparison between histological and micro-computed tomographical analysis

    NARCIS (Netherlands)

    Thimm, B.W.; Wechsler, O.; Bohner, M.; Müller, R.; Hofmann, S.

    2013-01-01

    The porous structure of beta-tricalcium phosphate (b-TCP) scaffolds was assessed by conventional histomorphometry and micro-computed tomography (micro-CT) to evaluate the substitutability of time-consuming histomorphometry by rapid micro-CT. Extracellular matrix mineralization on human

  2. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar

    2012-01-01

    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...

  3. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as the practical skills needed to apply this understanding in real-world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concepts of geometric modeling, Hermite and Bezier spline curve theory, and three-dimensional surface theories, as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  4. The Educator´s Approach to Media Training and Computer Games within Leisure Time of School-children

    OpenAIRE

    MORAVCOVÁ, Dagmar

    2009-01-01

    The paper describes possible ways of approaching computer game playing as part of the leisure time of school-children and deals with the significance of media training in leisure time. It first specifies the concept of leisure time and its functions, then shows some positive and negative effects of the media. It further describes classical computer games, the problem of excessive computer game playing, and means of prevention. The paper also deals with the educator's personality and the importance of ...

  5. Rapid MR imaging

    International Nuclear Information System (INIS)

    Edelman, R.R.; Buxton, R.B.; Brady, T.J.

    1988-01-01

    Conventional magnetic resonance (MR) imaging methods typically require several minutes to produce an image, but the periods of respiration, cardiac motion and peristalsis are on the order of seconds or less. The need to reduce motion artifact, as well as the need to reduce imaging time for patient comfort and efficiency, has provided a strong impetus for the development of rapid imaging methods. For abdominal imaging, motion artifacts due to respiration can be significantly reduced by collecting the entire image during one breath hold. For other applications, such as following the kinetics of administered contrast agents, rapid imaging is essential to achieve adequate time resolution. A shorter imaging time entails a cost in image signal/noise (S/N), but improvements in recent years in magnet homogeneity and in gradient and radiofrequency coil design have led to steady improvements in S/N and consequently in image quality. For many clinical applications the available S/N is greater than needed, and a trade-off of lower S/N for a shorter imaging time is acceptable. In this chapter, the authors consider the underlying principles of rapid imaging as well as clinical applications of these methods. The bulk of this review concentrates on short-TR imaging, but methods that provide a more modest decrease in imaging time, as well as those that dramatically shorten the imaging time to tens of milliseconds, are also discussed

  6. Threshold resummation of the rapidity distribution for Higgs production at NNLO +NNLL

    Science.gov (United States)

    Banerjee, Pulak; Das, Goutam; Dhani, Prasanna K.; Ravindran, V.

    2018-03-01

    We present a formalism that resums threshold-enhanced logarithms to all orders in perturbative QCD for the rapidity distribution of any colorless particle produced in hadron colliders. We achieve this by exploiting the factorization properties and K+G equations satisfied by the soft and virtual parts of the cross section. We compute for the first time compact and most general expressions in two-dimensional Mellin space for the resummed coefficients. Using various state-of-the-art multiloop and multileg results, we demonstrate the numerical impact of our resummed results up to next-to-next-to-leading order for the rapidity distribution of the Higgs boson at the LHC. We find that the inclusion of these threshold logarithms through resummation improves the reliability of the perturbative predictions.

  7. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and testing of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  8. A Real-Time Plagiarism Detection Tool for Computer-Based Assessments

    Science.gov (United States)

    Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P.

    2018-01-01

    Aim/Purpose: The aim of this article is to develop a tool to detect plagiarism in real time amongst students being evaluated for learning in a computer-based assessment setting. Background: Cheating or copying all or part of source code of a program is a serious concern to academic institutions. Many academic institutions apply a combination of…

  9. Instructional Advice, Time Advice and Learning Questions in Computer Simulations

    Science.gov (United States)

    Rey, Gunter Daniel

    2010-01-01

    Undergraduate students (N = 97) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without instructional advice) x 2 (with or without time advice) x 2…

  10. Verification of computer system PROLOG - software tool for rapid assessments of consequences of short-term radioactive releases to the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kiselev, Alexey A.; Krylov, Alexey L.; Bogatov, Sergey A. [Nuclear Safety Institute (IBRAE), Bolshaya Tulskaya st. 52, 115191, Moscow (Russian Federation)

    2014-07-01

    In case of nuclear and radiation accidents, emergency response authorities require a tool for rapid assessment of possible consequences. One of the most significant problems is the lack of data on the initial state of an accident. This lack can be especially critical if the accident occurs in a location that was not thoroughly studied beforehand (during transportation of radioactive materials, for example). One possible solution is a hybrid method in which a model that enables rapid assessments from a reasonable minimum of input data is used conjointly with observed data that can be collected shortly after the accident. The model is used to estimate the parameters of the source and uncertain meteorological parameters on the basis of the observed data. For example, the field of fallout density can be observed and measured within hours after an accident. After that, the same model with the estimated parameters is used to assess doses and the necessity of recommended and mandatory countermeasures. The computer system PROLOG was designed to solve this problem. It is based on the widely used Gaussian model. The standard Gaussian model is supplemented with several sub-models that take into account: polydisperse aerosols, aerodynamic shading from buildings in the vicinity of the place of the accident, terrain orography, the initial size of the radioactive cloud, the effective height of the release, and the influence of countermeasures on the radiation doses to humans. It uses modern GIS technologies and can use web map services. To verify the ability of PROLOG to solve the problem, it is necessary to test its ability to reproduce the parameters of real past accidents. Verification of the computer system on the data of the Chazhma Bay accident (Russian Far East, 1985) was published previously. In this work, verification was carried out on the basis of observed contamination from the Kyshtym disaster (PA Mayak, 1957) and the Tomsk accident (1993). Observations of Sr-90

  11. Alternative majority-voting methods for real-time computing systems

    Science.gov (United States)

    Shin, Kang G.; Dolter, James W.

    1989-01-01

    Two techniques that provide a compromise between the high time overhead in maintaining synchronous voting and the difficulty of combining results in asynchronous voting are proposed. These techniques are specifically suited for real-time applications with a single-source/single-sink structure that need instantaneous error masking. They provide a compromise between a tightly synchronized system, in which the synchronization overhead can be quite high, and an asynchronous system, which lacks suitable algorithms for combining the output data. Both quorum-majority voting (QMV) and compare-majority voting (CMV) are most applicable to distributed real-time systems with single-source/single-sink tasks. All real-time systems eventually have to resolve their outputs into a single action at some stage. The development of the advanced information processing system (AIPS) and other similar systems serves to emphasize the importance of these techniques. Time bounds suggest that it is possible to reduce the overhead for quorum-majority voting to below that for synchronous voting. All the bounds assume that the computation phase is nonpreemptive and that there is no multitasking.
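
    The paper's QMV and CMV protocols are more involved than can be reconstructed from this summary, but both build on the basic N-modular masking step sketched here as a toy illustration.

```python
# Toy majority voter: masks a single faulty replica (e.g., 2-of-3 TMR).
from collections import Counter

def majority_vote(replica_outputs):
    """Return the majority value, raising if no strict majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many disagreeing replicas")
    return value

print(majority_vote([42, 42, 7]))   # the faulty third replica is masked -> 42
```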

  12. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while the utilization remained equal at 82%; the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.
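
    The study's simulation model itself is not reproduced in the abstract; the toy sketch below only illustrates the kind of capacity experiment such a simulation enables, comparing routine access time with and without slots reserved for urgent requests. Demand, capacity and the policy are invented numbers.

```python
# Toy slot-allocation simulation of CT access time under two policies.
import random

random.seed(1)
DAYS, SLOTS_PER_DAY = 400, 20

def simulate(reserved):
    """Mean routine access time (days) under a reserved-slot policy."""
    queue, waits = [], []
    for day in range(DAYS):
        urgent = random.randint(0, 6)             # urgent arrivals today
        queue += [day] * random.randint(10, 18)   # routine arrivals today
        if reserved:
            capacity = SLOTS_PER_DAY - reserved   # fixed split of capacity
        else:
            capacity = SLOTS_PER_DAY - urgent     # urgent preempt routine slots
        served, queue = queue[:capacity], queue[capacity:]
        waits += [day - d for d in served]
    return sum(waits) / len(waits)

for r in (0, 4):
    print(f"reserved urgent slots = {r}: mean routine access {simulate(r):.1f} days")
```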

  13. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while the utilization remained equal at 82%; the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.

  14. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real-time computer systems are increasingly used for safety-critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control and control logic for fuel handling systems. The most frequent cause of faults in safety-critical real-time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety-critical real-time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can easily be designed from the model of the requirement specification. Z and B are popular languages used for modeling requirement specifications. A typical safety-critical real-time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. The modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized

  15. Rapid and efficient radiosynthesis of [123I]I-PK11195, a single photon emission computed tomography tracer for peripheral benzodiazepine receptors

    International Nuclear Information System (INIS)

    Pimlott, Sally L.; Stevenson, Louise; Wyper, David J.; Sutherland, Andrew

    2008-01-01

    Introduction: [123I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [123I]I-PK11195 in order to develop a rapid and efficient method that obtains [123I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [123I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [123I]I-PK11195. Conclusions: [123I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [123I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved

  16. Rapid prototyping modelling in oral and maxillofacial surgery: A two year retrospective study.

    Science.gov (United States)

    Suomalainen, Anni; Stoor, Patricia; Mesimäki, Karri; Kontio, Risto K

    2015-12-01

    The use of rapid prototyping (RP) models in medicine to construct bony models is increasing. The aim of this study was to evaluate retrospectively the indications for the use of RP models in oral and maxillofacial surgery at Helsinki University Central Hospital during 2009-2010. The computed tomography (CT) examination method used, multislice CT (MSCT) or cone beam CT (CBCT), was also evaluated. In total, 114 RP models were fabricated for 102 patients. The mean age of the patients at the time of the production of the model was 50.4 years. The indications for the modelling included malignant lesions (29%), secondary reconstruction (25%), prosthodontic treatment (22%), orthognathic surgery or asymmetry (13%), benign lesions (8%), and TMJ disorders (4%). MSCT examination was used in 92 cases and CBCT examination in 22 cases. Most of the models (75%) were conventional hard tissue models. Models with a colored tumour or other structure(s) of interest were ordered in 24% of cases. Two of the 114 models were soft tissue models. The main benefit of the models was in treatment planning and in connection with the production of pre-bent plates or custom-made implants. RP models both facilitate and improve treatment planning and intraoperative efficiency. Keywords: rapid prototyping, radiology, computed tomography, cone beam computed tomography.

  17. The relationship between TV/computer time and adolescents' health-promoting behavior: a secondary data analysis.

    Science.gov (United States)

    Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee

    2008-03-01

    Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to poorer health status or unhealthy behaviors among adolescents. However, studies of the relationships between watching TV/playing computer games and adolescents' adoption of health-promoting behavior are limited. This study aimed to discover the relationship between time spent watching TV and on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis of part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends, respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends, respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being boys, being overweight, living in a rural area, and being middle-school students were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.

  18. Rapid phenotyping of crop root systems in undisturbed field soils using X-ray computed tomography.

    Science.gov (United States)

    Pfeifer, Johannes; Kirchgessner, Norbert; Colombi, Tino; Walter, Achim

    2015-01-01

    X-ray computed tomography (CT) has become a powerful tool for root phenotyping. Compared to rather classical, destructive methods, CT offers various advantages. In pot experiments the growth and development of the same individual root can be followed over time, and in addition the unaltered configuration of the 3D root system architecture (RSA) interacting with a real field soil matrix can be studied. Yet the throughput, which is essential for a more widespread application of CT in basic research or breeding programs, suffers from the lack of rapid and standardized segmentation methods to extract root structures. With available methods, root segmentation is done to a large extent manually, as it requires much interactive parameter optimization and interpretation, and is therefore very time-consuming. Based on commercially available software, this paper presents a protocol that is faster, more standardized and more versatile compared to existing segmentation methods, particularly if used to analyse field samples collected in situ. To the knowledge of the authors, this is the first study attempting to develop a comprehensive segmentation method suitable for comparatively large columns sampled in situ, which contain complex, not necessarily connected root systems from multiple plants grown in undisturbed field soil. Root systems from several crops were sampled in situ, and the CT-volumes determined with the presented method were compared to the root dry matter of washed root samples. A highly significant (P < 0.01) and strong correlation (R² = 0.84) was found, demonstrating the value of the presented method in the context of field research. Subsequent to segmentation, a method for the measurement of root thickness distribution was used. Root thickness is a central RSA trait for various physiological research questions, such as root growth in compacted soil or under oxygen-deficient soil conditions, but is hardly assessable in high throughput until today, due

  19. Rapid process development of chromatographic process using direct analysis in real time mass spectrometry as a process analytical technology tool.

    Science.gov (United States)

    Yan, Binjun; Chen, Teng; Xu, Zhilin; Qu, Haibin

    2014-06-01

    The concept of quality by design (QbD) is widely applied in the process development of pharmaceuticals. However, the additional cost and time have caused some resistance to QbD implementation. To show a possible solution, this work proposed a rapid process development method which used direct analysis in real time mass spectrometry (DART-MS) as a process analytical technology (PAT) tool for studying the chromatographic process of Ginkgo biloba L. as an example. The breakthrough curves were rapidly determined by DART-MS at-line. A high correlation coefficient of 0.9520 was found between the concentrations of ginkgolide A determined by DART-MS and by HPLC. Based on the PAT tool, the impacts of the process parameters on the adsorption capacity were discovered rapidly; the adsorption capacity decreased with increasing flow rate. This work has shown the feasibility and advantages of integrating PAT into QbD implementation for rapid process development. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    Full Text Available In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system helps form the basis for the study of computer science in secondary school.
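
    The abstract does not list the paper's specific operations, so the following shows one classic trick of this kind as an assumed example: squaring a number ending in 5 by multiplying the leading digits n by n + 1 and appending 25, a two-step mental algorithm.

```python
# One rapid mental computation operation written out as an algorithm:
# to square a number ending in 5, multiply the leading digits n by n + 1
# and append 25 (e.g., 85^2: 8 * 9 = 72, so 7225).
def square_ending_in_5(m):
    """Square an integer ending in 5."""
    assert m % 10 == 5
    n = m // 10
    return n * (n + 1) * 100 + 25

for m in (15, 35, 85, 105):
    print(m, "squared is", square_ending_in_5(m), "==", m * m)
```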

  1. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  2. Cryptography and computational number theory

    CERN Document Server

    Shparlinski, Igor; Wang, Huaxiong; Xing, Chaoping; Workshop on Cryptography and Computational Number Theory, CCNT'99

    2001-01-01

    This volume contains the refereed proceedings of the Workshop on Cryptography and Computational Number Theory, CCNT'99, which was held in Singapore during the week of November 22-26, 1999. The workshop was organized by the Centre for Systems Security of the National University of Singapore. We gratefully acknowledge the financial support from the Singapore National Science and Technology Board under grant number RP960668/M. The idea for this workshop grew out of the recognition of the recent, rapid development in various areas of cryptography and computational number theory. The event followed the concept of the research programs at such well-known research institutions as the Newton Institute (UK), Oberwolfach and Dagstuhl (Germany), and Luminy (France). Accordingly, there were only invited lectures at the workshop, with plenty of time for informal discussions. It was hoped, and successfully achieved, that the meeting would encourage and stimulate further research in information and computer s...

  3. Practical clinical applications of the computer in nuclear medicine

    International Nuclear Information System (INIS)

    Price, R.R.; Erickson, J.J.; Patton, J.A.; Jones, J.P.; Lagan, J.E.; Rollo, F.D.

    1978-01-01

    The impact of the computer on the practice of nuclear medicine has been felt primarily in the area of rapid dynamic studies. At this time it is difficult to find a clinic which routinely performs computer processing of static images. The general purpose digital computer is a sophisticated and flexible instrument. The number of applications for which one can use the computer to augment data acquisition, analysis, or display is essentially unlimited. In this light, the purpose of this exhibit is not to describe all possible applications of the computer in nuclear medicine but rather to illustrate those applications which have generally been accepted as practical in the routine clinical environment. Specifically, we have chosen examples of computer augmented cardiac, and renal function studies as well as examples of relative organ blood flow studies. In addition, a short description of basic computer components and terminology along with a few examples of non-imaging applications are presented

  4. Rapid world modeling

    International Nuclear Information System (INIS)

    Little, Charles; Jensen, Ken

    2002-01-01

    Sandia National Laboratories has designed and developed systems capable of large-scale, three-dimensional mapping of unstructured environments in near real time. This mapping technique is called rapid world modeling and has proven invaluable when used by prototype systems consisting of sensory detection devices mounted on mobile platforms. These systems can be deployed into previously unmapped environments and transmit real-time 3-D visual images to operators located remotely. This paper covers a brief history of the rapid world modeling system, its implementation on mobile platforms, and the current state of the technology. Applications to the nuclear power industry are discussed. (author)

  5. Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application

    Directory of Open Access Journals (Sweden)

    Molina Gary N Garcia

    2003-01-01

    Full Text Available Brain-computer interfacing is a growing field of interest in human-computer interaction, with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments on two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as for training with and without feedback, were also developed. The obtained results show that it is possible to achieve good classification of simple mental tasks, in view of command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.

  6. Computational model for real-time determination of tritium inventory in a detritiation installation

    International Nuclear Information System (INIS)

    Bornea, Anisia; Stefanescu, Ioan; Zamfirache, Marius; Stefan, Iuliana; Sofalca, Nicolae; Bidica, Nicolae

    2008-01-01

    Full text: At ICIT Rm. Valcea an experimental pilot plant was built with the main objective of developing a technology for the detritiation of heavy water processed in the CANDU-type reactors of the nuclear power plant at Cernavoda, Romania. Since the aspects related to safeguards and safety are of great importance for such a detritiation installation, a complex computational model has been developed. The model allows real-time calculation of the tritium inventory in a working installation. The applied detritiation technology is catalyzed isotopic exchange coupled with cryogenic distillation. Computational models for non-steady working conditions have been developed for each isotopic exchange process. By coupling these processes, the tritium inventory can be determined in real time. The computational model was developed based on the experience gained with the pilot installation. The model uses a set of parameters specific to the isotopic exchange processes. These parameters were experimentally determined in the pilot installation. The model is included in the monitoring system and uses as input data the parameters acquired in real time from the automation system of the pilot installation. A friendly interface has been created to visualize the final results as data or graphs. (authors)

  7. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study in which standard PC hardware and Real-Time Linux are applied to the real-time computer simulation of nuclear simulator software is presented in this paper. The feasibility prototype was established with the existing software of the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we have identified that the approach can enable computer-based predictive simulation, owing to both the remarkable improvement in real-time performance and the reduced effort required for real-time implementation under standard PC hardware and Real-Time Linux environments

  9. Facing Two Rapidly Spreading Internet Worms

    CERN Multimedia

    IT Department

    2009-01-01

    The Internet is currently facing a growing number of computer infections due to two rapidly spreading worms. The "Conficker" and "Downadup" worms have infected an estimated 1.1 million PCs in a 24-hour period, bringing the total number of infected computers to 3.5 million [1]. Via a single USB stick, these worms were also responsible for the infection of about 40 laptops at the last EGEE conference in Istanbul. In order to reduce the impact of these worms on CERN Windows computers, the Computer Security Team has suggested several preventive measures described here. Disabling the Windows AutoRun and AutoPlay Features The Computer Security Team and the IT/IS group have decided to disable the "AutoRun" and "AutoPlay" functionality on all centrally-managed Windows computers at CERN. When inserting CDs, DVDs or USB sticks into a PC, "AutoRun" and "AutoPlay" are responsible for automatically playing music or films stored on these media, or ...

  10. Effects on mortality, treatment, and time management as a result of routine use of total body computed tomography in blunt high-energy trauma patients.

    Science.gov (United States)

    van Vugt, Raoul; Kool, Digna R; Deunk, Jaap; Edwards, Michael J R

    2012-03-01

    Currently, total body computed tomography (TBCT) is being rapidly implemented in the evaluation of trauma patients. With this review, we aim to evaluate the clinical implications (mortality, change in treatment, and time management) of the routine use of TBCT in adult blunt high-energy trauma patients compared with a conservative approach using conventional radiography, ultrasound, and selective computed tomography. A literature search for original studies on TBCT in blunt high-energy trauma patients was performed. Two independent observers included studies concerning mortality, change of treatment, and/or time management as outcome measures. For each article, relevant data were extracted and analyzed. In addition, the quality according to the Oxford levels of evidence was assessed. From 183 articles initially identified, the observers included nine original studies in consensus. One of three studies described a significant difference in mortality; four described a change of treatment in 2% to 27% of patients because of the use of TBCT. Five studies found a gain in time with the use of immediate routine TBCT. Eight studies scored a level of evidence of 2b and one of 3b. The current literature has a predominantly suboptimal design to prove conclusively that the routine use of TBCT results in improved survival of blunt high-energy trauma patients. TBCT can lead to a change of treatment and improves time intervals in the emergency department as compared with its selective use.

  11. Innovative procedure for computer-assisted genioplasty: three-dimensional cephalometry, rapid-prototyping model and surgical splint.

    Science.gov (United States)

    Olszewski, R; Tranduy, K; Reychler, H

    2010-07-01

    The authors present a new procedure of computer-assisted genioplasty. They determined the anterior, posterior and inferior limits of the chin in relation to the skull and face with the newly developed and validated three-dimensional cephalometric planar analysis (ACRO 3D). Virtual planning of the osteotomy lines was carried out with Mimics (Materialise) software. The authors built a three-dimensional rapid-prototyping multi-position model of the chin area from a medical low-dose CT scan. The transfer of virtual information to the operating room consisted of two elements. First, the titanium plates were pre-bent on the 3D RP model. Second, a surgical guide was manufactured for the transfer of the osteotomy lines and the positions of the screws to the operating room. The authors present the first case of the use of this model on a patient. The postoperative results are promising, and the technique is fast and easy to use. More patients are needed for a definitive clinical validation of this procedure. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  13. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.
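
    The interference-rejection idea behind orthogonal frequency design can be illustrated numerically: if each camera's modulation frequency completes an integer number of cycles within the exposure, a lock-in correlation against one camera's reference cancels the other camera's light. Below is a minimal single-pixel sketch in Python; the frequencies and exposure time are illustrative values assumed for the demo, not parameters from the paper.

        import numpy as np

        # Single-pixel lock-in demodulation with two light sources present.
        T = 1e-3                                  # exposure time (s), assumed
        t = np.linspace(0, T, 100_000, endpoint=False)
        f1, f2 = 20e6, 21e6                       # both complete integer cycles in T
        own = np.cos(2 * np.pi * f1 * t + 0.7)    # return from this camera's source
        other = np.cos(2 * np.pi * f2 * t)        # interference from a second camera
        ref_i = np.cos(2 * np.pi * f1 * t)        # in-phase reference
        ref_q = np.sin(2 * np.pi * f1 * t)        # quadrature reference

        for name, sig in [("own source", own), ("interference", other)]:
            i, q = np.mean(sig * ref_i), np.mean(sig * ref_q)
            print(f"{name}: recovered amplitude = {2 * np.hypot(i, q):.3f}")

    Correlating against f1 recovers the full amplitude (1.000) of the camera's own return while the f2 source averages to zero over the exposure, which is why frequency-orthogonal cameras can operate simultaneously.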

  14. International assessment of functional computer abilities

    NARCIS (Netherlands)

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on goals of computer education

  15. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms.

    Science.gov (United States)

    Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian

    2017-09-29

    Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, considering only genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null hypothesis, which avoids the costly computation of matrix inversions. Additional tricks are a rearrangement of the iteration order through the different "omics" layers and an implementation of the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/.
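
    The speed trick named in the abstract, testing the interaction coefficient through a correlation rather than refitting the full regression, can be sketched in a few lines. The Python sketch below assumes the simplest model y ~ x1 + x2 + x1:x2 with no further covariates and uses Frisch-Waugh-Lovell residualization; it illustrates the statistical idea only and is not the package's C++ implementation.

        import numpy as np
        from scipy import stats

        def interaction_pvalue(y, x1, x2):
            """P-value for the x1*x2 term in y ~ x1 + x2 + x1*x2, obtained from a
            correlation test on residuals (Frisch-Waugh-Lovell) instead of a full
            regression fit per model."""
            n = len(y)
            Z = np.column_stack([np.ones(n), x1, x2])        # intercept + main effects
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
            rw = x1 * x2 - Z @ np.linalg.lstsq(Z, x1 * x2, rcond=None)[0]
            r = np.corrcoef(ry, rw)[0, 1]                    # partial correlation
            df = n - 4                                       # four fitted coefficients
            t = r * np.sqrt(df / (1.0 - r * r))
            return 2.0 * stats.t.sf(abs(t), df)

        rng = np.random.default_rng(0)
        x1, x2 = rng.normal(size=500), rng.normal(size=500)
        y = 0.5 * x1 * x2 + rng.normal(size=500)             # true interaction present
        print(interaction_pvalue(y, x1, x2))                 # small p-value expected

    The t-test on the partial correlation of the residuals is algebraically identical to the t-test on the interaction coefficient, so no per-model matrix inversion is needed.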

  16. Cloud Computing: An Overview

    Directory of Open Access Journals (Sweden)

    Libor Sarga

    2012-10-01

    Full Text Available As cloud computing is gaining acclaim as a cost-effective alternative to acquiring processing resources for corporations, scientific applications and individuals, various challenges are rapidly coming to the fore. While academia struggles to procure a concise definition, corporations are more interested in competitive advantages it may generate and individuals view it as a way of speeding up data access times or a convenient backup solution. Properties of the cloud architecture largely preclude usage of existing practices while achieving end-users’ and companies’ compliance requires considering multiple infrastructural as well as commercial factors, such as sustainability in case of cloud-side interruptions, identity management and off-site corporate data handling policies. The article overviews recent attempts at formal definitions of cloud computing, summarizes and critically evaluates proposed delimitations, and specifies challenges associated with its further proliferation. Based on the conclusions, future directions in the field of cloud computing are also briefly hypothesized to include deeper focus on community clouds and bolstering innovative cloud-enabled platforms and devices such as tablets, smart phones, as well as entertainment applications.

  17. Cochrane Rapid Reviews Methods Group to play a leading role in guiding the production of informed high-quality, timely research evidence syntheses.

    Science.gov (United States)

    Garritty, Chantelle; Stevens, Adrienne; Gartlehner, Gerald; King, Valerie; Kamel, Chris

    2016-10-28

    Policymakers and healthcare stakeholders are increasingly seeking evidence to inform the policymaking process, and often use existing or commissioned systematic reviews to inform decisions. However, the methodologies that make systematic reviews authoritative take time, typically 1 to 2 years to complete. Outside the traditional SR timeline, "rapid reviews" have emerged as an efficient tool to get evidence to decision-makers more quickly. However, the use of rapid reviews does present challenges. To date, there has been limited published empirical information about this approach to compiling evidence. Thus, it remains a poorly understood and ill-defined set of diverse methodologies with various labels. In recent years, the need to further explore rapid review methods, characteristics, and their use has been recognized by a growing network of healthcare researchers, policymakers, and organizations, several with ties to Cochrane, which is recognized as representing an international gold standard for high-quality, systematic reviews. In this commentary, we introduce the newly established Cochrane Rapid Reviews Methods Group developed to play a leading role in guiding the production of rapid reviews given they are increasingly employed as a research synthesis tool to support timely evidence-informed decision-making. We discuss how the group was formed and outline the group's structure and remit. We also discuss the need to establish a more robust evidence base for rapid reviews in the published literature, and the importance of promoting registration of rapid review protocols in an effort to promote efficiency and transparency in research. As with standard systematic reviews, the core principles of evidence-based synthesis should apply to rapid reviews in order to minimize bias to the extent possible. The Cochrane Rapid Reviews Methods Group will serve to establish a network of rapid review stakeholders and provide a forum for discussion and training. By facilitating

  18. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately, as sketched below. Detailed data on each task were collected, and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
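
    The abstract does not spell out the two phases, so the following sketch takes "two-phase regression" in its generic, segmented-linear sense: pick the breakpoint that minimizes the total squared error, fit a line to each phase, and extrapolate the second line to full progress to predict the finishing time. Function and variable names are hypothetical, and this is a generic reading rather than the paper's exact MapReduce-specific formulation.

        import numpy as np

        def predict_finish_time(t, progress):
            """Generic two-phase (segmented) linear regression: choose the
            breakpoint with the lowest total squared error, then extrapolate
            the second-phase line to progress = 1.0."""
            best_sse, best_fit = np.inf, None
            for k in range(3, len(t) - 3):                 # candidate breakpoints
                p1 = np.polyfit(t[:k], progress[:k], 1)
                p2 = np.polyfit(t[k:], progress[k:], 1)
                sse = (np.sum((np.polyval(p1, t[:k]) - progress[:k]) ** 2) +
                       np.sum((np.polyval(p2, t[k:]) - progress[k:]) ** 2))
                if sse < best_sse:
                    best_sse, best_fit = sse, p2
            slope, intercept = best_fit
            return (1.0 - intercept) / slope               # time at which progress hits 1.0

        # Synthetic task: slow first phase, faster second phase.
        t = np.arange(40.0)
        progress = np.where(t < 20, 0.01 * t, 0.2 + 0.03 * (t - 20))
        print(f"predicted finish: {predict_finish_time(t, progress):.1f} s")   # ~46.7 s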

  19. A neural computational model for animal's time-to-collision estimation.

    Science.gov (United States)

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretical formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new simple computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly through a widely accepted biological neuronal model. WSVAM has additional merits, including naturally minimal consumption and simplicity. Thus, it yields a precise and neuronally implementable estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
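
    A short numerical comparison makes the abstract's starting point concrete: the widely used approximation θ'/θ tracks the theoretical θ'/sin θ well at small visual angles but diverges as the object looms. The WSVAM constants M, P, Q, and N depend on a predefined visual angle and are not given in the abstract, so this Python sketch compares only the two standard formulas.

        import numpy as np

        # Compare the theoretical formula 1/tau = theta'/sin(theta) with the
        # widely used approximation 1/tau = theta'/theta as theta grows.
        theta = np.radians(np.arange(1, 61))      # visual angles of 1..60 degrees
        dtheta = 0.05                             # rate of change (rad/s), fixed here

        tau_exact = np.sin(theta) / dtheta        # from 1/tau = dtheta / sin(theta)
        tau_approx = theta / dtheta               # from 1/tau = dtheta / theta

        rel_err = (tau_approx - tau_exact) / tau_exact
        print(f"relative TTC error at 10 deg: {rel_err[9]:.1%}")    # ~0.5%
        print(f"relative TTC error at 60 deg: {rel_err[59]:.1%}")   # ~21%

    At 10 degrees the approximation is off by about half a percent, but by 60 degrees the error grows past 20%, which is the gap that neuron-friendly models such as WSVAM aim to close.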

  20. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)

    2011-07-15

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using a newly developed software for computed tomographic (CT) volumetry and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using ⁹⁹ᵐTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
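
    The quantity being validated is simply a ratio of segmented volumes, which a few lines of Python make explicit. This is a minimal sketch with hypothetical volumes; in actual use the inputs would come from the CT segmentation.

        def split_renal_volume(left_ml, right_ml):
            """Each kidney's share of the total renal volume, in percent."""
            total = left_ml + right_ml
            return 100.0 * left_ml / total, 100.0 * right_ml / total

        # Hypothetical volumes taken from a CT segmentation (mL).
        left_ml, right_ml = 148.2, 151.6
        srv_left, srv_right = split_renal_volume(left_ml, right_ml)
        print(f"SRV: left {srv_left:.1f}%, right {srv_right:.1f}%")
        # The study's comparison point: scintigraphy-based split renal function
        # should agree with this ratio, and the larger share marks the dominant side.
        print("dominant side:", "left" if srv_left > srv_right else "right")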

  1. Efficient 3D frequency response modeling with spectral accuracy by the rapid expansion method

    KAUST Repository

    Chu, Chunlei

    2012-07-01

    Frequency responses of seismic wave propagation can be obtained either by directly solving the frequency domain wave equations or by transforming the time domain wavefields using the Fourier transform. The former approach requires solving systems of linear equations, which becomes progressively difficult to tackle for larger scale models and for higher frequency components. On the contrary, the latter approach can be efficiently implemented using explicit time integration methods in conjunction with running summations as the computation progresses. Commonly used explicit time integration methods correspond to the truncated Taylor series approximations that can cause significant errors for large time steps. The rapid expansion method (REM) uses the Chebyshev expansion and offers an optimal solution to the second-order-in-time wave equations. When applying the Fourier transform to the time domain wavefield solution computed by the REM, we can derive a frequency response modeling formula that has the same form as the original time domain REM equation but with different summation coefficients. In particular, the summation coefficients for the frequency response modeling formula corresponds to the Fourier transform of those for the time domain modeling equation. As a result, we can directly compute frequency responses from the Chebyshev expansion polynomials rather than the time domain wavefield snapshots as do other time domain frequency response modeling methods. When combined with the pseudospectral method in space, this new frequency response modeling method can produce spectrally accurate results with high efficiency. © 2012 Society of Exploration Geophysicists.
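
    The "running summation" idea that makes the time-domain route efficient can be shown with a generic second-order time stepper: the discrete Fourier transform of the wavefield is accumulated step by step at a few target frequencies, so no snapshots need to be stored. Below is a minimal 1D sketch in Python; the paper's REM replaces the finite-difference time stepping with a Chebyshev expansion whose summation coefficients are Fourier-transformed analytically, which this sketch does not reproduce.

        import numpy as np

        # Generic running-summation frequency-response modeling: accumulate the
        # discrete Fourier transform of the wavefield while time stepping.
        nx, nt, dx, dt, c = 400, 2000, 5.0, 1e-3, 2000.0   # grid, steps, spacing, velocity
        freqs = np.array([5.0, 10.0, 20.0])                # target frequencies (Hz)
        u_prev, u = np.zeros(nx), np.zeros(nx)
        F = np.zeros((len(freqs), nx), dtype=complex)      # accumulated responses

        def ricker(t, f0=10.0, t0=0.15):
            a = (np.pi * f0 * (t - t0)) ** 2
            return (1.0 - 2.0 * a) * np.exp(-a)

        r2 = (c * dt / dx) ** 2                            # CFL number squared (0.16)
        for n in range(nt):
            lap = np.zeros(nx)
            lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
            u_next = 2.0 * u - u_prev + r2 * lap
            u_next[nx // 2] += dt ** 2 * ricker(n * dt)    # point source injection
            u_prev, u = u, u_next
            # Running summation: one DFT term per step and per target frequency.
            phase = np.exp(-2j * np.pi * freqs * (n + 1) * dt) * dt
            F += phase[:, None] * u[None, :]

        print(np.abs(F).max(axis=1))   # peak spectral amplitude at each frequency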

  2. Rapid on-site sensing aflatoxin B1 in food and feed via a chromatographic time-resolved fluoroimmunoassay.

    Directory of Open Access Journals (Sweden)

    Zhaowei Zhang

    Full Text Available Aflatoxin B1 poses grave threats to food and feed safety due to its strong carcinogenicity and toxicity, thus requiring ultrasensitive rapid on-site determination. Herein, a portable immunosensor based on chromatographic time-resolved fluoroimmunoassay was developed for sensitive on-site determination of aflatoxin B1 in food and feed samples. Chromatographic time-resolved fluoroimmunoassay offered a magnified positive signal and a low signal-to-noise ratio in time-resolved mode due to the absence of noise interference caused by excitation light sources. Compared with the immunosensing performance in previous studies, this platform demonstrated a wider dynamic range of 0.2-60 μg/kg, a lower limit of detection from 0.06 to 0.12 µg/kg, and considerable recovery from 80.5% to 116.7% for different food and feed sample matrices. Little cross-reactivity was found with other aflatoxins (B2, G1, G2, and M1). In the case of determination of aflatoxin B1 in peanuts, corn, soy sauce, vegetable oil, and mouse feed, excellent agreement was found when compared with aflatoxin B1 determination via the conventional high-performance liquid chromatography method. The chromatographic time-resolved fluoroimmunoassay affords a powerful alternative for rapid on-site determination of aflatoxin B1 and holds promise for practical food safety and environmental monitoring.

  3. Rapid on-site sensing aflatoxin B1 in food and feed via a chromatographic time-resolved fluoroimmunoassay.

    Science.gov (United States)

    Zhang, Zhaowei; Tang, Xiaoqian; Wang, Du; Zhang, Qi; Li, Peiwu; Ding, Xiaoxia

    2015-01-01

    Aflatoxin B1 poses grave threats to food and feed safety due to its strong carcinogenicity and toxicity, thus requiring ultrasensitive rapid on-site determination. Herein, a portable immunosensor based on chromatographic time-resolved fluoroimmunoassay was developed for sensitive on-site determination of aflatoxin B1 in food and feed samples. Chromatographic time-resolved fluoroimmunoassay offered a magnified positive signal and a low signal-to-noise ratio in time-resolved mode due to the absence of noise interference caused by excitation light sources. Compared with the immunosensing performance in previous studies, this platform demonstrated a wider dynamic range of 0.2-60 μg/kg, a lower limit of detection from 0.06 to 0.12 µg/kg, and considerable recovery from 80.5% to 116.7% for different food and feed sample matrices. Little cross-reactivity was found with other aflatoxins (B2, G1, G2, and M1). In the case of determination of aflatoxin B1 in peanuts, corn, soy sauce, vegetable oil, and mouse feed, excellent agreement was found when compared with aflatoxin B1 determination via the conventional high-performance liquid chromatography method. The chromatographic time-resolved fluoroimmunoassay affords a powerful alternative for rapid on-site determination of aflatoxin B1 and holds promise for practical food safety and environmental monitoring.

  4. Portable computers - portable operating systems

    International Nuclear Information System (INIS)

    Wiegandt, D.

    1985-01-01

    Hardware development has made rapid progress over the past decade. Computers used to have attributes like ''general purpose'' or ''universal''; nowadays they are labelled ''personal'' and ''portable''. Recently, a major manufacturing company started marketing a portable version of their personal computer. But even for these small computers the old truth still holds that the biggest disadvantage of a computer is that it must be programmed; hardware by itself does not make a computer. (orig.)

  5. Development and Validation of a Real-Time PCR Assay for Rapid Detection of Candida auris from Surveillance Samples.

    Science.gov (United States)

    Leach, L; Zhu, Y; Chaturvedi, S

    2018-02-01

    Candida auris is an emerging multidrug-resistant yeast causing invasive health care-associated infection with high mortality worldwide. Rapid identification of C. auris is of primary importance for the implementation of public health measures to control the spread of infection. To achieve these goals, we developed and validated a TaqMan-based real-time PCR assay targeting the internal transcribed spacer 2 (ITS2) region of the ribosomal gene. The assay was highly specific, reproducible, and sensitive, with a detection limit of 1 C. auris CFU/PCR. The performance of the C. auris real-time PCR assay was evaluated by using 623 surveillance samples, including 365 patient swabs and 258 environmental sponges. Real-time PCR yielded positive results from 49 swab and 58 sponge samples, with 89% and 100% clinical sensitivity with regard to their respective culture-positive results. The real-time PCR also detected C. auris DNA in 1% and 12% of swab and sponge samples with culture-negative results, indicating the presence of dead or culture-impaired C. auris. The real-time PCR yielded results within 4 h of sample processing, compared to 4 to 14 days for culture, reducing turnaround time significantly. The new real-time PCR assay allows for accurate and rapid screening of C. auris and can increase effective control and prevention of this emerging multidrug-resistant fungal pathogen in health care facilities. Copyright © 2018 Leach et al.

  6. Time-resolved temperature measurements in a rapid compression machine using quantum cascade laser absorption in the intrapulse mode

    KAUST Repository

    Nasir, Ehson Fawad; Farooq, Aamir

    2016-01-01

    A temperature sensor based on the intrapulse absorption spectroscopy technique has been developed to measure in situ temperature time-histories in a rapid compression machine (RCM). Two quantum-cascade lasers (QCLs) emitting near 4.55μm and 4.89μm

  7. Numerical computation of generalized importance functions

    International Nuclear Information System (INIS)

    Gomit, J.M.; Nasr, M.; Ngyuen van Chi, G.; Pasquet, J.P.; Planchard, J.

    1981-01-01

    Thus far, an important effort has been devoted to developing and applying generalized perturbation theory in reactor physics analysis. In this work we are interested in the calculation of importance functions by the method of A. Gandini. We have noted that in this method the convergence of the adopted iterative procedure is not rapid. Hence, to accelerate this convergence we have used the semi-iterative technique. Two computer codes have been developed for one- and two-dimensional calculations (SPHINX-1D and SPHINX-2D). The advantage of our calculation was confirmed by comparative tests in which the iteration number and the computing time were greatly reduced with respect to the classical calculation (CIAP-1D and CIAP-2D). (orig.)

  8. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    Science.gov (United States)

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.
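
    The server-side association step can be caricatured in a few lines of Python: an event is declared when enough distinct stations trigger within a short window. The thresholds below are hypothetical, and the real QCN server additionally checks the spatial coherence of the triggering stations and estimates location and magnitude from the reported ground-motion parameters.

        from dataclasses import dataclass

        @dataclass
        class Trigger:
            station_id: str
            t: float               # trigger time in seconds

        def detect_events(triggers, window=3.0, min_stations=5):
            """Declare an event when at least `min_stations` distinct stations
            trigger within `window` seconds (hypothetical thresholds)."""
            triggers = sorted(triggers, key=lambda tr: tr.t)
            events, i = [], 0
            while i < len(triggers):
                j, stations = i, set()
                while j < len(triggers) and triggers[j].t - triggers[i].t <= window:
                    stations.add(triggers[j].station_id)
                    j += 1
                if len(stations) >= min_stations:
                    events.append((triggers[i].t, sorted(stations)))
                    i = j              # consume the whole window
                else:
                    i += 1
            return events

        # Six stations triggering within one second, plus one isolated trigger.
        trigs = [Trigger(f"s{k}", 100.0 + 0.2 * k) for k in range(6)]
        trigs.append(Trigger("s9", 500.0))
        print(detect_events(trigs))    # one event at t = 100.0 with six stations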

  9. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of ²⁴⁰Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  10. Polarization control of spontaneous emission for rapid quantum-state initialization

    Science.gov (United States)

    DiLoreto, C. S.; Rangan, C.

    2017-04-01

    We propose an efficient method to selectively enhance the spontaneous emission rate of a quantum system by changing the polarization of an incident control field, and exploiting the polarization dependence of the system's spontaneous emission rate. This differs from the usual Purcell enhancement of spontaneous emission rates as it can be selectively turned on and off. Using a three-level Λ system in a quantum dot placed in between two silver nanoparticles and a linearly polarized, monochromatic driving field, we present a protocol for rapid quantum state initialization, while maintaining long coherence times for control operations. This process increases the overall amount of time that a quantum system can be effectively utilized for quantum operations, and presents a key advance in quantum computing.

  11. Inspecting rapidly moving surfaces for small defects using CNN cameras

    Science.gov (United States)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing processes, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time with frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates on line cameras between 360 and 880 kHz, numbers far beyond the capability of available line cameras. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.

  12. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

    International Nuclear Information System (INIS)

    Ermer, J.J.; Mosher, J.C.; Baillet, S.; Leahy, R.M.

    2001-01-01

    Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace based inverse methods like MUSIC (6), the total number of forward model evaluations can often approach an order of 10³ or 10⁴. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form: F = GQ + N (1) where the observed forward field F (M-sensors x N-time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3xP-dipoles x N-time samples) and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models (7) (or fast approximations described in (1), (7)) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, the use of a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres results in added complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the scalp

  13. Real-time quantitative phase reconstruction in off-axis digital holography using multiplexing.

    Science.gov (United States)

    Girshovitz, Pinhas; Shaked, Natan T

    2014-04-15

    We present a new approach for obtaining a significant speedup in the digital processing of extracting unwrapped phase profiles from off-axis digital holograms. The new technique digitally multiplexes two orthogonal off-axis holograms, so that the digital reconstruction, including spatial filtering and two-dimensional phase unwrapping on a decreased number of pixels, can be performed on both holograms together, without redundant operations. Using this technique, we were able to reconstruct, for the first time to our knowledge, unwrapped phase profiles from off-axis holograms with 1 megapixel at more than 30 frames per second using a standard single-core personal computer on a MATLAB platform, without using graphics-processing-unit programming or parallel computing. This new technique is important for real-time quantitative visualization and measurement of highly dynamic samples and is applicable to a wide range of applications, including rapid biological cell imaging and real-time nondestructive testing. After comparing the speedups obtained by the new technique for holograms of various sizes, we present experimental results of real-time quantitative phase visualization of cells flowing rapidly through a microchannel.
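
    As background to the multiplexing speedup, the baseline off-axis reconstruction pipeline that the paper accelerates (Fourier transform, isolation of one cross-correlation lobe, and inverse transform to a wrapped phase) is easy to sketch in Python. The carrier position and filter radius below are assumptions made for the synthetic hologram; the paper's actual contribution, processing two orthogonal multiplexed holograms in one pass and unwrapping on fewer pixels, is not reproduced here.

        import numpy as np

        def extract_wrapped_phase(hologram, carrier_frac=0.25, radius_frac=0.1):
            """Baseline off-axis reconstruction: keep one cross-correlation lobe
            in the spatial-frequency domain, re-center it, and return the wrapped
            phase. The lobe position (carrier_frac along x) is an assumption."""
            H = np.fft.fftshift(np.fft.fft2(hologram))
            ny, nx = hologram.shape
            cy, cx = ny // 2, nx // 2
            ly, lx = cy, cx + int(carrier_frac * nx)        # assumed lobe location
            r = int(radius_frac * min(ny, nx))
            yy, xx = np.ogrid[:ny, :nx]
            lobe = np.where((yy - ly) ** 2 + (xx - lx) ** 2 <= r * r, H, 0)
            lobe = np.roll(lobe, (cy - ly, cx - lx), axis=(0, 1))   # remove carrier
            field = np.fft.ifft2(np.fft.ifftshift(lobe))
            return np.angle(field)   # wrapped phase; 2-D unwrapping is a separate step

        # Synthetic off-axis hologram of a smooth 2-rad phase bump.
        ny, nx = 256, 256
        y, x = np.mgrid[:ny, :nx]
        phi = 2.0 * np.exp(-((y - 128) ** 2 + (x - 128) ** 2) / 1800.0)
        holo = np.abs(np.exp(1j * phi) + np.exp(-2j * np.pi * 0.25 * x)) ** 2
        wrapped = extract_wrapped_phase(holo)
        print(f"recovered peak phase: {wrapped.max():.2f} rad (true value 2.00)")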

  14. Cloud Computing Law

    CERN Document Server

    Millard, Christopher

    2013-01-01

    This book is about the legal implications of cloud computing. In essence, ‘the cloud’ is a way of delivering computing resources as a utility service via the internet. It is evolving very rapidly with substantial investments being made in infrastructure, platforms and applications, all delivered ‘as a service’. The demand for cloud resources is enormous, driven by such developments as the deployment on a vast scale of mobile apps and the rapid emergence of ‘Big Data’. Part I of this book explains what cloud computing is and how it works. Part II analyses contractual relationships between cloud service providers and their customers, as well as the complex roles of intermediaries. Drawing on primary research conducted by the Cloud Legal Project at Queen Mary University of London, cloud contracts are analysed in detail, including the appropriateness and enforceability of ‘take it or leave it’ terms of service, as well as the scope for negotiating cloud deals. Specific arrangements for public sect...

  15. Rapid measurement of residual dipolar couplings for fast fold elucidation of proteins

    Energy Technology Data Exchange (ETDEWEB)

    Rasia, Rodolfo M. [Jean-Pierre Ebel CNRS/CEA/UJF, Institut de Biologie Structurale (France); Lescop, Ewen [CNRS, Institut de Chimie des Substances Naturelles (France); Palatnik, Javier F. [Universidad Nacional de Rosario, Instituto de Biologia Molecular y Celular de Rosario, Facultad de Ciencias Bioquimicas y Farmaceuticas (Argentina); Boisbouvier, Jerome, E-mail: jerome.boisbouvier@ibs.fr; Brutscher, Bernhard, E-mail: Bernhard.brutscher@ibs.fr [Jean-Pierre Ebel CNRS/CEA/UJF, Institut de Biologie Structurale (France)

    2011-11-15

    It has been demonstrated that protein folds can be determined using appropriate computational protocols with NMR chemical shifts as the sole source of experimental restraints. While such approaches are very promising, they still suffer from low convergence, resulting in long computation times to achieve accurate results. Here we present a suite of time- and sensitivity-optimized NMR experiments for rapid measurement of up to six RDCs per residue. Including such an RDC data set, measured in less than 24 h on a single aligned protein sample, greatly improves convergence of the Rosetta-NMR protocol, allowing for overnight fold calculation of small proteins. We demonstrate the performance of our fast fold calculation approach for ubiquitin, as a test case, and for two RNA-binding domains of the plant protein HYL1. Structure calculations based on simulated RDC data highlight the importance of an accurate and precise set of several complementary RDCs as additional input restraints for high-quality de novo structure determination.

  16. Preverbal and verbal counting and computation.

    Science.gov (United States)

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  17. Computing for magnetic fusion energy research: An updated vision

    International Nuclear Information System (INIS)

    Henline, P.; Giarrusso, J.; Davis, S.; Casper, T.

    1993-01-01

    This Fusion Computing Council perspective is written to present the primary concerns of the fusion computing community at the time of publication of the report, necessarily as a summary of the information contained in the individual sections. These concerns reflect FCC discussions during final review of contributions from the various working groups and portray our latest information. This report itself should be considered dynamic, requiring periodic updating in an attempt to track the rapid evolution of the computer industry relevant to the requirements of magnetic fusion research. The most significant common concern among the Fusion Computing Council working groups is networking capability. All groups see an increasing need for network services due to the use of workstations, distributed computing environments, increased use of graphics services, X-window usage, remote experimental collaborations, remote data access for specific projects, and other collaborations. Other areas of concern include support for workstations, enhanced infrastructure to support collaborations, the User Service Centers, NERSC and future massively parallel computers, and FCC-sponsored workshops.

  18. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
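
    The reported timings imply both the headline speedup and a simple scaling model, which a back-of-the-envelope Python sketch makes explicit. The fixed overhead value is an assumption chosen so that the model reproduces the paper's 100-node timing, consistent with its statement that simulation time scales inversely with the number of nodes and that parallelization overhead is negligible for large runs.

        # Reported figures: 2.58 h single-threaded vs. 3.3 min on 100 cloud nodes.
        serial_min = 2.58 * 60                        # 154.8 minutes
        print(f"speed-up: {serial_min / 3.3:.0f}x")   # ~47x, as reported

        def cloud_time(n_nodes, serial=serial_min, overhead=1.75):
            """Simple scaling model: time = serial/n + fixed overhead (minutes).
            The 1.75 min overhead is an assumed value chosen so T(100) = 3.3 min."""
            return serial / n_nodes + overhead

        for n in (10, 50, 100):
            print(f"{n:3d} nodes -> {cloud_time(n):5.1f} min")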

  19. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  20. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  1. Television Viewing, Computer Use, Time Driving and All‐Cause Mortality: The SUN Cohort

    Science.gov (United States)

    Basterra‐Gortari, Francisco Javier; Bes‐Rastrollo, Maira; Gea, Alfredo; Núñez‐Córdoba, Jorge María; Toledo, Estefanía; Martínez‐González, Miguel Ángel

    2014-01-01

    Background Sedentary behaviors have been directly associated with all-cause mortality. However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all-cause mortality. Methods and Results In this prospective, dynamic cohort study (the SUN Project) 13 284 Spanish university graduates with a mean age of 37 years were followed up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All-cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥3 h/day of television viewing than for those reporting <1 h/day. Conclusions Television viewing was directly associated with all-cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation. PMID:24965030

  2. Rapid detection and typing of pathogenic nonpneumophila Legionella spp. isolates using a multiplex real-time PCR assay.

    Science.gov (United States)

    Benitez, Alvaro J; Winchell, Jonas M

    2016-04-01

    We developed a single tube multiplex real-time PCR assay that allows for the rapid detection and typing of 9 nonpneumophila Legionella spp. isolates that are clinically relevant. The multiplex assay is capable of simultaneously detecting and discriminating L. micdadei, L. bozemanii, L. dumoffii, L. longbeachae, L. feeleii, L. anisa, L. parisiensis, L. tucsonensis serogroup (sg) 1 and 3, and L. sainthelensis sg 1 and 2 isolates. Evaluation of the assay with nucleic acid from each of these species derived from both clinical and environmental isolates and typing strains demonstrated 100% sensitivity and 100% specificity when tested against 43 other Legionella spp. Typing of L. anisa, L. parisiensis, and L. tucsonensis sg 1 and 3 isolates was accomplished by developing a real-time PCR assay followed by high-resolution melt (HRM) analysis targeting the ssrA gene. Further typing of L. bozemanii, L. longbeachae, and L. feeleii isolates to the serogroup level was accomplished by developing a real-time PCR assay followed by HRM analysis targeting the mip gene. When used in conjunction with other currently available diagnostic tests, these assays may aid in rapidly identifying specific etiologies associated with Legionella outbreaks, clusters, sporadic cases, and potential environmental sources. Published by Elsevier Inc.

  3. Multi-detector computed tomography of acute abdomen

    International Nuclear Information System (INIS)

    Leschka, Sebastian; Alkadhi, Hatem; Wildermuth, Simon; Marincek, Borut; University Hospital of Zurich

    2005-01-01

    Acute abdominal pain is one of the most common causes for referrals to the emergency department. The sudden onset of severe abdominal pain characterising the ''acute abdomen'' requires rapid and accurate identification of a potentially life-threatening abdominal pathology to provide a timely referral to the appropriate physician. While the physical examination and laboratory investigations are often non-specific, computed tomography (CT) has evolved as the first-line imaging modality in patients with an acute abdomen. Because the new multi-detector CT (MDCT) scanner generations provide increased speed, greater volume coverage and thinner slices, the acceptance of CT for abdominal imaging has increased rapidly. The goal of this article is to discuss the role of MDCT in the diagnostic work-up of acute abdominal pain. (orig.)

  4. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments was tested and verified. Based on analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads, whereas traditional physical machines are recommended for I/O-intensive workloads.

  5. 21 CFR 10.20 - Submission of documents to Division of Dockets Management; computation of time; availability for...

    Science.gov (United States)

    2010-04-01

    ... Management; computation of time; availability for public disclosure. 10.20 Section 10.20 Food and Drugs FOOD... Management; computation of time; availability for public disclosure. (a) A submission to the Division of Dockets Management of a petition, comment, objection, notice, compilation of information, or any other...

  6. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  7. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
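
    The abstract does not list its algorithms, but a standard computational kernel of SR analysis is the evaluation of high-order structure functions of a 10 Hz scalar series over many lag timescales, which is exactly the kind of loop that benefits from vectorization. Below is a minimal numpy sketch under that assumption (a Van Atta-style building block); the published algorithms' specific signal-processing shortcuts are not reproduced.

        import numpy as np

        def structure_functions(a, lags, orders=(2, 3, 5)):
            """S_n(r) = mean((a[t] - a[t-r])**n) for each lag r, vectorized per
            lag. A common surface-renewal building block, used here as an
            assumed stand-in for the paper's own algorithms."""
            out = {n: np.empty(len(lags)) for n in orders}
            for i, r in enumerate(lags):
                d = a[r:] - a[:-r]                 # lagged differences, vectorized
                for n in orders:
                    out[n][i] = np.mean(d ** n)
            return out

        # Synthetic 30-minute, 10 Hz scalar record (e.g., sonic temperature).
        fs = 10
        rng = np.random.default_rng(1)
        temp = np.cumsum(rng.normal(size=fs * 1800)) * 0.01
        lags = list(range(1, 2 * fs + 1))          # lags of 0.1 s to 2 s
        S = structure_functions(temp, lags)
        print(f"S3 at 0.5 s lag: {S[3][4]:.3e}")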

  8. Research for the design of visual fatigue based on the computer visual communication

    Science.gov (United States)

    Deng, Hu-Bin; Ding, Bao-min

    2013-03-01

    In an era of rapidly developing computer networks, network communication plays an increasingly important and distinctive role in social, economic, and political life. Through modern media and by way of visual communication, computer network communication affects the public's emotions, outlook, careers, and other aspects of life. Its rapid growth, however, has also brought problems: when a design fails to find a suitable form of expression for the information it is meant to convey, it not only transmits the wrong message but also causes physical and psychological fatigue in the audience, that is, visual fatigue. In order to reduce this fatigue when people obtain useful information while using computers, and to let the audience obtain the most useful information in a short time, this article gives a detailed account of the causes of visual fatigue, proposes effective solutions, explains them through specific examples, and discusses the future development prospects of visual communication applications in computer design.

  9. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1990-01-01

    At the first workshop in 1985 we reported on the real-time dose computing method used at the Paks Nuclear Power Plant and on the telemetric system developed for the normalization of the computed data. At present, the computing method normalized for the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. In this connection we analyzed the reliability of the results obtained in this manner. The points of the analysis were: how the results are influenced by the choice of certain parameters that cannot be determined by direct methods, and how improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (131I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. A further source of errors may be that, when determining the level of gamma radiation, the radionuclide doses in the cloud and on the ground surface are measured together by the environmental monitoring stations, whereas these doses appear separately in the computations. At the Paks NPP it is the time integral of the airborne activity concentration of 131I in vapour form which is determined. This quantity includes neither the other physical and chemical forms of 131I nor the other isotopes of radioiodine. We gave numerical examples for the uncertainties due to the above factors. As a result, we concluded that, when accident-related measures must be decided on the basis of the computing method, the dose uncertainties may reach one order of magnitude for points lying far from the monitoring stations. Different measures are discussed to make the uncertainties significantly lower.

  10. High-Throughput Sequencing Reveals Hypothalamic MicroRNAs as Novel Partners Involved in Timing the Rapid Development of Chicken (Gallus gallus) Gonads.

    Science.gov (United States)

    Han, Wei; Zou, Jianmin; Wang, Kehua; Su, Yijun; Zhu, Yunfen; Song, Chi; Li, Guohui; Qu, Liang; Zhang, Huiyong; Liu, Honglin

    2015-01-01

    Onset of rapid gonad growth is a milestone in sexual development that involves many genes and regulatory factors. Observations in model organisms and mammals, including humans, have shown a potential link between miRNAs and developmental timing. To determine whether miRNAs play roles in this process in the chicken (Gallus gallus), Solexa deep sequencing was performed to analyze the profiles of miRNA expression in the hypothalamus of hens from two different pubertal stages, before onset of the rapid gonad development (BO) and after onset of the rapid gonad development (AO). 374 conserved and 46 novel miRNAs were identified as hypothalamus-expressed miRNAs in the chicken. 144 conserved miRNAs were shown to be differentially expressed (reads > 10, P < …) and were validated by the real-time quantitative RT-PCR (qRT-PCR) method. 2013 putative genes were predicted as the targets of the 15 most differentially expressed miRNAs (fold-change > 4.0, P < …), some targeted … times by the miRNAs. qRT-PCR revealed that the basic transcription levels of these clock genes were much higher (P < …) … development of chicken gonads. Considering the functional conservation of miRNAs, the results will contribute to research on puberty onset in humans.

  11. Age-related differences in lower-limb force-time relation during the push-off in rapid voluntary stepping.

    Science.gov (United States)

    Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G

    2010-12-01

    This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to test the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body-weight-normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training the elderly to step fast in response to relevant cues may be beneficial in the prevention of falls.

  12. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and the size ordering formed among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), finally achieving a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for calculating the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicated that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it achieves very high efficiency in query and calculation.
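    The abstract does not spell out the coding scheme itself; a heap-style numbering of dyadic time segments is one common way to realize "a tree structure plus integer ordering", and the hypothetical sketch below (Python) shows how such codes make containment tests pure integer operations:

        def encode(level, index):
            # Code for the index-th segment at a dyadic level
            # (level 0 = whole period, level l has 2**l segments).
            return (1 << level) + index

        def level_of(code):
            return code.bit_length() - 1

        def contains(a, b):
            # True if segment a contains segment b, i.e. a is an
            # ancestor of b in the dyadic segment tree.
            da, db = level_of(a), level_of(b)
            return db >= da and (b >> (db - da)) == a

        root = encode(0, 0)        # the full time period -> code 1
        q1 = encode(2, 0)          # its first quarter    -> code 4
        print(contains(root, q1))  # True
        print(contains(q1, root))  # False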

  13. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
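    The rigorous bounds rest on enclosing the range of complicated multivariate functions with interval arithmetic. The toy sketch below (Python) shows plain interval evaluation only; the authors' extension, like any truly rigorous implementation, additionally needs outward (directed) rounding:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                p = (self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi)
                return Interval(min(p), max(p))

        # Guaranteed enclosure of f(x) = x*x + x over x in [-1, 1]:
        x = Interval(-1.0, 1.0)
        print(x * x + x)   # Interval(lo=-2.0, hi=2.0), a superset of the
                           # true range [-0.25, 2]; naive evaluation
                           # overestimates, which motivates refinements.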

  14. Design considerations for computationally constrained two-way real-time video communication

    Science.gov (United States)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly compressed master copies that are then broadcast one-way to less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet-based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, two-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  15. Real-Time Joint Streaming Data Processing from Social and Physical Sensors

    Science.gov (United States)

    Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.

    2014-12-01

    The technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt to significantly reduce the response time to a felt or damaging earthquake. Social sensors, here represented as Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real-time, based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. Results of this work suggest that the use of data from social media, in conjunction

  16. Cloud computing platform for real-time measurement and verification of energy performance

    International Nuclear Information System (INIS)

    Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie

    2017-01-01

    Highlights: • Application of PSO algorithm can improve the accuracy of the baseline model. • M&V cloud platform automatically calculates energy performance. • M&V cloud platform can be applied in all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations. This requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, because the M&V period typically lasts several months or up to a year, the failure to immediately detect abnormal energy performance not only decreases energy performance but also prevents timely correction and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculations of the energy performance and access to long-term, cumulative information about the energy performance are provided via a feature that allows direct uploads of the energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested for a case study involving improvements to a cold storage system in a hypermarket. Cloud computing platform for real-time energy performance M&V is applicable to any industry and energy conservation measure. With the M&V cloud platform, real-time
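    The platform builds its baseline models with particle swarm optimization, which is beyond a short sketch; the core M&V arithmetic, however, is a regression baseline fitted on pre-retrofit data and then compared against measured consumption. A minimal illustration with synthetic data (Python/numpy, ordinary least squares standing in for the PSO-tuned model):

        import numpy as np

        # Hypothetical baseline-year drivers: daily outdoor temperature
        # (degC) and a production index, with metered energy use (kWh).
        rng = np.random.default_rng(1)
        temp = rng.uniform(5, 35, 365)
        prod = rng.uniform(0.5, 1.0, 365)
        energy = 120 + 8.5 * temp + 40 * prod + rng.normal(0, 5, 365)

        # Multivariate linear baseline: E = b0 + b1*T + b2*P.
        X = np.column_stack([np.ones_like(temp), temp, prod])
        coef, *_ = np.linalg.lstsq(X, energy, rcond=None)

        # Reporting period: avoided energy = predicted baseline - measured.
        measured = 250.0
        baseline_pred = coef @ np.array([1.0, 22.0, 0.8])
        print(f"avoided energy: {baseline_pred - measured:.1f} kWh")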

  17. The influence of wavelength-dependent radiation in simulation of lamp-heated rapid thermal processing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ting, A. [Sandia National Labs., Livermore, CA (United States). Computational Mechanics Dept.

    1994-08-01

    Understanding the thermal response of lamp-heated rapid thermal processing (RTP) systems requires understanding relatively complex radiation exchange among opaque and partially transmitting surfaces and materials. The objective of this paper is to investigate the influence of wavelength-dependent radiative properties. The examples used for the analysis consider axisymmetric systems of the kind that were developed by Texas Instruments (TI) for the Microelectronics Manufacturing Science and Technology (MMST) Program and illustrate a number of wavelength-dependent (spectral) effects. The models execute quickly on workstation-class computing platforms, and thus permit rapid comparison of alternative reactor designs and physical models. The fast execution may also permit the incorporation of these models into real-time model-based process control algorithms.

  18. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Iliopoulos, AS; Sun, X [Duke University, Durham, NC (United States); Pitsianis, N [Aristotle University of Thessaloniki (Greece); Duke University, Durham, NC (United States); Yin, FF; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub
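    The central idea of separating the static part (projection geometry, fixed by the treatment plan) from the dynamic part (changing volume data) can be pictured as precomputing a sparse system matrix once and reusing it across iterations. The sketch below (Python/scipy, toy dimensions and made-up ray weights, not the LIVE implementation) shows that structure:

        import numpy as np
        from scipy.sparse import csr_matrix

        # Static part: which voxels each detector ray crosses, with what
        # weights; in practice this comes from ray casting the geometry.
        n_vox, n_rays = 16, 8
        rng = np.random.default_rng(2)
        rows = np.repeat(np.arange(n_rays), 4)       # each ray hits 4 voxels
        cols = rng.integers(0, n_vox, rows.size)
        w = rng.uniform(0.1, 1.0, rows.size)
        A = csr_matrix((w, (rows, cols)), shape=(n_rays, n_vox))

        # Dynamic part: apply the precomputed operator to the volume
        # estimate that changes every registration iteration.
        for it in range(3):
            volume = rng.uniform(0, 1, n_vox)
            projections = A @ volume                 # fast sparse mat-vec
        print(projections)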

  19. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    International Nuclear Information System (INIS)

    Iliopoulos, AS; Sun, X; Pitsianis, N; Yin, FF; Ren, L

    2015-01-01

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub

  20. Computer-games for gravitational wave science outreach: Black Hole Pong and Space Time Quest

    International Nuclear Information System (INIS)

    Carbone, L; Bond, C; Brown, D; Brückner, F; Grover, K; Lodhia, D; Mingarelli, C M F; Fulda, P; Smith, R J E; Unwin, R; Vecchio, A; Wang, M; Whalley, L; Freise, A

    2012-01-01

    We have established a program aimed at developing computer applications and web applets to be used for educational purposes as well as gravitational wave outreach activities. These applications and applets teach gravitational wave physics and technology. The computer programs are generated in collaboration with undergraduates and summer students as part of our teaching activities, and are freely distributed on a dedicated website. As part of this program, we have developed two computer games related to gravitational wave science: 'Black Hole Pong' and 'Space Time Quest'. In this article we present an overview of our computer-related outreach activities, discuss the games and their educational aspects, and report on some positive feedback received.

  1. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    Science.gov (United States)

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, providing a real-time procedure in practical situations.
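    The exact RIA-SG-SR update is not reproduced in this record; the sketch below (Python/scipy, synthetic spectrum) shows a generic iterative Savitzky-Golay baseline estimate with an over-relaxation factor in the same spirit, where omega = 1 recovers plain SG iteration:

        import numpy as np
        from scipy.signal import savgol_filter

        def sg_baseline(spectrum, window=151, poly=3, omega=1.5, iters=50):
            baseline = spectrum.copy()
            for _ in range(iters):
                smoothed = savgol_filter(baseline, window, poly)
                target = np.minimum(spectrum, smoothed)  # stay under peaks
                baseline += omega * (target - baseline)  # relaxed update
                baseline = np.minimum(baseline, spectrum)
            return spectrum - baseline                   # recovered Raman

        # Narrow synthetic Raman peak on a broad fluorescence background
        x = np.linspace(0, 1, 1000)
        raman = np.exp(-0.5 * ((x - 0.5) / 0.005) ** 2)
        spec = raman + 5 * np.exp(-((x - 0.3) ** 2))
        print(sg_baseline(spec).max())   # close to the peak height of 1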

  2. Optimizing the magnetization-prepared rapid gradient-echo (MP-RAGE) sequence.

    Directory of Open Access Journals (Sweden)

    Jinghua Wang

    Full Text Available The three-dimensional (3D) magnetization-prepared rapid gradient-echo (MP-RAGE) sequence is one of the most popular sequences for structural brain imaging in clinical and research settings. The sequence captures high tissue contrast and provides high spatial resolution with whole-brain coverage in a short scan time. In this paper, we first computed the optimal k-space sampling by optimizing the contrast of simulated images acquired with the MP-RAGE sequence at 3.0 Tesla using computer simulations. Because the software of our scanner has only limited settings for k-space sampling, we then determined the optimal k-space sampling for settings that can be realized on our scanner. Subsequently we optimized several major imaging parameters to maximize normal brain tissue contrasts under the optimal k-space sampling. The optimal parameters are a flip angle of 12°, an effective inversion time within 900 to 1100 ms, and a delay time of 0 ms. In vivo experiments showed that the quality of images acquired with our optimal protocol was significantly higher than that of images obtained using protocols recommended in prior publications. The optimization of k-space sampling and imaging parameters significantly improved the quality and detection sensitivity of brain images acquired with MP-RAGE.

  3. Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer

    OpenAIRE

    A, Wiedjaja; M, Handi; L, Jonathan; Christian, Benyamin; Kristofel, Luis

    2014-01-01

    An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automatic braking systems in cars, which need a high level of reliability, require an operating system that operates in real time. This study aims to assess the implementation of a Linux-based real-time operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with ...

  4. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    Science.gov (United States)

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space.
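    The FM-index at the heart of Centrifuge is built on the Burrows-Wheeler transform. A quadratic-time demonstration of the transform itself (Python; real indexes are built from suffix arrays rather than by materializing rotations):

        def bwt(text):
            # Last column of the sorted rotation matrix of text + sentinel.
            s = text + "$"
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(rot[-1] for rot in rotations)

        print(bwt("ACAACG"))   # groups identical characters together,
                               # enabling compressed, searchable storage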

  5. An FPGA-based rapid prototyping platform for wavelet coprocessors

    Science.gov (United States)

    Vera, Alonzo; Meyer-Baese, Uwe; Pattichis, Marios

    2007-04-01

    MatLab/Simulink-based design flows are being used by DSP designers to improve time-to-market of FPGA implementations. Commonly, digital signal processing cores are integrated in an embedded system as coprocessors. Existing CAD tools do not fully address the integration of a DSP coprocessor into an embedded system design. This integration might prove to be time consuming and error prone. It also requires that the DSP designer has an excellent knowledge of embedded systems and computer architecture details. We present a prototyping platform and design flow that allows rapid integration of embedded systems with a wavelet coprocessor. The platform comprises of software and hardware modules that allow a DSP designer a painless integration of a coprocessor with a PowerPC-based embedded system. The platform has a wide range of applications, from industrial to educational environments.

  6. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    Science.gov (United States)

    Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.

    2011-09-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the "point-cloud" of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
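    As a rough illustration of the sampling loop, the sketch below (Python/numpy) assumes a single impulsive burn at a random time with random direction and magnitude, propagated with simple two-body dynamics; all numbers are illustrative, not the authors' models:

        import numpy as np

        MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

        def two_body_rk4(state, dt, steps):
            # Propagate [x, y, z, vx, vy, vz] under two-body gravity.
            state = np.array(state, dtype=float)
            def f(s):
                r = s[:3]
                return np.concatenate([s[3:], -MU * r / np.linalg.norm(r) ** 3])
            for _ in range(steps):
                k1 = f(state); k2 = f(state + 0.5 * dt * k1)
                k3 = f(state + 0.5 * dt * k2); k4 = f(state + dt * k3)
                state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            return state

        rng = np.random.default_rng(3)
        s0 = np.array([7000.0, 0, 0, 0, 7.546, 0])   # near-circular orbit
        cloud = []
        for _ in range(1000):                        # virtual particles
            t_burn = int(rng.integers(0, 60))        # burn minute in window
            u = rng.normal(size=3); u /= np.linalg.norm(u)
            dv = rng.uniform(0, 0.1) * u             # randomized thrust
            s = two_body_rk4(s0, 10.0, t_burn * 6)   # coast to the burn
            s[3:] += dv                              # impulsive maneuver
            s = two_body_rk4(s, 10.0, (60 - t_burn) * 6)  # coast to epoch
            cloud.append(s[:3])
        cloud = np.array(cloud)   # point-cloud sample of the RV at epoch
        print(cloud.std(axis=0))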

  7. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    International Nuclear Information System (INIS)

    Jiang, M.; de Vries, W.H.; Pertica, A.J.; Olivier, S.S.

    2011-01-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the 'point-cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.

  8. TaqMan MGB probe fluorescence real-time quantitative PCR for rapid detection of Chinese Sacbrood virus.

    Directory of Open Access Journals (Sweden)

    Ma Mingxiao

    Full Text Available Sacbrood virus (SBV) is a picorna-like virus that affects honey bees (Apis mellifera) and results in the death of the larvae. Several procedures are available to detect Chinese SBV (CSBV) in clinical samples, but not to estimate the level of CSBV infection. The aim of this study was to develop an assay for rapid detection and quantification of this virus. Primers and probes were designed that were specific for CSBV structural protein genes. A TaqMan minor groove binder (MGB) probe-based fluorescence real-time quantitative PCR was established. The specificity, sensitivity and stability of the assay were assessed; specificity was high and there was no cross-reactivity with healthy larvae or other bee viruses. The assay was applied to detect CSBV in 37 clinical samples and its efficiency was compared with clinical diagnosis, electron microscopy observation, and conventional RT-PCR. The TaqMan MGB probe-based fluorescence real-time quantitative PCR for CSBV was more sensitive than the other methods tested. This assay was a reliable, fast, and sensitive method that was used successfully to detect CSBV in clinical samples. The technology can provide a useful tool for rapid detection of CSBV. This study has established a useful protocol for CSBV testing, epidemiological investigation, and development of animal models.

  9. The Healthcare Improvement Scotland evidence note rapid review process: providing timely, reliable evidence to inform imperative decisions on healthcare.

    Science.gov (United States)

    McIntosh, Heather M; Calvert, Julie; Macpherson, Karen J; Thompson, Lorna

    2016-06-01

    Rapid review has become widely adopted by health technology assessment agencies in response to demand for evidence-based information to support imperative decisions. Concern about the credibility of rapid reviews and the reliability of their findings has prompted a call for wider publication of their methods. In publishing this overview of the accredited rapid review process developed by Healthcare Improvement Scotland, we aim to raise awareness of our methods and advance the discourse on best practice. Healthcare Improvement Scotland produces rapid reviews called evidence notes using a process that has achieved external accreditation through the National Institute for Health and Care Excellence. Key components include a structured approach to topic selection, initial scoping, considered stakeholder involvement, streamlined systematic review, internal quality assurance, external peer review and updating. The process was introduced in 2010 and continues to be refined over time in response to user feedback and operational experience. Decision-makers value the responsiveness of the process and perceive it as being a credible source of unbiased evidence-based information supporting advice for NHSScotland. Many agencies undertaking rapid reviews are striving to balance efficiency with methodological rigour. We agree that there is a need for methodological guidance and that it should be informed by better understanding of current approaches and the consequences of different approaches to streamlining systematic review methods. Greater transparency in the reporting of rapid review methods is essential to enable that to happen.

  10. BCILAB: a platform for brain-computer interface development

    Science.gov (United States)

    Kothe, Christian Andreas; Makeig, Scott

    2013-10-01

    Objective. The past two decades have seen dramatic progress in our ability to model brain signals recorded by electroencephalography, functional near-infrared spectroscopy, etc., and to derive real-time estimates of user cognitive state, response, or intent for a variety of purposes: to restore communication by the severely disabled, to effect brain-actuated control and, more recently, to augment human-computer interaction. Continuing these advances, largely achieved through increases in computational power and methods, requires software tools to streamline the creation, testing, evaluation and deployment of new data analysis methods. Approach. Here we present BCILAB, an open-source MATLAB-based toolbox built to address the need for the development and testing of brain-computer interface (BCI) methods by providing an organized collection of over 100 pre-implemented methods and method variants, an easily extensible framework for the rapid prototyping of new methods, and a highly automated framework for systematic testing and evaluation of new implementations. Main results. To validate and illustrate the use of the framework, we present two sample analyses of publicly available data sets from recent BCI competitions and from a rapid serial visual presentation task. We demonstrate the straightforward use of BCILAB to obtain results compatible with the current BCI literature. Significance. The aim of the BCILAB toolbox is to provide the BCI community a powerful toolkit for methods research and evaluation, thereby helping to accelerate the pace of innovation in the field, while complementing the existing spectrum of tools for real-time BCI experimentation, deployment and use.

  11. A Rapid, Onsite, Ultrasensitive Melamine Quantitation Method for Protein Beverages Using Time-Resolved Fluorescence Detection Paper.

    Science.gov (United States)

    Li, Guanghua; Wang, Du; Zhou, Aijun; Sun, Yimin; Zhang, Qi; Poapolathep, Amnart; Zhang, Li; Fan, Zhiyong; Zhang, Zhaowei; Li, Peiwu

    2018-05-02

    To ensure protein beverage safety and prevent illegal melamine use to artificially increase apparent protein content, a rapid, onsite, ultrasensitive detection method for melamine must be developed, because melamine is detrimental to human health and life. Herein, an ultrasensitive time-resolved fluorescence detection paper (TFDP) was developed to detect melamine in protein beverages within 15 min using a one-step sample preparation. The lower limits of detection were 0.89, 0.94, and 1.05 ng/mL, and the linear ranges were 2.67-150, 2.82-150, and 3.15-150 ng/mL (R² > 0.982) for peanut, walnut, and coconut beverages, respectively. The recovery rates were 85.86-110.60% with a coefficient of variation below … For beverage samples, the TFDP and ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) results were consistent. This method is a promising alternative for rapid, onsite detection of melamine in beverages.

  12. Time-resolved photoluminescence of Ga(NAsP) multiple quantum wells grown on Si substrate: Effects of rapid thermal annealing

    Energy Technology Data Exchange (ETDEWEB)

    Woscholski, R., E-mail: ronja.woscholski@physik.uni-marburg.de; Shakfa, M.K.; Gies, S.; Wiemer, M.; Rahimi-Iman, A.; Zimprich, M.; Reinhard, S.; Jandieri, K.; Baranovskii, S.D.; Heimbrodt, W.; Volz, K.; Stolz, W.; Koch, M.

    2016-08-31

    Time-resolved photoluminescence (TR-PL) spectroscopy has been used to study the impact of rapid thermal annealing (RTA) on the optical properties and carrier dynamics in Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates. TR-PL measurements reveal an enhancement in the PL efficiency when the RTA temperature is increased up to 925 °C. Then, the PL intensity dramatically decreases with the annealing temperature. This behavior is explained by the variation of the disorder degree in the studied structures. The analysis of the low-temperature emission-energy-dependent PL decay time enables us to characterize the disorder in the Ga(NAsP) MQWHs. The theoretically extracted energy-scales of disorder confirm the experimental observations. - Highlights: • Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates • Impact of rapid thermal annealing on the optical properties and carrier dynamics • Time resolved photoluminescence spectroscopy was applied. • PL transients became continuously faster with increasing annealing temperature. • Enhancement in the PL efficiency with increasing annealing temperature up to 925 °C.

  13. Time-resolved photoluminescence of Ga(NAsP) multiple quantum wells grown on Si substrate: Effects of rapid thermal annealing

    International Nuclear Information System (INIS)

    Woscholski, R.; Shakfa, M.K.; Gies, S.; Wiemer, M.; Rahimi-Iman, A.; Zimprich, M.; Reinhard, S.; Jandieri, K.; Baranovskii, S.D.; Heimbrodt, W.; Volz, K.; Stolz, W.; Koch, M.

    2016-01-01

    Time-resolved photoluminescence (TR-PL) spectroscopy has been used to study the impact of rapid thermal annealing (RTA) on the optical properties and carrier dynamics in Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates. TR-PL measurements reveal an enhancement in the PL efficiency when the RTA temperature is increased up to 925 °C. Then, the PL intensity dramatically decreases with the annealing temperature. This behavior is explained by the variation of the disorder degree in the studied structures. The analysis of the low-temperature emission-energy-dependent PL decay time enables us to characterize the disorder in the Ga(NAsP) MQWHs. The theoretically extracted energy-scales of disorder confirm the experimental observations. - Highlights: • Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates • Impact of rapid thermal annealing on the optical properties and carrier dynamics • Time resolved photoluminescence spectroscopy was applied. • PL transients became continuously faster with increasing annealing temperature. • Enhancement in the PL efficiency with increasing annealing temperature up to 925 °C

  14. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentration and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
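    The core of the computation is a site-calibrated regression from turbidity to suspended-sediment concentration (SSC), after which instantaneous loads follow from concentration and streamflow. The sketch below (Python/numpy, illustrative numbers) omits the bias corrections and uncertainty analysis covered by the full guidelines:

        import numpy as np

        # Paired calibration samples: sensor turbidity (FNU) and
        # lab-analysed SSC (mg/L); values are hypothetical.
        turb = np.array([12.0, 25, 60, 140, 310, 700])
        ssc = np.array([18.0, 35, 90, 200, 470, 1050])

        # Common log-log model: log10(SSC) = b0 + b1 * log10(turbidity).
        b1, b0 = np.polyfit(np.log10(turb), np.log10(ssc), 1)

        def ssc_from_turbidity(t_fnu):
            return 10 ** (b0 + b1 * np.log10(t_fnu))

        # Load time series from turbidity and streamflow Q; the factor
        # 0.0864 converts (mg/L * m^3/s) to metric tons per day.
        t_series = np.array([20.0, 85.0, 400.0])
        q_series = np.array([3.5, 12.0, 55.0])
        print(0.0864 * ssc_from_turbidity(t_series) * q_series)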

  15. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  16. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    International Nuclear Information System (INIS)

    Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki

    2011-01-01

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using a newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
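    The quantity being computed is simple once the kidneys are segmented: SRV is each kidney's volume as a fraction of the bilateral total. A minimal sketch (Python/numpy, random stand-in masks in place of a real CT segmentation):

        import numpy as np

        def split_renal_volume(left_mask, right_mask, voxel_ml):
            left = left_mask.sum() * voxel_ml
            right = right_mask.sum() * voxel_ml
            total = left + right
            return 100.0 * left / total, 100.0 * right / total

        # Toy binary masks; a 1 x 1 x 1 mm voxel is 0.001 mL.
        rng = np.random.default_rng(4)
        left = rng.random((50, 50, 50)) < 0.55
        right = rng.random((50, 50, 50)) < 0.45
        print(split_renal_volume(left, right, 0.001))   # approx (55%, 45%)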

  17. Green computing: power optimisation of vfi-based real-time multiprocessor dataflow applications

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor

  18. In this issue: Time to replace doctors’ judgement with computers

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2015-11-01

    Full Text Available Informaticians continue to rise to the challenge, set by the English Health Minister, of trying to replace doctors' judgement with computers. This issue describes successes and where there are barriers. However, whilst there is progress, it tends to be incremental, and there are grand challenges to be overcome before computers can replace clinicians. These grand challenges include: (1) improving usability so it is possible to more readily incorporate technology into clinical workflow; (2) rigorous new analytic methods that make use of the mass of available data, 'Big data', to create real-world evidence; (3) faster ways of meeting regulatory and legal requirements, including ensuring privacy; (4) provision of reimbursement models to fund innovative technology that can substitute for clinical time; and (5) recognition that innovations that improve quality also often increase cost. Informatics is more likely to support and augment clinical decision making than to replace clinicians.

  19. Time-of-Flight Sensors in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2009-01-01

    , including Computer Graphics, Computer Vision and Man Machine Interaction (MMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real...

  20. Usefulness of Computed Tomography in pre-surgical evaluation of maxillo-facial pathology with rapid prototyping and surgical pre-planning by virtual reality

    International Nuclear Information System (INIS)

    Toso, Francesco; Zuiani, Chiara; Vergendo, Maurizio; Bazzocchi, Massimo; Salvo, Iolanda; Robiony, Massimo; Politi, Massimo

    2005-01-01

    Purpose. To validate a protocol for creating virtual models to be used in the construction of solid prototypes useful for the planning and simulation of maxillo-facial surgery, in particular for very complex anatomical and pathological problems, and to optimize communications between the radiology, engineering and surgical laboratories. Methods and materials. We studied 16 patients with different clinical problems of the maxillo-facial district. Exams were performed with multidetector computed tomography (MDCT) and single-slice computed tomography (SDCT) with axial scans, collimation of 0.5-2 mm, and a reconstruction interval of 1 mm. Subsequently we performed 2D multiplanar reconstructions and 3D volume-rendering reconstructions. We exported the DICOM images to the engineering laboratory to recognize and isolate the bony structures by software. With these data the solid prototypes were generated using stereolithography. To date, surgery has been performed on 12 patients after simulation of the procedure on the stereolithography model. Results. The solid prototypes constructed in the difficult cases were sufficiently detailed despite problems related to the artefacts generated by dental fillings and prostheses. In the remaining cases the MPR/3D images were sufficiently detailed for surgical planning. The surgical results were excellent in all patients who underwent surgery, and the surgeons were satisfied with the improvement in quality and the reduction in time required for the procedure. Conclusions. MDCT enables rapid prototyping using solid replication, which was very helpful in maxillo-facial surgery, despite problems related to artefacts due to dental fillings and prostheses within the acquisition field; solutions for this problem are work in progress. The protocol used for communication between the different laboratories was valid and reproducible.

  1. Real-time PCR-based method for rapid detection of Aspergillus niger and Aspergillus welwitschiae isolated from coffee.

    Science.gov (United States)

    von Hertwig, Aline Morgan; Sant'Ana, Anderson S; Sartori, Daniele; da Silva, Josué José; Nascimento, Maristela S; Iamanaka, Beatriz Thie; Pelegrinelli Fungaro, Maria Helena; Taniwaki, Marta Hiromi

    2018-05-01

    Some species from Aspergillus section Nigri are morphologically very similar and have collectively been called the A. niger aggregate. Although the species included in this group are morphologically very similar, they differ in their ability to produce mycotoxins and other metabolites, and their taxonomical status has evolved continuously. Among them, A. niger and A. welwitschiae are ochratoxin A and fumonisin B2 producers, and their detection and/or identification is of crucial importance for food safety. The aim of this study was the development of a real-time PCR-based method for simultaneous discrimination of A. niger and A. welwitschiae from other species of the A. niger aggregate isolated from coffee beans. One primer pair and a hybridization probe specific for the detection of A. niger and A. welwitschiae strains were designed based on the BenA gene sequences and used in a real-time PCR assay for the rapid discrimination of both these species from all others of the A. niger aggregate. The real-time PCR assay was shown to be 100% efficient in discriminating the 73 isolates of A. niger/A. welwitschiae from the other A. niger aggregate species analyzed as a negative control. This result supports the use of this technique as a good tool for the rapid detection of these important toxigenic species.

  2. Individual and family environmental correlates of television and computer time in 10- to 12-year-old European children: the ENERGY-project.

    Science.gov (United States)

    Verloigne, Maïté; Van Lippevelde, Wendy; Bere, Elling; Manios, Yannis; Kovács, Éva; Grillenberger, Monika; Maes, Lea; Brug, Johannes; De Bourdeaudhuij, Ilse

    2015-09-18

    The aim was to investigate which individual and family environmental factors are related to television and computer time separately in 10- to 12-year-old children within and across five European countries (Belgium, Germany, Greece, Hungary, Norway). Data were used from the ENERGY-project. Children and one of their parents completed a questionnaire, including questions on screen time behaviours and related individual and family environmental factors. Family environmental factors included social, political, economic and physical environmental factors. Complete data were obtained from 2022 child-parent dyads (53.8% girls, mean child age 11.2 ± 0.8 years; mean parental age 40.5 ± 5.1 years). To examine the association between individual and family environmental factors (i.e. independent variables) and television/computer time (i.e. dependent variables) in each country, multilevel regression analyses were performed using MLwiN 2.22, adjusting for children's sex and age. In all countries, children reported more television and/or computer time if children and their parents thought that the maximum recommended level for watching television and/or using the computer was higher, and if children had a higher preference for television watching and/or computer use and a lower self-efficacy to control television watching and/or computer use. Most physical and economic environmental variables were not significantly associated with television or computer time. Slightly more individual factors were related to children's computer time and more parental social environmental factors to children's television time. We also found different correlates across countries: parental co-participation in television watching was significantly positively associated with children's television time in all countries, except for Greece. A higher level of parental television and computer time was only associated with a higher level of children's television and computer time in Hungary. Having rules

  3. International assessment of functional computer abilities

    OpenAIRE

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on the goals of computer education impede the establishment of stable curricula for "general computer education" or computer literacy. In this context the construction of instruments for student assessment remains a challenge. Seeking to...

  4. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  5. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    Science.gov (United States)

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain event-related potential (ERP) technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity and that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
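    The reported pattern, a feedback prediction-error signal that shrinks with learning while a value signal at choice presentation grows, mirrors the simplest reinforcement-learning account. A minimal Rescorla-Wagner-style sketch (Python, illustrative parameters):

        alpha, value = 0.2, 0.0    # learning rate, learned option value
        true_reward = 1.0
        for trial in range(1, 11):
            choice_signal = value              # value at choice presentation
            delta = true_reward - value        # prediction error at feedback
            value += alpha * delta
            print(f"trial {trial:2d}: choice signal {choice_signal:.3f}, "
                  f"feedback error {delta:.3f}")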

  6. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code significantly improving the quality of images captured by a passive THz imaging system. The code is not designed only for one THz passive device: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to the computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to the various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise after computer processing and yields a good quality image. To illustrate the efficiency of the developed approach we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
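
    The authors' filters are original and not disclosed here, but the general flavor of spatially filtering a noisy image can be sketched with a generic Gaussian smooth-plus-high-boost pipeline; this is a stand-in for illustration, not the paper's method.

      # Generic spatial filtering of a noisy thermal-style image: smooth with a
      # Gaussian kernel to suppress noise, then high-boost to restore edges.
      import numpy as np

      def gaussian_kernel(size=5, sigma=1.0):
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
          return k / k.sum()

      def convolve2d(img, k):
          # Naive 'same' convolution with edge padding (slow but dependency-free).
          p = k.shape[0] // 2
          padded = np.pad(img, p, mode="edge")
          out = np.empty_like(img, dtype=float)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
          return out

      rng = np.random.default_rng(0)
      image = rng.normal(0.0, 0.3, (64, 64))
      image[20:40, 25:35] += 1.0                      # synthetic hidden 'object'
      smoothed = convolve2d(image, gaussian_kernel())
      high_boost = smoothed + 1.5 * (smoothed - convolve2d(smoothed, gaussian_kernel()))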

  7. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10-100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, and the effort involved is therefore also acceptable.
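
    The factor-of-10-100 payoff from architecture-aware code is easy to reproduce. The sketch below (in Python purely for illustration; the tutorial itself addresses compiled code) contrasts a naive triple loop with a call that delegates to an optimized, cache-blocked BLAS kernel.

      # Naive triple-loop matrix multiply versus a BLAS-backed call.
      # On typical hardware the second runs orders of magnitude faster,
      # illustrating the point that optimization can beat buying faster hardware.
      import time
      import numpy as np

      n = 200
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)

      t0 = time.perf_counter()
      c = np.zeros((n, n))
      for i in range(n):
          for j in range(n):
              s = 0.0
              for k in range(n):
                  s += a[i, k] * b[k, j]
              c[i, j] = s
      t_naive = time.perf_counter() - t0

      t0 = time.perf_counter()
      c_fast = a @ b   # delegates to an optimized, cache-blocked BLAS kernel
      t_blas = time.perf_counter() - t0

      assert np.allclose(c, c_fast)
      print(f"naive: {t_naive:.2f}s  blas: {t_blas:.4f}s  speedup: {t_naive / t_blas:.0f}x")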

  8. A nationwide web-based automated system for early outbreak detection and rapid response in China

    Directory of Open Access Journals (Sweden)

    Yilan Liao

    2011-03-01

    Full Text Available Timely reporting, effective analyses and rapid distribution of surveillance data can assist in detecting the aberration of disease occurrence and further facilitate a timely response. In China, a new nationwide web-based automated system for outbreak detection and rapid response was developed in 2008. The China Infectious Disease Automated-alert and Response System (CIDARS) was developed by the Chinese Center for Disease Control and Prevention based on the surveillance data from the existing electronic National Notifiable Infectious Diseases Reporting Information System (NIDRIS), started in 2004. NIDRIS greatly improved the timeliness and completeness of data reporting, with real-time reporting of information via the Internet. CIDARS further facilitates the data analysis, aberration detection, signal dissemination, signal response and information communication needed by public health departments across the country. In CIDARS, three aberration detection methods are used to detect the unusual occurrence of 28 notifiable infectious diseases at the county level and to transmit that information either in real time or on a daily basis. The Internet, computers and mobile phones are used to accomplish rapid signal generation and dissemination, timely reporting and reviewing of the signal response results. CIDARS has been used nationwide since 2008; all Centers for Disease Control and Prevention (CDC) in China at the county, prefecture, provincial and national levels are involved in the system. It assists with early outbreak detection at the local level and prompts reporting of unusual disease occurrences or potential outbreaks to CDCs throughout the country.
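
    This summary does not specify CIDARS's three detection methods, but a representative moving-window detector in the same spirit (similar to the EARS C1 algorithm) can be sketched as follows; the baseline length and threshold are illustrative assumptions.

      # Moving-window aberration detection: flag a day whose case count exceeds
      # the mean of the preceding baseline window by more than 3 standard deviations.
      import statistics

      def detect_aberrations(counts, baseline=7, threshold=3.0):
          signals = []
          for t in range(baseline, len(counts)):
              window = counts[t - baseline:t]
              mu = statistics.mean(window)
              sd = statistics.stdev(window) or 1.0   # guard against zero variance
              if (counts[t] - mu) / sd > threshold:
                  signals.append(t)
          return signals

      daily_cases = [2, 3, 1, 2, 4, 2, 3, 2, 3, 14, 4, 2]
      print(detect_aberrations(daily_cases))  # -> [9]: the spike on day 9 is flagged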

  9. Rapid and efficient radiosynthesis of [{sup 123}I]I-PK11195, a single photon emission computed tomography tracer for peripheral benzodiazepine receptors

    Energy Technology Data Exchange (ETDEWEB)

    Pimlott, Sally L. [Department of Clinical Physics, West of Scotland Radionuclide Dispensary, Western Infirmary, G11 6NT Glasgow (United Kingdom)], E-mail: s.pimlott@clinmed.gla.ac.uk; Stevenson, Louise [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom); Wyper, David J. [Institute of Neurological Sciences, Southern General Hospital, G51 4TF Glasgow (United Kingdom); Sutherland, Andrew [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom)

    2008-07-15

    Introduction: [{sup 123}I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [{sup 123}I]I-PK11195 in order to develop a rapid and efficient method that obtains [{sup 123}I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [{sup 123}I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [{sup 123}I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [{sup 123}I]I-PK11195. Conclusions: [{sup 123}I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [{sup 123}I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved.

  10. Tune-control improvements on the rapid-cycling synchrotron

    International Nuclear Information System (INIS)

    Potts, C.; Faber, M.; Gunderson, G.; Knott, M.; Voss, D.

    1981-01-01

    The as-built lattice of the Rapid-Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. A set of octupole magnets and programmable power supplies with similar dynamic qualities have been constructed and installed to control the anticipated high-intensity transverse instability. This system will be operational in the spring of 1981

  11. An assessment of the real-time application capabilities of the SIFT computer system

    Science.gov (United States)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  12. Computation of the Short-Time Linear Canonical Transform with Dual Window

    Directory of Open Access Journals (Sweden)

    Lei Huang

    2017-01-01

    Full Text Available The short-time linear canonical transform (STLCT), which maps the time domain signal into the joint time and frequency domain, has recently attracted some attention in the area of signal processing. However, its applications are still limited by the fact that the selection of coefficients of the short-time linear canonical series (STLCS) is not unique, because the time and frequency elementary functions (together known as the basis functions of the STLCS) do not constitute an orthogonal basis. To solve this problem, this paper investigates a dual window solution. First, the nonorthogonality problem suffered by the original window is resolved by an orthogonality condition with a dual window. Then, based on the obtained condition, a dual window computation approach of the GT is extended to the STLCS. In addition, simulations verify the validity of the proposed condition and solutions. Furthermore, some possible applied directions are discussed.
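
    For reference, one common convention for the LCT kernel and its short-time form is given below; normalizations vary across the STLCT literature, so this should be read as an assumption rather than the paper's exact definitions.

      % One common convention (an assumption; normalizations differ in the
      % literature) for the LCT kernel with parameter matrix M = (a, b; c, d),
      % ad - bc = 1, b != 0, and the windowed (short-time) transform with
      % analysis window g:
      \[
        K_M(\tau, u) = \frac{1}{\sqrt{2\pi i b}}
          \exp\!\Big(\tfrac{i}{2b}\big(a\tau^2 - 2\tau u + d u^2\big)\Big),
      \]
      \[
        \mathrm{STLCT}_f(t, u) = \int_{-\infty}^{\infty}
          f(\tau)\, g^{*}(\tau - t)\, K_M(\tau, u)\, d\tau .
      \]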

  13. Manual cross check of computed dose times for motorised wedged fields

    International Nuclear Information System (INIS)

    Porte, J.

    2001-01-01

    If a mass of tissue equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields, to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
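
    A commonly quoted first-order approximation for the apportionment (stated here as an assumption; the paper's programs implement a fuller calculation) sets the tangent of the effective wedge angle proportional to the wedged fraction of the total exposure time.

      # First-order apportionment for a motorized wedge: the wedged fraction w of
      # total time approximates tan(theta_eff) = w * tan(theta_wedge).
      # This ignores beam-hardening and profile effects handled by a full dose model.
      import math

      def motorized_wedge_split(theta_eff_deg, theta_wedge_deg=60.0, total_time=1.0):
          w = math.tan(math.radians(theta_eff_deg)) / math.tan(math.radians(theta_wedge_deg))
          if not 0.0 <= w <= 1.0:
              raise ValueError("effective angle must lie between 0 and the physical wedge angle")
          return (1.0 - w) * total_time, w * total_time  # (open-field time, wedged time)

      open_t, wedged_t = motorized_wedge_split(30.0)     # 30-degree effective wedge
      print(f"open: {open_t:.3f}  wedged: {wedged_t:.3f}")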

  14. ADAPTATION OF JOHNSON SEQUENCING ALGORITHM FOR JOB SCHEDULING TO MINIMISE THE AVERAGE WAITING TIME IN CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    SOUVIK PAL

    2016-09-01

    Full Text Available Cloud computing is an emerging paradigm of Internet-centric business computing where Cloud Service Providers (CSPs) provide services to customers according to their needs. The key idea behind cloud computing is on-demand sharing of resources available in the resource pool provided by the CSP, which implies a new emerging business model. The resources are provisioned when jobs arrive. Job scheduling and the minimization of waiting time are challenging issues in cloud computing. When a large number of jobs are requested, they have to wait to be allocated to servers, which in turn may increase the queue length and also the waiting time. This paper presents a system design for implementation based on the Johnson scheduling algorithm, which provides the optimal sequence. With that sequence, service times can be obtained. The waiting time and queue length can then be reduced using a queuing model with multiple servers and finite capacity, which improves the job scheduling model.
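
    The Johnson sequencing step itself, for the classic two-machine flow shop, is compact; the sketch below implements the standard rule with illustrative job data, leaving aside the paper's queuing-model extensions.

      # Johnson's rule for the two-machine flow shop: jobs whose first-machine time
      # is smaller go to the front (sorted ascending by that time); the rest go to
      # the back (sorted descending by second-machine time). Minimizes makespan.
      def johnson_sequence(jobs):
          # jobs: list of (name, time_on_m1, time_on_m2)
          front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
          back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
          return front + back

      def makespan(seq):
          t1 = t2 = 0
          for _, a, b in seq:
              t1 += a                 # machine 1 finishes this job at t1
              t2 = max(t2, t1) + b    # machine 2 starts when both are free
          return t2

      jobs = [("J1", 3, 2), ("J2", 5, 1), ("J3", 1, 4), ("J4", 6, 6), ("J5", 2, 3)]
      seq = johnson_sequence(jobs)
      print([j[0] for j in seq], "makespan:", makespan(seq))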

  15. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    Science.gov (United States)

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier (1), David J. Dix (2) and Stephen A. Krawetz (1). (1) Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  16. The research and application of green computer room environmental monitoring system based on internet of things technology

    Science.gov (United States)

    Wei, Wang; Chongchao, Pan; Yikai, Liang; Gang, Li

    2017-11-01

    With the rapid development of information technology, the scale of data centers increases quickly, and the energy consumption of computer rooms also increases rapidly; energy consumption for air conditioning cooling makes up a large proportion of this. How to apply new technology to reduce the energy consumption of the computer room has become an important topic in current energy-saving research. This paper studies Internet of Things technology and designs a green computer room environmental monitoring system. The system acquires real-time environmental data through wireless sensor network technology and presents them in a three-dimensional view. The environment monitor provides a computer room assets view, a temperature cloud view, a humidity cloud view, a microenvironment view and so on. According to the condition of the microenvironment, the air volume, temperature and humidity parameters of the air conditioning can then be adjusted for individual equipment cabinets to realize precise air conditioning refrigeration. This reduces the energy consumption of air conditioning and, as a result, greatly reduces the overall energy consumption of the green computer room. We applied this project in the computer center of Weihai, and after a year of testing and operation we found that it achieved a good energy-saving effect, which fully verified the effectiveness of this project for the energy conservation of the computer room.

  17. Granular computing: perspectives and challenges.

    Science.gov (United States)

    Yao, JingTao; Vasilakos, Athanasios V; Pedrycz, Witold

    2013-12-01

    Granular computing, as a new and rapidly growing paradigm of information processing, has attracted many researchers and practitioners. Granular computing is an umbrella term to cover any theories, methodologies, techniques, and tools that make use of information granules in complex problem solving. The aim of this paper is to review foundations and schools of research and to elaborate on current developments in granular computing research. We first review some basic notions of granular computing. Classification and descriptions of various schools of research in granular computing are given. We also present and identify some research directions in granular computing.

  18. Reservoir computer predictions for the Three Meter magnetic field time evolution

    Science.gov (United States)

    Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.

    2017-12-01

    The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir computer deep learning algorithm. The experiment is a three-meter diameter outer sphere and a one-meter diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate at up to 4 and 14 Hz respectively, giving a Reynolds number near 10⁸. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use these magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that the behavior of such a complicated MHD system can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
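
    The reservoir computing approach is simple enough to sketch. The minimal echo state network below (illustrative hyperparameters and a synthetic signal, not the group's trained model) learns a one-step-ahead readout by ridge regression and then runs closed-loop.

      # Minimal echo state network: a fixed random recurrent reservoir driven by the
      # input; only the linear readout is trained (ridge regression), then the net
      # runs closed-loop to predict the signal's future evolution.
      import numpy as np

      rng = np.random.default_rng(42)
      N, rho, ridge = 300, 0.9, 1e-6
      Win = rng.uniform(-0.5, 0.5, (N, 1))
      W = rng.normal(0, 1, (N, N))
      W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius

      t = np.arange(3000) * 0.05
      u = np.sin(t) + 0.5 * np.sin(2.1 * t)             # stand-in for a sensor signal

      x = np.zeros(N)
      states = []
      for ut in u[:-1]:
          x = np.tanh(Win[:, 0] * ut + W @ x)
          states.append(x.copy())
      X = np.array(states[200:])                        # drop transient washout
      Y = u[201:]                                       # one-step-ahead targets
      Wout = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

      # Closed-loop prediction: feed the network its own output
      pred, xi, y = [], x.copy(), u[-1]
      for _ in range(100):
          xi = np.tanh(Win[:, 0] * y + W @ xi)
          y = Wout @ xi
          pred.append(float(y))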

  19. A stable computational scheme for stiff time-dependent constitutive equations

    International Nuclear Information System (INIS)

    Shih, C.F.; Delorenzi, H.G.; Miller, A.K.

    1977-01-01

    Viscoplasticity and creep type constitutive equations are increasingly being employed in finite element codes for evaluating the deformation of high temperature structural members. These constitutive equations frequently exhibit stiff regimes which make an analytical assessment of the structure very costly. A computational scheme for handling deformation in stiff regimes is proposed in this paper. By the finite element discretization, the governing partial differential equations in the spatial (x) and time (t) variables are reduced to a system of nonlinear ordinary differential equations in the independent variable t. The constitutive equations are expanded in a Taylor series about selected values of t. The resulting system of differential equations is then integrated by an implicit scheme which employs a predictor technique to initiate the Newton-Raphson procedure. To examine the stability and accuracy of the computational scheme, a series of calculations was carried out for uniaxial specimens and thick wall tubes subjected to mechanical and thermal loading. (Auth.)
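
    The essence of the scheme (an implicit step started from a predictor and solved by Newton-Raphson) can be sketched on a stiff scalar model problem; the test equation below is illustrative, not the paper's constitutive law.

      # Implicit (backward) Euler with Newton-Raphson for a stiff scalar ODE
      # y' = f(t, y). An explicit-Euler predictor supplies the Newton start value,
      # echoing the predictor-initiated implicit scheme described above.
      import math

      def backward_euler(f, dfdy, y0, t0, t1, n):
          h = (t1 - t0) / n
          t, y = t0, y0
          for _ in range(n):
              t_next = t + h
              z = y + h * f(t, y)                   # predictor (explicit Euler)
              for _ in range(20):                   # Newton iterations
                  g = z - y - h * f(t_next, z)
                  dg = 1.0 - h * dfdy(t_next, z)
                  step = g / dg
                  z -= step
                  if abs(step) < 1e-12:
                      break
              t, y = t_next, z
          return y

      # Stiff test problem y' = -1000*(y - cos(t)), y(0) = 0
      f = lambda t, y: -1000.0 * (y - math.cos(t))
      dfdy = lambda t, y: -1000.0
      print(backward_euler(f, dfdy, 0.0, 0.0, 1.0, 50))   # stable despite h >> 1/1000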

  20. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    Science.gov (United States)

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need to acquire many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine fluorescence lifetimes accurately in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed.
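
    The two-gate RLD estimator is a closed-form textbook expression; the sketch below states it with illustrative gate settings.

      # Standard two-gate rapid lifetime determination (RLD): for two equal-width
      # gates whose openings are separated by dt, integrated intensities I1 and I2
      # of a single-exponential decay give tau = dt / ln(I1 / I2).
      import math

      def rld_lifetime(i1, i2, gate_separation_ns):
          return gate_separation_ns / math.log(i1 / i2)

      # Synthetic check: tau = 2.5 ns, gates separated by 4 ns
      tau_true, dt = 2.5, 4.0
      i1, i2 = 1000.0, 1000.0 * math.exp(-dt / tau_true)
      print(rld_lifetime(i1, i2, dt))   # -> 2.5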

  1. Time-resolved temperature measurements in a rapid compression machine using quantum cascade laser absorption in the intrapulse mode

    KAUST Repository

    Nasir, Ehson Fawad

    2016-07-16

    A temperature sensor based on the intrapulse absorption spectroscopy technique has been developed to measure in situ temperature time-histories in a rapid compression machine (RCM). Two quantum-cascade lasers (QCLs) emitting near 4.55 μm and 4.89 μm were operated in pulsed mode, causing a frequency "down-chirp" across two ro-vibrational transitions of carbon monoxide. The down-chirp phenomenon resulted in large spectral tuning (δν ∼2.8 cm⁻¹) within a single pulse of each laser at a high pulse repetition frequency (100 kHz). The wide tuning range allowed the application of the two-line thermometry technique, thus making the sensor quantitative and calibration-free. The sensor was first tested in non-reactive CO-N2 gas mixtures in the RCM and then applied to cases of n-pentane oxidation. Experiments were carried out for end-of-compression (EOC) pressures and temperatures ranging over 9.21-15.32 bar and 745-827 K, respectively. Measured EOC temperatures agreed with isentropic calculations within 5%. The temperature rise measured during the first-stage ignition of n-pentane is over-predicted by zero-dimensional kinetic simulations. This work presents, for the first time, highly time-resolved temperature measurements in reactive and non-reactive rapid compression machine experiments. © 2016 Elsevier Ltd.
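
    The two-line thermometry relation that makes the sensor calibration-free follows from the Boltzmann fraction; the sketch below inverts the standard ratio expression (the line parameters shown are placeholders, not the CO transitions actually used).

      # Two-line thermometry: the ratio R of integrated absorbances of two
      # transitions depends on temperature through the lower-state energies
      # E1, E2 (in cm^-1): R(T) = R(T0) * exp(-c2*(E1 - E2)*(1/T - 1/T0)),
      # with c2 = h*c/k = 1.4388 cm*K. Invert for T.
      import math

      C2 = 1.4388  # second radiation constant, cm*K

      def temperature_from_ratio(R, R0, T0, E1, E2):
          return 1.0 / (1.0 / T0 - math.log(R / R0) / (C2 * (E1 - E2)))

      # Placeholder line pair and reference condition (not the paper's values)
      E1, E2, T0, R0 = 806.38, 2543.06, 296.0, 1.0
      R_measured = R0 * math.exp(-C2 * (E1 - E2) * (1 / 800.0 - 1 / T0))  # synthetic
      print(temperature_from_ratio(R_measured, R0, T0, E1, E2))           # -> 800.0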

  2. Adjustment to subtle time constraints and power law learning in rapid serial visual presentation

    Directory of Open Access Journals (Sweden)

    Jacqueline Chakyung Shin

    2015-11-01

    Full Text Available We investigated whether attention could be modulated through the implicit learning of temporal information in a rapid serial visual presentation (RSVP) task. Participants identified two target letters among numeral distractors. The stimulus-onset asynchrony immediately following the first target (SOA1) varied at three levels (70, 98, and 126 ms), either randomly between trials or fixed within blocks of trials. Practice over three consecutive days resulted in a continuous improvement in the identification rate for both targets and attenuation of the attentional blink (AB), a decrement in target (T2) identification when presented 200-400 ms after another target (T1). Blocked SOA1s led to a faster rate of improvement in RSVP performance and more target order reversals relative to random SOA1s, suggesting that the implicit learning of SOA1 positively affected performance. The results also reveal power-law learning curves for individual target identification as well as for the reduction in the AB decrement. These learning curves reflect the spontaneous emergence of skill through subtle attentional modulations rather than general attentional distribution. Together, the results indicate that implicit temporal learning can improve high-level and rapid cognitive processing, and they highlight the sensitivity and adaptability of the attentional system to subtle constraints in stimulus timing.

  3. Effect of a Real-Time Electronic Dashboard on a Rapid Response System.

    Science.gov (United States)

    Fletcher, Grant S; Aaronson, Barry A; White, Andrew A; Julka, Reena

    2017-11-20

    A rapid response system (RRS) may have limited effectiveness when inpatient providers fail to recognize signs of early patient decompensation. We evaluated the impact of an electronic medical record (EMR)-based alerting dashboard on outcomes associated with RRS activation. We used a repeated treatment study in which the dashboard display was successively turned on and off each week for ten 2-week cycles over a 20-week period on the inpatient acute care wards of an academic medical center. The Rapid Response Team (RRT) dashboard displayed all hospital patients in a single view ranked by severity score, updated in real time. The dashboard could be seen within the EMR by any provider, including RRT members. The primary outcomes were the incidence rate ratio (IRR) of all RRT activations, unexpected ICU transfers, cardiopulmonary arrests and deaths on general medical-surgical wards (wards). We conducted an exploratory analysis of first RRT activations. There were 6736 eligible admissions during the 20-week study period. There was no change in overall RRT activations (IRR = 1.14, p = 0.07), but a significant increase in first RRT activations (IRR = 1.20, p = 0.04). There were no significant differences in unexpected ICU transfers (IRR = 1.15, p = 0.25), cardiopulmonary arrests on general wards (IRR = 1.46, p = 0.43), or deaths on general wards (IRR = 0.96, p = 0.89). The introduction of the RRT dashboard was associated with increased initial RRT activations but not overall activations, unexpected ICU transfers, cardiopulmonary arrests, or death. The RRT dashboard is a novel tool to help providers recognize patient decompensation and may improve initial RRT notification.

  4. Decreasing Transition Times in Elementary School Classrooms: Using Computer-Assisted Instruction to Automate Intervention Components

    Science.gov (United States)

    Hine, Jeffrey F.; Ardoin, Scott P.; Foster, Tori E.

    2015-01-01

    Research suggests that students spend a substantial amount of time transitioning between classroom activities, which may reduce time spent academically engaged. This study used an ABAB design to evaluate the effects of a computer-assisted intervention that automated intervention components previously shown to decrease transition times. We examined…

  5. Enabling Real-time Water Decision Support Services Using Model as a Service

    Science.gov (United States)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at University of Texas Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application has been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.

  6. Impacts of Watching Television and Computer Using on Student' Reading Habits

    Directory of Open Access Journals (Sweden)

    Ayşe Gül Aksaçlıoğlu

    2013-11-01

    Full Text Available Reading habits contribute to both the cognitive and social development of individuals in many respects, a function that persists amid the rapid social change of today's world. However, children's television watching and computer use have recently been seen to affect their reading habits. Defining the positive or negative impacts of television and computers on children, and finding solutions, therefore carries significant importance. The aim of this study is to determine the influence of television watching and computer use on children's reading habits. To find out, a survey was performed on all 5th grade students at Bilkent Private Primary School and Çankaya Public Primary School, located within the Ankara Büyükşehir Municipality borders. The questionnaire was administered to 222 students in these two schools. The study makes clear that students prefer playing on computers and watching television in their leisure time to reading books, and there is an apparent inverse relationship between the time spent using computers and watching television and the time spent on reading.

  7. Computer-aided detection (CAD) of lung nodules in CT scans: radiologist performance and reading time with incremental CAD assistance

    International Nuclear Information System (INIS)

    Roos, Justus E.; Paik, David; Olsen, David; Liu, Emily G.; Leung, Ann N.; Mindelzun, Robert; Choudhury, Kingshuk R.; Napel, Sandy; Rubin, Geoffrey D.; Chow, Lawrence C.; Naidich, David P.

    2010-01-01

    The diagnostic performance of radiologists using incremental CAD assistance for lung nodule detection on CT and their temporal variation in performance during CAD evaluation were assessed. CAD was applied to 20 chest multidetector-row computed tomography (MDCT) scans containing 190 non-calcified ≥3-mm nodules. After free search, three radiologists independently evaluated a maximum of up to 50 CAD detections/patient. Multiple free-response ROC curves were generated for free search and successive CAD evaluation, by incrementally adding CAD detections one at a time to the radiologists' performance. The sensitivity for free search was 53% (range, 44%-59%) at 1.15 false positives (FP)/patient and increased with CAD to 69% (range, 59-82%) at 1.45 FP/patient. CAD evaluation initially resulted in a sharp rise in sensitivity of 14% with a minimal increase in FP over a time period of 100 s, followed by flattening of the sensitivity increase to only 2%. This transition resulted from a greater prevalence of true positive (TP) versus FP detections at early CAD evaluation and not from a temporal change in readers' performance. The time spent for TP (9.5 s ± 4.5 s) and false negative (FN) (8.4 s ± 6.7 s) detections was similar; FP decisions took two to three times longer (14.4 s ± 8.7 s) than true negative (TN) decisions (4.7 s ± 1.3 s). When CAD output is ordered by CAD score, an initial period of rapid performance improvement slows significantly over time because of non-uniformity in the distribution of TP CAD output and not because of changing reader performance over time. (orig.)

  8. Distributed Processing in Cloud Computing

    OpenAIRE

    Mavridis, Ilias; Karatza, Eleni

    2016-01-01

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016) Timisoara, Romania. February 8-11, 2016. Cloud computing offers a wide range of resources and services through the Internet that can be used for various purposes. The rapid growth of cloud computing has exempted many companies and institutions from the burden of maintaining expensive hardware and software infrastructure. With characteristics like high scalability, availability ...

  9. Single-Trial Event-Related Potential Based Rapid Image Triage System

    Directory of Open Access Journals (Sweden)

    Ke Yu

    2011-06-01

    Full Text Available Searching for points of interest (POI) in large-volume imagery is a challenging problem with few good solutions. In this work, a neural engineering approach called rapid image triage (RIT), which could offer about a ten-fold speed-up in POI searching, is developed. It is essentially a cortically-coupled computer vision technique, whereby the user is presented bursts of images at a speed of 6-15 images per second and a neural signal called the event-related potential (ERP) is used as the 'cue' that the user has seen an image of high relevance likelihood. Compared to past efforts, the implemented system has several unique features: (1) it applies overlapping frames in image chip preparation, to ensure rapid image triage performance; (2) a novel common spatial-temporal pattern (CSTP) algorithm that makes use of both spatial and temporal patterns of ERP topography is proposed for high-accuracy single-trial ERP detection; (3) a weighted version of the probabilistic support vector machine (SVM) is used to address the inherent unbalanced nature of single-trial ERP detection for RIT. The high accuracy, fast learning, and real-time capability of the developed system, shown on 20 subjects, demonstrate the feasibility of a brain-machine integrated rapid image triage system for fast detection of POI from large-volume imagery.

  10. Development of wireless brain computer interface with embedded multitask scheduling and its application on real-time driver's drowsiness detection and warning.

    Science.gov (United States)

    Lin, Chin-Teng; Chen, Yu-Chieh; Huang, Teng-Yi; Chiu, Tien-Ting; Ko, Li-Wei; Liang, Sheng-Fu; Hsieh, Hung-Yi; Hsu, Shang-Hwa; Duann, Jeng-Ren

    2008-05-01

    Biomedical signal monitoring systems have been rapidly advanced with electronic and information technologies in recent years. However, most of the existing physiological signal monitoring systems can only record the signals without the capability of automatic analysis. In this paper, we proposed a novel brain-computer interface (BCI) system that can acquire and analyze electroencephalogram (EEG) signals in real-time to monitor human physiological as well as cognitive states, and, in turn, provide warning signals to the users when needed. The BCI system consists of a four-channel biosignal acquisition/amplification module, a wireless transmission module, a dual-core signal processing unit, and a host system for display and storage. The embedded dual-core processing system with multitask scheduling capability was proposed to acquire and process the input EEG signals in real time. In addition, the wireless transmission module, which eliminates the inconvenience of wiring, can be switched between radio frequency (RF) and Bluetooth according to the transmission distance. Finally, the real-time EEG-based drowsiness monitoring and warning algorithms were implemented and integrated into the system to close the loop of the BCI system. The practical online testing demonstrates the feasibility of using the proposed system with the ability of real-time processing, automatic analysis, and online warning feedback in real-world operation and living environments.

  11. Rapid serial visual presentation design for cognition

    CERN Document Server

    Spence, Robert

    2013-01-01

    A powerful new image presentation technique has evolved over the last twenty years, and its value demonstrated through its support of many and varied common tasks. Conceptually, Rapid Serial Visual Presentation (RSVP) is basically simple, exemplified in the physical world by the rapid riffling of the pages of a book in order to locate a known image. Advances in computation and graphics processing allow RSVP to be applied flexibly and effectively to a huge variety of common tasks such as window shopping, video fast-forward and rewind, TV channel selection and product browsing. At its heart is a

  12. A Brief Analysis of Development Situations and Trend of Cloud Computing

    Science.gov (United States)

    Yang, Wenyan

    2017-12-01

    In recent years, the rapid development of Internet technology has radically changed people's work, learning and lifestyles. More and more activities are completed by means of computers and networks. The amount of information and data generated grows day by day, and people rely ever more on computers, whose computing power often fails to meet users' demands for accuracy and rapidity. Cloud computing technology has developed rapidly and is widely applied in the computer industry as a result of its advantages of high precision, fast computation and easy usage; it has become a focus of current information research. In this paper, the development situation and trends of cloud computing are analyzed and discussed.

  13. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    Science.gov (United States)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  14. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy works on conventional acquisition algorithms by resampling the main lobe of received broadband signals at a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated by the number of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis were conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites acquired successfully.

  15. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy works on conventional acquisition algorithms by resampling the main lobe of received broadband signals at a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated by the number of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis were conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
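
    The circular correlation at the core of acquisition is typically computed with FFTs, and resampling shortens those transforms. Below is a generic parallel code-phase search sketch, including the highest-to-second-highest peak ratio used as the acquisition threshold; the code length and shift are illustrative, not the paper's L2C processing.

      # FFT-based circular correlation for code-phase search: correlate the
      # received samples against a local code replica at one Doppler bin.
      # Resampling to a lower rate shortens the FFTs, cutting computation.
      import numpy as np

      def circular_correlate(received, replica):
          # corr[k] = sum_n received[n] * replica[(n - k) mod N]
          return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(replica)))

      rng = np.random.default_rng(7)
      code = rng.choice([-1.0, 1.0], 1023)             # PRN-like chip sequence
      true_shift = 417
      rx = np.roll(code, true_shift) + rng.normal(0, 0.5, code.size)

      corr = np.abs(circular_correlate(rx, code))
      peak = int(np.argmax(corr))
      ratio = corr[peak] / np.partition(corr, -2)[-2]  # highest / second-highest
      print(f"estimated shift: {peak}, peak ratio: {ratio:.2f}")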

  16. Artificial neuron operations and spike-timing-dependent plasticity using memristive devices for brain-inspired computing

    Science.gov (United States)

    Marukame, Takao; Nishi, Yoshifumi; Yasuda, Shin-ichi; Tanamoto, Tetsufumi

    2018-04-01

    The use of memristive devices for creating artificial neurons is promising for brain-inspired computing from the viewpoints of computation architecture and learning protocol. We present an energy-efficient multiplier accumulator based on a memristive array architecture incorporating both analog and digital circuitries. The analog circuitry is used to full advantage for neural networks, as demonstrated by the spike-timing-dependent plasticity (STDP) in fabricated AlOx/TiOx-based metal-oxide memristive devices. STDP protocols for controlling periodic analog resistance with long-range stability were experimentally verified using a variety of voltage amplitudes and spike timings.
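
    The pair-based STDP window such protocols implement can be summarized in a few lines; the time constants and amplitudes below are generic textbook assumptions, not the measured device parameters.

      # Pair-based STDP: a presynaptic spike preceding a postsynaptic one
      # (dt = t_post - t_pre > 0) potentiates the weight; the reverse depresses it,
      # both with exponentially decaying magnitude in |dt|.
      import math

      def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
          if dt_ms > 0:
              return a_plus * math.exp(-dt_ms / tau_plus)     # potentiation (LTP)
          return -a_minus * math.exp(dt_ms / tau_minus)       # depression (LTD)

      w = 0.5
      for dt in (5.0, 15.0, -8.0):   # two post-after-pre pairs, one pre-after-post
          w += stdp_dw(dt)
          print(f"dt={dt:+.0f} ms -> w={w:.4f}")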

  17. Real time recording system of radioisotopes by local area network (LAN) computer system and user input processing

    International Nuclear Information System (INIS)

    Shinohara, Kunio; Ito, Atsushi; Kawaguchi, Hajime; Yanase, Makoto; Uno, Kiyoshi.

    1991-01-01

    A computer-assisted real-time recording system was developed for the management of radioisotopes. The system is composed of two personal computers forming a LAN, an identification-card (ID-card) reader, and an electrically operated door lock. One computer is operated by radiation safety staff and stores the records of radioisotopes. The users of radioisotopes are registered in this computer. The other computer is installed in front of the storage room for radioisotopes. This computer is made ready for operation by a registered ID-card and accepts data input by the user. After the completion of data input, the door to the storage room is unlocked. The present system offers the following merits: radiation safety staff can easily keep up with the present state of radioisotopes in the storage room and save much labor; radioactivity is always corrected for decay; the upper limit of radioactivity in use per day is automatically checked, and users are restricted when they input the amounts to be used; users can obtain storage records of radioisotopes at any time. In addition, the system is applicable to facilities which have more than two storage rooms. (author)

  18. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  19. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    Science.gov (United States)

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  20. Rapid scanning system for fuel drawers

    International Nuclear Information System (INIS)

    Caldwell, J.T.; Fehlau, P.E.; France, S.W.

    1981-01-01

    A nondestructive method for uniquely distinguishing among and quantifying the mass of individual fuel plates in situ in fuel drawers utilized in nuclear reactors is described. The method is both rapid and passive, eliminating the personnel hazard of the commonly used irradiation techniques, which require that the analysis be performed in proximity to an intense neutron source such as a reactor. In the present technique, only normally decaying nuclei are observed, which allows the analysis to be performed anywhere. This feature, combined with rapid scanning of a given fuel drawer (in approximately 30 s) and computer data analysis, allows large numbers of fuel drawers to be processed efficiently in the event of a loss alert

  1. Protean appearance of craniopharyngioma on computed tomography

    International Nuclear Information System (INIS)

    Danziger, A.; Price, H.I.

    1979-01-01

    Craniopharyngiomas present a diverse appearance on computed tomography. Histological diagnosis is not always possible, but computed tomography is of great assistance in the delineation of the tumour as well as of the degree of associated hydrocephalus. Computed tomography also enables rapid non-invasive follow-up after surgery or radiotherapy, or both

  2. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  3. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  4. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  5. Teaching Tip: Using Rapid Game Prototyping for Exploring Requirements Discovery and Modeling

    Science.gov (United States)

    Dalal, Nikunj

    2012-01-01

    We describe the use of rapid game prototyping as a pedagogic technique to experientially explore and learn requirements discovery, modeling, and specification in systems analysis and design courses. Students have a natural interest in gaming that transcends age, gender, and background. Rapid digital game creation is used to build computer games…

  6. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

    This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then we can achieve our design objectives via a controller which will be quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). Further, the aim is to implement this controller via computer in a real-time environment. Therefore, first a nonlinear mathematical model of the system is obtained. An intelligent effort is made to decouple the multivariable system. Linearization and stability analysis techniques are employed for the development of a linearized and mathematically sound control law. Nonlinearities, like saturation in the actuators, are also catered for. The controller is then discretized using Runge-Kutta integration. Finally the discretized control law is programmed on a computer in a real-time environment. The program is written in RT-Linux using GNU C for the real-time realization of the control scheme. The real-time processes, like sampling and controlled actuation, and the non-real-time processes, like the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real-time and non-real-time tasks is addressed quite carefully. The results of this research pursuit are presented graphically. (author)

  7. Physics, Computer Science and Mathematics Division. Annual report, 1 January-31 December 1979

    International Nuclear Information System (INIS)

    Lepore, J.V.

    1980-09-01

    This annual report describes the research work carried out by the Physics, Computer Science and Mathematics Division during 1979. The major research effort of the Division remained High Energy Particle Physics with emphasis on preparing for experiments to be carried out at PEP. The largest effort in this field was for development and construction of the Time Projection Chamber, a powerful new particle detector. This work took a large fraction of the effort of the physics staff of the Division together with the equivalent of more than a hundred staff members in the Engineering Departments and shops. Research in the Computer Science and Mathematics Department of the Division (CSAM) has been rapidly expanding during the last few years. Cross fertilization of ideas and talents resulting from the diversity of effort in the Physics, Computer Science and Mathematics Division contributed to the software design for the Time Projection Chamber, made by the Computer Science and Applied Mathematics Department

  8. Rapid innovation diffusion in social networks.

    Science.gov (United States)

    Kreindler, Gabriel E; Young, H Peyton

    2014-07-22

    Social and technological innovations often spread through social networks as people respond to what their neighbors are doing. Previous research has identified specific network structures, such as local clustering, that promote rapid diffusion. Here we derive bounds that are independent of network structure and size, such that diffusion is fast whenever the payoff gain from the innovation is sufficiently high and the agents' responses are sufficiently noisy. We also provide a simple method for computing an upper bound on the expected time it takes for the innovation to become established in any finite network. For example, if agents choose log-linear responses to what their neighbors are doing, it takes on average less than 80 revision periods for the innovation to diffuse widely in any network, provided that the error rate is at least 5% and the payoff gain (relative to the status quo) is at least 150%. Qualitatively similar results hold for other smoothed best-response functions and populations that experience heterogeneous payoff shocks.
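
    A log-linear (logit) response dynamic of the kind analyzed can be simulated directly; the ring network, payoff specification and noise level below are illustrative choices loosely matching the quoted example, not the paper's derivation.

      # Logit-response innovation diffusion: each period a random agent adopts the
      # innovation (action 1) with log-linear probability in the payoff difference,
      # given the fraction of neighbors already adopting. We count revision sweeps
      # until nearly the whole network has switched.
      import math
      import random

      def diffusion_time(n=100, k=2, payoff_gain=1.5, beta=3.0, seed=3):
          rng = random.Random(seed)
          state = [0] * n                                   # all start at status quo
          neighbors = [[(i + d) % n for d in range(-k, k + 1) if d] for i in range(n)]
          for period in range(1, 200000):
              i = rng.randrange(n)
              frac = sum(state[j] for j in neighbors[i]) / len(neighbors[i])
              # payoff(adopt) - payoff(status quo) in a coordination game
              dpay = (1 + payoff_gain) * frac - (1 - frac)
              p_adopt = 1 / (1 + math.exp(-beta * dpay))    # log-linear response
              state[i] = 1 if rng.random() < p_adopt else 0
              if sum(state) >= 0.99 * n:
                  return period / n                         # sweeps (revisions per agent)
          return float("inf")

      print(diffusion_time())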

  9. Rapid magnetic hardening by rapid thermal annealing in NdFeB-based nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Chu, K.-T.; Jin, Z Q; Chakka, Vamsi M; Liu, J P [Department of Physics, University of Texas at Arlington, Arlington, TX 76019 (United States)

    2005-11-21

    A systematic study of heat treatments and magnetic hardening of NdFeB-based melt-spun nanocomposite ribbons has been carried out. Comparison was made between samples treated by rapid thermal annealing and by conventional furnace annealing. Heating rates up to 200 K s⁻¹ were adopted in the rapid thermal processing. It was observed that magnetic hardening can be realized in an annealing time as short as 1 s. A coercivity of 10.2 kOe in the nanocomposites was obtained by rapid thermal annealing for 1 s, and prolonged annealing did not give any further increase in coercivity. Detailed results on the effects of annealing time, temperature and heating rate have been obtained, and the dependence of magnetic properties on the annealing parameters has been investigated. Structural characterization revealed a close correlation between magnetic hardening and nanostructured morphology. The coercivity mechanism was also studied by analysing the magnetization minor loops.

  10. The SAFT-UT (synthetic aperture focusing technique for ultrasonic testing) real-time inspection system: Operational principles and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Hall, T. E.; Reid, L. D.; Doctor, S. R.

    1988-06-01

    This document provides a technical description of the real-time imaging system developed for rapid flaw detection and characterization utilizing the synthetic aperture focusing technique for ultrasonic testing (SAFT-UT). The complete fieldable system has been designed to perform inservice inspection of light-water reactor components. Software was written on a DEC LSI 11/23 computer system to control data collection. The unprocessed data is transferred to a VAX 11/730 host computer to perform data processing and image display tasks. A parallel architecture peripheral to the host computer, referred to as the Real-Time SAFT Processor, rapidly performs the SAFT processing function. From the host's point of view, this device operates on the SAFT data in such a way that one may consider it to be a specialized or SAFT array processor. A guide to SAFT-UT theory and conventions is included, along with a detailed description of the operation of the software, how to install the software, and a detailed hardware description.
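
    At its core, SAFT is a delay-and-sum operation: each pixel value is the coherent sum of A-scan samples taken at the round-trip time of flight from every transducer position to that pixel. The Python sketch below shows this for a simple 2-D pulse-echo scan with constant sound speed; the geometry, parameter values, and the absence of aperture weighting are simplifying assumptions and do not reproduce the SAFT-UT implementation.

    ```python
    import numpy as np

    def saft_reconstruct(ascans, xs, dt, c, grid_x, grid_z):
        """Delay-and-sum SAFT image formation for a 2-D pulse-echo scan.

        ascans : (n_positions, n_samples) A-scans recorded as the transducer
                 steps along the surface at x positions xs. For each pixel,
                 sum each A-scan's sample at the round-trip time of flight
                 from that transducer position to the pixel.
        """
        n_pos, n_samp = ascans.shape
        image = np.zeros((len(grid_z), len(grid_x)))
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                dist = np.hypot(xs - x, z)                         # one-way paths
                idx = np.round(2.0 * dist / (c * dt)).astype(int)  # round trip, samples
                ok = idx < n_samp
                image[iz, ix] = ascans[np.flatnonzero(ok), idx[ok]].sum()
        return image

    # Tiny synthetic check: a single point reflector at (x, z) = (0 mm, 20 mm).
    c, dt = 5900.0, 1e-7                      # steel velocity (m/s), 10 MHz sampling
    xs = np.linspace(-0.02, 0.02, 41)         # transducer positions (m)
    ascans = np.zeros((41, 400))
    for k, x in enumerate(xs):
        ascans[k, int(round(2 * np.hypot(x, 0.02) / (c * dt)))] = 1.0
    gx, gz = np.linspace(-0.01, 0.01, 21), np.linspace(0.015, 0.025, 21)
    img = saft_reconstruct(ascans, xs, dt, c, gx, gz)
    iz, ix = np.unravel_index(np.argmax(img), img.shape)
    print(f"peak at z = {gz[iz]*1000:.1f} mm, x = {gx[ix]*1000:.1f} mm")
    ```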

  11. [Key points for esthetic rehabilitation of anterior teeth using chair-side computer aided design and computer aided manufacture technique].

    Science.gov (United States)

    Yang, J; Feng, H L

    2018-04-09

    With the rapid development of chair-side computer aided design and computer aided manufacture (CAD/CAM) technology, its accuracy and operability have been greatly improved in recent years. Chair-side CAD/CAM systems can produce all kinds of indirect restorations and offer rapid, accurate and stable production, making the technology an important future direction for stomatology. This paper describes the clinical application of chair-side CAD/CAM technology for anterior esthetic restorations from the aspects of shade and shape.

  12. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    Full Text Available The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  13. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    Science.gov (United States)

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  14. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
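
    The model-selection step described above can be sketched compactly: fit the simple turbidity-only model and the turbidity-streamflow model by ordinary least squares and compare their residual standard errors. In the snippet below the calibration data are invented, and regressing in log space is an assumption made for illustration; real applications also require retransformation bias corrections and significance tests not shown here.

    ```python
    import numpy as np

    # Hypothetical paired calibration samples: turbidity (FNU), streamflow
    # (ft^3/s), and measured suspended-sediment concentration (mg/L).
    turb = np.array([12, 25, 40, 80, 150, 300, 55, 20, 10, 220], dtype=float)
    flow = np.array([150, 300, 500, 900, 1600, 3100, 700, 260, 120, 2400], dtype=float)
    ssc  = np.array([15, 35, 60, 130, 260, 540, 90, 28, 12, 400], dtype=float)

    def fit_ols(X, y):
        """Ordinary least squares: coefficients and residual standard error."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        dof = X.shape[0] - X.shape[1]
        return coef, np.sqrt(resid @ resid / dof)

    logT, logQ, logC = np.log10(turb), np.log10(flow), np.log10(ssc)
    ones = np.ones_like(logT)

    _, simple_se = fit_ols(np.column_stack([ones, logT]), logC)
    _, multi_se = fit_ols(np.column_stack([ones, logT, logQ]), logC)
    print(f"simple model SE: {simple_se:.4f}, multiple model SE: {multi_se:.4f}")
    # Keep the simple turbidity-only model unless adding streamflow is both
    # statistically significant and reduces model uncertainty.
    ```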

  15. Rapid data processing for ultrafast X-ray computed tomography using scalable and modular CUDA based pipelines

    Science.gov (United States)

    Frust, Tobias; Wagner, Michael; Stephan, Jan; Juckeland, Guido; Bieberle, André

    2017-10-01

    Ultrafast X-ray tomography is an advanced imaging technique for the study of dynamic processes based on the principles of electron beam scanning. A typical application for this technique is the study of multiphase flows, that is, flows of mixtures of substances such as gas-liquid flows in pipelines or chemical reactors. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) a number of such tomography scanners are operated. Currently, two main points limit their application in some fields. First, after each CT scan sequence the radiation detector data must be downloaded from the scanner to a data processing machine. Second, the current data processing is time-consuming compared to the CT scan sequence interval. To enable online observations, or to use this technique to control actuators in real time, a modular and scalable data processing tool has been developed, consisting of user-definable stages working independently together in a so-called data processing pipeline that keeps up with the CT scanner's maximal frame rate of up to 8 kHz. The newly developed data processing stages are freely programmable and combinable. In order to achieve the highest processing performance, all relevant data processing steps required for a standard slice image reconstruction were individually implemented in separate stages using Graphics Processing Units (GPUs) and NVIDIA's CUDA programming language. Data processing performance tests on different high-end GPUs (Tesla K20c, GeForce GTX 1080, Tesla P100) showed excellent performance. Program Files doi: http://dx.doi.org/10.17632/65sx747rvm.1 Licensing provisions: LGPLv3 Programming language: C++/CUDA Supplementary material: Test data set, used for the performance analysis. Nature of problem: Ultrafast computed tomography is performed with a scan rate of up to 8 kHz. To obtain cross-sectional images from projection data, computer-based image reconstruction algorithms must be applied.
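
    The stage-based pipeline idea can be sketched independently of CUDA: each stage runs concurrently, pulling items from an input queue and pushing results downstream, so stages process successive frames in parallel. The Python threads and placeholder processing functions below are stand-ins chosen for brevity; in the real system each stage wraps a GPU kernel and the pipeline is written in C++/CUDA.

    ```python
    import queue
    import threading

    def stage(func, q_in, q_out):
        """Run one pipeline stage: pull items, process, push downstream."""
        while True:
            item = q_in.get()
            if item is None:                 # poison pill: propagate and stop
                if q_out is not None:
                    q_out.put(None)
                break
            result = func(item)
            if q_out is not None:
                q_out.put(result)

    # Placeholder stages standing in for GPU kernels (normalization,
    # filtering, and back projection would each be a CUDA stage in reality).
    normalize = lambda frame: [x / 255.0 for x in frame]
    reconstruct = lambda frame: sum(frame)   # stand-in "reconstruction"

    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=stage, args=(normalize, q1, q2)),
        threading.Thread(target=stage, args=(reconstruct, q2, q3)),
    ]
    for t in threads:
        t.start()
    for frame in ([10, 20, 30], [40, 50, 60]):   # stand-in detector frames
        q1.put(frame)
    q1.put(None)
    for t in threads:
        t.join()
    while (out := q3.get()) is not None:
        print("reconstructed value:", out)
    ```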

  16. Adaptive lattice decision-feedback equalizers - Their performance and application to time-variant multipath channels

    Science.gov (United States)

    Ling, F.; Proakis, J. G.

    1985-04-01

    This paper presents two types of adaptive lattice decision-feedback equalizers (DFEs), the least squares (LS) lattice DFE and the gradient lattice DFE. Their performance has been investigated on both time-invariant and time-variant channels through computer simulations and compared to that of other kinds of equalizers. An analysis of the self-noise and tracking characteristics of the LS DFE and of the DFE employing the Widrow-Hoff least mean square adaptive algorithm (LMS DFE) is also given. The analysis and simulation results show that the LS lattice DFE has a faster initial convergence rate, while the gradient lattice DFE is computationally more efficient. The main advantages of the lattice DFEs are their numerical stability, their computational efficiency, the flexibility to change their length, and their excellent capabilities for tracking rapidly time-variant channels.
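
    A full lattice DFE is too long to sketch here, but the Widrow-Hoff LMS update the abstract references is compact. The snippet below implements a plain transversal LMS equalizer on an invented two-tap multipath channel; the structure (no lattice, no decision feedback), the channel model, and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    def lms_equalizer(received, desired, n_taps=11, mu=0.01):
        """Adaptive linear equalizer trained with the Widrow-Hoff LMS rule.

        received : noisy channel output samples
        desired  : known training symbols (decision-directed in practice)
        Returns the equalized output and the final tap weights.
        """
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)                 # tapped delay line
        out = np.zeros(len(received))
        for k in range(len(received)):
            buf = np.roll(buf, 1)
            buf[0] = received[k]
            out[k] = w @ buf
            err = desired[k] - out[k]
            w += mu * err * buf                # LMS weight update
        return out, w

    # Example: BPSK symbols through a two-tap multipath channel plus noise.
    rng = np.random.default_rng(1)
    symbols = rng.choice([-1.0, 1.0], size=2000)
    channel = np.array([1.0, 0.5])             # simple multipath model
    rx = np.convolve(symbols, channel)[: len(symbols)]
    rx += 0.05 * rng.normal(size=len(symbols))
    out, w = lms_equalizer(rx, symbols)
    ser = np.mean(np.sign(out[500:]) != symbols[500:])  # skip convergence period
    print(f"symbol error rate after convergence: {ser:.4f}")
    ```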

  17. Person-related determinants of TV viewing and computer time in a cohort of young Dutch adults: Who sits the most?

    NARCIS (Netherlands)

    Uijtdewilligen, L.; Singh, A.S.; Chin A Paw, M.J.M.; Twisk, J.W.R.; van Mechelen, W.

    2015-01-01

    We aimed to assess the associations of person-related factors with leisure time television (TV) viewing and computer time among young adults. We analyzed self-reported TV viewing (h/week) and leisure computer time (h/week) from 475 Dutch young adults (47% male) who had participated in the Amsterdam

  18. JINR rapid communications

    International Nuclear Information System (INIS)

    1999-01-01

    The present collection of rapid communications from JINR, Dubna, contains seven separate records: on measurements of the total cross-section difference Δσ_L(np) at 1.59, 1.79, and 2.20 GeV; on the estimation of angular distributions of doubly charged spectator fragments in nucleus-nucleus interactions at superhigh energies; on simulated dE/dx analysis results for the silicon inner tracking system of the ALICE set-up at the LHC accelerator; on high-multiplicity processes; on triggering of high-multiplicity events using calorimetry; on ORBIT-3.0, a computer code for simulation and correction of the closed orbit and first turn in synchrotrons; and on determination of memory performance.

  19. A Swellable Microneedle Patch to Rapidly Extract Skin Interstitial Fluid for Timely Metabolic Analysis.

    Science.gov (United States)

    Chang, Hao; Zheng, Mengjia; Yu, Xiaojun; Than, Aung; Seeni, Razina Z; Kang, Rongjie; Tian, Jingqi; Khanh, Duong Phan; Liu, Linbo; Chen, Peng; Xu, Chenjie

    2017-10-01

    Skin interstitial fluid (ISF) is an emerging source of biomarkers for disease diagnosis and prognosis. The microneedle (MN) patch has been identified as an ideal platform to extract ISF from the skin due to its pain-free and easy-to-administer properties. However, long sampling times remain a serious problem that impedes timely metabolic analysis. In this study, a swellable MN patch that can rapidly extract ISF is developed. The MN patch is made of methacrylated hyaluronic acid (MeHA) and further crosslinked through UV irradiation. Owing to the high water affinity of MeHA, this MN patch can extract sufficient ISF in a short time without the assistance of extra devices, which remarkably facilitates timely metabolic analysis. Due to its covalently crosslinked network, the MN patch maintains its structural integrity in the swollen hydrated state without leaving residues in the skin after use. More importantly, the extracted ISF metabolites can be efficiently recovered from the MN patch by centrifugation for subsequent offline analysis of metabolites such as glucose and cholesterol. Given the recent trend toward easy-to-use point-of-care devices for personal healthcare monitoring, this study opens a new avenue for the development of MN-based microdevices for sampling ISF and minimally invasive metabolic detection. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, the vulnerability of these systems and their exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  1. Novel method of fabricating individual trays for maxillectomy patients by computer-aided design and rapid prototyping.

    Science.gov (United States)

    Huang, Zhi; Wang, Xin-zhi; Hou, Yue-Zhong

    2015-02-01

    Making impressions for maxillectomy patients is an essential but difficult task. This study developed a novel method of fabricating individual trays by computer-aided design (CAD) and rapid prototyping (RP) to simplify the process and enhance patient safety. Five unilateral maxillectomy patients were recruited for this study. For each patient, a computed tomography (CT) scan was taken. Based on the 3D surface reconstruction of the target area, an individual tray was manufactured by CAD/RP. With a conventional custom tray as control, two final impressions were made using the different types of tray for each patient. The trays were sectioned, and in each section the thickness of the material was measured at six evenly distributed points. Descriptive statistics and the paired t-test were used to examine differences in impression thickness, with SAS 9.3 used for the statistical analysis. Afterwards, all casts were optically 3D scanned and compared digitally to evaluate the feasibility of this method. Impressions were successfully made for all five maxillectomy patients with both the individual trays fabricated by CAD/RP and the traditional trays. The descriptive statistics of the impression thickness measurements showed slightly more uneven results for the traditional trays, but the difference was not statistically significant. The 3D digital comparison showed acceptable discrepancies within 1 mm in the majority of cast areas. The largest difference, of 3 mm, was observed in the buccal wall of the defective areas. Moderate deviations of 1 to 2 mm were detected in the buccal and labial vestibular groove areas. This study confirmed the feasibility of a novel method of fabricating individual trays by CAD/RP. Impressions made with individual trays manufactured using CAD/RP had a uniform thickness, with an acceptable level of accuracy compared to those made through conventional processes. © 2014 by the American College of Prosthodontists.

  2. Rapid Design and Navigation Tools to Enable Small-Body Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — Rapid design and navigation tools broaden the number and scope of available missions by making the most of advances in astrodynamics and in computer software and...

  3. Earthquake simulations with time-dependent nucleation and long-range interactions

    Directory of Open Access Journals (Sweden)

    J. H. Dieterich

    1995-01-01

    Full Text Available A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and dynamics of earthquake slip are introduced which permit computations to proceed in steps that are determined by the transitions from one sliding state to the next. The transition-driven time stepping and avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. Rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
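
    Omori's aftershock decay law, which the simulated sequences reproduce, states that the aftershock rate falls off as n(t) = K/(c + t)^p after a mainshock. The sketch below integrates that rate over successive days; K, c, and p are arbitrary illustrative values, not parameters fitted in the paper.

    ```python
    import numpy as np

    # Modified Omori law: aftershock rate n(t) = K / (c + t)**p.
    # K, c, p are illustrative, not fits to any particular sequence.
    K, c, p = 100.0, 0.05, 1.1   # rate scale (events/day), rolloff (days), exponent

    def omori_rate(t_days):
        return K / (c + t_days) ** p

    def expected_count(t1, t2, n_steps=10_000):
        """Expected aftershocks between t1 and t2 days (trapezoid rule)."""
        t = np.linspace(t1, t2, n_steps)
        return np.trapz(omori_rate(t), t)

    print(f"day 1:   {expected_count(0.0, 1.0):7.1f} expected events")
    print(f"day 10:  {expected_count(9.0, 10.0):7.1f} expected events")
    print(f"day 100: {expected_count(99.0, 100.0):7.1f} expected events")
    ```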

  4. Y2K issues for real time computer systems for fast breeder test reactor

    International Nuclear Information System (INIS)

    Swaminathan, P.

    1999-01-01

    The presentation shows the classification of real-time systems related to the operation, control and monitoring of the fast breeder test reactor. The software life cycle includes software requirement specification, software design description, coding, commissioning, operation and management. A software scheme in the supervisory computer of the fast breeder test reactor is described, drawing on twenty years of experience in the design, development, installation, commissioning, operation and maintenance of computer-based supervisory control systems for nuclear installations, with a particular emphasis on solving the Y2K problem.

  5. Application of computer mathematical modeling in nuclear well-logging industry

    International Nuclear Information System (INIS)

    Cai Shaohui

    1994-01-01

    Nuclear well logging techniques have made rapid progress since the first well log calibration facility (the API pits) was dedicated in 1959, followed by the first computer mathematical model in the late 1970s. Mathematical modeling can now minimize design and experiment time, as well as provide new information and ideas on tool design, environmental effects and result interpretation. The author gives a brief review of the achievements of mathematical modeling applied to nuclear logging problems.

  6. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

    Full Text Available Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are hampered by skewed processing loads, a known issue in geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed Core Operating System (CoreOS) clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) scales Docker containers out and in to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.

  7. Degeneration of rapid eye movement sleep circuitry underlies rapid eye movement sleep behavior disorder.

    Science.gov (United States)

    McKenna, Dillon; Peever, John

    2017-05-01

    During healthy rapid eye movement sleep, skeletal muscles are actively forced into a state of motor paralysis. However, in rapid eye movement sleep behavior disorder, a relatively common neurological disorder, this natural process is lost. A lack of motor paralysis (atonia) in rapid eye movement sleep behavior disorder allows individuals to actively move, which at times can be excessive and violent. At first glance this may sound harmless, but it is not, because rapid eye movement sleep behavior disorder patients frequently injure themselves or the person they sleep with. It is hypothesized that the degeneration or dysfunction of the brain stem circuits that control rapid eye movement sleep paralysis is an underlying cause of rapid eye movement sleep behavior disorder. The link between brain stem degeneration and rapid eye movement sleep behavior disorder stems from the fact that rapid eye movement sleep behavior disorder precedes, in the majority (∼80%) of cases, the development of synucleinopathies such as Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy, which are known to initially cause degeneration in the caudal brain stem structures where rapid eye movement sleep circuits are located. Furthermore, basic science and clinical evidence demonstrate that lesions within the rapid eye movement sleep circuits can induce rapid eye movement sleep-specific motor deficits that are virtually identical to those observed in rapid eye movement sleep behavior disorder. This review examines the evidence that rapid eye movement sleep behavior disorder is caused by synucleinopathic neurodegeneration of the core brain stem circuits that control healthy rapid eye movement sleep and concludes that rapid eye movement sleep behavior disorder is not a separate clinical entity from synucleinopathies but, rather, it is the earliest symptom of these disorders. © 2017 International Parkinson and Movement Disorder Society.

  8. Real time eye tracking using Kalman extended spatio-temporal context learning

    Science.gov (United States)

    Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu

    2017-06-01

    Real-time eye tracking has numerous applications in human-computer interaction, such as mouse cursor control in a computer system, and is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real-time eye tracking system. Our proposed system is an extension of spatio-temporal context learning through Kalman filtering. Spatio-temporal context learning offers state-of-the-art accuracy in general object tracking, but its performance suffers under object occlusion. Adding the Kalman filter allows the proposed method to model the dynamics of eye motion and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time with eye movements.
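
    The benefit of adding a Kalman filter is easiest to see in a minimal constant-velocity sketch: the filter keeps predicting through frames where the detector returns nothing, which is what lets the tracker coast through blinks and other occlusions. The state model, noise covariances, frame rate, and detections below are assumed for illustration and are not the paper's tuned values.

    ```python
    import numpy as np

    # Constant-velocity Kalman filter for a 2-D eye-position track.
    # State x = [px, py, vx, vy]; measurements are noisy (px, py) detections.
    dt = 1.0 / 30.0                                  # assumed 30 fps video
    F = np.eye(4); F[0, 2] = F[1, 3] = dt            # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0    # observe position only
    Q = 1e-3 * np.eye(4)                             # process noise (assumed)
    R = 4.0 * np.eye(2)                              # measurement noise (assumed)

    def kalman_step(x, P, z=None):
        """One predict step, plus an update when a detection z is available.

        Skipping the update when z is None lets the filter coast through
        occlusions such as blinks, which is the point of adding it here.
        """
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.array([100.0, 100.0, 0.0, 0.0]), np.eye(4)
    detections = [np.array([101.0, 99.5]), None, np.array([103.0, 98.9])]  # None = blink
    for z in detections:
        x, P = kalman_step(x, P, z)
        print(f"estimate: ({x[0]:6.2f}, {x[1]:6.2f})")
    ```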

  9. Real-time polymerase chain reaction assay for the rapid detection and characterization of chloroquine-resistant Plasmodium falciparum malaria in returned travelers.

    Science.gov (United States)

    Farcas, Gabriella A; Soeller, Rainer; Zhong, Kathleen; Zahirieh, Alireza; Kain, Kevin C

    2006-03-01

    Imported drug-resistant malaria is a growing problem in industrialized countries. Rapid and accurate diagnosis is essential to prevent malaria-associated mortality in returned travelers. However, outside of a limited number of specialized centers, the microscopic diagnosis of malaria is slow, unreliable, and provides little information about drug resistance. Molecular diagnostics have the potential to overcome these limitations. We developed and evaluated a rapid, real-time polymerase chain reaction (PCR) assay to detect Plasmodium falciparum malaria and chloroquine (CQ) resistance determinants in febrile returned travelers. A real-time PCR assay based on detection of the K76T mutation in PfCRT of P. falciparum was developed on a LightCycler platform (Roche). The performance characteristics of the real-time assay were compared with those of nested PCR-restriction fragment-length polymorphism (RFLP) analysis and sequence analysis of samples obtained from 200 febrile returned travelers: 125 infected with P. falciparum (48 with CQ-susceptible [K76] and 77 with CQ-resistant [T76] parasites), 22 infected with Plasmodium vivax, 10 infected with Plasmodium ovale, 3 infected with Plasmodium malariae, and 40 with other febrile syndromes. All patient samples were coded, and all analyses were performed blindly. The real-time PCR assay detected multiple pfcrt haplotypes associated with CQ resistance in geographically diverse malaria isolates acquired by travelers. Compared with nested PCR-RFLP (the reference standard), the real-time assay was 100% sensitive and 96.2% specific for detection of the P. falciparum K76T mutation. This assay is rapid, sensitive, and specific for the detection and characterization of CQ-resistant P. falciparum malaria in returned travelers. It is automated, standardized, and suitable for routine use in clinical diagnostic laboratories.

  10. Rapid and minimum invasive functional brain mapping by real-time visualization of high gamma activity during awake craniotomy.

    Science.gov (United States)

    Ogawa, Hiroshi; Kamada, Kyousuke; Kapeller, Christoph; Hiroshima, Satoru; Prueckl, Robert; Guger, Christoph

    2014-11-01

    Electrocortical stimulation (ECS) is the gold standard for functional brain mapping during an awake craniotomy. The critical issue is to set aside enough time to identify eloquent cortices by ECS. High gamma activity (HGA), ranging between 80 and 120 Hz on the electrocorticogram, is assumed to reflect localized cortical processing. In this report, we used real-time HGA mapping and functional neuronavigation integrated with functional magnetic resonance imaging (fMRI) for rapid and reliable identification of motor and language functions. Four patients with intra-axial tumors in their dominant hemisphere underwent preoperative fMRI and lesion resection with an awake craniotomy. All patients showed significant fMRI activation evoked by motor and language tasks. During the craniotomy, we recorded electrocorticogram activity by placing subdural grids directly on the exposed brain surface. Each patient performed motor and language tasks and demonstrated real-time HGA dynamics in hand motor areas and parts of the inferior frontal gyrus. The sensitivity and specificity of HGA mapping were 100% compared with ECS mapping in the frontal lobe, which suggests that HGA mapping precisely indicates eloquent cortices. We found different HGA dynamics for language tasks in frontal and temporal regions. The specificities of the motor and language fMRI did not reach 85%. The results of HGA mapping were mostly consistent with those of ECS mapping, although fMRI tended to overestimate functional areas. This novel technique enables rapid and accurate identification of motor and frontal language areas. Furthermore, real-time HGA mapping sheds light on underlying physiological mechanisms related to human brain functions. Copyright © 2014 Elsevier Inc. All rights reserved.
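
    A standard way to estimate the 80-120 Hz high-gamma envelope from electrocorticogram channels is a zero-phase band-pass filter followed by the Hilbert analytic amplitude, sketched below. The filter order, sampling rate, and synthetic test signal are assumptions; this is a generic recipe, not the paper's real-time implementation.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def high_gamma_envelope(ecog, fs, band=(80.0, 120.0)):
        """High-gamma analytic amplitude for each ECoG channel.

        ecog : (n_channels, n_samples) array. Band edges follow the
        abstract's 80-120 Hz definition; the 4th-order Butterworth filter
        is an illustrative choice.
        """
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, ecog, axis=-1)         # zero-phase band-pass
        return np.abs(hilbert(filtered, axis=-1))        # analytic amplitude

    # Example: one synthetic channel with a 100 Hz burst in the middle.
    fs = 1000
    t = np.arange(0, 2.0, 1.0 / fs)
    sig = 0.2 * np.random.default_rng(0).normal(size=t.size)
    sig[800:1200] += np.sin(2 * np.pi * 100 * t[800:1200])
    env = high_gamma_envelope(sig[None, :], fs)[0]
    print(f"mean envelope inside burst: {env[800:1200].mean():.2f}, "
          f"outside: {env[:800].mean():.2f}")
    ```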

  11. FlyMAD: rapid thermogenetic control of neuronal activity in freely walking Drosophila.

    Science.gov (United States)

    Bath, Daniel E; Stowers, John R; Hörmann, Dorothea; Poehlmann, Andreas; Dickson, Barry J; Straw, Andrew D

    2014-07-01

    Rapidly and selectively modulating the activity of defined neurons in unrestrained animals is a powerful approach in investigating the circuit mechanisms that shape behavior. In Drosophila melanogaster, temperature-sensitive silencers and activators are widely used to control the activities of genetically defined neuronal cell types. A limitation of these thermogenetic approaches, however, has been their poor temporal resolution. Here we introduce FlyMAD (the fly mind-altering device), which allows thermogenetic silencing or activation within seconds or even fractions of a second. Using computer vision, FlyMAD targets an infrared laser to freely walking flies. As a proof of principle, we demonstrated the rapid silencing and activation of neurons involved in locomotion, vision and courtship. The spatial resolution of the focused beam enabled preferential targeting of neurons in the brain or ventral nerve cord. Moreover, the high temporal resolution of FlyMAD allowed us to discover distinct timing relationships for two neuronal cell types previously linked to courtship song.

  12. Design of rapid prototype of UAV line-of-sight stabilized control system

    Science.gov (United States)

    Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe

    2018-01-01

    The line-of-sight (LOS) stabilized platform is a key technology for UAVs (unmanned aerial vehicles), reducing the effects of aircraft vibration and maneuvering on imaging quality. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed based on an industrial computer, using the Peripheral Component Interconnect (PCI) bus and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with gyro and voice-coil-motor driver circuit, the optical-mechanical stabilization control circuit with fast-steering-mirror (FSM) driver circuit and image-deviation acquisition system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and that the real-time rapid prototype runs stably.

  13. Towards OpenVL: Improving Real-Time Performance of Computer Vision Applications

    Science.gov (United States)

    Shen, Changsong; Little, James J.; Fels, Sidney

    Meeting constraints for real-time performance is a main issue for computer vision, especially for embedded computer vision systems. This chapter presents our progress on our open vision library (OpenVL), a novel software architecture to address efficiency through facilitating hardware acceleration, reusability, and scalability for computer vision systems. A logical image understanding pipeline is introduced to allow parallel processing. We also discuss progress on our middleware—vision library utility toolkit (VLUT)—that enables applications to operate transparently over a heterogeneous collection of hardware implementations. OpenVL works as a state machine, with an event-driven mechanism to provide users with application-level interaction. Various explicit or implicit synchronization and communication methods are supported among distributed processes in the logical pipelines. The intent of OpenVL is to allow users to quickly and easily recover useful information from multiple scenes, in a cross-platform, cross-language manner across various software environments and hardware platforms. To validate the critical underlying concepts of OpenVL, a human tracking system and a local positioning system are implemented and described. The novel architecture separates the specification of algorithmic details from the underlying implementation, allowing for different components to be implemented on an embedded system without recompiling code.

  14. Rapid Resuscitation with Small Volume Hypertonic Saline Solution ...

    African Journals Online (AJOL)

    Rapid Resuscitation with Small Volume Hypertonic Saline Solution for Patients in Traumatic Haemorrhagic Shock. ... The data were entered into a computer database and analysed. Results: Forty-five patients were enrolled and resuscitated with 250 ml of 7.5% HSS. Among the studied patients, 88.9% recovered from shock ...

  15. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  16. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  17. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode, in which image data were stored in a core memory as digital images and then transferred to magnetic disk or tape on the frame timing signal. Firstly, the counting rate stored in the buffer memory was measured as a function of the display event rate of the scintillation camera for the two modes. For both modes, the stored counting rate (M) was expressed by the following formula: M = N(1 - N·tau), where N is the display event rate of the camera and tau is the resolving time, including the analog-to-digital conversion time and the memory cycle time. The resolving time for each mode may have differed, but it was about 10 μs for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the ''LIST'' mode, the maximum stored counting rate was expressed in terms of the size of the buffer memory and the access time and data transfer rate of the external memory. For the ''HISTOGRAM'' mode, the minimum frame time was determined by the same quantities. In our system, the maximum stored counting rate was about 17,000 counts/s with a buffer size of 2,000 words, and the minimum frame time was about 130 ms with a buffer size of 1,024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems for quantitative dynamic imaging becomes possible, and future improvements are suggested. (author)
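
    The quoted dead-time formula can be evaluated directly to see how counting losses grow with rate. In the sketch below only the resolving time tau = 10 μs is taken from the text; the display rates are arbitrary.

    ```python
    # Dead-time relation from the abstract: stored rate M = N * (1 - N * tau),
    # with N the camera display event rate and tau the resolving time.
    tau = 10e-6  # s, resolving time (ADC conversion + memory cycle), per the text

    def stored_rate(n_events_per_s):
        return n_events_per_s * (1.0 - n_events_per_s * tau)

    for n in (10_000, 25_000, 50_000, 75_000):
        m = stored_rate(n)
        loss = 100.0 * (n - m) / n
        print(f"display rate {n:6d}/s -> stored {m:8.0f}/s ({loss:4.1f}% loss)")

    # The quadratic M peaks at N = 1/(2*tau) = 50,000 events/s, giving
    # M = 25,000/s; in practice the buffer/transfer path reported in the
    # abstract capped stored rates near 17,000/s.
    ```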

  18. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    NARCIS (Netherlands)

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a

  19. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    Science.gov (United States)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The set of feasible tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and on convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six-degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
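
    The underlying problem can be posed as a small linear program: find cable tensions within limits that produce the commanded wrench while minimizing, say, total tension. The planar three-cable structure matrix, wrench, and limits below are hypothetical, and scipy's general-purpose LP solver stands in for the much faster geometric method the paper proposes.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical planar example: 3 cables, 2-DOF end effector. The structure
    # matrix A maps cable tensions t to the wrench w they exert; redundancy
    # (more cables than DOFs) leaves a family of feasible solutions.
    A = np.array([[1.0, -0.5, -0.5],
                  [0.0,  0.8, -0.8]])
    w_desired = np.array([2.0, 1.0])
    t_min, t_max = 5.0, 100.0    # keep cables taut but below rated tension

    # Minimize total tension subject to A t = w and box limits, one common
    # convex formulation of the tension distribution problem.
    res = linprog(c=np.ones(3), A_eq=A, b_eq=w_desired,
                  bounds=[(t_min, t_max)] * 3, method="highs")
    print("optimal tensions:", np.round(res.x, 2))
    ```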

  20. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints.

    Science.gov (United States)

    Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi

    2014-08-01

    This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices attached to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects placed their forearms on the desk accompanied by abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects placed only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiency (based on word counts), and fatigue levels (based on a visual analog scale, VAS). Oxygen consumption (VO(2)), the ratio of inspiration time to respiration time (T(i)/T(total)), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (T(e)/T(i)) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO(2)/VE), and oxygen extraction fractions (VO(2)/VE) were significantly higher for the DP than for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  1. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  2. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  3. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  4. Modifications of ORNL's computer programs MSF-21 and VTE-21 for the evaluation and rapid optimization of multistage flash and vertical tube evaporators

    Energy Technology Data Exchange (ETDEWEB)

    Glueckstern, P.; Wilson, J.V.; Reed, S.A.

    1976-06-01

    Design and cost modifications were made to ORNL's computer programs MSF-21 and VTE-21, originally developed for the rapid calculation and design optimization of multistage flash (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. The modifications include additional design options to make possible the evaluation of desalting plants based on current technology (the original programs were based on conceptual designs applying advanced and not yet proven technological developments and design features), and materials and equipment costs updated to mid-1975.

  5. P-CARES 2.0.0, Probabilistic Computer Analysis for Rapid Evaluation of Structures

    International Nuclear Information System (INIS)

    2008-01-01

    1 - Description of program or function: P-CARES 2.0.0 (Probabilistic Computer Analysis for Rapid Evaluation of Structures) was developed for NRC staff use to determine the validity and accuracy of the analysis methods used by various utilities for structural safety evaluations of nuclear power plants. P-CARES provides the capability to effectively evaluate the probabilistic seismic response using simplified soil and structural models and to quickly check the validity and/or accuracy of the SSI data received from applicants and licensees. The code is organized in a modular format, with the basic modules of the system performing static, seismic, and nonlinear analysis. 2 - Methods: P-CARES is an update of the CARES program developed at Brookhaven National Laboratory during the 1980s. A major improvement is the enhanced analysis capability, in which a probabilistic algorithm has been implemented to perform the probabilistic site response and soil-structure interaction (SSI) analyses. This is accomplished using several sampling techniques, such as Latin Hypercube sampling (LHC), engineering LHC, the Fekete Point Set method, and traditional Monte Carlo simulation. This new feature enhances the site response and SSI analysis such that the effect of uncertainty in local site soil properties can now be quantified. Another major addition to P-CARES is a graphical user interface (GUI), which significantly improves the inter-relations among different functions of the program and facilitates input/output processing and execution management. It also provides many user-friendly features that allow an analyst to quickly develop insights from the analysis results. 3 - Restrictions on the complexity of the problem: None noted
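
    Latin Hypercube sampling, one of the techniques listed above, is easy to sketch: split each input dimension into equal-probability strata, draw one point per stratum, and shuffle strata independently across dimensions so every one-dimensional margin is evenly covered. The soil-property ranges in the example below are invented for illustration.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng=None):
        """Basic Latin Hypercube sample on the unit cube.

        Each dimension is split into n_samples equal strata; one point is
        drawn per stratum and the strata are randomly permuted across
        dimensions, so every margin is evenly covered with far fewer runs
        than plain Monte Carlo needs for the same marginal coverage.
        """
        rng = rng or np.random.default_rng()
        u = rng.random((n_samples, n_dims))              # jitter within strata
        strata = np.arange(n_samples)
        perms = np.column_stack(
            [rng.permutation(strata) for _ in range(n_dims)])
        return (perms + u) / n_samples

    # Example: sample shear-wave velocity (m/s) and damping ratio for a soil
    # layer by rescaling unit-cube samples to assumed property ranges.
    pts = latin_hypercube(10, 2, np.random.default_rng(42))
    vs = 150.0 + pts[:, 0] * (350.0 - 150.0)     # 150-350 m/s (assumed)
    damping = 0.02 + pts[:, 1] * (0.08 - 0.02)   # 2-8 % (assumed)
    for v, d in zip(vs, damping):
        print(f"Vs = {v:5.1f} m/s, damping = {d:.3f}")
    ```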

  6. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processing unit (CPU) that services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study that support this application of queueing models are presented.
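
    A first-order feel for the model: if the deterministic time-critical task occupies c seconds of every period p, the background stream sees an effective M/M/1 server of rate mu(p - c)/p. The sketch below compares that approximation with a direct time-sliced simulation; all rates and interrupt timings are assumed values, and the Laplace-transform analysis in the paper is exact where this is approximate.

    ```python
    import numpy as np

    lam, mu = 4.0, 10.0      # background arrival / service rates (jobs/s), assumed
    p, c = 0.100, 0.030      # interrupt period and duration (s), assumed

    mu_eff = mu * (p - c) / p
    print(f"effective service rate: {mu_eff:.2f} jobs/s, "
          f"utilization: {lam / mu_eff:.2f}")
    print(f"M/M/1 approximation, mean response time: {1.0 / (mu_eff - lam):.3f} s")

    # Cross-check: serve jobs FIFO in small time slices, skipping slices
    # that fall inside the periodic critical window [0, c) of each period.
    rng = np.random.default_rng(7)
    dt, t, busy_until = 1e-3, 0.0, 0.0
    responses = []
    for _ in range(5_000):
        t += rng.exponential(1.0 / lam)      # Poisson arrival of a background job
        clock = max(t, busy_until)           # wait for earlier jobs (FIFO)
        work = rng.exponential(1.0 / mu)     # required CPU time
        while work > 0:
            if (clock % p) >= c:             # CPU is free of the critical task
                work -= dt
            clock += dt
        busy_until = clock
        responses.append(clock - t)
    print(f"simulated mean response time: {np.mean(responses):.3f} s")
    ```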

  7. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    Science.gov (United States)

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printing and conventional manual fabrication, and thereby to assess the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized self-controlled clinical trial. Two custom trays were fabricated for each participant: one using the functional suitable denture (FSD) system through a CAD/CAM process, and the other manually, using conventional methods. Final impressions were then taken with both custom trays, and the impressions were used to fabricate complete dentures. The technician production time of the custom trays and the clinical working time for taking the final impression were recorded. The average times spent fabricating the three-dimensional printing custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent making the final impression with the three-dimensional printing custom trays and with the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in the technician fabrication time and the clinical working time between the two methods. When custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression; therefore, impression material and model material can be saved. With regard to complete denture restoration, manufacturing custom trays using the FSD system is worth being

  8. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    Report on confabulation-based real-time anomaly detection for wide-area surveillance using a heterogeneous high performance computing architecture (Syracuse University; contract FA8750-12-1-0251). The system was implemented on heterogeneous processors, including graphics processing units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups.

  9. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    International Nuclear Information System (INIS)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic ''lateral'' force from the walls

  10. Process control in conventional power plants. The use of computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Schievink, A; Woehrle, G

    1989-03-01

    To process information, man can use his knowledge and his experience. Both of these, however, permit only slow flows of information (about 25 bit/s) to be processed. The flow of information that the staff of a modern 700-MW coal-fired power station has to face is about 5000 bit per second, i.e. 200 times as much as a single human brain can process. Modern computer-controlled process control systems are therefore needed to support the staff in recognizing and handling these complicated and rapid processes efficiently. The man-computer interface is ergonomically improved by visual display units.

  11. Real-time LMR control parameter generation using advanced adaptive synthesis

    International Nuclear Information System (INIS)

    King, R.W.; Mott, J.E.

    1990-01-01

    The reactor "delta T", the difference between the average core inlet and outlet temperatures, for the liquid-sodium-cooled Experimental Breeder Reactor II (EBR-II) is empirically synthesized in real time from a multitude of examples of past reactor operation. The real-time empirical synthesis is based on system state analysis (SSA) technology embodied in software on the EBR-II data acquisition computer. Before the real-time system is put into operation, a selection of reactor plant measurements is made that is predictable over long periods encompassing plant shutdowns, core reconfigurations, core load changes, and plant startups. A serial data link to a personal computer containing SSA software allows rapid verification of the predictability of these plant measurements via graphical means. After the selection is made, the real-time synthesis provides a fault-tolerant estimate of the reactor delta T accurate to ±1%. 5 refs., 7 figs.
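    The SSA software itself is not described in detail here; as a loose, hypothetical analogue of synthesizing a signal from examples of past operation, the sketch below estimates a target quantity as a distance-weighted average over the most similar historical states. All names and data are illustrative.

```python
# Minimal sketch of empirical signal synthesis from historical states,
# in the spirit of (but much simpler than) the SSA approach above:
# estimate a quantity such as reactor delta-T as a weighted average
# over the most similar past operating states. Everything here is
# illustrative, not the EBR-II software.
import numpy as np

def synthesize(query, past_states, past_delta_t, k=5):
    """Estimate delta-T for `query` (1D array of plant measurements)
    from the k nearest historical states (rows of `past_states`)."""
    dists = np.linalg.norm(past_states - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)  # closer states weigh more
    return np.average(past_delta_t[nearest], weights=weights)

# Toy usage: 1000 historical states of 8 plant measurements each.
rng = np.random.default_rng(1)
states = rng.normal(size=(1000, 8))
delta_t = states @ rng.normal(size=8)      # synthetic target signal
print(synthesize(states[0] + 0.01, states, delta_t))
```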

  12. Computer Training for Entrepreneurial Meteorologists.

    Science.gov (United States)

    Koval, Joseph P.; Young, George S.

    2001-05-01

    Computer applications of increasing diversity form a growing part of the undergraduate education of meteorologists in the early twenty-first century. The advent of the Internet economy, as well as a waning demand for traditional forecasters brought about by better numerical models and statistical forecasting techniques, has greatly increased the need for operational and commercial meteorologists to acquire computer skills beyond the traditional techniques of numerical analysis and applied statistics. Specifically, students with the skills to develop data distribution products are in high demand in the private sector job market. Meeting these demands requires greater breadth, depth, and efficiency in computer instruction. The authors suggest that computer instruction for undergraduate meteorologists should include three key elements: a data distribution focus, emphasis on the techniques required to learn computer programming on an as-needed basis, and a project orientation to promote management skills and support student morale. In an exploration of this approach, the authors have reinvented the Applications of Computers to Meteorology course in the Department of Meteorology at The Pennsylvania State University to teach computer programming within the framework of an Internet product development cycle. Because the computer skills required for data distribution programming change rapidly, specific languages are valuable for only a limited time. A key goal of this course was therefore to help students learn how to retrain efficiently as technologies evolve. The crux of the course was a semester-long project during which students developed an Internet data distribution product. As project management skills are also important in the job market, the course teamed students in groups of four for this product development project. The successes, failures, and lessons learned from this experiment are discussed, and conclusions are drawn concerning undergraduate instructional methods.

  13. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process. Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  14. A primer on the energy efficiency of computing

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)]

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
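    One way to quantify such efficiency trends is the doubling time implied by two measurements, T = Δt · ln 2 / ln(E₂/E₁). The sketch below computes it; the sample numbers are illustrative (Koomey's own work reported computations per kWh doubling roughly every year and a half for much of computing history).

```python
# Doubling time of computing efficiency from two measurements:
# T_double = dt * ln(2) / ln(E2 / E1). The sample values below are
# illustrative, not taken from the paper.
import math

def doubling_time(e1, e2, years):
    """Years per doubling given efficiency grows from e1 to e2 over `years`."""
    return years * math.log(2) / math.log(e2 / e1)

# Example: efficiency grows 100x over 10 years -> ~1.5-year doubling.
print(f"{doubling_time(1.0, 100.0, 10.0):.2f} years per doubling")
```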

  15. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

    Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations, and describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method decreases computational time by up to a factor of 15, while keeping memory consumption low and without any modification of the transportation engine. The speedup is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material; however, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, the method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory can increase computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing gains in computational time. However, simulations with GEANT4 remain slow, and further work is needed to speed up the procedure while preserving the desired accuracy.
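    The core idea, collapsing adjacent voxels that share a material so that transport takes one step per homogeneous region rather than one per voxel, can be illustrated with a simple run-length compression of one voxel row. This is a hypothetical illustration, not GEANT4 code.

```python
# Sketch of the core idea behind the segmented-volume representation:
# collapse runs of adjacent voxels sharing a material so that particle
# transport takes one step per homogeneous region rather than one per
# voxel. Illustration only, not GEANT4 code.
import numpy as np

def run_length_regions(material_row):
    """Compress one row of voxel material IDs into (material, length) runs."""
    runs, start = [], 0
    for i in range(1, len(material_row) + 1):
        if i == len(material_row) or material_row[i] != material_row[start]:
            runs.append((int(material_row[start]), i - start))
            start = i
    return runs

row = np.array([0, 0, 0, 1, 1, 0, 2, 2, 2, 2])  # material IDs along x
print(run_length_regions(row))
# -> [(0, 3), (1, 2), (0, 1), (2, 4)]: 10 voxel steps become 4 region steps
```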

  16. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (positron emission tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of their algorithms in need of optimization, and efficient GPU kernels were created to speed up those parts. Benchmarking tests were performed to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the application currently used for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU; the speedup may vary depending on the size and complexity of the problem. For PET image analysis, the GPU version achieved speedups of more than 40× compared to a single-core CPU implementation. These results show that it is possible to improve execution time by orders of magnitude.
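    A common pattern for moving such numerical kernels to a GPU without a rewrite is to parameterize the code over an array module, since CuPy mirrors much of the NumPy API. The sketch below applies this to a generic least-squares solve; it is an assumption-laden illustration, not the PSI or ETH code.

```python
# Illustrative pattern for moving a fitting kernel to the GPU, not the
# PSI/ETH code itself: the same least-squares solve is run with NumPy
# (CPU) and CuPy (GPU, if available); CuPy mirrors the NumPy API.
import numpy as np

def fit(xp, A, y):
    """Solve the normal equations A^T A x = A^T y with array module xp."""
    return xp.linalg.solve(A.T @ A, A.T @ y)

A = np.random.default_rng(2).normal(size=(100_000, 32))
y = A @ np.arange(32, dtype=float)

x_cpu = fit(np, A, y)                      # CPU baseline
try:
    import cupy as cp                      # GPU path, if CuPy is installed
    x_gpu = fit(cp, cp.asarray(A), cp.asarray(y))
    print(np.allclose(x_cpu, cp.asnumpy(x_gpu)))
except ImportError:
    print("CuPy not available; CPU result only:", x_cpu[:3])
```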

  17. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    Science.gov (United States)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach the best performance, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerate the translation of rt-fMRI BCIs from research to clinical applications.
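    A minimal sketch of the GLM-based decoding step, under simplifying assumptions: regress an ROI time course onto one regressor per mental task and choose the task with the largest beta. The design matrix and noise model below are placeholders, not the study's.

```python
# Hedged sketch of GLM-based decoding: regress an ROI's fMRI time
# course onto one regressor per mental task and pick the task with
# the largest beta. Design details here are simplified placeholders.
import numpy as np

def decode_task(roi_timecourse, design):
    """design: (timepoints, tasks) expected responses; returns task index."""
    betas, *_ = np.linalg.lstsq(design, roi_timecourse, rcond=None)
    return int(np.argmax(betas))

rng = np.random.default_rng(3)
design = rng.normal(size=(200, 3))          # 3 candidate mental tasks
true_task = 1
signal = design[:, true_task] + 0.5 * rng.normal(size=200)
print(decode_task(signal, design))          # -> 1
```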

  18. Visual Temporal Logic as a Rapid Prototyping Tool

    DEFF Research Database (Denmark)

    Fränzle, Martin; Lüth, Karsten

    2001-01-01

    Within this survey article, we explain real-time symbolic timing diagrams and the ICOS tool-box supporting timing-diagram-based requirements capture and rapid prototyping. Real-time symbolic timing diagrams are a full-fledged metric-time temporal logic, but with a graphical syntax reminiscent of the informal timing diagrams widely used in electrical engineering. ICOS integrates a variety of tools, ranging from graphical specification editors over tautology checking and counterexample generation to code generators emitting C or VHDL, thus bridging the gap from formal specification to rapid prototypes.
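    A flavor of the metric-time properties such diagrams express can be given with a tiny checker for bounded response ("every req is followed by an ack within d steps") over a finite trace. This sketch illustrates the semantics only; it is unrelated to the actual ICOS code base.

```python
# Minimal sketch of checking one metric-time property of the kind such
# diagrams express -- "every `req` is followed by `ack` within d steps" --
# over a finite trace. Illustrates the semantics, not the ICOS tools.
def bounded_response(trace, d):
    """trace: list of sets of event names, one set per time step."""
    for t, events in enumerate(trace):
        if "req" in events:
            window = trace[t + 1 : t + 1 + d]
            if not any("ack" in e for e in window):
                return False
    return True

good = [{"req"}, set(), {"ack"}, {"req"}, {"ack"}, set()]
bad = [{"req"}, set(), set()]
print(bounded_response(good, 2))  # True: each req acked within 2 steps
print(bounded_response(bad, 2))   # False: req at t=0 never acknowledged
```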

  19. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should provide a tool for designing better aerospace vehicles while reducing development costs, both by performing computations with Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; the computer should therefore handle many remote terminals efficiently. The capability of transferring data to and from other computers also needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  20. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer-science-related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made both to incorporate computational thinking into existing primary school education and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted at fourth-, fifth-, and sixth-grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curricula. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
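    Hairball itself analyzes Scratch programs; as a language-neutral illustration of the same style of static analysis, the sketch below uses Python's ast module to run simple structural checks on a submission, such as verifying that a loop and a function definition are present. The check names are hypothetical.

```python
# Hairball checks Scratch programs; this hypothetical analogue applies
# the same style of static analysis to Python source using the standard
# ast module: parse the code, walk the tree, and report structural facts.
import ast

def check_submission(source):
    """Return a dict of simple structural checks over the source text."""
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    return {
        "has_loop": any(isinstance(n, (ast.For, ast.While)) for n in nodes),
        "has_function": any(isinstance(n, ast.FunctionDef) for n in nodes),
        "num_statements": sum(isinstance(n, ast.stmt) for n in nodes),
    }

student_code = """
def total(xs):
    s = 0
    for x in xs:
        s += x
    return s
"""
print(check_submission(student_code))
```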