WorldWideScience

Sample records for shorter computation time

  1. YAOPBM-II: extension to higher degrees and to shorter time series

    Energy Technology Data Exchange (ETDEWEB)

    Korzennik, S G [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States)], E-mail: skorzennik@cfa.harvard.edu

    2008-10-15

    In 2005, I presented a new fitting methodology (Yet AnOther Peak Bagging Method - YAOPBM), derived for very-long time series (2088-day-long) and applied it to low-degree modes, ℓ ≤ 25. That very-long time series was also sub-divided into shorter segments (728-day-long) that were each fitted over the same range of degrees, to estimate changes with solar activity levels. I present here the extension of this method in several 'directions': a) to substantially higher degrees (ℓ ≤ 125); b) to shorter time series (364- and 182-day-long); and c) to additional 728-day-long segments, covering now some 10 years of observations. I discuss issues with the fitting, namely the leakage matrix and the f- and p1-modes at very low frequencies, and I present some of the characteristics of the observed temporal changes.
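
    Peak bagging of this kind amounts to fitting resonant mode profiles to the power spectrum of a long velocity time series. The sketch below is a minimal, hypothetical illustration (not the YAOPBM code): it fits a single Lorentzian mode profile plus a flat background to a simulated spectrum with scipy.

        # Minimal sketch of one peak-bagging step: fit a Lorentzian mode profile
        # plus a flat background to a power spectrum. Illustrative only; real
        # pipelines use maximum likelihood for chi^2-distributed spectra.
        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(nu, amp, nu0, width, bkg):
            # Single oscillation-mode profile (FWHM = width) on a flat background.
            return amp / (1.0 + ((nu - nu0) / (width / 2.0)) ** 2) + bkg

        rng = np.random.default_rng(0)
        nu = np.linspace(2990.0, 3010.0, 400)          # frequency grid (microHz)
        model = lorentzian(nu, 10.0, 3000.0, 1.0, 0.5)
        power = model * rng.exponential(1.0, nu.size)  # multiplicative chi^2 noise

        popt, _ = curve_fit(lorentzian, nu, power, p0=(8.0, 3001.0, 2.0, 1.0))
        print("frequency %.2f microHz, width %.2f microHz" % (popt[1], popt[2]))

    One consequence of fitting shorter segments is coarser spectral resolution: a series of length T resolves frequencies no finer than 1/T, so 182-day fits are intrinsically harder than 728-day ones.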

  2. The risk of shorter fasting time for pediatric deep sedation.

    Science.gov (United States)

    Clark, Mathew; Birisci, Esma; Anderson, Jordan E; Anliker, Christina M; Bryant, Micheal A; Downs, Craig; Dalabih, Abdallah

    2016-01-01

    Current guidelines adopted by the American Academy of Pediatrics call for prolonged fasting times before performing pediatric procedural sedation and analgesia (PSA). PSA is increasingly provided to children outside of the operating theater by sedation-trained pediatric providers and does not require airway manipulation. We investigated the safety of a shorter fasting time compared with a longer, guideline-compliant fasting time, and tried to identify the association between fasting time and sedation-related complications. This is a prospective observational study that included children 2 months to 18 years of age who had an American Society of Anesthesiologists physical status classification of I or II and underwent deep sedation for elective procedures performed by pediatric critical care providers. Procedures included radiologic imaging studies, electroencephalograms, auditory brainstem response, echocardiograms, Botox injections, and other minor surgical procedures. Subjects were divided into two groups depending on the length of their fasting time (4-6 h and >6 h). Complication rates were calculated and compared between the two groups. In the studied group of 2487 subjects, 1007 (40.5%) had a fasting time of 4-6 h and the remaining 1480 (59.5%) subjects had fasted for >6 h. There were no statistically significant differences in any of the studied complications between the two groups. This study found no difference in complication rate with regard to fasting time in our subject cohort, which included only healthy children receiving elective procedures performed by sedation-trained pediatric critical care providers. This suggests that a shorter fasting time may be safe for procedures performed outside of the operating theater that do not involve high-risk patients or airway manipulation.

  3. Physical activity during video capsule endoscopy correlates with shorter bowel transit time.

    Science.gov (United States)

    Stanich, Peter P; Peck, Joshua; Murphy, Christopher; Porter, Kyle M; Meyer, Marty M

    2017-09-01

    Video capsule endoscopy (VCE) is limited by reliance on bowel motility for propulsion, and lack of physical activity has been proposed as a cause of incomplete studies. Our aim was to prospectively investigate the association between physical activity and VCE bowel transit. Ambulatory outpatients receiving VCE were eligible for the study. A pedometer was attached at the time of VCE ingestion and step count was recorded at the end of the procedure. VCE completion was assessed by logistic regression models, which included step count (500 steps as one unit). Total transit time was analyzed by Cox proportional hazards models. The hazard ratios (HR) with 95% confidence interval (CI) indicated the "hazard" of completion, such that HRs > 1 indicated a reduced transit time. A total of 100 patients were included. VCE was completed in 93 patients (93%). The median step count was 2782 steps. Step count was not significantly associated with VCE completion (odds ratio 1.45, 95%CI 0.84, 2.49). Pedometer step count was significantly associated with shorter total, gastric, and small-bowel transit times (HR 1.09, 95%CI 1.03, 1.16; HR 1.05, 95%CI 1.00, 1.11; HR 1.07, 95%CI 1.01, 1.14, respectively). Higher body mass index (BMI) was significantly associated with VCE completion (HR 1.87, 95%CI 1.18, 2.97) and shorter bowel transit times (HR 1.05, 95%CI 1.02, 1.08). Increased physical activity during outpatient VCE was associated with shorter bowel transit times but not with study completion. In addition, BMI was a previously unreported clinical characteristic associated with VCE completion and should be included as a variable of interest in future studies.
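
    The survival analysis named above can be sketched in a few lines. The snippet below is a hedged illustration (toy data and hypothetical column names, not the study's dataset) of fitting a Cox proportional hazards model of transit time with step count scaled to 500-step units, using the lifelines library.

        # Hedged sketch of the reported analysis style: a Cox model of transit
        # time. Toy data; column names are hypothetical, not the study's.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "transit_min": [220, 340, 180, 400, 260, 310, 150, 290],
            "completed":   [1, 1, 1, 0, 1, 1, 1, 0],    # 1 = capsule reached cecum
            "steps_500":   [5.6, 2.1, 8.0, 1.2, 4.4, 3.0, 9.5, 1.8],
            "bmi":         [24.0, 31.5, 22.8, 28.9, 26.1, 30.2, 27.4, 23.3],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="transit_min", event_col="completed")
        cph.print_summary()  # exp(coef) > 1 => higher "hazard" of completion,
                             # i.e. shorter transit, matching the HR convention above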

  4. The Change of the Family Life Affected by the Shorter Working Time : From the Point of View of the Home Management

    OpenAIRE

    平田, 道憲

    1994-01-01

    In Japan, working time has been decreasing. However, Japanese working people still spend more hours per year at work than those in Western countries. A policy of shorter working time is pursued by the Japanese Government so that working people get more free time. This paper examines whether shorter working time for working members of a family enriches the time use of the other members of the family, especially the effect of husbands' shorter working time on wives...

  5. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)]

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely the Java 2 Platform, Enterprise Edition (J2EE) by Sun Microsystems and the Microsoft .NET initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  6. Optimization of a shorter variable-acquisition time for legs to achieve true whole-body PET/CT images.

    Science.gov (United States)

    Umeda, Takuro; Miwa, Kenta; Murata, Taisuke; Miyaji, Noriaki; Wagatsuma, Kei; Motegi, Kazuki; Terauchi, Takashi; Koizumi, Mitsuru

    2017-12-01

    The present study aimed to qualitatively and quantitatively evaluate PET images as a function of acquisition time for various leg sizes, and to optimize a shorter variable-acquisition time protocol for legs to achieve better qualitative and quantitative accuracy of true whole-body PET/CT images. The diameters of legs to be modeled as phantoms were defined based on data derived from 53 patients. This study analyzed PET images of a NEMA phantom and three plastic bottle phantoms (diameters, 5.68, 8.54 and 10.7 cm) that simulated the human body and legs, respectively. The phantoms comprised two spheres (diameters, 10 and 17 mm) containing fluorine-18 fluorodeoxyglucose solution with sphere-to-background ratios of 4 at a background radioactivity level of 2.65 kBq/mL. All PET data were reconstructed with acquisition times ranging from 10 to 180 s, and 1200 s. We visually evaluated image quality and determined the coefficient of variance (CV) of the background, the contrast and the quantitative %error of the hot spheres, and then determined two shorter variable-acquisition protocols for legs. Lesion detectability and quantitative accuracy, determined from maximum standardized uptake values (SUVmax) in PET images of a patient using the proposed protocols, were also evaluated. A larger phantom and a shorter acquisition time resulted in increased background noise on images and decreased contrast in the hot spheres. A visual score of ≥ 1.5 was obtained when the acquisition time was ≥ 30 s for the three leg phantoms, and ≥ 120 s for the NEMA phantom. The quantitative %errors of the 10- and 17-mm spheres in the leg phantoms were ± 15 and ± 10%, respectively, in PET images with a high CV (…). Mean SUVmax of three lesions using the current fixed-acquisition and the two proposed variable-acquisition time protocols in the clinical study were 3.1, 3.1 and 3.2, respectively, which did not significantly differ. Leg acquisition time per bed position of even 30-90
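
    The two image-quality metrics used above have standard phantom-analysis definitions (assumed here, since the abstract does not spell them out): the background coefficient of variance and the quantitative %error of a hot sphere.

        # Assumed standard definitions of the two phantom metrics above.
        import numpy as np

        def background_cv(bg_roi):
            # Coefficient of variance of background ROI values, in percent.
            bg = np.asarray(bg_roi, dtype=float)
            return 100.0 * bg.std(ddof=1) / bg.mean()

        def percent_error(measured, true):
            # Quantitative %error of a hot sphere against its known activity.
            return 100.0 * (measured - true) / true

        print(background_cv([2.5, 2.7, 2.6, 2.9, 2.4]))  # noisier image -> larger CV
        print(percent_error(10.6, 10.0))                 # 6.0 (%)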

  7. Driving for shorter outages

    International Nuclear Information System (INIS)

    Tritch, S.

    1996-01-01

    Nuclear plant outages are necessary to complete activities that cannot be completed during the operating cycle, such as steam generator inspection and testing, refueling, installing modifications, and performing maintenance tests. The time devoted to performing outages is normally the largest contributor to plant unavailability. Similarly, outage costs are a sizable portion of the total plant budget. The scope and quality of work done during outages directly affect operating reliability and the number of unplanned outages. Improved management and planning of outages enhances the margin of safety during the outage and results in increased plant reliability. The detailed planning and in-depth preparation that have become a necessity for driving shorter outage durations have also produced safer outages and improved post-outage reliability. Short outages require both plant and vendor management to focus on all aspects of the outage. Short outage durations, such as 26 days at South Texas or 29 days at North Anna, require power plant inter-department and intra-department teamwork and communication, and vendor participation. In this paper, shorter and safer outages at the 3-loop plants in the United States are explained. (J.P.N.)

  8. Investigations of model polymers: Dynamics of melts and statics of a long chain in a dilute melt of shorter chains

    International Nuclear Information System (INIS)

    Bishop, M.; Ceperley, D.; Frisch, H.L.; Kalos, M.H.

    1982-01-01

    We report additional results on a simple model of polymers, namely the diffusion in concentrated polymer systems and the static properties of one long chain in a dilute melt of shorter chains. It is found, for the polymer sizes and time scales amenable to our computer calculations, that there is as yet no evidence for a "reptation" regime in a melt. There is some indication of reptation in the case of a single chain moving through fixed obstacles. No statistically significant effect of the change from excluded-volume behavior of the long chain to ideal behavior as the shorter chains grow is observed.

  9. Reusability Framework for Cloud Computing

    OpenAIRE

    Singh, Sukhpal; Singh, Rishideep

    2012-01-01

    Cloud-based development is a challenging task for many software engineering projects, especially those that need to be developed with reusability in mind. Cloud computing is currently enabling new professional models for software development. Cloud computing is expected to be the next major computing trend because of its speed of application deployment, shorter time to market, and lower cost of operation. Until a Cloud Computing Reusability Model is considered a fundamen...

  10. How do shorter working hours affect employee wellbeing? : Shortening working time in Finland

    OpenAIRE

    Lahdenperä, Netta

    2017-01-01

    The way work is done is changing dramatically due to digital breakthroughs. Generation Y is entering the workforce with a changed attitude towards work, and organizations are increasing their focus on employee wellbeing. Organizations that adopt the new model of work and understand the importance of their staff's wellbeing are leading the transition to a more efficient business, a better working life and a healthier planet. The thesis explores the numerous effects of shorter working...

  11. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

    Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018 - Previously, it has been concluded that improvement in running economy (RE) might be considered a key to continued improvement in performance when no further increase in V̇O2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy was determined together with gait characteristics. Participants showed moderate to very good RE at the first (202 ± 26 ml·kg⁻¹·km⁻¹) and second (188 ± 12 ml·kg⁻¹·km⁻¹) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE at the second run (r = 0.782; p = 0.022), which represented the intensity of the anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured in everyday training settings. We provide evidence that GCT is associated with the superior RE of the female Kenyan runners.
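
    Running economy here is the steady-state oxygen cost of covering a unit distance (assumed standard definition), which is why the reported values carry units of ml·kg⁻¹·km⁻¹. A worked example:

        # Worked example of a running economy value (assumed definition:
        # steady-state VO2 divided by running speed).
        vo2 = 40.0         # ml per kg per min at steady state
        speed_kmh = 12.0   # an easy training pace
        re = vo2 / (speed_kmh / 60.0)  # -> ml per kg per km
        print(re)          # 200.0, of the same order as the trial means above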

  12. Self-produced Time Intervals Are Perceived as More Variable and/or Shorter Depending on Temporal Context in Subsecond and Suprasecond Ranges

    Directory of Open Access Journals (Sweden)

    Keita eMitani

    2016-06-01

    The processing of time intervals is fundamental for sensorimotor and cognitive functions. Perceptual and motor timing are often performed concurrently (e.g., playing a musical instrument). Although previous studies have shown the influence of body movements on time perception, how we perceive self-produced time intervals has remained unclear. Furthermore, it has been suggested that the timing mechanisms are distinct for the sub- and suprasecond ranges. Here, we compared perceptual performances for self-produced and passively presented time intervals in random contexts (i.e., multiple target intervals presented in a session) across the sub- and suprasecond ranges (Experiment 1) and within the sub- (Experiment 2) and suprasecond (Experiment 3) ranges, and in a constant context (i.e., a single target interval presented in a session) in the sub- and suprasecond ranges (Experiment 4). We show that self-produced time intervals were perceived as shorter and more variable across the sub- and suprasecond ranges and within the suprasecond range, but not within the subsecond range, in a random context. In a constant context, the self-produced time intervals were perceived as more variable in the suprasecond range but not in the subsecond range. The impairing effects indicate that motor timing interferes with perceptual timing. The dependence of impairment on temporal context suggests multiple timing mechanisms for the subsecond and suprasecond ranges. In addition, violation of the scalar property (i.e., a constant variability-to-target-interval ratio) was observed between the sub- and suprasecond ranges. The violation was clearer for motor timing than for perceptual timing. This suggests that the multiple timing mechanisms for the sub- and suprasecond ranges overlap more for perception than for motor timing. Moreover, the central tendency effect (i.e., where shorter base intervals are overestimated and longer base intervals are underestimated) disappeared with subsecond

  13. Computer tomography urography assisted real-time ultrasound-guided percutaneous nephrolithotomy on renal calculus.

    Science.gov (United States)

    Fang, You-Qiang; Wu, Jie-Ying; Li, Teng-Cheng; Zheng, Hao-Feng; Liang, Guan-Can; Chen, Yan-Xiong; Hong, Xiao-Bin; Cai, Wei-Zhong; Zang, Zhi-Jun; Di, Jin-Ming

    2017-06-01

    This study aimed to assess the role of a pre-designed route on computer tomography urography (CTU) in ultrasound-guided percutaneous nephrolithotomy (PCNL) for renal calculus. From August 2013 to May 2016, a total of 100 patients diagnosed with complex renal calculus in our hospital were randomly divided into a CTU group and a control group (without CTU assistance). CTU was used to design a rational puncture route in the CTU group. Ultrasound was used in both groups to establish a working trace in the operation areas. Patients' perioperative parameters and postoperative complications were recorded. All operations were successfully performed, without conversion to open surgery. Time of channel establishment in the CTU group (6.5 ± 4.3 minutes) was shorter than in the control group (10.0 ± 6.7 minutes) (P = .002). In addition, the CTU group had a shorter operation time, lower rates of blood transfusion and secondary operation, and fewer established channels. The incidence of postoperative complications, including residual stones, sepsis, severe hemorrhage, and perirenal hematoma, was lower in the CTU group than in the control group. Pre-designing the puncture route on CTU images improves puncturing accuracy, reduces the number of established channels and improves the safety of ultrasound-guided PCNL for complex renal calculus, but at the cost of increased radiation exposure.

  14. The Napoleon Complex: When Shorter Men Take More.

    Science.gov (United States)

    Knapen, Jill E P; Blaker, Nancy M; Van Vugt, Mark

    2018-05-01

    Inspired by an evolutionary psychological perspective on the Napoleon complex, we hypothesized that shorter males are more likely to show indirect aggression in resource competitions with taller males. Three studies provide support for our interpretation of the Napoleon complex. Our pilot study shows that men (but not women) keep more resources for themselves when they feel small. When paired with a taller male opponent (Study 1), shorter men keep more resources to themselves in a game in which they have all the power (dictator game) versus a game in which the opponent also has some power (ultimatum game). Furthermore, shorter men are not more likely to show direct, physical aggression toward a taller opponent (Study 2). As predicted by the Napoleon complex, we conclude that (relatively) shorter men show greater behavioral flexibility in securing resources when presented with cues that they are physically less competitive. Theoretical and practical implications are discussed.

  15. 12 CFR 1102.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1102.27 Section 1102.27 Banks... for Proceedings § 1102.27 Computing time. (a) General rule. In computing any period of time prescribed... time begins to run is not included. The last day so computed is included, unless it is a Saturday...
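
    The rule quoted in this and the similar records below translates directly into code. The sketch below illustrates the general rule only (the holiday set is a placeholder, and each regulation has its own edge cases):

        # The general rule: exclude the day of the act; include the last day
        # unless it is a Saturday, Sunday or holiday, in which case the period
        # runs to the next business day. Holiday set is a placeholder.
        from datetime import date, timedelta

        HOLIDAYS = {date(2010, 1, 1), date(2010, 12, 25)}  # assumed examples

        def deadline(act_date, period_days):
            day = act_date + timedelta(days=period_days)   # trigger day not counted
            while day.weekday() >= 5 or day in HOLIDAYS:   # Sat=5, Sun=6
                day += timedelta(days=1)
            return day

        print(deadline(date(2010, 3, 10), 30))  # e.g., a 30-day response period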

  16. Effects of shorter versus longer storage time of transfused red blood cells in adult ICU patients

    DEFF Research Database (Denmark)

    Rygård, Sofie L; Jonsson, Andreas B; Madsen, Martin B

    2018-01-01

    on the effects of shorter versus longer storage time of transfused RBCs on outcomes in ICU patients. METHODS: We conducted a systematic review with meta-analyses and trial sequential analyses (TSA) of randomised clinical trials including adult ICU patients transfused with fresher versus older or standard issue blood. RESULTS: We included seven trials with a total of 18,283 randomised ICU patients; two trials of 7504 patients were judged to have low risk of bias. We observed no effects of fresher versus older blood on death (relative risk 1.04, 95% confidence interval (CI) 0.97-1.11; 7349 patients; TSA-adjusted CI 0.93-1.15), adverse events (1.26, 0.76-2.09; 7332 patients; TSA-adjusted CI 0.16-9.87) or post-transfusion infections (1.07, 0.96-1.20; 7332 patients; TSA-adjusted CI 0.90-1.27). The results were unchanged by including trials with high risk of bias. TSA confirmed the results and the required

  17. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  18. 36 CFR 223.81 - Shorter advertising periods in emergencies.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 2 2010-07-01 2010-07-01 false Shorter advertising periods... OF AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER Timber Sale Contracts Advertisement and Bids § 223.81 Shorter advertising periods in emergencies. In emergency situations where prompt...

  19. 12 CFR 908.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 908.27 Section 908.27 Banks and... PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.27 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...

  20. Do shorter wavelengths improve contrast in optical mammography?

    International Nuclear Information System (INIS)

    Taroni, P; Pifferi, A; Torricelli, A; Spinelli, L; Danesini, G M; Cubeddu, R

    2004-01-01

    The detection of tumours with time-resolved transmittance imaging relies essentially on blood absorption. Previous theoretical and phantom studies have shown that both the contrast and the spatial resolution of optical images are affected by the optical properties of the background medium, and high absorption and scattering are generally beneficial. Based on these observations, wavelengths shorter than those presently used (680-780 nm) could be profitable for optical mammography. A study was thus performed analysing time-resolved transmittance images at 637, 656, 683 and 785 nm obtained from 26 patients bearing 16 tumours and 15 cysts. The optical contrast proved to increase with decreasing wavelength for the detection of cancers in late-gated intensity images, with a higher gain in contrast for lesions of smaller size (<1.5 cm diameter). For cysts, either a progressive increase or decrease in contrast with wavelength was observed in scattering images.

  1. 12 CFR 1780.11 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1780.11 Section 1780.11 Banks... time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...

  2. N-Terminal Domains in Two-Domain Proteins Are Biased to Be Shorter and Predicted to Fold Faster Than Their C-Terminal Counterparts

    Directory of Open Access Journals (Sweden)

    Etai Jacob

    2013-04-01

    Computational analysis of proteomes in all kingdoms of life reveals a strong tendency for N-terminal domains in two-domain proteins to have shorter sequences than their neighboring C-terminal domains. Given that folding rates are affected by chain length, we asked whether the tendency for N-terminal domains to be shorter than their neighboring C-terminal domains reflects selection for faster-folding N-terminal domains. Calculations of absolute contact order, another predictor of folding rate, provide additional evidence that N-terminal domains tend to fold faster than their neighboring C-terminal domains. A possible explanation for this bias, which is more pronounced in prokaryotes than in eukaryotes, is that faster folding of N-terminal domains reduces the risk for protein aggregation during folding by preventing formation of nonnative interdomain interactions. This explanation is supported by our finding that two-domain proteins with a shorter N-terminal domain are much more abundant than those with a shorter C-terminal domain.
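
    Absolute contact order, mentioned above as a folding-rate predictor, is simply the mean sequence separation of residues in native contact (standard definition assumed); larger values predict slower folding. A minimal sketch:

        # Absolute contact order: mean sequence separation of native contacts.
        def absolute_contact_order(contacts):
            # contacts: iterable of (i, j) residue index pairs in native contact
            seps = [abs(i - j) for i, j in contacts]
            return sum(seps) / len(seps)

        local_topology = [(1, 4), (2, 5), (3, 6), (7, 10)]    # mostly short-range
        long_range     = [(1, 40), (2, 35), (5, 50), (7, 44)]
        print(absolute_contact_order(local_topology))  # 3.0  -> faster folding
        print(absolute_contact_order(long_range))      # 38.5 -> slower folding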

  3. 6 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Computation of time. 13.27 Section 13.27 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued...

  4. Seeding the cloud: Financial bootstrapping in the computer software sector

    OpenAIRE

    Mac An Bhaird, Ciarán; Lynn, Theo

    2015-01-01

    This study investigates resourcing of computer software companies that have adopted cloud computing for the development and delivery of application software. Use of this innovative technology potentially impacts firm financing because the initial infrastructure investment requirement is much lower than for packaged software, lead time to market is shorter, and cloud computing supports instant scalability. We test these predictions by conducting in-depth interviews with founders of 18 independ...

  5. Representativeness of shorter measurement sessions in long-term indoor air monitoring.

    Science.gov (United States)

    Maciejewska, M; Szczurek, A

    2015-02-01

    Indoor air quality (IAQ) considerably influences the health, comfort and overall performance of people who spend most of their lives in confined spaces. For this reason, there is a strong need to develop methods for IAQ assessment. A fundamental issue in the quantitative determination of IAQ is the duration of measurements: an inadequate choice may provide incorrect information and potentially lead to wrong conclusions. The most complete information may be acquired through long-term monitoring, but this is typically perceived as impractical because of the time and cost involved. The aim of this study was to determine whether long-term monitoring can be adequately represented by a shorter measurement session. Three measurable quantities were considered: temperature, relative humidity and carbon dioxide concentration. They are commonly recognized as indicators of IAQ and can be readily monitored. Scaled Kullback-Leibler divergence, also called relative entropy, was applied as a measure of data representativeness. We considered long-term monitoring over a range from 1 to 9 months. Based on our work, representative data on CO2 concentration may be acquired with measurements covering 20% of the time dedicated to long-term monitoring. For temperature and relative humidity the respective time demand was 50% of long-term monitoring. Our results suggest that indoor air monitoring strategies could use shorter measurement sessions while still collecting data that are representative of long-term monitoring.
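
    The representativeness test can be sketched as follows: histogram a quantity over the shorter session and over the full monitoring period, then compute the Kullback-Leibler divergence between the two distributions (the paper's scaling is not reproduced here; this is an unscaled illustration with synthetic data).

        # KL divergence of a shorter session's distribution against the full
        # monitoring period. Synthetic data; the paper's scaling is omitted.
        import numpy as np

        def kl_divergence(p_counts, q_counts, eps=1e-12):
            p = p_counts / p_counts.sum()
            q = q_counts / q_counts.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        rng = np.random.default_rng(1)
        full = rng.normal(800.0, 150.0, 9 * 30 * 24)  # ~9 months of hourly CO2 (ppm)
        short = full[: len(full) // 5]                # a session 20% as long
        edges = np.histogram_bin_edges(full, bins=40)
        kl = kl_divergence(np.histogram(short, edges)[0].astype(float),
                           np.histogram(full, edges)[0].astype(float))
        print(f"KL(short || full) = {kl:.4f}  (near 0 => representative)")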

  6. Noise-constrained switching times for heteroclinic computing

    Science.gov (United States)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for

  7. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
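
    The "delay problem" is easy to demonstrate in simulation: the same discrete controller performs worse when its output is applied one or more computation intervals late. The sketch below uses a toy PI controller and first-order plant (arbitrary gains, not the paper's STP or PID formulations).

        # Toy demonstration of the computing "delay problem": identical PI
        # controller and plant, with actuation delayed by N control periods.
        def simulate(delay_steps, kp=1.2, ki=0.4, dt=0.01, steps=600):
            x, integ = 0.0, 0.0
            pending = [0.0] * delay_steps    # FIFO modeling computation latency
            ise = 0.0                        # integral of squared error
            for _ in range(steps):
                e = 1.0 - x                  # track a unit setpoint
                integ += e * dt
                pending.append(kp * e + ki * integ)
                u = pending.pop(0)           # command computed delay_steps ago
                x += (-x + u) * dt / 0.05    # first-order plant, tau = 50 ms
                ise += e * e * dt
            return ise

        print("ISE, no delay:     %.4f" % simulate(0))
        print("ISE, 5-step delay: %.4f" % simulate(5))  # delay degrades tracking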

  8. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    Science.gov (United States)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-01-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the

  10. High Numbers of Stromal Cancer-Associated Fibroblasts Are Associated With a Shorter Survival Time in Cats With Oral Squamous Cell Carcinoma.

    Science.gov (United States)

    Klobukowska, H J; Munday, J S

    2016-11-01

    Cancer-associated fibroblasts (CAFs) are fibroblastic cells that express α-smooth muscle actin and have been identified in the stroma of numerous epithelial tumors. The presence of CAFs within the tumor stroma has been associated with a poorer prognosis in some human cancers, including oral squamous cell carcinomas (SCCs). Cats frequently develop oral SCCs, and although these are generally highly aggressive neoplasms, there is currently a lack of prognostic markers for these tumors. The authors investigated the prognostic value of the presence of CAFs within the stroma of oral SCC biopsy specimens from 47 cats. In addition, several epidemiologic, clinical, and histologic variables were also assessed for prognostic significance. A CAF-positive stroma was identified in 35 of 47 SCCs (74.5%), and the median survival time (ST) of cats with CAF-positive SCCs (35 days) was significantly shorter than that of cats with CAF-negative SCCs (48.5 days) (P = .031). ST was also associated with the location of the primary tumor (P = .0018): the median ST for oropharyngeal SCCs (179 days) was significantly longer than for maxillary (43.5 days; P = .047), mandibular (42 days; P = .022), and sublingual SCCs (22.5 days; P = .0005). The median ST of sublingual SCCs was also shorter compared with maxillary SCCs (P = .0017). Furthermore, a significant association was identified between site and the presence of stromal CAFs (P = .025). On the basis of this retrospective study, evaluating the tumor stroma for CAFs in feline oral SCC biopsy specimens may be of potential prognostic value. © The Author(s) 2016.

  11. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  12. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  13. Shorter time since inflammatory bowel disease diagnosis in children is associated with lower mental health in parents.

    Science.gov (United States)

    Werner, H; Braegger, Cp; Buehr, P; Koller, R; Nydegger, A; Spalinger, J; Heyland, K; Schibli, S; Landolt, Ma

    2015-01-01

    This study assessed the mental health of parents of children with inflammatory bowel disease (IBD), compared their mental health with age-matched and gender-matched references and examined parental and child predictors for mental health problems. A total of 125 mothers and 106 fathers of 125 children with active and inactive IBD from the Swiss IBD multicentre cohort study were included. Parental mental health was assessed by the Symptom Checklist 27 and child behaviour problems by the Strengths and Difficulties Questionnaire. Child medical data were extracted from hospital records. While the mothers reported lower mental health, the fathers' mental health was similar, or even better, than in age-matched and gender-matched community controls. In both parents, shorter time since the child's diagnosis was associated with poorer mental health. In addition, the presence of their own IBD diagnosis and child behaviour problems predicted maternal mental health problems. Parents of children with IBD may need professional support when their child is diagnosed, to mitigate distress. This, in turn, may help the child to adjust better to IBD. Particular attention should be paid to mothers who have their own IBD diagnosis and whose children display behaviour problems. ©2014 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  14. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
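
    At its simplest, an NTP (here SNTP) exchange is a single 48-byte UDP request whose reply carries the server's transmit timestamp in seconds since 1900. A minimal client sketch (the server name is just an example; any public NTP server works):

        # Minimal SNTP query: one 48-byte request, decode the 32-bit seconds
        # field of the server's transmit timestamp (bytes 40-43).
        import socket, struct, time

        NTP_TO_UNIX = 2208988800   # seconds from 1900-01-01 to 1970-01-01

        def sntp_time(server="pool.ntp.org", timeout=2.0):
            packet = b"\x1b" + 47 * b"\0"   # LI=0, VN=3, Mode=3 (client)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(packet, (server, 123))
                data, _ = s.recvfrom(48)
            secs = struct.unpack("!I", data[40:44])[0]
            return secs - NTP_TO_UNIX

        print(time.ctime(sntp_time()))  # wall-clock time according to the server

    A full NTP implementation additionally estimates round-trip delay and offset from all four timestamps and disciplines the clock gradually, which is the subject of the book.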

  15. Defining Accelerometer Nonwear Time to Maximize Detection of Sedentary Time in Youth

    DEFF Research Database (Denmark)

    Cain, Kelli L; Bonilla, Edith; Conway, Terry L

    2018-01-01

    PURPOSE: The present study examined various accelerometer nonwear definitions and their impact on detection of sedentary time using different ActiGraph models, filters, and axes. METHODS: In total, 61 youth (34 children and 27 adolescents; aged 5-17 y) wore a 7164 and GT3X+ ActiGraph on a hip … (V and VM), and GT3X+N (V and VM), and sedentary estimates were computed. RESULTS: The GT3X+LFE-VM was most sensitive to movement and could accurately detect observed sedentary time with the shortest nonwear definition of 20 minutes of consecutive "0" counts for children and 40 minutes for adolescents. The GT3X+N-V was least sensitive to movement and required longer definitions to detect observed sedentary time (40 min for children and 90 min for adolescents). VM definitions were 10 minutes shorter than V definitions. LFE definitions were 40 minutes shorter than N definitions in adolescents. CONCLUSION: Different
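
    The nonwear rule being tuned above is a run-length rule: any run of consecutive zero-count epochs at least as long as the chosen definition is flagged as nonwear, and sedentary time is then estimated from the remaining wear-time epochs. A sketch (epoch length and window are the tunable quantities):

        # Flag runs of >= `window` consecutive zero counts as nonwear.
        import numpy as np

        def nonwear_mask(counts, window):
            counts = np.asarray(counts)
            mask = np.zeros(counts.size, dtype=bool)
            start = None
            for i, c in enumerate(list(counts) + [1]):  # sentinel closes last run
                if c == 0 and start is None:
                    start = i
                elif c != 0 and start is not None:
                    if i - start >= window:
                        mask[start:i] = True
                    start = None
            return mask

        counts = [0] * 25 + [310, 12, 0, 0, 480] + [0] * 10
        print(nonwear_mask(counts, window=20).sum())  # 25 epochs flagged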

  16. 5 CFR 890.101 - Definitions; time computations.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Definitions; time computations. 890.101....101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee... in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate...

  17. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics working across a network connecting a parallel computing server and a client terminal was developed. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation while the computation is actually running on the server. Using a GUI (Graphical User Interface) on the client terminal, the user can also change parameters of the analysis and visualization during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer and compresses the data. The amount of data sent from the parallel computer to the client is therefore so small compared with the uncompressed case that the user enjoys swift image display. Parallelization of image data generation is based on the Owner Computation Rule. The GUI on the client is built on a Java applet. Real-time visualization is thus possible on the client PC provided only that a Web browser is installed on it. (author)

  18. 29 CFR 1921.22 - Computation of time.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Computation of time. 1921.22 Section 1921.22 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... WORKERS' COMPENSATION ACT Miscellaneous § 1921.22 Computation of time. Sundays and holidays shall be...

  19. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that the general purpose computing capability needed online should be available for the SSC and LHC machines. 2 figs

  20. Hereditary angioedema attacks resolve faster and are shorter after early icatibant treatment.

    Directory of Open Access Journals (Sweden)

    Marcus Maurer

    BACKGROUND: Attacks of hereditary angioedema (HAE) are unpredictable and, if affecting the upper airway, can be lethal. Icatibant is used for physician- or patient self-administered symptomatic treatment of HAE attacks in adults. Its mode of action includes disruption of the bradykinin pathway via blockade of the bradykinin B2 receptor. Early treatment is believed to shorten attack duration and prevent severe outcomes; however, evidence to support these benefits is lacking. OBJECTIVE: To examine the impact of timing of icatibant administration on the duration and resolution of HAE type I and II attacks. METHODS: The Icatibant Outcome Survey is an international, prospective, observational study of patients treated with icatibant. Data on timings and outcomes of icatibant treatment for HAE attacks were collected between July 2009 and February 2012. A mixed model of repeated measures was performed for 426 attacks in 136 HAE type I and II patients. RESULTS: Attack duration was significantly shorter in patients treated <1 hour after attack onset compared with those treated ≥1 hour (6.1 hours versus 16.8 hours [p<0.001]). Similar significant effects were observed for <2 hours versus ≥2 hours (7.2 hours versus 20.2 hours [p<0.001]) and <5 hours versus ≥5 hours (8.0 hours versus 23.5 hours [p<0.001]). Treatment within 1 hour of attack onset also significantly reduced time to attack resolution (5.8 hours versus 8.8 hours [p<0.05]). Self-administrators were more likely to treat early and experience shorter attacks than those treated by a healthcare professional. CONCLUSION: Early blockade of the bradykinin B2 receptor with icatibant, particularly within the first hour of attack onset, significantly reduced attack duration and time to attack resolution.

  1. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated by implementing some of its features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  2. Recent achievements in real-time computational seismology in Taiwan

    Science.gov (United States)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable; it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and a moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also ready to be opened to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point-source information; ROS completes a 3D simulation in real time). For more information, visit the real-time computational seismology earthquake report webpage (RCS).

  3. 7 CFR 1.603 - How are time periods computed?

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false How are time periods computed? 1.603 Section 1.603... Licenses General Provisions § 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2...

  4. Spindle assembly checkpoint protein expression correlates with cellular proliferation and shorter time to recurrence in ovarian cancer.

    LENUS (Irish Health Repository)

    McGrogan, Barbara

    2014-07-01

    Ovarian carcinoma (OC) is the most lethal of the gynecological malignancies, often presenting at an advanced stage. Treatment is hampered by high levels of drug resistance. The taxanes are microtubule stabilizing agents, used as first-line agents in the treatment of OC that exert their apoptotic effects through the spindle assembly checkpoint. BUB1-related protein kinase (BUBR1) and mitotic arrest deficient 2 (MAD2), essential spindle assembly checkpoint components, play a key role in response to taxanes. BUBR1, MAD2, and Ki-67 were assessed on an OC tissue microarray platform representing 72 OC tumors of varying histologic subtypes. Sixty-one of these patients received paclitaxel and platinum agents combined; 11 received platinum alone. Overall survival was available for all 72 patients, whereas recurrence-free survival (RFS) was available for 66 patients. Increased BUBR1 expression was seen in serous carcinomas, compared with other histologies (P = .03). Increased BUBR1 was significantly associated with tumors of advanced stage (P = .05). Increased MAD2 and BUBR1 expression also correlated with increased cellular proliferation (P < .0002 and P = .02, respectively). Reduced MAD2 nuclear intensity was associated with a shorter RFS (P = .03), in ovarian tumors of differing histologic subtype (n = 66). In this subgroup, for those women who received paclitaxel and platinum agents combined (n = 57), reduced MAD2 intensity also identified women with a shorter RFS (P < .007). For the entire cohort of patients, irrespective of histologic subtype or treatment, MAD2 nuclear intensity retained independent significance in a multivariate model, with tumors showing reduced nuclear MAD2 intensity identifying patients with a poorer RFS (P = .05).

  5. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  6. Extending 3D near-cloud corrections from shorter to longer wavelengths

    International Nuclear Information System (INIS)

    Marshak, Alexander; Evans, K. Frank; Várnai, Tamás; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model [18]. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov [9] proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  7. 50 CFR 221.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false How are time periods computed? 221.3... Provisions § 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the...

  8. Real-time computational photon-counting LiDAR

    Science.gov (United States)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.

  9. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.
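
    The clock regulation described for RighTime can be pictured as learning a linear drift model from a few reference readings and applying the predicted correction between update calls. A hedged sketch (illustrative numbers, not RighTime's actual algorithm):

        # Learn a linear drift model, error = a*t + b, from (local, reference)
        # clock pairs, then correct raw local readings. Numbers illustrative.
        import numpy as np

        local = np.array([0.0, 86400.0, 172800.0, 259200.0])   # local clock (s)
        ref   = np.array([0.0, 86400.9, 172801.7, 259202.6])   # reference time (s)

        a, b = np.polyfit(local, ref - local, 1)
        print(f"drift: {a * 86400:.2f} s/day")                 # ~0.86 s/day here

        def corrected(raw):
            return raw + a * raw + b

        print(corrected(300000.0))  # predicted true time for a raw reading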

  10. TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.

    Science.gov (United States)

    Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M

    2013-01-01

    TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Data of 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.
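
    The product-of-coefficient mediation check mentioned above can be sketched as follows: path a regresses the mediator (WC) on the exposure (TV time), path b regresses the outcome on exposure plus mediator, and the indirect effect a·b is tested against a Sobel standard error. The data here are simulated and the coefficients are made up; only the procedure mirrors the abstract:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 634                                   # same n as the study; data simulated
tv = rng.exponential(2.0, n)              # exposure: TV hours/day (toy)
wc = 80 + 1.5 * tv + rng.normal(0, 8, n)  # mediator: waist circumference
trig = 1.0 + 0.004 * tv + 0.01 * wc + rng.normal(0, 0.5, n)  # outcome

# Path a: mediator ~ exposure.
fit_a = sm.OLS(wc, sm.add_constant(tv)).fit()
a, se_a = fit_a.params[1], fit_a.bse[1]

# Path b: outcome ~ exposure + mediator (b is the mediator's coefficient).
X = sm.add_constant(np.column_stack([tv, wc]))
fit_b = sm.OLS(trig, X).fit()
b, se_b = fit_b.params[2], fit_b.bse[2]

indirect = a * b                          # product-of-coefficients estimate
sobel_se = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
print(indirect, indirect / sobel_se)      # indirect effect and its z-statistic
```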

  11. Shorter Perceived Outpatient MRI Wait Times Associated With Higher Patient Satisfaction.

    Science.gov (United States)

    Holbrook, Anna; Glenn, Harold; Mahmood, Rabia; Cai, Qingpo; Kang, Jian; Duszak, Richard

    2016-05-01

    The aim of this study was to assess differences in perceived versus actual wait times among patients undergoing outpatient MRI examinations and to correlate those times with patient satisfaction. Over 15 weeks, 190 patients presenting for outpatient MR in a radiology department in which "patient experience" is one of the stated strategic priorities were asked to (1) estimate their wait times for various stages in the imaging process and (2) state their satisfaction with their imaging experience. Perceived times were compared with actual electronic time stamps. Perceived and actual times were compared and correlated with standardized satisfaction scores using Kendall τ correlation. The mean actual wait time between patient arrival and examination start was 53.4 ± 33.8 min, whereas patients perceived a mean wait time of 27.8 ± 23.1 min, a statistically significant underestimation of 25.6 min. Shorter perceived wait times at all points during patient encounters were correlated with higher satisfaction scores, and shorter perceived and actual wait times were both correlated with higher satisfaction scores. As satisfaction surveys play a larger role in an environment of metric transparency and value-based payments, better understanding of such factors will be increasingly important. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  12. Time-of-Flight Cameras in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2010-01-01

    Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real-time geometry...

  13. 29 CFR 4245.8 - Computation of time.

    Science.gov (United States)

    2010-07-01

    29 CFR 4245.8 (Labor): Computation of time. Pension Benefit Guaranty Corporation; Insolvency, Reorganization, Termination, and Other Rules Applicable to Multiemployer Plans; Notice of Insolvency, § 4245.8 Computation of...

  14. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response time guarantees from the servers, to decide which tasks should be offloaded to get the results in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the algorithm's error against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.

  15. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs, which are constructed from samples of the solution of a time-delay differential equation, and show their good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting actual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We tackle some problems associated with the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Computation of a long-time evolution in a Schroedinger system

    International Nuclear Information System (INIS)

    Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.

    1988-01-01

    We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution
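
    The short-step-then-square idea generalizes to any Hamiltonian matrix. A minimal sketch (mine, not the authors' code): build the propagator for a tiny time step from a (1,1) Padé (Cayley) approximant of exp(-iHΔt), then obtain the long-time propagator by repeated squaring, since U(2Δt) = U(Δt)²:

```python
import numpy as np

def pade11_propagator(H, dt):
    """(1,1) Pade (Cayley) approximant of exp(-1j*H*dt); exactly
    unitary for Hermitian H and accurate for small enough dt."""
    I = np.eye(H.shape[0], dtype=complex)
    A = 0.5j * dt * H
    return np.linalg.solve(I + A, I - A)

def long_time_propagator(H, total_time, n_squarings=20):
    """Propagate over total_time: build U for the tiny step
    total_time / 2**n_squarings, then square n_squarings times."""
    U = pade11_propagator(H, total_time / 2**n_squarings)
    for _ in range(n_squarings):
        U = U @ U
    return U

# Check against direct diagonalization for a random 4x4 Hermitian H.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2
w, V = np.linalg.eigh(H)
U_exact = V @ np.diag(np.exp(-1j * w)) @ V.conj().T   # t = 1
err = np.abs(long_time_propagator(H, 1.0) - U_exact).max()
print(err)   # should be tiny, around 1e-10 or below
```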

  17. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
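
    A toy sketch of the milestone approach (the function and task are invented for illustration): an iterative computation records its best result so far at regular intervals, and when the deadline arrives the last milestone is returned as an imprecise but usable result:

```python
import time

def compute_with_milestones(deadline_s, record_every=1000):
    """Refine an estimate of pi with the Leibniz series, recording a
    milestone every `record_every` terms; if the deadline arrives
    before completion, return the last milestone (imprecise result)."""
    t_end = time.monotonic() + deadline_s
    milestone = 0.0                      # last recorded result
    partial, sign = 0.0, 1.0
    for k in range(1, 10**8 + 1):
        partial += sign * 4.0 / (2 * k - 1)
        sign = -sign
        if k % record_every == 0:
            milestone = partial          # record a milestone
            if time.monotonic() >= t_end:
                return milestone, False  # deadline hit: imprecise result
    return partial, True                 # ran to completion: precise result

approx_pi, precise = compute_with_milestones(deadline_s=0.05)
print(approx_pi, precise)                # e.g. 3.14159... False
```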

  18. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power...

  19. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)

  20. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    Science.gov (United States)

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  1. In Vitro Comparison of Holmium Lasers: Evidence for Shorter Fragmentation Time and Decreased Retropulsion Using a Modern Variable-pulse Laser.

    Science.gov (United States)

    Bell, John Roger; Penniston, Kristina L; Nakada, Stephen Y

    2017-09-01

    To compare the performance of variable- and fixed-pulse lasers on stone phantoms in vitro. Seven-millimeter stone phantoms were made to simulate calcium oxalate monohydrate stones using BegoStone plus. The in vitro setting was created with a clear polyvinyl chloride tube. For each trial, a stone phantom was placed at the open end of the tubing. The Cook Rhapsody H-30 variable-pulse laser was tested on both long- and short-pulse settings and was compared to the Dornier H-20 fixed-pulse laser; 5 trials were conducted for each trial arm. Fragmentation was accomplished with the use of a flexible ureteroscope and a 273-micron holmium laser fiber using settings of 1 J × 12 Hz. The treatment time (in minutes) for complete fragmentation was recorded, as was the total retropulsion distance (in centimeters) during treatment. Laser fibers were standardized for all repetitions. The treatment time was significantly shorter with the H-30 vs the H-20 laser (14.3 ± 2.5 vs 33.1 ± 8.9 minutes, P = .008). There was no difference between the treatment times using the long vs short pulse widths of the H-30 laser (14.4 ± 3.4 vs 14.3 ± 1.7 minutes, P = .93). Retropulsion differed by laser type and pulse width: H-30 long pulse (15.8 ± 5.7 cm), H-30 short pulse (54.8 ± 7.1 cm), and H-20 (33.2 ± 12.5 cm). The H-30 laser fragmented stone phantoms in half the time of the H-20 laser regardless of the pulse width. Retropulsion effects differed between the lasers, with the H-30 causing the least retropulsion. Longer pulse widths result in less stone retropulsion. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [OCR-damaged DTIC record: Naval Underwater Systems Center, Newport, RI; technical document TD 5932, A Distributed Computing Network for Real-Time Systems, November 1980. No further legible content.]

  3. Shorter height is related to lower cardiovascular disease risk – A narrative review

    Directory of Open Access Journals (Sweden)

    Thomas T. Samaras

    2013-01-01

    Numerous Western studies have shown a negative correlation between height and cardiovascular disease. However, these correlations do not prove causation. This review brings together a variety of studies showing that short people can have little to no cardiovascular disease. When shorter people are compared to taller people, a number of biological mechanisms emerge that favor shorter people, including reduced telomere shortening, lower atrial fibrillation, higher heart pumping efficiency, lower DNA damage, lower risk of blood clots, lower left ventricular hypertrophy, and superior blood parameters. The causes of increased heart disease among shorter people in the developed world are related to lower income, excessive weight, poor diet, lifestyle factors, catch-up growth, childhood illness, and poor environmental conditions. For short people in developed countries, the data indicate that a plant-based diet, leanness, and regular exercise can substantially reduce the risk of cardiovascular disease.

  4. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

  5. 43 CFR 45.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    43 CFR 45.3 (Public Lands: Interior): How are time periods computed? General Provisions in FERC hydropower licenses, § 45.3. (a) General. ... (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday...

  6. Interactive vs passive screen time and nighttime sleep duration among school-aged children.

    Science.gov (United States)

    Yland, Jennifer; Guan, Stanford; Emanuele, Erin; Hale, Lauren

    2015-09-01

    Insufficient sleep among school-aged children is a growing concern, as numerous studies have shown that chronic short sleep duration increases the risk of poor academic performance and specific adverse health outcomes. We examined the association between weekday nighttime sleep duration and 3 types of screen exposure: television, computer use, and video gaming. We used age 9 data from an ethnically diverse national birth cohort study, the Fragile Families and Child Wellbeing Study, to assess the association between screen time and sleep duration among 9-year-olds, using screen time data reported by both the child (n = 3269) and by the child's primary caregiver (n = 2770). Within the child-reported models, children who watched more than 2 hours of television per day had shorter sleep duration by approximately 11 minutes per night compared to those who watched less than 2 hours of television (β = -0.18); greater computer use was likewise associated with reduced sleep duration. For both child- and parent-reported screen time measures, we did not find statistically significant differences in effect size across various types of screen time. Screen time from televisions and computers is associated with reduced sleep duration among 9-year-olds, using 2 sources of estimates of screen time exposure (child and parent reports). No specific type or use of screen time resulted in significantly shorter sleep duration than another, suggesting that caution should be advised against excessive use of all screens.

  7. Computer task performance by subjects with Duchenne muscular dystrophy.

    Science.gov (United States)

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  8. Relativistic Photoionization Computations with the Time Dependent Dirac Equation

    Science.gov (United States)

    2016-10-12

    [OCR-damaged record: Naval Research Laboratory, Washington, DC, report NRL/MR/6795--16-9698, Relativistic Photoionization Computations with the Time Dependent Dirac Equation, by Daniel F. Gordon and Bahman Hafizi. Keywords: tunneling, photoionization. The legible abstract fragment begins: "Ionization of inner shell electrons by laser".]

  9. Computer program 'TRIO' for third order calculation of ion trajectory

    International Nuclear Information System (INIS)

    Matsuo, Takekiyo; Matsuda, Hisashi; Fujita, Yoshitaka; Wollnik, H.

    1976-01-01

    A computer program for the calculation of ion trajectories is described. The program ''TRIO'' (Third Order Ion Optics) is applicable to any ion-optical system consisting of drift spaces, cylindrical or toroidal electric sector fields, homogeneous or inhomogeneous magnetic sector fields, and magnetic and electrostatic Q-lenses. The influence of the fringing field is taken into consideration. A special device is introduced into the method of matrix multiplication to shorten the calculation time; as a result, the required time proves to be about 40 times shorter than with the ordinary method. The trajectory calculation can be executed with accuracy up to third order. Any one of three dispersion bases (momentum; energy; or mass and energy) can be selected. A full listing of the computer program and an example are given. (auth.)
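
    To first order, the trajectory transport that TRIO performs reduces to multiplying per-element transfer matrices. A sketch with the textbook first-order matrices for a drift and a homogeneous magnetic sector in the bending plane (illustrative only; TRIO's third-order maps carry many more terms):

```python
import numpy as np

def drift(L):
    """First-order transfer matrix of a field-free drift of length L."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def magnetic_sector(rho, phi):
    """First-order (x, x') matrix of a homogeneous magnetic sector
    with bending radius rho and deflection angle phi (radians)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, rho * s],
                     [-s / rho, c]])

# Drift -> 90-degree sector -> drift; matrices compose right to left.
M = drift(0.5) @ magnetic_sector(rho=0.35, phi=np.pi / 2) @ drift(0.5)
x0 = np.array([1e-3, 2e-3])     # initial position (m) and slope (rad)
print(M @ x0)                   # position and slope at the exit plane
```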

  10. Numerical computation of homogeneous slope stability.

    Science.gov (United States)

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations for the overall and partial factors of safety. The study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
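
    A generic particle swarm optimization loop of the kind applied here to the minimum-FOS search (sketch only; the real objective, the factor-of-safety expression over candidate slip surfaces, is replaced by a toy function whose minimum is set near the paper's reported FOS):

```python
import numpy as np

def pso(objective, lo, hi, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(x) over the box [lo, hi] with a basic PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # per-particle bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy stand-in for the factor-of-safety objective (minimum near 1.0759).
fos_toy = lambda p: float(np.sum((p - 0.3) ** 2)) + 1.0759
best_x, best_f = pso(fos_toy, np.full(2, -2.0), np.full(2, 2.0))
print(best_x, best_f)       # close to [0.3, 0.3] and 1.0759
```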

  11. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  12. Computer-controlled neutron time-of-flight spectrometer. Part II

    International Nuclear Information System (INIS)

    Merriman, S.H.

    1979-12-01

    A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system

  13. Simple and Effective Algorithms: Computer-Adaptive Testing.

    Science.gov (United States)

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  14. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  15. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPU's), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPU's). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGA's tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGA's are coupled to other FPGA's to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPU's, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  16. Ubiquitous computing technology for just-in-time motivation of behavior change.

    Science.gov (United States)

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  17. SHORTER MENSTRUAL CYCLES ASSOCIATED WITH CHLORINATION BY-PRODUCTS IN DRINKING WATER

    Science.gov (United States)

    Shorter Menstrual Cycles Associated with Chlorination by-Products in Drinking Water. Gayle Windham, Kirsten Waller, Meredith Anderson, Laura Fenster, Pauline Mendola, Shanna Swan. California Department of Health Services.In previous studies of tap water consumption we...

  18. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware, and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  19. Continuous-Time Symmetric Hopfield Nets are Computationally Universal

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  20. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    Science.gov (United States)

    2015-09-13

    [OCR-damaged record; recoverable topics: ST-SI thermo-fluid analysis of a ground vehicle and its tires; ST-SI computational analysis of a vertical-axis wind turbine; multiscale compressible-flow computation with particle tracking; space–time VMS computation of wind-turbine rotor and tower aerodynamics. Named contributors include Tezduyar, Spenser McIntyre, Nikolay Kostov, Ryan Kolesar, and Casey Habluetzel.]

  1. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  2. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    22 CFR 1429.21 (Foreign Relations), Miscellaneous and General Requirements, § 1429.21 Computation of time for filing papers: In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  3. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends in network architecture and in the various protocols that govern data transfers and guarantee reliable communication. Here, a hierarchical network structure is proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered a local area computer network (LACN), suitable for a nuclear power plant control system since such a system has geographically dispersed subsystems. Network expansion would be straightforward: each added computer (HOST) simply attaches to the common channel. All the nodes are designed around a microprocessor chip to provide the required intelligence. A node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The private section naturally tends to have some variation in hardware details to match the requirements of the individual host computer.

  4. Dry eye syndrome among computer users

    Science.gov (United States)

    Gajta, Aurora; Turkoanje, Daniela; Malaescu, Iosif; Marin, Catalin-Nicolae; Koos, Marie-Jeanne; Jelicic, Biljana; Milutinovic, Vuk

    2015-12-01

    Dry eye syndrome is characterized by eye irritation due to changes of the tear film. Symptoms include itching, foreign body sensations, mucous discharge and transitory vision blurring. Less frequent symptoms include photophobia and eye tiredness. The aim of the work was to determine the quality of the tear film and the potential risk of ocular dryness in persons who spend more than 8 hours a day using computers, and to look for correlations between severity of symptoms (dry eye symptom anamnesis) and clinical signs assessed by: Schirmer test I, TBUT (tear break-up time), and TFT (tear ferning test). The results show that subjects using computers have significantly shorter TBUT (less than 5 s for 56% of subjects and less than 10 s for 37% of subjects), TFT type II/III in 50% of subjects, and TFT type III in 31% of subjects, compared to computer non-users (in whom TFT types I and II were present in 85.71% of subjects). Visual display terminal use for more than 8 hours daily has been identified as a significant risk factor for dry eye. All persons who spend substantial time using computers are advised to use artificial tear drops to minimize the symptoms of dry eye syndrome and prevent serious complications.

  5. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the

  6. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  7. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits of these observations, real-time analysis techniques to make an effective use of these data are necessary. A representative study was made by Tsushima et al. (2009) that proposed a method to provide instant tsunami source prediction based on achieving tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational amount is large to solve non-linear shallow water equations for inundation predictions, it has become executable through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids of which resolution range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. Total number of grid points were 13 million, and the time step was 0.1 seconds. Tsunami sources of 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the

  8. 5 CFR 831.703 - Computation of annuities for part-time service.

    Science.gov (United States)

    2010-01-01

    5 CFR 831.703 (Administrative Personnel): Computation of annuities for part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  9. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  10. Cycle Time and Throughput Rate Modelling Study through the Simulation Platform

    Directory of Open Access Journals (Sweden)

    Fei Xiong

    2014-02-01

    Shorter cycle time (CT) and higher throughput rate (TH) are primary goals of industry, including sensor and transducer factories. The common way to reduce cycle time is to reduce WIP, but such action may also reduce throughput. This paper presents one practical heuristic algorithm, based on tool time modelling, to balance both CT and TH. The algorithm considers the factors that exist in the work in process (WIP) and its constraints in modules of the factory. A computer simulation platform based on a semiconductor factory is built to verify this algorithm. The results of the simulation experiments suggest that the WIP level calculated by this algorithm can achieve a good balance of CT and TH.
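
    The CT/TH tension described above is captured by Little's law, WIP = TH × CT (the abstract does not name it, but it is the standard relation behind WIP-based control): cutting WIP lowers cycle time only if throughput holds up. A two-line illustration with made-up numbers:

```python
# Little's law: WIP = TH * CT, so CT = WIP / TH.
wip, th = 120.0, 10.0      # lots in process; lots per hour
print(wip / th)            # cycle time = 12.0 hours
# Halving WIP halves CT only if TH holds at 10 lots/hour;
# if TH also drops to 6 lots/hour, CT only falls to 10 hours.
print(60.0 / 6.0)
```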

  11. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  12. Numerical Computation of Homogeneous Slope Stability

    Directory of Open Access Journals (Sweden)

    Shuangshuang Xiao

    2015-01-01

    To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations for the overall and partial factors of safety. The study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).

  13. Homework schedule: an important factor associated with shorter sleep duration among Chinese school-aged children.

    Science.gov (United States)

    Li, Shenghui; Yang, Qian; Chen, Zhe; Jin, Xingming; Jiang, Fan; Shen, Xiaoming

    2014-09-03

    This study was designed to examine the hypothesis that homework schedule has adverse impacts on Chinese children's sleep-wake habits and sleep duration. A random sample of 19,299 children aged 5.08 to 11.99 years old participated in a large, cross-sectional survey. A parent-administered questionnaire was completed to quantify children's homework schedule and sleep behaviors. Generally, it was demonstrated that a heavier homework schedule was significantly associated with later bedtime, later wake time, and shorter sleep duration. Among all sleep variables, bedtime and sleep duration during weekdays appeared to be most affected by homework schedule, especially homework schedule during weekdays.

  14. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance in computer games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in blowing wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real time in computer games.

  15. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    Science.gov (United States)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Internet applications have become so complicated that a mobile device needs more computing resources to achieve shorter execution times, yet it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e., the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with resources corresponding to the requirements of the applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
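
    At its core, the per-task offloading decision that MOTM/METC formalize compares local execution time against transfer time plus remote execution time. A minimal sketch with assumed parameters (the real algorithms additionally select among heterogeneous links and machines):

```python
def should_offload(cycles, local_speed, remote_speed, data_bits, link_rate):
    """True if offloading finishes sooner than local execution.
    cycles: task workload (CPU cycles); *_speed in cycles/s;
    data_bits uploaded over a link of link_rate bits/s
    (result download time is neglected here)."""
    t_local = cycles / local_speed
    t_offload = data_bits / link_rate + cycles / remote_speed
    return t_offload < t_local

# A 10^9-cycle task, 1 GHz device vs. 10 GHz server, 5 Mb of input
# data on a 20 Mb/s link: 0.25 s + 0.1 s = 0.35 s < 1.0 s locally.
print(should_offload(1e9, 1e9, 1e10, 5e6, 20e6))   # True
```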

  16. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  17. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
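
    The notion of a reliable computation time can be illustrated by iterating the logistic map in double precision alongside a higher-precision reference and finding the first step where the two orbits disagree (a sketch with the fixed-parameter map x ← r·x·(1−x); the tolerance and precision are illustrative):

```python
from decimal import Decimal, getcontext

def reliable_steps(r=4.0, x0=0.1, tol=1e-6, max_steps=200):
    """Iterate x <- r*x*(1-x) in float64 and in 50-digit decimal
    from the same starting value; return the first step at which
    the two orbits differ by more than tol."""
    getcontext().prec = 50
    x_f = x0
    x_d = Decimal(x0)        # exact conversion of the float
    r_d, one = Decimal(r), Decimal(1)
    for n in range(1, max_steps + 1):
        x_f = r * x_f * (1.0 - x_f)
        x_d = r_d * x_d * (one - x_d)
        if abs(x_f - float(x_d)) > tol:
            return n
    return max_steps

print(reliable_steps())      # typically a few dozen steps for r = 4.0
```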

  18. Recent advances in the reconstruction of cranio-maxillofacial defects using computer-aided design/computer-aided manufacturing.

    Science.gov (United States)

    Oh, Ji-Hyeon

    2018-12-01

    With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has been possible to reconstruct the cranio-maxillofacial defect with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing and should be selected in consideration of the material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a pre-operative model printed with pre-operative data, printing a cutting guide or template after virtual surgery, a model after virtual surgery printed with reconstructed data using a mirror image, and manufacturing PSIs by directly obtaining PSI data after reconstruction using a mirror image. By selecting the appropriate design method, manufacturing process, and implant material according to the case, it is possible to obtain a more accurate surgical procedure, reduced operation time, the prevention of various complications that can occur using the traditional method, and predictive results compared to the traditional method.

  19. Soft Real-Time PID Control on a VME Computer

    Science.gov (United States)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating with real-time scheduling priority, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
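
    A minimal discrete PID update of the kind microPID evaluates every 125-microsecond period (sketch only: the gains and the toy plant model are invented, and the real loop reads the LPB position and writes a D/A voltage instead of simulating):

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# 8 kHz loop (dt = 125 us) driving a toy first-order stage model.
dt = 125e-6
pid = PID(kp=2.0, ki=50.0, kd=0.001, dt=dt)
pos = 0.0
for _ in range(4000):                # 0.5 s of simulated time
    u = pid.update(setpoint=1.0, measured=pos)
    pos += (u - pos) * dt / 0.01     # plant: unit gain, tau = 10 ms
print(round(pos, 4))                 # settles near 1.0
```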

  20. Computer games and fine motor skills.

    Science.gov (United States)

    Borecki, Lukasz; Tolstych, Katarzyna; Pokorski, Mieczyslaw

    2013-01-01

    The study seeks to determine the influence of computer games on fine motor skills in young adults, an area of incomplete understanding and verification. We hypothesized that computer gaming could have a positive influence on basic motor skills, such as precision, aiming, speed, dexterity, or tremor. We examined 30 habitual game users (F/M - 3/27; age range 20-25 years) of the highly interactive game Counter Strike, in which players impersonate soldiers on a battlefield, and 30 age- and gender-matched subjects who declared never to play games. Selected tests from the Vienna Test System were used to assess fine motor skills and tremor. The results demonstrate that the game users scored appreciably better than the control subjects in all tests employed. In particular, the players did significantly better in the precision of arm-hand movements, as expressed by a lower time of errors, 1.6 ± 0.6 vs. 2.8 ± 0.6 s, a lower error rate, 13.6 ± 0.3 vs. 20.4 ± 2.2, and a shorter total time of performing a task, 14.6 ± 2.9 vs. 32.1 ± 4.5 s in non-players, respectively. These findings point to a positive influence of computer games on psychomotor functioning. We submit that playing computer games may be a useful training tool to increase fine motor skills and movement coordination.

  1. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
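
    A sketch of the block-maximum estimator discussed above (notation mine): divide a long trajectory into blocks of duration T_b and, for a threshold a, estimate the return time as r(a) = -T_b / ln(1 - q(a)), where q(a) is the fraction of blocks whose maximum exceeds a. Here it is applied to a plain simulation of the Ornstein-Uhlenbeck process used in the paper:

```python
import numpy as np

def return_time(traj, dt, block_len, threshold):
    """Block-maximum estimator r(a) = -T_b / ln(1 - q(a)), with q(a)
    the fraction of blocks of duration T_b whose maximum exceeds a."""
    steps = int(block_len / dt)
    n_blocks = len(traj) // steps
    blocks = traj[:n_blocks * steps].reshape(n_blocks, steps)
    q = np.mean(blocks.max(axis=1) > threshold)
    if q == 0.0:
        return np.inf                   # threshold never reached
    if q == 1.0:
        return block_len                # estimator saturated
    return -block_len / np.log(1.0 - q)

# Toy data: Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW
# (stationary variance 1), integrated by Euler-Maruyama.
rng = np.random.default_rng(1)
dt, n = 1e-2, 10**6
kicks = rng.standard_normal(n - 1) * np.sqrt(2.0 * dt)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - dt) + kicks[i]
print(return_time(x, dt, block_len=100.0, threshold=3.0))  # order 10^2 here
```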

  2. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with parallel computing technology based on CUDA, so it is especially suitable for measurements of particle fields at micrometer and nanometer scales. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, whereas it is 4.9 s using the traditional CPU reconstruction method; the reconstruction speed has been raised by a factor of 40. In other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.

  3. At least 10% shorter C–H bonds in cryogenic protein crystal structures than in current AMBER forcefields

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Yuan-Ping, E-mail: pang@mayo.edu

    2015-03-06

    High resolution protein crystal structures resolved with X-ray diffraction data at cryogenic temperature are commonly used as experimental data to refine forcefields and evaluate protein folding simulations. However, it has been unclear hitherto whether the C–H bond lengths in cryogenic protein structures are sufficiently different from those defined in forcefields to affect protein folding simulations. This article reports the finding that the C–H bonds in high resolution cryogenic protein structures are 10–14% shorter than those defined in current AMBER forcefields, according to 3709 C–H bonds in cryogenic protein structures with resolutions of 0.62–0.79 Å. Also, 20 all-atom, isothermal–isobaric, 0.5-μs molecular dynamics simulations showed that chignolin folded from a fully extended backbone conformation to the native β-hairpin conformation in simulations using AMBER forcefield FF12SB at 300 K, with an aggregated native state population (including standard error) of 10 ± 4%. However, the aggregated native state population reduced to 3 ± 2% in the same simulations except that the C–H bonds were shortened by 10–14%. Furthermore, the aggregated native state populations increased to 35 ± 3% and 26 ± 3% when using FF12MC, which is based on AMBER forcefield FF99, with and without the shortened C–H bonds, respectively. These results show that the 10–14% bond length differences can significantly affect protein folding simulations and suggest that re-parameterization of C–H bonds according to the cryogenic structures could improve the ability of a forcefield to fold proteins in molecular dynamics simulations. - Highlights: • Cryogenic crystal structures are commonly used in computational studies of proteins. • C–H bonds in the cryogenic structures are shorter than those defined in forcefields. • A survey of 3709 C–H bonds shows that the cryogenic bonds are 10–14% shorter.

  4. Time reversibility, computer simulation, algorithms, chaos

    CERN Document Server

    Hoover, William Graham

    2012-01-01

    A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...

  5. Family History of Early Infant Death Correlates with Earlier Age at Diagnosis But Not Shorter Time to Diagnosis for Severe Combined Immunodeficiency

    Directory of Open Access Journals (Sweden)

    Anderson Dik Wai Luk

    2017-07-01

    Full Text Available Background: Severe combined immunodeficiency (SCID) is fatal unless treated with hematopoietic stem cell transplant. Delay in diagnosis is common without newborn screening. A family history of infant death due to infection or of known SCID (FH) has been associated with earlier diagnosis. Objective: The aim of this study was to identify the clinical features that affect the age at diagnosis (AD) and the time to diagnosis of SCID. Methods: From 2005 to 2016, 147 SCID patients were referred to the Asian Primary Immunodeficiency Network. Patients with genetic diagnosis, age at presentation (AP), and AD were selected for study. Results: A total of 88 different SCID gene mutations were identified in 94 patients, including 49 IL2RG mutations, 12 RAG1 mutations, 8 RAG2 mutations, 7 JAK3 mutations, 4 DCLRE1C mutations, 4 IL7R mutations, 2 RFXANK mutations, and 2 ADA mutations. A total of 29 mutations were previously unreported. Eighty-three of the 94 patients fulfilled the selection criteria. Their median AD was 4 months, and the time to diagnosis was 2 months. The commonest SCID was X-linked (n = 57). A total of 29 patients had a positive FH. Candidiasis (n = 27) and bacillus Calmette–Guérin (BCG) vaccine infection (n = 19) were the commonest infections. The median ages at which candidiasis and BCG infection were documented were 3 months and 4 months, respectively. The median absolute lymphocyte count (ALC) was 1.05 × 10⁹/L, with over 88% of patients below 3 × 10⁹/L. Positive FH was associated with earlier AP by 1 month (p = 0.002) and earlier diagnosis by 2 months (p = 0.008), but not with a shorter time to diagnosis (p = 0.494). Candidiasis was associated with later AD by 2 months (p = 0.008) and a longer time to diagnosis by 0.55 months (p = 0.003). BCG infections were not associated with age or time to diagnosis. Conclusion: FH was useful to aid earlier diagnosis but was overlooked by clinicians and not by parents. Similarly, typical clinical features of

  6. Implications of shorter cells in PEP

    International Nuclear Information System (INIS)

    Wiedemann, H.

    1975-01-01

    Further studies on the beam-stay-clear requirements in PEP led to the conclusion that the vertical aperture needed to be enlarged. There are two main reasons for that: Observations at SPEAR indicate that the aperture should be large enough for a fully coupled beam. Full coupling of the horizontal and vertical betatron oscillations occurs not only occasionally, when the energy, tune or betatron function at the interaction point is changed, but also due to the beam–beam effect of two strong colliding beams. The second reason for an increased aperture requirement is the nonlinear perturbation of the particle trajectories by the sextupoles. This perturbation enlarges a fully coupled beam by another 50% to 80%. Both effects, together with a ±5 mm allowance for closed orbit perturbation, result in a vertical beam-stay-clear in the bending magnets of ±4.8 to ±5.6 cm, compared to the present ±2.0 cm. This beam-stay-clear, together with additional space for the vacuum chamber, etc., leads to very costly bending magnets. In this note, a shorter cell length is proposed which would considerably reduce the vertical beam-stay-clear requirements in the bending magnets. 7 figs

  7. Computation of reactor control rod drop time under accident conditions

    International Nuclear Information System (INIS)

    Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan

    1998-01-01

    The computation of reactor control rod drop time under accident conditions rests mainly on establishing forced-vibration equations for the components of the control rod drive line under external forces, together with an equation of motion for the control rod moving in the vertical direction. The two kinds of equations are coupled through the impact effects between the control rod and the surrounding components. The finite difference method is adopted to discretize the vibration equations, and the Wilson-θ method is applied to the time-history problem. The nonlinearity caused by impact is treated iteratively with a modified Newton method. Experimental results are used to confirm the validity and reliability of the computational method. Theoretical and experimental test problems show that the computer program based on this method is applicable and reliable, and the program can act as an effective tool for design-by-analysis and safety analysis of the relevant components
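
    For reference, the Wilson-θ step used above has a compact form. The sketch below is a minimal linear single-degree-of-freedom version (the impact nonlinearity and the modified Newton iteration described in the record are omitted); it integrates m·x'' + c·x' + k·x = f(t), with θ ≥ 1.37 giving unconditional stability.

        import numpy as np

        def wilson_theta(m, c, k, f, dt, theta=1.4, x0=0.0, v0=0.0):
            """Wilson-theta integration of m*x'' + c*x' + k*x = f(t), single DOF.

            f: load sampled at the time steps; the scheme assumes linear
            acceleration over the extended step tau = theta*dt. Returns the
            displacement history.
            """
            n = len(f)
            x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
            x[0], v[0] = x0, v0
            a[0] = (f[0] - c * v0 - k * x0) / m
            tau = theta * dt
            k_eff = k + 6.0 * m / tau**2 + 3.0 * c / tau
            for i in range(n - 1):
                # Effective load at t + tau: extrapolated force plus inertia/damping terms.
                f_tau = f[i] + theta * (f[i + 1] - f[i])
                rhs = (f_tau + m * (6.0 * x[i] / tau**2 + 6.0 * v[i] / tau + 2.0 * a[i])
                             + c * (3.0 * x[i] / tau + 2.0 * v[i] + 0.5 * tau * a[i]))
                x_tau = rhs / k_eff
                a_tau = 6.0 * (x_tau - x[i]) / tau**2 - 6.0 * v[i] / tau - 2.0 * a[i]
                # Interpolate acceleration back to t + dt, then update velocity and position.
                a[i + 1] = a[i] + (a_tau - a[i]) / theta
                v[i + 1] = v[i] + 0.5 * dt * (a[i + 1] + a[i])
                x[i + 1] = x[i] + dt * v[i] + dt**2 * (a[i + 1] + 2.0 * a[i]) / 6.0
            return x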

  8. Real-Time Accumulative Computation Motion Detectors

    Directory of Open Access Journals (Sweden)

    Saturnino Maldonado-Bascón

    2009-12-01

    Full Text Available The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in past years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how real-time performance can be reached once the model is described as a finite state machine. This paper introduces two steps in that direction: (a) a simplification of the general AC method, performed by formally transforming it into a finite state machine; and (b) a hardware implementation in FPGA of such a designed AC module, as well as of an 8-AC motion detector, providing promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.
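
    The charge/discharge idea behind AC can be captured in a toy software analogue (illustrative only, not the paper's finite-state-machine or FPGA design; the threshold, charge ceiling, and decay rate below are arbitrary assumptions):

        import numpy as np

        def accumulative_computation(frames, thresh=15, q_max=255, decay=32):
            """Toy pixel-wise accumulative computation (AC) charge memory.

            Pixels where consecutive frames differ by more than `thresh` are recharged
            to q_max; elsewhere the stored charge decays by `decay` per frame, so the
            returned maps highlight moving objects and fade out behind them.
            """
            prev = frames[0].astype(np.int16)
            charge = np.zeros_like(prev)
            maps = []
            for frame in frames[1:]:
                cur = frame.astype(np.int16)
                moving = np.abs(cur - prev) > thresh
                charge = np.where(moving, q_max, np.maximum(charge - decay, 0))
                maps.append(charge.copy())
                prev = cur
            return maps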

  9. Television viewing, computer use and total screen time in Canadian youth.

    Science.gov (United States)

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  10. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination of features with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  11. 12 CFR 516.10 - How does OTS compute time periods under this part?

    Science.gov (United States)

    2010-01-01

    12 CFR 516.10, Banks and Banking, Office of Thrift Supervision, Department of the Treasury, Application Processing Procedures, § 516.10: How does OTS compute time periods under this part? In computing...

  12. Adolescents' technology and face-to-face time use predict objective sleep outcomes.

    Science.gov (United States)

    Tavernier, Royette; Heissel, Jennifer A; Sladek, Michael R; Grant, Kathryn E; Adam, Emma K

    2017-08-01

    The present study examined both within- and between-person associations between adolescents' time use (technology-based activities and face-to-face interactions with friends and family) and sleep behaviors. We also assessed whether age moderated associations between adolescents' time use with friends and family and sleep. Adolescents wore an actigraph monitor and completed brief evening surveys daily for 3 consecutive days. Adolescents (N=71; mean age=14.50 years old, SD=1.84; 43.7% female) were recruited from 3 public high schools in the Midwest. We assessed 8 technology-based activities (eg, texting, working on a computer), as well as time spent engaged in face-to-face interactions with friends and family, via questions on adolescents' evening surveys. Actigraph monitors assessed 3 sleep behaviors: sleep latency, sleep hours, and sleep efficiency. Hierarchical linear models indicated that texting and working on the computer were associated with shorter sleep, whereas time spent talking on the phone predicted longer sleep. Time spent with friends predicted shorter sleep latencies, while family time predicted longer sleep latencies. Age moderated the association between time spent with friends and sleep efficiency, as well as between family time and sleep efficiency. Specifically, longer time spent interacting with friends was associated with higher sleep efficiency but only among younger adolescents. Furthermore, longer family time was associated with higher sleep efficiency but only for older adolescents. Findings are discussed in terms of the importance of regulating adolescents' technology use and improving opportunities for face-to-face interactions with friends, particularly for younger adolescents. Copyright © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  13. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General-Purpose computing on Graphics Processing Units (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort are presented.
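
    A minimal software analogue of such a texture measure is easy to sketch. The code below scores one image window with a rotation-invariant GLCM contrast using scikit-image (an assumed dependency; the operational PANTEX index and the paper's grid/GPU implementations differ in detail, and scikit-image releases before 0.19 spell the functions greycomatrix/greycoprops):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

        def pantex_like(window, levels=32):
            """Rotation-invariant built-up texture score for one grayscale window.

            Quantizes the window, builds GLCMs for several displacements and angles,
            and returns the minimum contrast, so the score is high only when texture
            is strong in every direction.
            """
            scale = max(float(window.max()), 1.0)
            q = (window.astype(float) / scale * (levels - 1)).astype(np.uint8)
            angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
            glcm = graycomatrix(q, distances=[1, 2], angles=angles,
                                levels=levels, symmetric=True, normed=True)
            return graycoprops(glcm, 'contrast').min()

        # Applied in a moving window over an image, high scores flag built-up areas.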

  14. Effect of the MCNP model definition on the computation time

    International Nuclear Information System (INIS)

    Šunka, Michal

    2017-01-01

    The presented work studies the influence of the method of defining the geometry in the MCNP transport code and its impact on the computational time, including the difficulty of preparing an input file describing the given geometry. Cases using different geometric definitions, including the use of basic 2-dimensional and 3-dimensional objects and their combinations, were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (a more realistic case indicates 37%) for the same results and the same statistical uncertainty. (orig.)

  15. One long chain among shorter chains : the Flory approach revisited

    OpenAIRE

    Raphaël , E.; Fredrickson , G.; Pincus , P.

    1992-01-01

    We consider the mean square end-to-end distance of a long chain immersed in a monodisperse, concentrated solution of shorter, chemically identical chains. In contrast with the earlier work of Flory, no simplifying assumption on the wave vector dependence of the effective potential between segments is made. In order to obtain a closed form expression for the dimension of the long chain, we first derive a general expression for the mean square end-to-end distance of a flexible chain with arbitr...

  16. Association between TV viewing, computer use and overweight, determinants and competing activities of screen time in 4- to 13-year-old children.

    Science.gov (United States)

    de Jong, E; Visscher, T L S; HiraSing, R A; Heymans, M W; Seidell, J C; Renders, C M

    2013-01-01

    TV viewing and computer use is associated with childhood overweight, but it remains unclear as to how these behaviours could best be targeted. The aim of this study was to determine to what extent the association between TV viewing, computer use and overweight is explained by other determinants of overweight, to find determinants of TV viewing and computer use in the home environment and to investigate competing activities. A cross-sectional study was carried out among 4072 children aged 4-13 years in the city of Zwolle, the Netherlands. Data collection consisted of measured height, weight and waist circumference, and a parental questionnaire on socio-demographic characteristics, child's nutrition, physical activity (PA) and sedentary behaviour. Associations were studied with logistic regression analyses, for older and younger children, boys and girls separately. The odds ratio (OR) of being overweight was 1.70 (95% confidence interval (CI): 1.07-2.72) for viewing TV >1.5 h among 4- to 8-year-old children adjusted for all potential confounders. Computer use was not significantly associated with overweight. Determinants of TV viewing were as follows: having >2 TVs in the household (OR: 2.38; 95% CI: 1.66-3.41), a TV in the child's bedroom and not having rules on TV viewing. TV viewing and computer use were both associated with shorter sleep duration and not with less PA. Association between TV viewing and overweight is not explained by socio-demographic variables, drinking sugared drinks and eating snacks. Factors in the home environment influence children's TV viewing. Parents have a central role as they determine the number of TVs, rules and also their children's bedtime. Therefore, interventions to reduce screen time should support parents in making home environmental changes, especially when the children are young.

  17. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  18. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D J

    2009-07-21

    This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3

  19. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collabortations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  20. Real-time brain computer interface using imaginary movements

    DEFF Research Database (Denmark)

    El-Madani, Ahmad; Sørensen, Helge Bjarup Dissing; Kjær, Troels W.

    2015-01-01

    Background: Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper...

  1. Shorter telomeres in peripheral blood mononuclear cells from older persons with sarcopenia: results from an exploratory study

    Directory of Open Access Journals (Sweden)

    Emanuele eMarzetti

    2014-08-01

    Full Text Available Background: Telomere shortening in peripheral blood mononuclear cells (PBMCs) has been associated with biological age and several chronic degenerative diseases. However, the relationship between telomere length and sarcopenia, a hallmark of the aging process, is unknown. The aim of the present study was therefore to determine whether PBMC telomeres obtained from sarcopenic older persons were shorter relative to those of non-sarcopenic peers. We further explored whether PBMC telomere length was associated with frailty, a major clinical correlate of sarcopenia. Methods: Analyses were conducted in 142 persons aged ≥ 65 years referred to a geriatric outpatient clinic (University Hospital). The presence of sarcopenia was established according to the European Working Group on Sarcopenia in Older People criteria, with bioelectrical impedance analysis used for muscle mass estimation. Frailty status was determined by both Fried's criteria (physical frailty, PF) and a modified Rockwood frailty index (FI). Telomere length was measured in PBMCs by quantitative real-time polymerase chain reaction according to the telomere/single copy gene ratio (T/S) method. Results: Among 142 outpatients (mean age 75.0 ± 6.5 years, 59.2% women), sarcopenia was diagnosed in 23 individuals (19.3%). The PF phenotype was detected in 74 participants (52.1%). The average FI score was 0.46 ± 0.17. PBMC telomeres were shorter in sarcopenic subjects (T/S = 0.21; 95% CI: 0.18–0.24) relative to non-sarcopenic individuals (T/S = 0.26; 95% CI: 0.24–0.28; p = 0.01), independent of age, gender, smoking habit, or comorbidity. No significant associations were determined between telomere length and either PF or FI. Conclusion: PBMC telomere length, expressed as T/S values, is shorter in older outpatients with sarcopenia. The cross-sectional assessment of PBMC telomere length is not sufficient to capture the complex, multidimensional syndrome of frailty.

  2. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2, as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
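
    Batcher's network is short enough to reproduce. The sketch below is a standard recursive formulation of odd-even merge sort for power-of-two inputs (illustrative Python, not the STAR code); its compare-exchange schedule is data-independent, which is what allows each stage to be issued as a vector operation on a machine like STAR.

        def batcher_sort(a, lo=0, n=None):
            """Batcher's odd-even merge sort, in place; len(a) must be a power of two."""
            if n is None:
                n = len(a)
            if n > 1:
                m = n // 2
                batcher_sort(a, lo, m)
                batcher_sort(a, lo + m, m)
                _oddeven_merge(a, lo, n, 1)

        def _oddeven_merge(a, lo, n, r):
            step = 2 * r
            if step < n:
                _oddeven_merge(a, lo, n, step)
                _oddeven_merge(a, lo + r, n, step)
                for i in range(lo + r, lo + n - r, step):   # one data-independent stage
                    _compare_exchange(a, i, i + r)
            else:
                _compare_exchange(a, lo, lo + r)

        def _compare_exchange(a, i, j):
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]

        data = [5, 3, 8, 1, 9, 2, 7, 4]
        batcher_sort(data)       # data -> [1, 2, 3, 4, 5, 7, 8, 9]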

  3. Variation in computer time with geometry prescription in monte carlo code KENO-IV

    International Nuclear Information System (INIS)

    Gopalakrishnan, C.R.

    1988-01-01

    In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional savings of computing time is highly desirable. Variation in computation time with box description in KENO for two different fast reactor fuel subassemblies, of FBTR and PFBR, is studied. The k_eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the k_eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)

  4. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  5. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in a STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
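
    The straightforward baseline mentioned above, two one-to-all shortest-path searches, can be sketched in a few lines of Python (symmetric travel costs are assumed here so the same search works from the destination; NTP-A* instead prunes inaccessible links during the searches):

        import heapq

        def dijkstra(adj, src):
            """One-to-all shortest travel times; adj maps node -> [(neighbor, cost), ...]."""
            dist = {src: 0.0}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float('inf')):
                    continue                      # stale queue entry
                for v, w in adj.get(u, ()):
                    nd = d + w
                    if nd < dist.get(v, float('inf')):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        def stp_links(adj, origin, dest, budget):
            """A link (u, v) is in the network-time prism iff the best journey
            origin -> u -> v -> dest fits within the time budget."""
            d_o = dijkstra(adj, origin)
            d_d = dijkstra(adj, dest)
            inf = float('inf')
            return [(u, v) for u, nbrs in adj.items() for v, w in nbrs
                    if d_o.get(u, inf) + w + d_d.get(v, inf) <= budget]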

  6. Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design

    OpenAIRE

    Miller, Owen Dennis

    2012-01-01

    Photonic innovation is becoming ever more important in the modern world. Optical systems are dominating shorter and shorter communications distances, LED's are rapidly emerging for a variety of applications, and solar cells show potential to be a mainstream technology in the energy space. The need for novel, energy-efficient photonic and optoelectronic devices will only increase. This work unites fundamental physics and a novel computational inverse design approach towards such innovation....

  7. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
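
    For context, the generic (structure-agnostic) computation that such special structures speed up is the classical fixed-point iteration for the matrix R of a discrete-time QBD chain. A minimal NumPy sketch, with toy transition blocks assumed purely for illustration:

        import numpy as np

        def qbd_matrix_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
            """Minimal nonnegative solution of R = A0 + R A1 + R^2 A2 for a
            discrete-time QBD (A0: up, A1: local, A2: down transition blocks)."""
            R = np.zeros_like(A0)
            for _ in range(max_iter):
                R_next = A0 + R @ A1 + R @ R @ A2
                if np.max(np.abs(R_next - R)) < tol:
                    return R_next
                R = R_next
            raise RuntimeError("R iteration did not converge")

        # Toy 2x2 blocks of a stable chain (rows of A0 + A1 + A2 sum to one).
        A0 = np.array([[0.1, 0.0], [0.0, 0.1]])
        A1 = np.array([[0.4, 0.1], [0.2, 0.3]])
        A2 = np.array([[0.3, 0.1], [0.1, 0.3]])
        R = qbd_matrix_R(A0, A1, A2)   # queue-length tail probabilities decay like R^n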

  8. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  9. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern real-time operating systems must keep computational costs low even as microprocessors become more powerful each day. It is usual for real-time operating systems for embedded systems to have advanced features to administer the resources of the applications they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic real-time system, it is necessary to know the worst case response time of the real-time tasks ...
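
    The classical computation in question is the response-time analysis fixed-point iteration for fixed-priority preemptive tasks. A minimal sketch of the standard method (not the reduced-cost technique this paper develops; deadlines are assumed equal to periods):

        import math

        def worst_case_response_time(task, higher_prio):
            """Classic response-time analysis for fixed-priority preemptive scheduling.

            Tasks are (C, T) pairs: worst-case execution time and period. Iterates
            R = C + sum(ceil(R/Tj) * Cj) over the higher-priority tasks until the
            fixed point is reached.
            """
            C, T = task
            R = C
            while True:
                R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher_prio)
                if R_next > T:
                    return None                 # deadline miss: task is unschedulable
                if R_next == R:
                    return R
                R = R_next

        # Rate-monotonic example, highest priority first.
        tasks = [(1, 4), (2, 6), (3, 12)]
        for i, task in enumerate(tasks):
            print(task, worst_case_response_time(task, tasks[:i]))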

  10. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  11. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  12. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.

  13. Computer model of the MFTF-B neutral beam Accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel dc Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of huge increases in computing time that result. The model has been successfully extended to include the accel modulator

  14. Challenges and opportunities of cloud computing for atmospheric sciences

    Science.gov (United States)

    Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.

    2016-04-01

    Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is that a research project no longer depends on access to a large local cyberinfrastructure. Cloud computing can avoid maintenance expenses for large supercomputers and has the potential to 'democratize' access to high-performance computing, giving funding bodies flexibility in allocating budgets for the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. Both problems are closely related. Usually uncertainty can be reduced with the availability of computational resources to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources for climate modeling using cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and research.

  15. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems, and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Research efforts are currently under way in this area

  16. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers 10 to 120 hours of computation when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  17. Computationally determining the salience of decision points for real-time wayfinding support

    Directory of Open Access Journals (Sweden)

    Makoto Takemiya

    2012-06-01

    Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.

  18. Computation of transit times using the milestoning method with applications to polymer translocation

    Science.gov (United States)

    Hawk, Alexander T.; Konda, Sai Sriharsha M.; Makarov, Dmitrii E.

    2013-08-01

    Milestoning is an efficient approximation for computing long-time kinetics and thermodynamics of large molecular systems, which are inaccessible to brute-force molecular dynamics simulations. A common use of milestoning is to compute the mean first passage time (MFPT) for a conformational transition of interest. However, the MFPT is not always the experimentally observed timescale. In particular, the duration of the transition path, or the mean transit time, can be measured in single-molecule experiments, such as studies of polymers translocating through pores and fluorescence resonance energy transfer studies of protein folding. Here we show how to use milestoning to compute transit times and illustrate our approach by applying it to the translocation of a polymer through a narrow pore.

  19. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  1. Modern EMC analysis I time-domain computational schemes

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i

  2. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  3. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that

  4. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  5. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, based on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In a cloud computing environment, SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. On the basis of traffic flow data for Haizhu District in Guangzhou City, the proposed model was verified and compared with a serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
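
    A scaled-down, sequential analogue of the GA-SVM idea can be sketched with scikit-learn (an assumed stand-in; the paper trains on real traffic data and parallelizes the genetic algorithm in a cloud environment, and crossover is omitted here for brevity). The per-individual fitness evaluations in the inner loop are the embarrassingly parallel work that cloud resources distribute:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=1)

        def fitness(ind):
            """Cross-validated R^2 of an SVR; individuals encode log10(C), log10(gamma)."""
            C, gamma = 10.0 ** ind
            return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

        low, high = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
        pop = rng.uniform(low, high, size=(20, 2))
        for _ in range(15):                                    # generations
            scores = np.array([fitness(ind) for ind in pop])   # embarrassingly parallel
            parents = pop[np.argsort(scores)[-10:]]            # truncation selection
            children = parents[rng.integers(0, 10, 10)] + rng.normal(0.0, 0.3, (10, 2))
            pop = np.vstack([parents, np.clip(children, low, high)])
        best = pop[np.argmax([fitness(ind) for ind in pop])]   # tuned (C, gamma) in log10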

  6. Climate Data Provenance Tracking for Just-In-Time Computation

    Science.gov (United States)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  7. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirement of applications in computational science continues to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event based simulator to investigate the performance of parallel algorithms executed over the WAN. The event based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real-time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.
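    For reference, the BSP-style cost model against which such simulations are compared charges each superstep its local work w, a communication term h·g for the maximum number h of words sent or received per processor, and a barrier cost L. A minimal sketch (with invented machine parameters, not SIMPAR's internals) follows:

```python
def bsp_time(supersteps, g, L):
    """Predicted parallel run time under the BSP cost model.

    Each superstep contributes its max local work w, plus h*g for
    communicating at most h words per processor, plus barrier latency L.
    """
    return sum(w + h * g + L for (w, h) in supersteps)

# Toy scenario: 4 supersteps of (local work, max words sent/received).
supersteps = [(2.0e6, 1.0e4), (1.5e6, 5.0e4), (2.0e6, 1.0e4), (0.5e6, 2.0e4)]

# LAN-like vs WAN-like machine parameters (cost per word g, barrier cost L):
# the WAN's large g and L can dominate, which is why only coarse-grained
# parallel applications remain viable over the WAN.
print("LAN:", bsp_time(supersteps, g=4.0, L=1.0e4))
print("WAN:", bsp_time(supersteps, g=400.0, L=5.0e7))
```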

  8. A computer model of the MFTF-B neutral beam accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel DC Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of the huge increases in computing time that result. The model has been successfully extended to include the accel modulator.

  9. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    Energy Technology Data Exchange (ETDEWEB)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu [Department of Chemistry, University of Washington, Seattle, Washington 98195 (United States)

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  10. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... unnecessary delays, contact your doctor before the exact time of your exam. Also inform your doctor of ... to be obtained in a shorter period of time, resulting in more detail and additional view capabilities. ...

  11. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... unnecessary delays, contact your doctor before the exact time of your exam. Also inform your doctor of ... to be obtained in a shorter period of time, resulting in more detail and additional view capabilities. ...

  12. Characterization and Computational Modeling of Minor Phases in Alloy LSHR

    Science.gov (United States)

    Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek

    2012-01-01

    The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments with an objective to identify the model limitations and the directions of model enhancement.

  13. The accuracy of molecular bond lengths computed by multireference electronic structure methods

    International Nuclear Information System (INIS)

    Shepard, Ron; Kedziora, Gary S.; Lischka, Hans; Shavitt, Isaiah; Mueller, Thomas; Szalay, Peter G.; Kallay, Mihaly; Seth, Michael

    2008-01-01

    We compare experimental R_e values with computed R_e values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF R_e values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI R_e values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict R_e values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed R_e values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical advantage of using variational wave

  14. The accuracy of molecular bond lengths computed by multireference electronic structure methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, Ron [Chemical Sciences and Engineering Division, Argonne National Laboratory, Argonne, IL 60439 (United States)], E-mail: shepard@tcg.anl.gov; Kedziora, Gary S. [High Performance Technologies Inc., 2435 5th Street, WPAFB, OH 45433 (United States); Lischka, Hans [Institute for Theoretical Chemistry, University of Vienna, Waehringerstrasse 17, A-1090 Vienna (Austria); Shavitt, Isaiah [Department of Chemistry, University of Illinois, 600 S. Mathews Avenue, Urbana, IL 61801 (United States); Mueller, Thomas [Juelich Supercomputer Centre, Research Centre Juelich, D-52425 Juelich (Germany); Szalay, Peter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eoetvoes Lorand University, P.O. Box 32, H-1518 Budapest (Hungary); Kallay, Mihaly [Department of Physical Chemistry and Materials Science, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest (Hungary); Seth, Michael [Department of Chemistry, University of Calgary, 2500 University Drive, N.W., Calgary, Alberta, T2N 1N4 (Canada)

    2008-06-16

    We compare experimental R_e values with computed R_e values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF R_e values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI R_e values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict R_e values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed R_e values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical

  15. A shorter and more specific oral sensitization-based experimental model of food allergy in mice.

    Science.gov (United States)

    Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica

    2012-07-31

    Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks and a subsequent challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with an adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
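    A much-simplified sketch of the approach is given below: a genetic algorithm assigns independent tasks to virtual machines of different speeds to minimize makespan, and raises its mutation rate when the population converges. It ignores the paper's precedence constraints and its adaptive choice among multiple crossover operators; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

n_tasks, n_vms = 40, 5
work = rng.uniform(1.0, 10.0, n_tasks)        # task lengths (arbitrary units)
speed = rng.uniform(0.5, 2.0, n_vms)          # heterogeneous VM speeds

def makespan(assign):
    """Finish time of the most loaded VM under a task-to-VM assignment."""
    loads = np.zeros(n_vms)
    for task, vm in enumerate(assign):
        loads[vm] += work[task] / speed[vm]
    return loads.max()

pop = rng.integers(0, n_vms, size=(30, n_tasks))
for gen in range(200):
    fit = np.array([makespan(ind) for ind in pop])
    elite = pop[np.argsort(fit)[:10]]          # smaller makespan is better
    # Adaptive mutation: mutate more when the population has converged.
    spread = (fit.max() - fit.min()) / fit.mean()
    p_mut = 0.02 if spread > 0.1 else 0.1
    children = []
    for _ in range(20):
        a, b = elite[rng.choice(10, 2, replace=False)]
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        flip = rng.random(n_tasks) < p_mut
        child[flip] = rng.integers(0, n_vms, flip.sum())
        children.append(child)
    pop = np.vstack([elite, children])

print("best makespan:", min(makespan(ind) for ind in pop))
```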

  17. A computer-based time study system for timber harvesting operations

    Science.gov (United States)

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  18. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... the need for sedation and general anesthesia. New technologies that will make even faster scanning possible are becoming increasingly available. For children this means shorter imaging times and less time ...

  19. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  20. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
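    The interior-point scheme can be illustrated on the simplest possible reduction: MacCormack's predictor-corrector applied to 1D linear advection on a periodic grid. This is only the skeleton of the method; VNAP2 applies the unsplit two-dimensional form to the full compressible Navier-Stokes equations, and the grid and pulse below are invented for illustration.

```python
import numpy as np

# MacCormack predictor-corrector for u_t + a u_x = 0 on a periodic domain.
nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / abs(a)
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)             # Gaussian pulse initial condition

c = a * dt / dx
for _ in range(int(0.4 / dt)):
    # Predictor: forward difference.
    u_star = u - c * (np.roll(u, -1) - u)
    # Corrector: backward difference on the predicted values, then average.
    u = 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))

print("pulse peak after advection:", u.max())   # slightly < 1 (dissipation)
```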

  1. A Matter of Computer Time

    Science.gov (United States)

    Celano, Donna; Neuman, Susan B.

    2010-01-01

    Many low-income children do not have the opportunity to develop the computer skills necessary to succeed in our technological economy. Their only access to computers and the Internet--school, afterschool programs, and community organizations--is woefully inadequate. Educators must work to close this knowledge gap and to ensure that low-income…

  2. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Full Text Available Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
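    The product-integration step itself is short to write down: starting from the identity matrix, repeatedly multiply by (I + A(u) du), where A is the transition intensity (hazard) matrix, and integrate the resulting occupation probabilities to obtain expected state occupation times. The sketch below uses an illness-death model with constant, invented hazards so the answer can be checked against the matrix exponential; the product integral itself accepts arbitrary time-varying hazards on the grid.

```python
import numpy as np

# Illness-death model: states 0=healthy, 1=ill, 2=dead, constant hazards.
h01, h02, h12 = 0.10, 0.02, 0.25
tau, n = 10.0, 10000
dt = tau / n

# Transition intensity matrix A; each row sums to zero.
A = np.array([[-(h01 + h02), h01,  h02],
              [0.0,          -h12, h12],
              [0.0,           0.0, 0.0]])

I = np.eye(3)
P = I.copy()
occupation = np.zeros(3)
for _ in range(n):
    occupation += P[0] * dt        # time spent in each state, starting healthy
    P = P @ (I + A * dt)           # product-integration step: P <- P (I + A du)

print("P(0, tau) from state 0:", P[0])
print("expected state occupation times:", occupation)
```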

  3. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  4. Long-Term Costs and Health Consequences of Issuing Shorter Duration Prescriptions for Patients with Chronic Health Conditions in the English NHS.

    Science.gov (United States)

    Martin, Adam; Payne, Rupert; Wilson, Edward Cf

    2018-06-01

    The National Health Service (NHS) in England spends over £9 billion on prescription medicines dispensed in primary care, of which over two-thirds is accounted for by repeat prescriptions. Recently, GPs in England have been urged to limit the duration of repeat prescriptions, where clinically appropriate, to 28 days to reduce wastage and hence contain costs. However, shorter prescriptions will increase transaction costs and thus may not be cost saving. Furthermore, there is evidence to suggest that shorter prescriptions are associated with lower adherence, which would be expected to lead to lower clinical benefit. The objective of this study is to estimate the cost-effectiveness of 3-month versus 28-day repeat prescriptions from the perspective of the NHS. We adapted three previously developed UK policy-relevant models, incorporating transaction (dispensing fees, prescriber time) and drug wastage costs associated with 3-month and 28-day prescriptions in three case studies: antihypertensive medications for prevention of cardiovascular events; drugs to improve glycaemic control in patients with type 2 diabetes; and treatments for depression. In all cases, 3-month prescriptions were associated with lower costs and higher QALYs than 28-day prescriptions. This is driven by assumptions that higher adherence leads to improved disease control, lower costs and improved QALYs. Longer repeat prescriptions may be cost-effective compared with shorter ones. However, the quality of the evidence base on which this modelling is based is poor. Any policy rollout should be within the context of a trial such as a stepped-wedge cluster design.

  5. Real-time data-intensive computing

    Energy Technology Data Exchange (ETDEWEB)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  6. GRAPHIC, time-sharing magnet design computer programs at Argonne

    International Nuclear Information System (INIS)

    Lari, R.J.

    1974-01-01

    This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)

  7. Dyslexics' faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation.

    Science.gov (United States)

    Jaffe-Dax, Sagi; Frenkel, Or; Ahissar, Merav

    2017-01-24

    Dyslexia is a prevalent reading disability whose underlying mechanisms are still disputed. We studied the neural mechanisms underlying dyslexia using a simple frequency-discrimination task. Though participants were asked to compare the two tones in each trial, implicit memory of previous trials affected their responses. We hypothesized that implicit memory decays faster among dyslexics. We tested this by increasing the temporal intervals between consecutive trials, and by measuring the behavioral impact and ERP responses from the auditory cortex. Dyslexics showed a faster decay of implicit memory effects on both measures, with similar time constants. Finally, faster decay of implicit memory also characterized the impact of sound regularities in benefitting dyslexics' oral reading rate. Their benefit decreased faster as a function of the time interval from the previous reading of the same non-word. We propose that dyslexics' shorter neural adaptation paradoxically accounts for their longer reading times, since it reduces their temporal window of integration of past stimuli, resulting in noisier and less reliable predictions for both simple and complex stimuli. Less reliable predictions limit their acquisition of reading expertise.

  8. FRANTIC: a computer code for time dependent unavailability analysis

    International Nuclear Information System (INIS)

    Vesely, W.E.; Goldberg, F.F.

    1977-03-01

    The FRANTIC computer code evaluates the time dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time dependent modeling of periodic testing which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times and periodic equations are developed for the testing and repair contributions. Human errors and common mode failures can be included by assigning an appropriate constant probability for the contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be simply performed and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process
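    The flavor of such a calculation can be conveyed with a heavily stripped-down sketch: a standby component with exponential failures is tested every T hours, and its unavailability saw-tooths between tests. FRANTIC's test overrides, detection inefficiencies, and test-caused failures are all omitted here, the component is treated as fully unavailable during the post-test repair window, and the rates are invented for illustration.

```python
import numpy as np

lam = 1e-4        # failure rate (per hour)
T = 720.0         # test interval (hours)
t_repair = 8.0    # downtime following each test

def unavailability(t):
    tau = t % T                              # time since the last test
    if t >= T and tau < t_repair:
        return 1.0                           # in the test/repair window
    return 1.0 - np.exp(-lam * tau)          # undetected-failure probability

times = np.linspace(0.0, 3 * T, 100000)
q = np.array([unavailability(t) for t in times])
print("unavailability just before a test:", 1.0 - np.exp(-lam * T))
print("time-averaged unavailability:", q.mean())   # ~ lam*T/2 + repair share
```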

  9. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy

  10. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.
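    The forward pass of such a system is a Sugeno-type fuzzy inference: fuzzify the input with membership functions, fire each rule, and take the firing-strength-weighted average of the rule consequents. The sketch below is an untrained two-rule, one-input example with hand-set parameters; ANFIS would additionally learn both the membership and the consequent parameters from the flow series.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(x_prev):
    """One-step-ahead forecast from the previous flow value.

    Two rules over a single input ("low flow", "high flow"), each with a
    first-order Sugeno consequent y_r = a_r * x + b_r.  All parameters are
    set by hand purely for illustration.
    """
    w_low = gauss(x_prev, c=20.0, s=15.0)      # rule 1 firing strength
    w_high = gauss(x_prev, c=80.0, s=15.0)     # rule 2 firing strength
    y_low = 0.9 * x_prev + 3.0                 # consequent of rule 1
    y_high = 0.8 * x_prev + 15.0               # consequent of rule 2
    return (w_low * y_low + w_high * y_high) / (w_low + w_high)

for flow in (15.0, 50.0, 90.0):
    print(flow, "->", round(tsk_predict(flow), 2))
```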

  11. Computed tomographic pelvimetry in English bulldogs.

    Science.gov (United States)

    Dobak, Tetyda P; Voorhout, George; Vernooij, Johannes C M; Boroffka, Susanne A E B

    2018-05-31

    English bulldogs have been reported to have a high incidence of dystocia and caesarean section is often performed electively in this breed. A narrow pelvic canal is the major maternal factor contributing to obstructive dystocia. The objective of this cross-sectional study was to assess the pelvic dimensions of 40 clinically healthy English bulldogs using computed tomography pelvimetry. A control group consisting of 30 non-brachycephalic dogs that underwent pelvic computed tomography was retrospectively collected from the patient archive system. Univariate analysis of variance was used to compare computed tomography pelvimetry of both groups and the effects of weight and gender on the measurements. In addition, ratios were obtained to address pelvic shape differences. A significantly (P = 0.00) smaller pelvic size was found in English bulldogs compared to the control group for all computed tomography measurements: width and length of the pelvis, pelvic inlet and caudal pelvic aperture. The pelvic conformation was significantly different between the groups: English bulldogs had an overall shorter pelvis and pelvic canal and a narrower pelvic outlet. Weight had a significant effect on all measurements, whereas gender had a significant effect on only some (4/11) of the pelvic dimensions. Our findings prove that English bulldogs have a generally reduced pelvic size as well as a shorter pelvis and narrower pelvic outlet when compared to non-brachycephalic breeds. We suggest that some of our measurements may serve as a baseline for pelvic dimensions in English bulldogs and may be useful for future studies on dystocia in this breed. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Personal computer as a part of radiometric gauges

    International Nuclear Information System (INIS)

    Machaj, B.

    1992-01-01

    Following the worldwide trend toward the application of microcomputers in measuring instrumentation, a standard personal computer compatible with the IBM PC/XT was employed in the isotope gauges recently developed in the Institute. These are: a beta backscatter coating thickness gauge, an X-ray fluorescence multilayer coating thickness gauge with an X-ray tube as the radiation source, and an automatic airborne dust pollution gauge. A simple interface containing an address decoder, a programmable pulse counter, and some buffers and latches was sufficient to connect the computer to the measuring head of the gauges. Thanks to the possibility of programming in a higher-level language (Turbo Pascal), the gauges were developed in a much shorter time than classical electronics would have required. The experience gained during development of the gauges shows that, even in the case of a simple instrument such as the beta backscatter coating thickness gauge, it is more economical to employ a ready-made personal computer than to develop dedicated electronics for it, unless the gauge is produced in large numbers. The use of a personal computer is particularly advantageous when processing of the signal or control of the measuring cycle is more sophisticated, as in the case of the two other gauges. Block diagrams of the gauges and their interfaces are presented in the paper. In the case of the airborne dust pollution gauge, a flow chart of the computer program is also given. (author). 3 refs, 4 figs

  13. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods; the simulation results are also compared to examine the difference between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce that time. In general, the SGS method converges faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may require prohibitively long processing times to converge. Yet the PJ method reduces the computational time to some extent for large grid sizes.
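    A minimal sketch of the Jacobi iteration for the 3D Poisson problem is given below (in Python/NumPy rather than the paper's MATLAB, with an invented grid and source). Because each update depends only on the previous iterate, the loop body is trivially data-parallel, which is exactly what makes the Jacobi method attractive for parallelization despite converging more slowly than Gauss-Seidel.

```python
import numpy as np

# Jacobi iteration for  -laplace(phi) = rho  on the unit cube with
# homogeneous Dirichlet boundaries and grid spacing h.
n = 32
h = 1.0 / (n - 1)
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0 / h**3      # point charge at the centre

phi = np.zeros_like(rho)
for iteration in range(2000):
    phi_new = phi.copy()
    # Vectorized Jacobi update: average of the six neighbours plus source.
    phi_new[1:-1, 1:-1, 1:-1] = (
        phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
        phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
        phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
        h**2 * rho[1:-1, 1:-1, 1:-1]) / 6.0
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        break
    phi = phi_new

print("iterations:", iteration + 1,
      " centre potential:", phi[n // 2, n // 2, n // 2])
```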

  14. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... may find the taste mildly unpleasant even if mixed with soda or juice; however, most patients can ... and less time required to hold still in order to produce clear images. Also, shorter scan times ...

  15. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... scanner. top of page How does the procedure work? In many ways, CT scanning is like other ... For children this means shorter imaging times and less time required to hold still in order to ...

  16. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time, and non-polynomial, reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
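    For orientation, the quantity being computed can be pinned down with a brute-force reference implementation: sum, over all surviving edge subsets that leave the graph connected, the probability of that subset. This is the exponential-time definition, not Gilbert's recursion or the invariant-based speedup discussed in the paper.

```python
from itertools import combinations

def connected(n, edges):
    """Union-find connectivity test on vertices 0..n-1."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]     # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def edge_reliability(n, edges, p):
    """All-terminal reliability: probability that the surviving edges
    connect all n vertices when each edge independently survives with
    probability p.  Enumerates all 2^|E| edge subsets."""
    m = len(edges)
    total = 0.0
    for k in range(m + 1):
        for subset in combinations(edges, k):
            if connected(n, subset):
                total += p ** k * (1.0 - p) ** (m - k)
    return total

# 4-cycle: survives as long as at most one edge fails.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_reliability(4, cycle4, p=0.9))   # p^4 + 4 p^3 (1-p) = 0.9477
```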

  17. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
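    The Gram–Schmidt (QR) step at the heart of such calculations is easy to show on a small system. The sketch below runs the standard Benettin-style algorithm on the Hénon map; the paper's codes apply the same repeated QR re-orthonormalization to much larger tangent-space matrices from Lennard–Jones dynamics.

```python
import numpy as np

# Gram-Schmidt (QR) computation of Lyapunov exponents for the Henon map.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
log_r = np.zeros(2)
n_steps = 100000

for _ in range(n_steps):
    J = np.array([[-2.0 * a * x, 1.0],
                  [b,            0.0]])      # tangent map at the current point
    x, y = 1.0 - a * x * x + y, b * x        # advance the trajectory
    Q, R = np.linalg.qr(J @ Q)               # re-orthonormalize the vectors
    log_r += np.log(np.abs(np.diag(R)))      # accumulate stretching factors

print("Lyapunov exponents:", log_r / n_steps)   # ~ [0.42, -1.62] for Henon
```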

  18. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  19. Reconditioning of Computers

    DEFF Research Database (Denmark)

    Bundgaard, Anja Marie

    2017-01-01

    Fast technological development and short innovation cycles have resulted in shorter life spans for certain consumer electronics. Reconditioning is proposed as one of the strategies to close the loops in the circular economy and increase the lifespan of products and components. The paper therefore...... qualitative research interviews. The case study of the contextual barriers indicated that trust from the buyer and the seller of the used computers was important for a viable business. If trust was not in place, it could be a potential barrier. Furthermore, economy obsolescence and a lack of influence...

  20. The role of romantic attraction and conflict resolution in predicting shorter and longer relationship maintenance among adolescents.

    Science.gov (United States)

    Appel, Israel; Shulman, Shmuel

    2015-04-01

    This study examined the role of romantic attraction and conflict resolution patterns in shorter and longer relationship maintenance among adolescent couples. Data were used from 55 couples aged 15-18 years. Partners completed the Romantic Attraction scale and were observed negotiating a disagreement. Three and 6 months later, they were asked to report whether they were still together. Findings indicated that partners' romantic attraction and the tendency to minimize disagreements during interaction predicted shorter relationship maintenance. In contrast, longer relationship maintenance was predicted by partners' capability to resolve conflicts constructively in a positive atmosphere. Findings are embedded and discussed within Fisher's (2004) evolutionary theory of love.

  1. Computing Refined Buneman Trees in Cubic Time

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

    Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n⁵) and a space consumption of O(n⁴). In this paper, we present an algorithm with running time O(n³) and space consumption O(n²). The improved complexity of our

  2. Shorter Decentralized Attribute-Based Encryption via Extended Dual System Groups

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-01-01

    Full Text Available Decentralized attribute-based encryption (ABE) is a special form of multiauthority ABE systems, in which no central authority and global coordination are required other than creating the common reference parameters. In this paper, we propose a new decentralized ABE in prime-order groups by using extended dual system groups. We formulate some assumptions used to prove the security of our scheme. Our proposed scheme is fully secure under the standard k-Lin assumption in the random oracle model and can support any monotone access structures. Compared with existing fully secure decentralized ABE systems, our construction has shorter ciphertexts and secret keys. Moreover, fast decryption is achieved in our system, in which ciphertexts can be decrypted with a constant number of pairings.

  3. Computational intelligence in time series forecasting theory and engineering applications

    CERN Document Server

    Palit, Ajoy K

    2005-01-01

    Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.

  4. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Kolmanovsky Ilya

    1998-01-01

    Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
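    The finite-determination result suggests a simple constructive procedure, sketched below for the disturbance-free special case x⁺ = Ax with constraints Gx ≤ g: keep adding the constraints G Aᵏx ≤ g until linear programs certify that the next batch is redundant. With disturbances, each step would additionally shrink the right-hand side by a Pontryagin-difference support term. The system and constraint set here are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])                  # stable 2x2 example system
G = np.vstack([np.eye(2), -np.eye(2)])       # |x1| <= 1, |x2| <= 1
g = np.ones(4)

Gk, gk = G.copy(), g.copy()                  # constraints accumulated so far
Ak = A.copy()                                # current power A^{t+1}
for t in range(50):
    # Is  G A^{t+1} x <= g  already implied by the current constraints?
    redundant = True
    for i in range(G.shape[0]):
        c = G[i] @ Ak
        # Maximize c @ x over the current polytope (linprog minimizes -c).
        res = linprog(-c, A_ub=Gk, b_ub=gk, bounds=[(None, None)] * 2)
        if res.status == 0 and -res.fun > g[i] + 1e-9:
            redundant = False
    if redundant:
        print(f"O_inf finitely determined with {Gk.shape[0]} inequalities")
        break
    Gk = np.vstack([Gk, G @ Ak])             # add the new constraints
    gk = np.concatenate([gk, g])
    Ak = Ak @ A
```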

  5. Ultrasonic divergent-beam scanner for time-of-flight tomography with computer evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Glover, G H

    1978-03-02

    The rotatable ultrasonic divergent-beam scanner is designed for time-of-flight tomography with computer evaluation. It can measure parameters that are important for characterizing the structure of soft tissues, e.g., the time of flight as a function of the velocity distribution along a given propagation path (the method is analogous to transaxial X-ray tomography). Moreover, it permits quantitative measurement of two-dimensional velocity distributions and may therefore be applied to serial examinations for detecting cancer of the breast. Digital memories as well as analog-digital hybrid systems are suitable as the evaluating computers.

  6. Computing moment to moment BOLD activation for real-time neurofeedback

    Science.gov (United States)

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
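    The core of the approach, an incrementally updated GLM whose one-step-ahead residual is scaled into a statistic, can be sketched in a few lines. The simulation below uses only intercept-plus-drift nuisance regressors and a crude running variance estimate; it is a schematic of the idea under those assumptions, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated voxel timeseries: baseline + slow scanner drift + noise, with a
# burst of "activation" added mid-run (an illustrative stand-in for fMRI data).
n = 200
t = np.arange(n)
signal = 100.0 + 0.05 * t + rng.normal(0.0, 1.0, n)
signal[120:140] += 3.0

# Nuisance design: intercept and linear drift, fitted incrementally.
X = np.column_stack([np.ones(n), t])
XtX = np.zeros((2, 2))
Xty = np.zeros(2)
ss_res, m = 0.0, 0

for i in range(n):
    x_i, y_i = X[i], signal[i]
    if m >= 2:                               # design is full rank
        beta = np.linalg.solve(XtX, Xty)
        resid = y_i - x_i @ beta             # one-step-ahead residual
        if m >= 10:                          # enough history for a statistic
            # Scale the residual by its predictive standard deviation so the
            # feedback value is a statistic, not a raw intensity change.
            sigma2 = ss_res / (m - 2)
            var_pred = sigma2 * (1.0 + x_i @ np.linalg.solve(XtX, x_i))
            z = resid / np.sqrt(var_pred)
            if i in (60, 130):
                print(f"scan {i:3d}: feedback statistic {z:+.2f}")
        ss_res += resid ** 2
    # Incremental update of the normal equations with the new acquisition.
    XtX += np.outer(x_i, x_i)
    Xty += x_i * y_i
    m += 1
```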

  7. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to demonstrate key elements of feasibility for a high speed automated time domain terahertz computed axial tomography (TD-THz CT) non destructive...

  8. Marital disruption is associated with shorter salivary telomere length in a probability sample of older adults.

    Science.gov (United States)

    Whisman, Mark A; Robustelli, Briana L; Sbarra, David A

    2016-05-01

    Marital disruption (i.e., marital separation, divorce) is associated with a wide range of poor mental and physical health outcomes, including increased risk for all-cause mortality. One biological intermediary that may help explain the association between marital disruption and poor health is accelerated cellular aging. This study examines the association between marital disruption and salivary telomere length in a United States probability sample of adults ≥50 years of age. Participants were 3526 individuals who participated in the 2008 wave of the Health and Retirement Study. Telomere length assays were performed using quantitative real-time polymerase chain reaction (qPCR) on DNA extracted from saliva samples. Health and lifestyle factors, traumatic and stressful life events, and neuroticism were assessed via self-report. Linear regression analyses were conducted to examine the associations between predictor variables and salivary telomere length. Based on their marital status data in the 2006 wave, people who were separated or divorced had shorter salivary telomeres than people who were continuously married or had never been married, and the association between marital disruption and salivary telomere length was not moderated by gender or neuroticism. Furthermore, the association between marital disruption and salivary telomere length remained statistically significant after adjusting for demographic and socioeconomic variables, neuroticism, cigarette use, body mass, traumatic life events, and other stressful life events. Additionally, results revealed that currently married adults with a history of divorce evidenced shorter salivary telomeres than people who were continuously married or never married. Accelerated cellular aging, as indexed by telomere shortening, may be one pathway through which marital disruption is associated with morbidity and mortality. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The return trip is felt shorter only postdictively: A psychophysiological study of the return trip effect [corrected].

    Directory of Open Access Journals (Sweden)

    Ryosuke Ozawa

    Full Text Available The return trip often seems shorter than the outward trip even when the distance and actual time are identical. To date, studies on the return trip effect have failed to confirm its existence in a situation that is ecologically valid in terms of environment and duration. In addition, physiological influences as part of fundamental timing mechanisms in daily activities have not been investigated in the time perception literature. The present study compared round-trip and non-round-trip conditions in an ecological situation. Time estimation in real time and postdictive estimation were used to clarify the situations where the return trip effect occurs. Autonomic nervous system activity was evaluated from the electrocardiogram using the Lorenz plot to demonstrate the relationship between time perception and physiological indices. The results suggest that the return trip effect is caused only postdictively. Electrocardiographic analysis revealed that the two experimental conditions induced different responses in the autonomic nervous system, particularly in sympathetic nervous function, and that parasympathetic function correlated with postdictive timing. To account for the main findings, the discrepancy between the two time estimates is discussed in the light of timing strategies, i.e., prospective and retrospective timing, which reflect different emphasis on attention and memory processes. Also each timing method, i.e., the verbal estimation, production or comparative judgment, has different characteristics such as the quantification of duration in time units or knowledge of the target duration, which may be responsible for the discrepancy. The relationship between postdictive time estimation and the parasympathetic nervous system is also discussed.

  10. ATM/RB1 mutations predict shorter overall survival in urothelial cancer.

    Science.gov (United States)

    Yin, Ming; Grivas, Petros; Emamekhoo, Hamid; Mendiratta, Prateek; Ali, Siraj; Hsu, JoAnn; Vasekar, Monali; Drabick, Joseph J; Pal, Sumanta; Joshi, Monika

    2018-03-30

    Mutations of DNA repair genes, e.g. ATM/RB1 , are frequently found in urothelial cancer (UC) and have been associated with better response to cisplatin-based chemotherapy. Further external validation of the prognostic value of ATM/RB1 mutations in UC can inform clinical decision making and trial designs. In the discovery dataset, ATM/RB1 mutations were present in 24% of patients and were associated with shorter OS (adjusted HR 2.67, 95% CI, 1.45-4.92, p = 0.002). There was a higher mutation load in patients carrying ATM/RB1 mutations (median mutation load: 6.7 versus 5.5 per Mb, p = 0.072). In the validation dataset, ATM/RB1 mutations were present in 22.2% of patients and were non-significantly associated with shorter OS (adjusted HR 1.87, 95% CI, 0.97-3.59, p = 0.06) and higher mutation load (median mutation load: 8.1 versus 7.2 per Mb, p = 0.126). Exome sequencing data of 130 bladder UC patients from The Cancer Genome Atlas (TCGA) dataset were analyzed as a discovery cohort to determine the prognostic value of ATM/RB1 mutations. Results were validated in an independent cohort of 81 advanced UC patients. Cox proportional hazard regression analysis was performed to calculate the hazard ratio (HR) and 95% confidence interval (CI) to compare overall survival (OS). ATM/RB1 mutations may be a biomarker of poor prognosis in unselected UC patients and may correlate with higher mutational load. Further studies are required to determine factors that can further stratify prognosis and evaluate predictive role of ATM/RB1 mutation status to immunotherapy and platinum-based chemotherapy.

  11. Computer simulation of energy dissipation from near threshold knock-ons in Fe3Al

    International Nuclear Information System (INIS)

    Schade, G.; Leighly, H.P. Jr.; Edwards, D.R.

    1976-01-01

    A computer program has been developed and used to model a series of knock-ons near the damage energy threshold in a micro-crystallite of the ordered alloy Fe3Al. The primary paths of energy removal from the knock-on site were found to be along the [100] and [111] directions by means of focusing-type collision chains. The relative importance of either direction as an energy removal path varied with the initial knock-on direction and also changed with time during the course of the knock-on event. The time rate of energy removal was found to be greatest in the [111] direction due to the shorter interatomic distances between atoms along this direction.

  12. Why shorter half-times of repair lead to greater damage in pulsed brachytherapy

    International Nuclear Information System (INIS)

    Fowler, J.F.

    1993-01-01

    Pulsed brachytherapy consists of replacing continuous irradiation at low dose rate with a series of medium dose-rate fractions in the same overall time and to the same total dose. For example, pulses of 1 Gy given every 2 hr or 2 Gy given every 4 hr would deliver the same 70 Gy in 140 hr as continuous irradiation at 0.5 Gy/hr. If higher dose rates are used, even with gaps between the pulses, the biological effects are always greater. Provided that dose rates in the pulse do not exceed 3 Gy/hr, and provided that pulses are given as often as every 2 hr, the inevitable increases of biological effect are no larger than a few percent (of biologically effective dose or extrapolated response dose). However, these increases are more likely to exceed 10% (and thus become clinically significant) if the half-time of repair of sublethal damage is short (less than 1 hr) rather than long. This somewhat unexpected finding is explained in detail here. The rise and fall of Biologically Effective Dose (and hence of Relative Effectiveness, for a constant dose in each pulse) is calculated during and after single pulses, assuming a range of values of T1/2, the half-time of sublethal damage repair. The area under each curve is proportional to Biologically Effective Dose and therefore to log cell kill. Pulses at 3 Gy/hr do yield greater biological effect (dose x integrated Relative Effectiveness) than lower dose-rate pulses or continuous irradiation at 0.5 Gy/hr. The contrast is greater for the short T1/2 of 0.5 hr than for the longer T1/2 of 1.5 hr. More biological damage will be done (compared with traditional low dose-rate brachytherapy) in tissues with short T1/2 (0.1-1 hr) than in tissues with longer T1/2 values. 8 refs., 3 figs

  13. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this Phase 2 project, we propose to develop, construct, and deliver to NASA a computed axial tomography time-domain terahertz (CT TD-THz) non destructive...

  14. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) obtained with a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
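
    A simple way to see what a reliable computation time means in practice is to iterate the map in ordinary double precision alongside a high-precision reference and record the step at which the two trajectories separate. The sketch below is not the authors' code; the parameter schedule r_of, the tolerance, and the sample size are illustrative assumptions.

        import numpy as np
        from mpmath import mp, mpf

        mp.dps = 50  # 50-digit reference trajectory

        def r_of(n):
            # assumed time-dependent parameter schedule, standing in for the VPLM
            return 3.6 + 0.3 * np.sin(n / 50.0)

        def reliable_time(x0, tol=1e-3, n_max=2000):
            # steps until the float64 iterate drifts from the high-precision
            # reference by more than tol -- the reliable computation time Tc
            xd = float(x0)
            xm = mpf(float(x0))
            for n in range(n_max):
                r = float(r_of(n))
                xd = r * xd * (1.0 - xd)
                xm = mpf(r) * xm * (1 - xm)
                if abs(float(xm) - xd) > tol:
                    return n + 1
            return n_max

        # mean Tc over a sample of initial values, echoing the paper's experiments
        samples = np.random.default_rng(0).uniform(0.1, 0.9, 200)
        print(f"mean Tc = {np.mean([reliable_time(x) for x in samples]):.1f} steps")

    Consistent with the record above, individual initial values give different Tc values, while the sample mean settles as the number of samples grows.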

  15. The effect of shorter exposure versus prolonged exposure on treatment outcome in Tourette syndrome and chronic tic disorders - an open trial.

    Science.gov (United States)

    van de Griendt, Jolande M T M; van Dijk, Maarten K; Verdellen, Cara W J; Verbraak, Marc J P M

    2018-01-11

    Exposure and response prevention has been shown to be an effective strategy and is considered a first-line intervention in the behavioural treatment of tic disorders. Prior research demonstrated significant tic reduction after 12 two-hour sessions. In this open trial, the question is addressed whether exposure sessions of shorter duration yield outcomes different from those of these prolonged sessions for patients with tic disorders. A total of 29 patients diagnosed with Tourette syndrome (TS) or chronic tic disorder were treated with shorter exposure sessions (1 h), and these data were compared with the data from a study of prolonged exposure (2 h, n = 21). Outcome was measured with the Yale Global Tic Severity Scale (YGTSS). Results suggest that, after taking the difference in illness duration between the two groups into account, the effectiveness of shorter exposure sessions is not inferior to that of prolonged exposure. This suggests that treatment with shorter exposure might be more efficient and could reach more patients. Future research is needed to gain more insight into the mechanisms underlying the efficacy of behavioural treatments for tics.

  16. 78 FR 38949 - Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response

    Science.gov (United States)

    2013-06-28

    ... exposed to various forms of cyber attack. In some cases, attacks can be thwarted through the use of...-3383-01] Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response... systems will be successfully attacked. When a successful attack occurs, the job of a Computer Security...

  17. CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series

    International Nuclear Information System (INIS)

    Antonopoulos Domis, M.

    1978-03-01

    The program CROSAT computes auto- and cross-spectra, transfer and coherence functions directly from two discrete time series, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use, the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith and, perhaps with minor modifications, with any other hardware system. (author)
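
    The spectral quantities CROSAT produces are standard estimates that modern libraries expose directly. The following is a minimal SciPy re-creation of that kind of output, not the CROSAT code itself; the test signals, sampling rate, and segment length are arbitrary choices.

        import numpy as np
        from scipy import signal

        fs = 100.0                                # sampling frequency, Hz
        t = np.arange(0, 60, 1 / fs)
        x = np.sin(2 * np.pi * 5 * t) + np.random.randn(t.size)        # series 1
        y = np.sin(2 * np.pi * 5 * t + 0.5) + np.random.randn(t.size)  # series 2

        f, Pxx = signal.welch(x, fs, nperseg=1024)        # auto-spectrum of x
        f, Pxy = signal.csd(x, y, fs, nperseg=1024)       # cross-spectrum of x and y
        f, Cxy = signal.coherence(x, y, fs, nperseg=1024)

        H = Pxy / Pxx                                     # empirical transfer function x -> y
        print(f"coherence at 5 Hz ~ {Cxy[np.argmin(abs(f - 5))]:.2f}")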

  18. A Variable Impacts Measurement in Random Forest for Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jae-Hee Hur

    2017-01-01

    Full Text Available Recently, the importance of mobile cloud computing has increased. Mobile devices can collect personal data from various sensors within a short period of time, and sensor-based data contain valuable information about users. Advanced computation power and data analysis technology based on cloud computing provide an opportunity to classify massive sensor data into given labels. The random forest algorithm is known as a black-box model whose internal process is hard to interpret. In this paper, we propose a method that analyzes variable impact in the random forest algorithm to clarify which variables affect classification accuracy the most. We apply the Shapley Value with random forest to analyze variable impact. Under the assumption that every variable cooperates as a player in a cooperative game, the Shapley Value fairly distributes the payoff among the variables. Our proposed method calculates the relative contributions of the variables within the classification process. We analyze the influence of the variables and rank them by their effect on classification accuracy. Our proposed method demonstrates its suitability for data interpretation in black-box models like random forests, making the algorithm applicable in mobile cloud computing environments.
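
    The Shapley value of a feature can be computed exactly by treating each feature coalition as a team and the classifier's test accuracy as the payoff. The sketch below illustrates that game-theoretic bookkeeping on a small public dataset; it retrains a forest on every coalition, which is tractable only for a handful of features, and it is a simplified stand-in for the paper's method rather than a reproduction of it.

        from itertools import combinations
        from math import factorial

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = load_iris(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        d = X.shape[1]

        def payoff(S):
            # test accuracy of a forest trained on feature subset S
            if not S:
                return np.mean(yte == np.bincount(ytr).argmax())
            rf = RandomForestClassifier(n_estimators=50, random_state=0)
            rf.fit(Xtr[:, S], ytr)
            return rf.score(Xte[:, S], yte)

        v = {S: payoff(list(S)) for k in range(d + 1)
             for S in combinations(range(d), k)}

        for i in range(d):
            others = [j for j in range(d) if j != i]
            phi = sum(factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                      * (v[tuple(sorted(S + (i,)))] - v[S])
                      for k in range(d) for S in combinations(others, k))
            print(f"feature {i}: Shapley value = {phi:.4f}")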

  19. Moderate Exercise Allows for shorter Recovery Time in Critical Limb Ischemia

    Directory of Open Access Journals (Sweden)

    Anne Lejay

    2017-07-01

    Full Text Available Whether and how moderate exercise might allow for accelerated limb recovery in chronic critical limb ischemia (CLI) remains to be determined. Chronic CLI was surgically induced in mice, and the effect of moderate exercise (training five times per week over a 3-week period) was investigated. Tissue damage and functional scores were assessed on the 4th, 6th, 10th, 20th, and 30th day after surgery. Mice were sacrificed 48 h after the last exercise session in order to assess muscle structure, mitochondrial respiration, calcium retention capacity, oxidative stress, and transcript levels of genes encoding proteins controlling mitochondrial functions (PGC1α, PGC1β, NRF1) and anti-oxidant defense markers (SOD1, SOD2, catalase). CLI resulted in tissue damage and impaired functional scores. Mitochondrial respiration and calcium retention capacity were decreased in the ischemic limb of the non-exercised group (Vmax = 7.11 ± 1.14 vs. 9.86 ± 0.86 mmol O2/min/g dw, p < 0.001; CRC = 7.01 ± 0.97 vs. 11.96 ± 0.92 microM/mg dw, p < 0.001, respectively). Moderate exercise reduced tissue damage, improved functional scores, and restored mitochondrial respiration and calcium retention capacity in the ischemic limb (Vmax = 9.75 ± 1.00 vs. 9.82 ± 0.68 mmol O2/min/g dw; CRC = 11.36 ± 1.33 vs. 12.01 ± 1.24 microM/mg dw, respectively). Exercise also enhanced the transcript levels of PGC1α, PGC1β, and NRF1, as well as SOD1, SOD2, and catalase. Moderate exercise restores mitochondrial respiration and calcium retention capacity, and it has beneficial functional effects in chronic CLI, likely by stimulating reactive oxygen species-induced biogenesis and anti-oxidant defenses. These data support further development of exercise therapy even in advanced peripheral arterial disease.

  20. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    Science.gov (United States)

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of the energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
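
    The embarrassingly parallel structure the authors exploit is easy to see in code: a central-difference Hessian needs 2 x 3N independent gradient evaluations, one per displaced geometry, and each can run on a different grid node. The sketch below uses a toy analytic gradient and a local process pool as stand-ins for quantum-chemistry gradient jobs and the grid; it is not the NUMFREQ@Grid implementation.

        import numpy as np
        from multiprocessing import Pool

        def gradient(x):
            # toy analytic gradient of f = x^2 + y^4 + sin(z); in NUMFREQ@Grid
            # each call would be one gradient job dispatched to a grid node
            return np.array([2 * x[0], 4 * x[1] ** 3, np.cos(x[2])])

        def displaced_gradient(args):
            x0, i, h, sign = args
            x = x0.copy()
            x[i] += sign * h
            return (i, sign, gradient(x))

        def numerical_hessian(x0, h=1e-4, nproc=4):
            n = len(x0)
            jobs = [(x0, i, h, s) for i in range(n) for s in (+1, -1)]
            with Pool(nproc) as pool:          # 2n independent jobs in parallel
                results = {(i, s): g for i, s, g in pool.map(displaced_gradient, jobs)}
            H = np.array([(results[i, +1] - results[i, -1]) / (2 * h) for i in range(n)])
            return 0.5 * (H + H.T)             # symmetrize away the numerical noise

        if __name__ == "__main__":
            print(numerical_hessian(np.array([1.0, 0.5, 0.3])))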

  1. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    Science.gov (United States)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  2. On some methods for improving time of reachability sets computation for the dynamic system control problem

    Science.gov (United States)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach applies parallel algorithms to the data structures of the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing and also show how computing time depends on the data structure used.

  3. A computationally simple and robust method to detect determinism in a time series

    DEFF Research Database (Denmark)

    Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.

    2006-01-01

    We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than the signal, and that remains robust for time-varying signals.

  4. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To support high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. Firstly, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results prove that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability, and high efficiency.
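
    The core computation such an architecture accelerates is the FFT, pointwise complex multiply, IFFT pipeline. As a software reference point, the sketch below runs a generic Wiener restoration in the frequency domain; it illustrates the dataflow the hardware implements, not the paper's design, and the PSF and noise constant are arbitrary.

        import numpy as np

        def wiener_deconvolve(blurred, psf, k=0.01):
            # one forward 2-D FFT, a per-pixel complex multiply, one inverse FFT
            H = np.fft.fft2(psf, s=blurred.shape)
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter, k ~ noise/signal
            return np.real(np.fft.ifft2(G * W))

        # toy example: blur an impulse image with a 5x5 box PSF, then restore it
        img = np.zeros((64, 64))
        img[32, 32] = 1.0
        psf = np.ones((5, 5)) / 25.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        restored = wiener_deconvolve(blurred, psf)
        print(np.unravel_index(restored.argmax(), restored.shape))  # ~ (32, 32)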

  5. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    Science.gov (United States)

    He, Aiguo

    2011-01-01

    Computer systems are useful for improving real-time, interactive distance education activities, especially when a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions in the virtual classrooms. The problem is that within…

  6. A real-time computational model for estimating kinematics of ankle ligaments.

    Science.gov (United States)

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan

    2016-01-01

    An accurate assessment of ankle ligament kinematics is crucial in understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. It takes as inputs three position variables that can be measured by sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and it outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparing it with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model concur with those from the published studies but are sensitive to ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.

  7. Real time simulation of large systems on mini-computer

    International Nuclear Information System (INIS)

    Nakhle, Michel; Roux, Pierre.

    1979-01-01

    Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The step size of the usual integration methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that will take implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the time and memory resources of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1. [fr

  8. Real-time computing in environmental monitoring of a nuclear power plant

    International Nuclear Information System (INIS)

    Deme, S.; Lang, E.; Nagy, Gy.

    1987-06-01

    A real-time computing method is described for calculating the environmental radiation exposure due to a nuclear power plant both during normal operation and in accident conditions. The effects of the Gaussian plume are recalculated every ten minutes based on meteorological parameters measured at heights of 20 and 120 m as well as on emission data. During normal operation the quantity of radioactive materials released through the stacks is measured and registered, whereas in an accident the source strength is unknown and the calculated relative data are normalized to the values measured at the eight environmental monitoring stations. The doses due to noble gases and to dry and wet deposition, as well as the time integral of the 131I concentration, are calculated and stored by a professional personal computer for 720 points within an 11 km radius of the plant. (author)
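
    The plume model referred to here is the standard Gaussian formula with ground reflection, re-evaluated as new meteorology and emission data arrive. The sketch below is a generic single-receptor version, not the plant's software; the source strength, wind speed, and dispersion widths are invented, and in practice sigma_y and sigma_z would come from stability-class tables at each receptor's downwind distance.

        import numpy as np

        def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
            # ground-reflected Gaussian plume concentration (e.g. Bq/m^3) for
            # source strength Q (Bq/s), wind speed u (m/s), stack height H (m),
            # crosswind offset y and receptor height z (m); sigma_y, sigma_z are
            # the dispersion widths at the receptor's downwind distance
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # re-evaluated every ten minutes with fresh meteorological and emission data
        print(plume_concentration(Q=1e9, u=3.0, y=50.0, z=1.5, H=120.0,
                                  sigma_y=80.0, sigma_z=40.0))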

  9. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.

  10. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Full Text Available Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
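
    A minimal monitor/analyze loop of the kind described can be sketched with the psutil library, sampling CPU and memory at run-time. This is a generic illustration only; the paper's RTM instruments libraries and hardware performance counters, which psutil does not reach, and the sampling interval and adaptation step are assumptions.

        import os
        import psutil

        def run_time_monitor(pid, interval=1.0, samples=10):
            # periodically sample a target process and the host's cores,
            # mimicking the monitor -> analyze -> optimize loop of an RTM
            proc = psutil.Process(pid)
            for _ in range(samples):
                cpu = proc.cpu_percent(interval=interval)   # % over the interval
                rss = proc.memory_info().rss / 2**20        # resident set, MiB
                cores = psutil.cpu_percent(percpu=True)     # per-core utilization
                print(f"cpu={cpu:5.1f}%  rss={rss:7.1f} MiB  cores={cores}")
                # an RTM would analyze these samples here and, e.g., adjust
                # core allocation or service configuration for QoS tradeoffs

        run_time_monitor(os.getpid(), samples=3)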

  11. Computations of concentration of radon and its decay products against time. Computer program; Obliczanie koncentracji radonu i jego produktow rozpadu w funkcji czasu. Program komputerowy

    Energy Technology Data Exchange (ETDEWEB)

    Machaj, B. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    This research aims to develop a device for continuous monitoring of radon in the air by measuring the alpha activity of radon and its short-lived decay products. The influence of variations in the alpha activity of radon and its daughters on the measured results is important and requires knowledge of how these activities vary with time. A computer program in Turbo Pascal was therefore developed to perform the computations using the known decay relations; the program is adapted for IBM PC computers. The program enables computation of the activity of 222Rn and its daughter products 218Po, 214Pb, 214Bi and 214Po every 1 min within the period of 0-255 min, for any state of radiation equilibrium between the radon and its daughter products. The program also computes the alpha activity of 222Rn + 218Po + 214Po against time and the total alpha activity over a selected interval of time. The results of the computations are stored on the computer hard disk in ASCII format and can be read by a graphics program, e.g. DrawPerfect, to make diagrams. Equations employed for computation of the alpha activity of radon and its decay products as well as a description of the program functions are given. (author). 2 refs, 4 figs.
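
    The underlying mathematics is the Bateman solution of the linear decay chain, which can be written compactly with a matrix exponential. The sketch below recomputes the activities of 222Rn and its short-lived daughters minute by minute, as the record describes; it is a re-derivation from the published half-lives, not the Turbo Pascal program itself.

        import numpy as np
        from scipy.linalg import expm

        # decay constants (1/min): 222Rn, 218Po, 214Pb, 214Bi
        half_lives = np.array([3.8235 * 24 * 60, 3.098, 26.8, 19.9])
        lam = np.log(2) / half_lives

        # linear decay chain dN/dt = A @ N
        A = np.diag(-lam) + np.diag(lam[:-1], k=-1)

        def activities(N0, t_minutes):
            # Bateman solution via the matrix exponential; returns lambda * N
            return lam * (expm(A * t_minutes) @ N0)

        N0 = np.array([1e6, 0.0, 0.0, 0.0])   # pure radon at t = 0 (atoms)
        for t in range(0, 256, 60):           # within the 0-255 min window
            act = activities(N0, t)
            # 214Po follows 214Bi within ~164 us, so its activity ~ act[3];
            # total alpha activity = 222Rn + 218Po + 214Po
            print(t, act.round(2), "alpha total:", (act[0] + act[1] + act[3]).round(2))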

  12. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  13. Pre-hospital electrocardiogram triage with tele-cardiology support is associated with shorter time-to-balloon and higher rates of timely reperfusion even in rural areas: data from the Bari- Barletta/Andria/Trani public emergency medical service 118 registry on primary angioplasty in ST-elevation myocardial infarction.

    Science.gov (United States)

    Brunetti, Natale Daniele; Di Pietro, Gaetano; Aquilino, Ambrogio; Bruno, Angela I; Dellegrottaglie, Giulia; Di Giuseppe, Giuseppe; Lopriore, Claudio; De Gennaro, Luisa; Lanzone, Saverio; Caldarola, Pasquale; Antonelli, Gianfranco; Di Biase, Matteo

    2014-09-01

    We report the preliminary data from a regional registry of ST-elevation myocardial infarction (STEMI) patients treated with primary angioplasty in Apulia, Italy; the region is covered by a single public health-care service, a single public emergency medical service (EMS), and a single tele-medicine service provider. Two hundred and ninety-seven consecutive patients with STEMI transferred by the regional free public EMS 1-1-8 for primary PCI were enrolled in the study; 123 underwent pre-hospital electrocardiogram (ECG) triage with tele-cardiology support and were directly referred for primary PCI, while the remainder were transferred by 1-1-8 ambulances for primary percutaneous coronary intervention (PCI) without a tele-medicine ECG diagnosis (already hospitalised patients, or emergency rooms without tele-medicine support). The time from the first ECG diagnostic for STEMI to balloon was recorded. Pre-hospital triage with tele-cardiology ECG in an EMS registry from an area with more than one and a half million inhabitants was associated with shorter time-to-balloon and higher rates of timely treated patients, even in 'rural' areas. © The European Society of Cardiology 2014.

  14. Applicability of the shorter ‘Bangladesh regimen’ in high multidrug-resistant tuberculosis settings

    Directory of Open Access Journals (Sweden)

    Giovanni Sotgiu

    2017-03-01

    Full Text Available In spite of the recent introduction of two new drugs (delamanid and bedaquiline) and a few repurposed compounds to treat multidrug-resistant and extensively drug-resistant tuberculosis (MDR- and XDR-TB), clinicians are facing increasing problems in designing effective regimens in severe cases. Recently a 9 to 12-month regimen (known as the 'Bangladesh regimen') proved to be effective in treating MDR-TB cases. It included an initial phase of 4 to 6 months of kanamycin, moxifloxacin, prothionamide, clofazimine, pyrazinamide, high-dose isoniazid, and ethambutol, followed by 5 months of moxifloxacin, clofazimine, pyrazinamide, and ethambutol. However, recent evidence from Europe and Latin America identified prevalences of resistance to the first-line drugs in this regimen (ethambutol and pyrazinamide) exceeding 60%, and to prothionamide exceeding 50%. Furthermore, the proportions of resistance to the two most important pillars of the regimen, quinolones and kanamycin, were higher than 40%. Overall, only 14 out of 348 adult patients (4.0%) were susceptible to all of the drugs composing the regimen, and were therefore potentially suitable for the 'shorter regimen'. A shorter, cheaper, and well-tolerated MDR-TB regimen is likely to impact the number of patients treated and improve adherence if prescribed to the right patients through the systematic use of rapid MTBDRsl testing.

  15. Accelerated stress testing in a time-driven product development process

    NARCIS (Netherlands)

    Lu, Y.; Loh, H.T.; Brombacher, A.C.; Ouden, den P.H.

    2000-01-01

    In order to compete in the market, companies have to produce the right products with a shorter time to market and at lower costs than before. Shorter time to market requires the product development process (PDP) to change the way of working from the classical ‘wait and react’ to anticipating and

  16. The Educator´s Approach to Media Training and Computer Games within Leisure Time of School-children

    OpenAIRE

    MORAVCOVÁ, Dagmar

    2009-01-01

    The paper describes possible ways of approaching computer games playing as part of leisure time of school-children and deals with the significance of media training in leisure time. At first it specifies the concept of leisure time and its functions, then shows some positive and negative effects of the media. It further describes classical computer games, the problem of excess computer game playing and means of prevention. The paper deals with the educator's personality and the importance of ...

  17. The association between post-traumatic stress disorder and shorter telomere length: A systematic review and meta-analysis.

    Science.gov (United States)

    Li, Xuemei; Wang, Jiang; Zhou, Jianghua; Huang, Pan; Li, Jiping

    2017-08-15

    Post-traumatic stress disorder (PTSD) is a common psychiatric disorder, which may accelerate aging. Many studies have investigated the association between telomere length and PTSD, but their results are contradictory. Therefore, a meta-analysis was conducted to give a more precise estimate of the relationship between telomere length and PTSD. We systematically reviewed the PubMed, PsycINFO, Medline (Ovid SP) and EMBASE databases for all articles on the association between telomere length and PTSD. Data were summarized using a random-effects meta-analysis. Heterogeneity among studies was examined using Cochran's Q statistic and I². Five eligible studies containing 3,851 participants were included in our meta-analysis. Shorter telomere length was significantly associated with PTSD, with a mean difference of -0.19 (95% CI: -0.27, -0.01; P < 0.001) and an I² of 96%. The results of subgroup analysis demonstrated that shorter telomere length was significantly associated with PTSD across all gender groups, with a mean difference of -0.15 (95% CI: -0.29, -0.01; P = 0.04) for women and a mean difference of -0.17 (95% CI: -0.19, -0.15; P < 0.001) for men. Meanwhile, shorter telomere length was significantly associated with sexual assault (mean difference = -0.15, 95% CI: -0.29, -0.01) and childhood trauma (mean difference = -0.08, 95% CI: -0.19, -0.07), but not combat (mean difference = -0.39, 95% CI: -0.83, 0.05). Compared to individuals without PTSD, individuals with PTSD have shorter telomere length, which has implications for early intervention and timely treatment to prevent future adverse health outcomes. Copyright © 2017. Published by Elsevier B.V.

  18. Computer use, sleep duration and health symptoms

    DEFF Research Database (Denmark)

    Nuutinen, Teija; Roos, Eva; Ray, Carola

    2014-01-01

    OBJECTIVES: This study investigated whether computer use is associated with health symptoms through sleep duration among 15-year olds in Finland, France and Denmark. METHODS: We used data from the WHO cross-national Health Behaviour in School-aged Children study collected in Finland, France and Denmark in 2010, including data on 5,402 adolescents (mean age 15.61 (SD 0.37), girls 53%). Symptoms assessed included feeling low, irritability/bad temper, nervousness, headache, stomachache, backache, and feeling dizzy. We used structural equation modeling to explore the mediating effect of sleep duration on the association between computer use and symptom load. RESULTS: Adolescents slept approximately 8 h a night and computer use was approximately 2 h a day. Computer use was associated with shorter sleep duration and higher symptom load. Sleep duration partly mediated the association between computer use and symptom load.

  19. Shorter exposures to harder X-rays trigger early apoptotic events in Xenopus laevis embryos.

    Directory of Open Access Journals (Sweden)

    JiaJia Dong

    Full Text Available BACKGROUND: A long-standing conventional view of radiation-induced apoptosis is that increased exposure results in augmented apoptosis in a biological system, with a threshold below which radiation doses do not cause any significant increase in cell death. The consequences of this belief impact the extent to which malignant diseases and non-malignant conditions are therapeutically treated and how radiation is used in combination with other therapies. Our research challenges the current dogma of dose-dependent induction of apoptosis and establishes a new parallel paradigm to the photoelectric effect in biological systems. METHODOLOGY/PRINCIPAL FINDINGS: We explored how the energy of individual X-ray photons and exposure time, both factors that determine the total dose, influence the occurrence of cell death in early Xenopus embryo. Three different experimental scenarios were analyzed and morphological and biochemical hallmarks of apoptosis were evaluated. Initially, we examined cell death events in embryos exposed to increasing incident energies when the exposure time was preset. Then, we evaluated the embryo's response when the exposure time was augmented while the energy value remained constant. Lastly, we studied the incidence of apoptosis in embryos exposed to an equal total dose of radiation that resulted from increasing the incoming energy while lowering the exposure time. CONCLUSIONS/SIGNIFICANCE: Overall, our data establish that the energy of the incident photon is a major contributor to the outcome of the biological system. In particular, for embryos exposed under identical conditions and delivered the same absorbed dose of radiation, the response is significantly increased when shorter bursts of more energetic photons are used. These results suggest that biological organisms display properties similar to the photoelectric effect in physical systems and provide new insights into how radiation-mediated apoptosis should be understood and

  20. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation is essentially an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  1. A Real-Time Plagiarism Detection Tool for Computer-Based Assessments

    Science.gov (United States)

    Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P.

    2018-01-01

    Aim/Purpose: The aim of this article is to develop a tool to detect plagiarism in real time amongst students being evaluated for learning in a computer-based assessment setting. Background: Cheating or copying all or part of source code of a program is a serious concern to academic institutions. Many academic institutions apply a combination of…

  2. Instructional Advice, Time Advice and Learning Questions in Computer Simulations

    Science.gov (United States)

    Rey, Gunter Daniel

    2010-01-01

    Undergraduate students (N = 97) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without instructional advice) x 2 (with or without time advice) x 2…

  3. Phased searching with NEAT in a time-scaled framework: experiments on a computer-aided detection system for lung nodules.

    Science.gov (United States)

    Tan, Maxine; Deklerck, Rudi; Cornelis, Jan; Jansen, Bart

    2013-11-01

    In the field of computer-aided detection (CAD) systems for lung nodules in computed tomography (CT) scans, many image features are presented and many artificial neural network (ANN) classifiers with various structural topologies are analyzed; frequently, the classifier topologies are selected by trial-and-error experiments. To avoid these trial-and-error approaches, we present a novel classifier that evolves ANNs using genetic algorithms, called "Phased Searching with NEAT in a Time or Generation-Scaled Framework", integrating feature selection with the classification task. We analyzed our method's performance on 360 CT scans from the public Lung Image Database Consortium database. We compare our method's performance with other more-established classifiers, namely regular NEAT, Feature-Deselective NEAT (FD-NEAT), fixed-topology ANNs, and support vector machines (SVMs), using ten-fold cross-validation experiments on all 360 scans. The results show that the proposed "Phased Searching" method performs better and faster than regular NEAT, better than FD-NEAT, and achieves sensitivities at 3 and 4 false positives (FP) per scan that are comparable with the fixed-topology ANN and SVM classifiers, but with fewer input features. It achieves a detection sensitivity of 83.0±9.7% with an average of 4 FP/scan, for nodules with a diameter greater than or equal to 3 mm. It also evolves networks with shorter evolution times and with lower complexities than regular NEAT (p = 0.026 and p < …, respectively). Analysis of the networks evolved by regular NEAT and by our approach shows that our approach searches for good solutions in lower-dimensional search spaces and evolves networks without superfluous structure. We have presented a novel approach that combines feature selection with the evolution of ANN topology and weights. Compared with the original threshold-based Phased Searching method of Green, our method requires fewer parameters and converges to the optimal network complexity required for the classification task at hand. The results of the

  4. Alternative majority-voting methods for real-time computing systems

    Science.gov (United States)

    Shin, Kang G.; Dolter, James W.

    1989-01-01

    Two techniques that provide a compromise between the high time overhead in maintaining synchronous voting and the difficulty of combining results in asynchronous voting are proposed. These techniques are specifically suited for real-time applications with a single-source/single-sink structure that need instantaneous error masking. They provide a compromise between a tightly synchronized system in which the synchronization overhead can be quite high, and an asynchronous system which lacks suitable algorithms for combining the output data. Both quorum-majority voting (QMV) and compare-majority voting (CMV) are most applicable to distributed real-time systems with single-source/single-sink tasks. All real-time systems eventually have to resolve their outputs into a single action at some stage. The development of the advanced information processing system (AIPS) and other similar systems serve to emphasize the importance of these techniques. Time bounds suggest that it is possible to reduce the overhead for quorum-majority voting to below that for synchronous voting. All the bounds assume that the computation phase is nonpreemptive and that there is no multitasking.
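
    The flavor of quorum-style voting can be conveyed in a few lines: the voter acts as soon as enough matching replica outputs arrive, without waiting for stragglers. This is a generic single-source/single-sink illustration, not the paper's QMV or CMV algorithms; the reply stream and quorum size are invented.

        from collections import Counter

        def quorum_majority_vote(replies, quorum):
            # return the first value reported by at least `quorum` replicas,
            # or None if no quorum ever forms; replies arrive asynchronously
            tally = Counter()
            for value in replies:
                tally[value] += 1
                if tally[value] >= quorum:
                    return value          # mask the faulty minority instantly
            return None

        # three redundant channels computing the same actuator command
        print(quorum_majority_vote([42, 17, 42], quorum=2))   # -> 42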

  5. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained equal at 82%; idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.

  6. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on CT access time, overtime and idle time; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained equal at 82%; idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.
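
    The kind of capacity experiment described in these two records can be reproduced in miniature with a discrete-event simulation library such as simpy. The sketch below queues patients at a single CT scanner and reports the mean access time; the arrival rate, scan slot, and single-scanner capacity are invented parameters, not the hospital's data.

        import random
        import simpy

        SCAN_MIN = 15                      # assumed scan slot, minutes

        def patient(env, ct, waits):
            arrive = env.now
            with ct.request() as req:
                yield req                        # queue for the CT scanner
                waits.append(env.now - arrive)   # access time for this request
                yield env.timeout(SCAN_MIN)

        def run(n_patients=200, interarrival=17.0):
            random.seed(1)
            env = simpy.Environment()
            ct = simpy.Resource(env, capacity=1)
            waits = []

            def source():
                for _ in range(n_patients):
                    env.process(patient(env, ct, waits))
                    yield env.timeout(random.expovariate(1.0 / interarrival))

            env.process(source())
            env.run()
            print(f"mean access time: {sum(waits) / len(waits):.1f} min")

        run()    # rerun with different capacity/arrival scenarios to compare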

  7. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of reactor core against coolant flow blockage, supervision of clad hot spot, supervision of undesirable power excursion, power control and control logic for fuel handling systems. The most frequent cause of fault in safety critical real time computer system is traced to fuzziness in requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are the popular languages used for modeling the requirement specification. A typical safety critical real time computer system for supervising the reactor core of prototype fast breeder reactor (PFBR) against flow blockage is taken as case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring the safety are summarized

  8. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... order to produce clear images. Also, shorter scan times will make it easier for children to hold their breath during critical parts of the exam. CT scanning is painless, noninvasive ...

  9. High lung cancer surgical procedure volume is associated with shorter length of stay and lower risks of re-admission and death

    DEFF Research Database (Denmark)

    Møller, Henrik; Riaz, Sharma P; Holmberg, Lars

    2016-01-01

    It is debated whether treating cancer patients in high-volume surgical centres can lead to improvement in outcomes, such as shorter length of hospital stay, decreased frequency and severity of post-operative complications, decreased re-admission, and decreased mortality. The dataset for this analysis … to their geographical population. Higher-volume hospitals had shorter length of stay, and the odds of re-admission were 15% lower in the highest hospital-volume quintile compared with the lowest quintile. Mortality risks were 1% after 30 d and 3% after 90 d. Patients from hospitals in the highest volume quintile had

  10. The relationship between TV/computer time and adolescents' health-promoting behavior: a secondary data analysis.

    Science.gov (United States)

    Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee

    2008-03-01

    Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to disadvantaged health status or unhealthy behavior among adolescents. However, studies of the relationships between watching TV or playing computer games and adolescents' adoption of health-promoting behavior are limited. This study aimed to discover the relationship between time spent watching TV or on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis from part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends, respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends, respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being boys, being overweight, living in a rural area, and being middle-school students were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.

  11. Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application

    Directory of Open Access Journals (Sweden)

    Molina Gary N Garcia

    2003-01-01

    Full Text Available Brain-computer interfacing is a growing field of interest in human-computer interaction, with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments with two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as training with and without feedback, were also developed. The results obtained show that it is possible to achieve good classification of simple mental tasks, in view of command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.
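
    A joint time-frequency-space feature pipeline of the general kind described can be assembled from off-the-shelf pieces: a spectrogram per channel (time-frequency) concatenated across electrodes (space), then a linear classifier. The sketch below runs on synthetic stand-in data and is not the authors' decorrelation method; the band limits, sampling rate, and classifier are assumptions.

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 128                                    # assumed sampling rate, Hz
        n_trials, n_channels, n_samples = 60, 8, 2 * fs

        def tfs_features(trial):
            # per channel (space): average 8-30 Hz spectrogram power (time-frequency)
            feats = []
            for ch in trial:
                f, t, Sxx = spectrogram(ch, fs, nperseg=64, noverlap=32)
                band = Sxx[(f >= 8) & (f <= 30)]
                feats.append(np.log(band.mean(axis=1) + 1e-12))
            return np.concatenate(feats)

        # synthetic two-class trials standing in for recorded EEG
        X = rng.standard_normal((n_trials, n_channels, n_samples))
        y = np.repeat([0, 1], n_trials // 2)
        X[y == 1, 0] += np.sin(2 * np.pi * 10 * np.arange(n_samples) / fs)  # class cue

        F = np.array([tfs_features(tr) for tr in X])
        print(cross_val_score(LinearDiscriminantAnalysis(), F, y, cv=5).mean())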

  12. Split delivery vehicle routing problem with time windows: a case study

    Science.gov (United States)

    Latiffianti, E.; Siswanto, N.; Firmandani, R. A.

    2018-04-01

    This paper aims to implement an extension of the VRP, the so-called split delivery vehicle routing problem (SDVRP), with time windows in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicated that the produced solution was better than the routing and scheduling the firm had been using, and it reduced fuel cost by 9% through the shorter total distance travelled by the shuttle buses.
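
    The "split delivery" core of such a formulation can be isolated in a few lines of linear programming: each stop's demand may be divided among several buses subject to capacity. The PuLP sketch below models only that assignment layer on toy data; the routing decisions and time-window constraints of the paper's full ILP are deliberately omitted, and all names and numbers are invented.

        import pulp

        # toy data: 2 buses, 3 pickup points; a stop's demand may be split
        stops, buses = ["A", "B", "C"], ["bus1", "bus2"]
        demand = {"A": 20, "B": 35, "C": 15}
        capacity = {"bus1": 40, "bus2": 40}
        cost = {("A", "bus1"): 4, ("A", "bus2"): 6, ("B", "bus1"): 7,
                ("B", "bus2"): 3, ("C", "bus1"): 5, ("C", "bus2"): 5}

        x = pulp.LpVariable.dicts("pax", (stops, buses), lowBound=0)
        prob = pulp.LpProblem("split_delivery_core", pulp.LpMinimize)
        prob += pulp.lpSum(cost[s, b] * x[s][b] for s in stops for b in buses)
        for s in stops:      # every worker is picked up; splitting is allowed
            prob += pulp.lpSum(x[s][b] for b in buses) == demand[s]
        for b in buses:      # bus capacity
            prob += pulp.lpSum(x[s][b] for s in stops) <= capacity[b]
        prob.solve()
        print({(s, b): x[s][b].value() for s in stops for b in buses})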

  13. Computational model for real-time determination of tritium inventory in a detritiation installation

    International Nuclear Information System (INIS)

    Bornea, Anisia; Stefanescu, Ioan; Zamfirache, Marius; Stefan, Iuliana; Sofalca, Nicolae; Bidica, Nicolae

    2008-01-01

    Full text: At ICIT Rm. Valcea an experimental pilot plant was built with the main objective of developing a technology for detritiation of heavy water processed in the CANDU-type reactors of the nuclear power plant at Cernavoda, Romania. Because the aspects related to safeguards and safety for such a detritiation installation are of great importance, a complex computational model has been developed. The model allows real-time calculation of the tritium inventory in a working installation. The applied detritiation technology is catalyzed isotopic exchange coupled with cryogenic distillation. Computational models for non-steady working conditions have been developed for each process of isotopic exchange. By coupling these processes, the tritium inventory can be determined in real time. The computational model was developed based on the experience gained with the pilot installation. The model uses a set of parameters specific to the isotopic exchange processes. These parameters were experimentally determined in the pilot installation. The model is included in the monitoring system and uses as input data the parameters acquired in real time from the automation system of the pilot installation. A friendly interface has been created to visualize the final results as data or graphs. (authors)

  14. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study, in which standard PC hardware and Real-Time Linux are applied to the real-time computer simulation of software for a nuclear simulator, is presented in this paper. The feasibility prototype was established with the existing software in the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we identified that the approach can enable computer-based predictive simulation, owing to both the remarkable improvement in real-time performance and the reduced effort for real-time implementation under standard PC hardware and Real-Time Linux environments.

  15. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphic processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  16. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  17. Construction time of PWRs

    Energy Technology Data Exchange (ETDEWEB)

    Carajilescov, Pedro; Moreira, Joao M.L., E-mail: pedro.carajilescov@ufabc.edu.b, E-mail: joao.moreira@ufabc.edu.b [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Center of Engineering, Modeling and Applied Social Sciences

    2011-07-01

    The cost of electricity generated by nuclear power is greatly affected by the capital cost, which is dependent on the construction time of the plant. This work analyses the construction time of PWRs in several countries with different market structure and licensing experience. Countries which succeeded to establish a more collaborative environment among utilities, constructors, regulators, and energy planners through effective partnerships were able to build PWRs in shorter times. The construction time in Germany, France and Russia was around 80 months and in Japan, about 60 months. The envelope of 95% of the cases includes a range between 50 and 250 months of construction time. The evaluations show that construction time of PWRs has been longer for countries that did not hold the technology to build their own reactors, and depended on contracts with foreign suppliers. The nominal power of the reactors was considered a measure of plant size, technology complexity and standardization. Countries with standardized reactor designs (France, Japan and Russia) were able to build plants in shorter times. (author)

  18. Construction time of PWRs

    International Nuclear Information System (INIS)

    Carajilescov, Pedro; Moreira, Joao M.L.

    2011-01-01

    The cost of electricity generated by nuclear power is greatly affected by the capital cost, which is dependent on the construction time of the plant. This work analyses the construction time of PWRs in several countries with different market structure and licensing experience. Countries which succeeded to establish a more collaborative environment among utilities, constructors, regulators, and energy planners through effective partnerships were able to build PWRs in shorter times. The construction time in Germany, France and Russia was around 80 months and in Japan, about 60 months. The envelope of 95% of the cases includes a range between 50 and 250 months of construction time. The evaluations show that construction time of PWRs has been longer for countries that did not hold the technology to build their own reactors, and depended on contracts with foreign suppliers. The nominal power of the reactors was considered a measure of plant size, technology complexity and standardization. Countries with standardized reactor designs (France, Japan and Russia) were able to build plants in shorter times. (author)

  19. A neural computational model for animal's time-to-collision estimation.

    Science.gov (United States)

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretical formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simple computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronally implemented estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
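    For readers who want to see the three TTC estimates side by side, the sketch below (not the authors' code) evaluates the theoretical rate θ'/sin θ, the classical approximation θ'/θ, and the paper's WSVAM form; the constants M, P, Q, N here are arbitrary placeholders, whereas the paper fits them for a predefined visual angle.

```python
import numpy as np

# Illustrative sketch only: compare the theoretical time-to-collision (TTC)
# rate 1/tau = theta'/sin(theta), the classical approximation theta'/theta,
# and the paper's WSVAM form M*theta - P/(theta + Q) + N.

def ttc_rate_exact(theta, dtheta):
    return dtheta / np.sin(theta)

def ttc_rate_approx(theta, dtheta):
    return dtheta / theta

def ttc_rate_wsvam(theta, M=1.0, P=0.01, Q=0.05, N=0.2):
    # M, P, Q, N are hypothetical placeholders; the paper fits them
    # for a predefined visual angle.
    return M * theta - P / (theta + Q) + N

theta = np.linspace(0.05, 0.5, 10)    # visual angle (rad)
dtheta = 0.1 * np.ones_like(theta)    # its rate of change (rad/s)
print(ttc_rate_exact(theta, dtheta))
print(ttc_rate_approx(theta, dtheta))
print(ttc_rate_wsvam(theta))
```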

  20. Whole-body computed tomography in trauma patients: optimization of the patient scanning position significantly shortens examination time while maintaining diagnostic image quality

    Directory of Open Access Journals (Sweden)

    Hickethier T

    2018-05-01

    Full Text Available Tilman Hickethier,1,* Kamal Mammadov,1,* Bettina Baeßler,1 Thorsten Lichtenstein,1 Jochen Hinkelbein,2 Lucy Smith,3 Patrick Sven Plum,4 Seung-Hun Chon,4 David Maintz,1 De-Hua Chang1 1Department of Radiology, University Hospital of Cologne, Cologne, Germany; 2Department of Anesthesiology and Intensive Care Medicine, University Hospital of Cologne, Cologne, Germany; 3Faculty of Medicine, Memorial University of Newfoundland, St. John’s, Canada; 4Department of General, Visceral and Cancer Surgery, University Hospital of Cologne, Cologne, Germany *These authors contributed equally to this work Background: The study was conducted to compare examination time and artifact vulnerability of whole-body computed tomographies (wbCTs for trauma patients using conventional or optimized patient positioning. Patients and methods: Examination time was measured in 100 patients scanned with the conventional protocol (Group A: arms positioned alongside the body for head and neck imaging and over the head for trunk imaging) and 100 patients scanned with the optimized protocol (Group B: arms flexed on a chest pillow without repositioning). Additionally, the influence of the two different scanning protocols on image quality in the most relevant body regions was assessed by two blinded readers. Results: Total wbCT duration was about 35% or 3:46 min shorter in B than in A. Artifacts in the aorta (27% vs 6%), liver (40% vs 8%) and spleen (27% vs 5%) occurred significantly more often in B than in A. No incident of non-diagnostic image quality was reported, and no significant differences for lungs and spine were found. Conclusion: An optimized wbCT positioning protocol for trauma patients allows a significant reduction of examination time while still maintaining diagnostic image quality. Keywords: CT scan, polytrauma, acute care, time requirement, positioning

  1. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of 240Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  2. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  3. Shorter Fallow Cycles Affect the Availability of Noncrop Plant Resources in a Shifting Cultivation System

    Directory of Open Access Journals (Sweden)

    Sarah Paule. Dalle

    2006-12-01

    Full Text Available Shifting cultivation systems, one of the most widely distributed forms of agriculture in the tropics, provide not only crops of cultural significance, but also medicinal, edible, ritual, fuel, and forage resources, which contribute to the livelihoods, health, and cultural identity of local people. In many regions across the globe, shifting cultivation systems are undergoing important changes, one of the most pervasive being a shortening of the fallow cycle. Although there has been much attention drawn to declines in crop yields in conjunction with reductions in fallow times, little if any research has focused on the dynamics of noncrop plant resources. In this paper, we use a data set of 26 fields of the same age, i.e., ~1.5 yr, but differing in the length and frequency of past fallow cycles, to examine the impact of shorter fallow periods on the availability of noncrop plant resources. The resources examined are collected in shifting cultivation fields by the Yucatec Maya in Quintana Roo, Mexico. These included firewood, which is cut from remnant trees and stumps spared at the time of felling, and 17 forage species that form part of the weed vegetation. Firewood showed an overall decrease in basal area with shorter fallow cycles, which was mostly related to the smaller diameter of the spared stumps and trees in short-fallow milpas. In contrast, forage species showed a mixed response. Species increasing in abundance in short-fallow milpas tended to be short-lived herbs and shrubs often with weedy habits, whereas those declining in abundance were predominantly pioneer trees and animal-dispersed species. Coppicing tree species showed a neutral response to fallow intensity. Within the cultural and ecological context of our study area, we expect that declines in firewood availability will be most significant for livelihoods because of the high reliance on firewood for local fuel needs and the fact that the main alternative source of firewood, forest

  4. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... of the body, in a shorter period of time. Modern CT scanners are so fast that they can scan through large sections of the body in just a few seconds. Such speed is beneficial for ...

  5. The Glycated Albumin (GA) to HbA1c Ratio Reflects Shorter-Term Glycemic Control than GA: Analysis of Patients with Fulminant Type 1 Diabetes.

    Science.gov (United States)

    Koga, Masafumi; Inada, Shinya; Nakao, Taisei; Kawamori, Ryuzo; Kasayama, Soji

    2017-01-01

    Glycated albumin (GA) reflects shorter-term glycemic control than HbA1c. We have reported that HbA1c is paradoxically increased in diabetic patients whose glycemic control deteriorated before ameliorating. In this study, we analyzed paradoxical increases of glycemic control indicators after treatment in patients with fulminant type 1 diabetes (FT1D). We also investigated whether the GA/HbA1c ratio may reflect shorter-term glycemic control than GA. Five FT1D patients whose post-treatment HbA1c and GA levels were measured were enrolled. We also used a formula to estimate HbA1c and GA from the fictitious models of changes in plasma glucose in FT1D patients. In this model, the periods during which HbA1c, GA, and the GA/HbA1c ratio were higher than at the first visit were compared. In addition, the half-life for the GA/HbA1c ratio was calculated in accordance with the half-lives for HbA1c and GA (36 and 14 days, respectively). In all FT1D patients, HbA1c levels 2-4 weeks after treatment were increased, with three patients (60%) experiencing an increase of GA levels. In contrast, an increase of the GA/HbA1c ratio was observed in only one patient. In all of the different models of changes in plasma glucose in FT1D patients, the length of time during which the values were higher than at the first visit was in the order of HbA1c > GA > GA/HbA1c ratio. The half-life for the GA/HbA1c ratio was 9 days, shorter than GA. These findings suggest that the GA/HbA1c ratio reflects shorter-term glycemic control than GA. © 2016 Wiley Periodicals, Inc.

  6. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
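    The master/worker split described above maps naturally onto the message passing interface. The sketch below is a minimal illustration of that pattern, not the EGS5 deployment itself: each rank simulates an equal share of histories with a placeholder kernel (`simulate_histories` is a hypothetical stand-in for the transport code) and rank 0 aggregates the tallies.

```python
# Minimal sketch of the parallelization pattern, assuming mpi4py is
# installed; run with e.g. `mpiexec -n 4 python mc_sketch.py`.
from mpi4py import MPI
import numpy as np

def simulate_histories(n, seed):
    # Placeholder "dose" tally; in reality EGS5 transports each particle.
    rng = np.random.default_rng(seed)
    return rng.exponential(1.0, n).sum()

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

total = 1_000_000
local = simulate_histories(total // size, seed=rank)  # remainder ignored for brevity
dose = comm.reduce(local, op=MPI.SUM, root=0)         # aggregate partial tallies

if rank == 0:
    print(f"aggregate tally from {size} workers: {dose:.1f}")
```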

  7. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  8. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  9. Television Viewing, Computer Use, Time Driving and All‐Cause Mortality: The SUN Cohort

    Science.gov (United States)

    Basterra‐Gortari, Francisco Javier; Bes‐Rastrollo, Maira; Gea, Alfredo; Núñez‐Córdoba, Jorge María; Toledo, Estefanía; Martínez‐González, Miguel Ángel

    2014-01-01

    Background Sedentary behaviors have been directly associated with all‐cause mortality. However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all‐cause mortality. Methods and Results In this prospective, dynamic cohort study (the SUN Project) 13 284 Spanish university graduates with a mean age of 37 years were followed‐up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All‐cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥3 h/day of television viewing than for those reporting <1 h/day. Conclusions Television viewing was directly associated with all‐cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation. PMID:24965030

  10. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    According to the requirements of real-time performance, reliability and safety for aerospace experiments, a single-center cloud computing application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to computing-intensive aerospace experiment business. For I/O-intensive business, it is recommended to use traditional physical machines.

  11. Sitting Time, Physical Activity and Sleep by Work Type and Pattern—The Australian Longitudinal Study on Women’s Health

    Directory of Open Access Journals (Sweden)

    Bronwyn K. Clark

    2017-03-01

    Full Text Available Data from the Australian Longitudinal Study on Women’s Health were used to examine how work was associated with time spent sleeping, sitting and in physical activity (PA) in working women. Young (31–36 years; 2009) and mid-aged (59–64 years; 2010) women reported sleep (categorised as shorter, ≤6 h/day, or longer, ≥8 h/day) and sitting time (work, transport, television, non-work computer, and other; summed for total sitting time) on the most recent work and non-work day, and moderate and vigorous PA (categorised as meeting/not meeting guidelines) in the previous week. Participants reported occupation (manager/professional; clerical/sales; trades/transport/labourer), work hours (part-time; full-time) and work pattern (shift/night; not shift/night). The odds of shorter sleep on work days were higher in both cohorts for women who worked shift or night hours. Longer sitting time on work days, made up primarily of sitting for work, was found for managers/professionals, clerical/sales and full-time workers. In the young cohort, clerical/sales workers, and in the mid-aged cohort, full-time workers were less likely to meet PA guidelines. These results suggest multiple behaviour interventions tailored to work patterns and occupational category may be useful to improve the sleep, sitting and activity of working women.

  12. 21 CFR 10.20 - Submission of documents to Division of Dockets Management; computation of time; availability for...

    Science.gov (United States)

    2010-04-01

    ... Management; computation of time; availability for public disclosure. 10.20 Section 10.20 Food and Drugs FOOD... Management; computation of time; availability for public disclosure. (a) A submission to the Division of Dockets Management of a petition, comment, objection, notice, compilation of information, or any other...

  13. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1990-01-01

    At the first workshop in 1985 we reported on the real-time dose computing method used at the Paks Nuclear Power Plant and on the telemetric system developed for the normalization of the computed data. At present, the computing method normalized to the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. In this connection we analyzed the reliability of the results obtained in this manner. The points of the analysis were: how the results are influenced by the choice of certain parameters that cannot be determined by direct methods, and how improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (131I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. A further source of errors may be that, when determining the level of gamma radiation, the radionuclide doses in the cloud and on the ground surface are measured together by the environmental monitoring stations, whereas these doses appear separately in the computations. At the Paks NPP it is the time integral of the airborne activity concentration of vapour-form 131I which is determined. This quantity includes neither the other physical and chemical forms of 131I nor the other isotopes of radioiodine. We gave numerical examples for the uncertainties due to the above factors. As a result, we arrived at the conclusion that, when accident-related measures must be decided on the basis of the computing method, the dose uncertainties may reach one order of magnitude for points lying far from the monitoring stations. Different measures are discussed to make the uncertainties significantly lower

  14. Estimation of time-series properties of ground-observed solar irradiance data using cloud properties derived from satellite observations

    Science.gov (United States)

    Watanabe, T.; Nohara, D.

    2017-12-01

    The shorter temporal-scale variation in the downward solar irradiance at ground level (DSI) is not well understood, because research on shorter-scale variation in the DSI is based on ground observations and ground observation stations are located coarsely. Use of datasets derived from satellite observation can overcome this defect. DSI data and the MODIS cloud properties product are analyzed simultaneously. Three metrics are used to evaluate the time-series properties of the DSI: mean, standard deviation and sample entropy. The three metrics are computed from two-hour time series centered at the observation time of MODIS over the ground observation stations. We apply regression methods to design prediction models for each of the three metrics from cloud properties. The validation of model accuracy shows that mean and standard deviation are predicted with a high degree of accuracy, and that the accuracy of the prediction of sample entropy, which represents the complexity of a time series, is not high. One of the causes of the lower prediction skill for sample entropy is the resolution of the MODIS cloud properties. Higher sample entropy corresponds to rapid fluctuation, which is caused by small and unordered clouds. It seems that such clouds are not retrieved well.

  15. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multiscale time segment integer coding (MTSIC). The approach utilizes the tree structure and size ordering formed among integers, reflecting the relationships among multiscale time segments (order, inclusion/containment, intersection, etc.), and finally achieves a unified integer coding process for multiscale time. On this foundation, this research also studies computing methods for calculating the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it has very high efficiency in queries and calculations.
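    The abstract gives no coding details, so the sketch below is only an illustration of the general idea, not the paper's MTSIC definition: it numbers dyadic time segments heap-style so that a relation such as containment between segments at different scales reduces to integer shifts.

```python
# Illustrative sketch only (not the paper's scheme): code segment `index`
# at scale `level` (2**level segments per era) as a single integer using
# heap-style tree numbering, so "contains" becomes an ancestor test.

def code(level, index):
    return (1 << level) + index

def contains(a, b):
    # True if segment a contains segment b (a is an ancestor of b).
    while b > a:
        b >>= 1
    return a == b

whole = code(0, 0)         # the full interval
first_half = code(1, 0)    # its first half
q2 = code(2, 1)            # the second quarter
print(contains(first_half, q2))   # True: 2nd quarter lies in the 1st half
print(contains(q2, first_half))   # False
```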

  16. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA-based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)

  17. Design considerations for computationally constrained two-way real-time video communication

    Science.gov (United States)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  18. Association of mutations in the hemochromatosis gene with shorter life expectancy

    DEFF Research Database (Denmark)

    Bathum, L; Christiansen, L; Nybo, H

    2001-01-01

    BACKGROUND: To investigate whether the frequency of carriers of mutations in the HFE gene associated with hereditary hemochromatosis diminishes with age as an indication that HFE mutations are associated with increased mortality. It is of value in the debate concerning screening for hereditary...... hemochromatosis to determine the significance of heterozygosity. METHODS: Genotyping for mutations in exons 2 and 4 of the HFE gene using denaturing gradient gel electrophoresis in 1784 participants aged 45 to 100 years from 4 population-based studies: all 183 centenarians from the Danish Centenarian Study, 601...... in the distribution of mutations in exon 2 in the different age groups. CONCLUSIONS: In a high-carrier frequency population like Denmark, mutations in HFE show an age-related reduction in the frequency of heterozygotes for C282Y, which suggests that carrier status is associated with shorter life expectancy....

  19. Cloud computing platform for real-time measurement and verification of energy performance

    International Nuclear Information System (INIS)

    Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie

    2017-01-01

    Highlights: • Application of the PSO algorithm can improve the accuracy of the baseline model. • The M&V cloud platform automatically calculates energy performance. • The M&V cloud platform can be applied to all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • The M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations. This requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, because the M&V period typically lasts for periods as long as several months or up to a year, the failure to immediately detect abnormal energy performance not only decreases energy performance but also prevents timely correction and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculations of the energy performance and access to long-term, cumulative information about the energy performance are provided via a feature that allows direct uploads of the energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested for a case study involving improvements to a cold storage system in a hypermarket. The cloud computing platform for real-time energy performance M&V is applicable to any industry and energy conservation measure. With the M&V cloud platform, real-time
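    As a rough illustration of the modeling step described above (not the platform's code), the sketch below uses a basic particle swarm optimizer to fit the coefficients of a multivariate linear baseline on made-up driver data; energy savings would then be scored as the baseline prediction minus metered consumption.

```python
# Hedged sketch, assuming a linear baseline y ~ w0*x0 + w1*x1 + w2*x2 + b
# fitted by a textbook PSO; all data and parameters are made up.
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(size=(200, 3))   # drivers, e.g. temperature, load, hours
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(0, 0.05, 200)

def sse(w):
    # Fitness: sum of squared errors of the candidate baseline.
    return np.sum((X @ w[:3] + w[3] - y) ** 2)

n, dim = 30, 4
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([sse(p) for p in pos])
for _ in range(300):
    g = pbest[np.argmin(pval)]                     # global best
    vel = 0.7 * vel + 1.5 * rng.random((n, dim)) * (pbest - pos) \
                    + 1.5 * rng.random((n, dim)) * (g - pos)
    pos += vel
    val = np.array([sse(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]

print("fitted baseline coefficients:", pbest[np.argmin(pval)])
```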

  20. Is equity confined to the shorter term projects - and if not, what does it need?

    International Nuclear Information System (INIS)

    Cryan, T.

    1996-01-01

    There are two types of equity investor generally found in shorter term energy projects: energy project developers or sponsors who view a given project as buying or building a business; and financial investors who have viewed an investment as buying a stream of cash flows. This article examines the objectives and needs of these two investor groups, and discusses the principal issues which govern their respective decision-making process. (author)

  1. Computer-games for gravitational wave science outreach: Black Hole Pong and Space Time Quest

    International Nuclear Information System (INIS)

    Carbone, L; Bond, C; Brown, D; Brückner, F; Grover, K; Lodhia, D; Mingarelli, C M F; Fulda, P; Smith, R J E; Unwin, R; Vecchio, A; Wang, M; Whalley, L; Freise, A

    2012-01-01

    We have established a program aimed at developing computer applications and web applets to be used for educational purposes as well as gravitational wave outreach activities. These applications and applets teach gravitational wave physics and technology. The computer programs are generated in collaboration with undergraduates and summer students as part of our teaching activities, and are freely distributed on a dedicated website. As part of this program, we have developed two computer-games related to gravitational wave science: 'Black Hole Pong' and 'Space Time Quest'. In this article we present an overview of our computer related outreach activities and discuss the games and their educational aspects, and report on some positive feedback received.

  2. Kajian dan Implementasi Real TIME Operating System pada Single Board Computer Berbasis Arm

    OpenAIRE

    A, Wiedjaja; M, Handi; L, Jonathan; Christian, Benyamin; Kristofel, Luis

    2014-01-01

    An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications, such as nuclear power plants and automatic braking systems in cars, need a high level of reliability and therefore require an operating system which operates in real time. The study aims to assess the implementation of a Linux-based real-time operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES, with ...

  3. Computer image analysis of etched tracks from ionizing radiation

    Science.gov (United States)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.

  4. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentrations and loads using turbidity sensors (primarily nephelometry, but also optical backscatter) has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
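    A minimal sketch of the general computation follows, assuming the common log-log regression form with a retransformation bias correction; the numbers are made up, and the USGS report should be consulted for the actual procedures.

```python
# Sketch: fit log10(SSC) against log10(turbidity), correct the
# retransformation bias (Duan smearing), then convert concentrations
# to loads with streamflow. Calibration data below are illustrative only.
import numpy as np

turb = np.array([12., 35., 80., 150., 300.])   # sensor turbidity (FNU)
ssc  = np.array([20., 55., 130., 260., 520.])  # sampled SSC (mg/L)

slope, intercept = np.polyfit(np.log10(turb), np.log10(ssc), 1)
resid = np.log10(ssc) - (intercept + slope * np.log10(turb))
bcf = np.mean(10.0 ** resid)                   # Duan smearing factor

def ssc_from_turbidity(t):
    return bcf * 10.0 ** (intercept + slope * np.log10(t))

q = np.array([2.5, 4.0, 9.0, 15.0, 30.0])      # streamflow (m^3/s)
# 1 mg/L = 1 g/m^3, so SSC * Q gives g/s; divide by 1000 for kg/s.
load_kg_per_s = ssc_from_turbidity(turb) * q / 1000.0
print(load_kg_per_s)
```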

  5. Dose-time relationships for post-irradiation cutaneous telangiectasia

    International Nuclear Information System (INIS)

    Cohen, L.; Ubaldi, S.E.

    1977-01-01

    Seventy-five patients who had received electron beam radiation a year or more previously were studied. The irradiated skin portals were photographed and late reactions graded in terms of the number and severity of telangiectatic lesions observed. The skin dose, number of fractions, overall treatment time and irradiated volume were recorded in each case. A Strandqvist-type iso-effect line was derived for this response. A multi-probit search program also was used to derive best-fitting cell population kinetic parameters for the same data. From these parameters a comprehensive iso-effect table could be computed for a wide range of treatment schedules including daily treatment as well as fractionation at shorter and longer intervals; this provided a useful set of normal tissue tolerance limits for late effects

  6. Shorter preschool, leukocyte telomere length is associated with obesity at age 9 in Latino children.

    Science.gov (United States)

    Kjaer, T W; Faurholt-Jepsen, D; Mehta, K M; Christensen, V B; Epel, E; Lin, J; Blackburn, E; Wojcicki, J M

    2018-04-01

    The aim of this study was to determine the potential role of leukocyte telomere length as a biomarker for development of childhood obesity in a low-income Latino population. A birth cohort of Latino children (N = 201) in San Francisco (recruited May 2006-May 2007) was followed until age 9 and assessed annually for obesity and dietary intake. Leukocyte telomere length was measured at 4 and 5 years (n = 102) and assessed as a predictor for obesity at age 9, adjusting for known risk factors. Furthermore, leukocyte telomere length at age 4 and 5 was evaluated as a possible mediator of the relationship between excessive sugar-sweetened beverage consumption and obesity at age 9. Shorter leukocyte telomere length in preschoolers was associated with obesity at age 9 (adjusted odds ratio 0.35, 95% confidence interval 0.13-0.94) after adjustment for known risk factors. Telomere length mediated 11% of the relationship between excessive sugar-sweetened beverage consumption and obesity. Shorter leukocyte telomere length may be an indicator of future obesity risk in high-risk populations as it is particularly sensitive to damage from oxidative stress exposure, including those from sugar-sweetened beverages. © 2017 World Obesity Federation.

  7. Are Shorter Versions of the Positive and Negative Syndrome Scale (PANSS) Doable? A Critical Review.

    Science.gov (United States)

    Lindenmayer, Jean-Pierre

    2017-12-01

    The Positive and Negative Syndrome Scale (PANSS) is a well-established assessment tool for measuring symptom severity in schizophrenia. Researchers and clinicians have been interested in the development of a short version of the PANSS that could reduce the burden of its administration for patients and raters. The author presents a comprehensive overview of existing brief PANSS measures, including their strengths and limitations, and discusses some possible next steps. There are two available scales that offer a reduced number of original PANSS items: PANSS-14 and PANSS-19; and two shorter versions that include six items: Brief PANSS and PANSS-6. The PANSS-6 has been tested quite extensively in established trials and appears to demonstrate high sensitivity to change and an established cut off definition for remission. Prospective testing in new antipsychotic treatment trials is still required for these shorter versions of PANSS. In addition, they need to be supplemented with interview guides, as well as provide conversion formulas to translate total scores from the short PANSS versions to the PANSS-30. Both short versions of the PANSS are essentially designed to evaluate response to antipsychotic treatment. Future PANSS scale development needs to address specific measurement of treatment-responsive positive symptoms by including treatment-sensitive items, as well as illness-phase specific PANSS tools.

  8. Green computing: power optimisation of vfi-based real-time multiprocessor dataflow applications

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor

  9. In this issue: Time to replace doctors’ judgement with computers

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2015-11-01

    Full Text Available Informaticians continue to rise to the challenge, set by the English Health Minister, of trying to replace doctors’ judgement with computers. This issue describes successes and where there are barriers. However, whilst there is progress, it tends to be incremental, and there are grand challenges to be overcome before computers can replace clinicians. These grand challenges include: (1) improving usability so it is possible to more readily incorporate technology into clinical workflow; (2) rigorous new analytic methods that make use of the mass of available data, ‘Big data’, to create real-world evidence; (3) faster ways of meeting regulatory and legal requirements, including ensuring privacy; (4) provision of reimbursement models to fund innovative technology that can substitute for clinical time; and (5) recognition that innovations that improve quality also often increase cost. Informatics is more likely to support and augment clinical decision making than replace clinicians.

  10. Time-of-Flight Sensors in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2009-01-01

    , including Computer Graphics, Computer Vision and Man Machine Interaction (MMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real...

  11. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    Science.gov (United States)

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
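    The kernel being accelerated is, at heart, an exhaustive loop over SNP pairs. The toy NumPy version below shows the structure of that scan; it is not the SNPsyn implementation, and the scoring function is a placeholder rather than the information-gain measure such tools use.

```python
# Schematic of an exhaustive pairwise SNP scan (toy sizes, synthetic data).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_snps, n_samples = 50, 200
geno = rng.integers(0, 3, size=(n_snps, n_samples))  # genotypes 0/1/2
pheno = rng.integers(0, 2, size=n_samples)           # case/control labels

def pair_score(a, b, y):
    # Placeholder interaction score: correlation of the genotype product
    # with the phenotype. Real tools use information gain or chi-square.
    x = (a * b).astype(float)
    return abs(np.corrcoef(x, y)[0, 1])

best = max(combinations(range(n_snps), 2),
           key=lambda ij: pair_score(geno[ij[0]], geno[ij[1]], pheno))
print("top-ranked pair:", best)
```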

  12. Individual and family environmental correlates of television and computer time in 10- to 12-year-old European children: the ENERGY-project.

    Science.gov (United States)

    Verloigne, Maïté; Van Lippevelde, Wendy; Bere, Elling; Manios, Yannis; Kovács, Éva; Grillenberger, Monika; Maes, Lea; Brug, Johannes; De Bourdeaudhuij, Ilse

    2015-09-18

    The aim was to investigate which individual and family environmental factors are related to television and computer time separately in 10- to-12-year-old children within and across five European countries (Belgium, Germany, Greece, Hungary, Norway). Data were used from the ENERGY-project. Children and one of their parents completed a questionnaire, including questions on screen time behaviours and related individual and family environmental factors. Family environmental factors included social, political, economic and physical environmental factors. Complete data were obtained from 2022 child-parent dyads (53.8 % girls, mean child age 11.2 ± 0.8 years; mean parental age 40.5 ± 5.1 years). To examine the association between individual and family environmental factors (i.e. independent variables) and television/computer time (i.e. dependent variables) in each country, multilevel regression analyses were performed using MLwiN 2.22, adjusting for children's sex and age. In all countries, children reported more television and/or computer time, if children and their parents thought that the maximum recommended level for watching television and/or using the computer was higher and if children had a higher preference for television watching and/or computer use and a lower self-efficacy to control television watching and/or computer use. Most physical and economic environmental variables were not significantly associated with television or computer time. Slightly more individual factors were related to children's computer time and more parental social environmental factors to children's television time. We also found different correlates across countries: parental co-participation in television watching was significantly positively associated with children's television time in all countries, except for Greece. A higher level of parental television and computer time was only associated with a higher level of children's television and computer time in Hungary. Having rules

  13. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  14. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows the number of pixels of processed images to be increased without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We developed original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we developed an approach which results in suppression of the noise after computer processing, yielding a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and it is a very promising solution for the security problem.
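    The paper's filters are original and not specified in the abstract, so the code below is only an assumption-laden stand-in for that kind of processing step, not the authors' algorithm: it median-filters a synthetic noisy frame to suppress speckle and then stretches its contrast.

```python
# Minimal illustration of a spatial-filtering pass on a noisy THz frame.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
frame = rng.normal(0.5, 0.15, size=(120, 160))         # stand-in noisy frame

smoothed = median_filter(frame, size=5)                # spatial filter
lo, hi = np.percentile(smoothed, (2, 98))
enhanced = np.clip((smoothed - lo) / (hi - lo), 0, 1)  # contrast stretch
print(enhanced.shape, enhanced.min(), enhanced.max())
```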

  15. An assessment of the real-time application capabilities of the SIFT computer system

    Science.gov (United States)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  16. Computation of the Short-Time Linear Canonical Transform with Dual Window

    Directory of Open Access Journals (Sweden)

    Lei Huang

    2017-01-01

The short-time linear canonical transform (STLCT), which maps a time-domain signal into the joint time-frequency domain, has recently attracted some attention in the area of signal processing. However, its applications are still limited by the fact that the selection of coefficients of the short-time linear canonical series (STLCS) is not unique, because the time and frequency elementary functions (together known as the basis functions of the STLCS) do not constitute an orthogonal basis. To solve this problem, this paper investigates a dual-window solution. First, the nonorthogonality of the original window is resolved by deriving an orthogonality condition involving a dual window. Then, based on the obtained condition, a dual-window computation approach for the GT is extended to the STLCS. Simulations verify the validity of the proposed condition and solution, and some possible directions for application are discussed.
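
    The STLCT itself carries extra chirp factors from the linear canonical transform parameters; as a rough illustration of the windowed-transform machinery only, here is a plain short-time Fourier (Gabor-style) sketch. The window choice, hop size, and test signal are arbitrary, and the paper's dual-window orthogonality condition is not implemented.

```python
import numpy as np

def short_time_transform(x, window, hop):
    """Sliding-window DFT (a Gabor/STFT analogue of the STLCT), minimal sketch."""
    n = len(window)
    frames = [x[i:i + n] * window for i in range(0, len(x) - n + 1, hop)]
    return np.fft.fft(frames, axis=1)   # rows: time frames, cols: frequency bins

t = np.linspace(0, 1, 512, endpoint=False)
x = np.cos(2 * np.pi * 40 * t * (1 + t))        # chirp-like test signal
coeffs = short_time_transform(x, np.hanning(64), hop=16)
```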

  17. Manual cross check of computed dose times for motorised wedged fields

    International Nuclear Information System (INIS)

    Porte, J.

    2001-01-01

    If a mass of tissue equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields, to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
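
    One commonly quoted rule of thumb for motorized wedges apportions beam-on time so that tan(theta_eff) = f * tan(theta_wedge), where f is the fraction of dose delivered through the wedge. The sketch below uses that approximation purely for illustration; it is not the paper's program, and a clinical cross-check would follow the department's own dosimetry protocol.

```python
import math

def motorized_wedge_split(theta_eff_deg, theta_wedge_deg, total_mu):
    """Split monitor units between open and wedged fields.

    Uses the rule of thumb tan(theta_eff) = f * tan(theta_wedge),
    where f is the fraction of dose delivered through the wedge.
    """
    f = math.tan(math.radians(theta_eff_deg)) / math.tan(math.radians(theta_wedge_deg))
    if not 0.0 <= f <= 1.0:
        raise ValueError("requested effective angle exceeds the physical wedge angle")
    return (1 - f) * total_mu, f * total_mu   # (open MU, wedged MU)

open_mu, wedge_mu = motorized_wedge_split(30, 60, total_mu=200)
print(round(open_mu, 1), round(wedge_mu, 1))   # about 133.3 open, 66.7 wedged
```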

  18. ADAPTATION OF JOHNSON SEQUENCING ALGORITHM FOR JOB SCHEDULING TO MINIMISE THE AVERAGE WAITING TIME IN CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    SOUVIK PAL

    2016-09-01

Cloud computing is an emerging paradigm of Internet-centric business computing in which Cloud Service Providers (CSPs) provide services to customers according to their needs. The key idea behind cloud computing is on-demand sharing of the resources available in the resource pool provided by the CSP, which implies a new emerging business model. Resources are provisioned as jobs arrive. Job scheduling and minimization of waiting time are challenging issues in cloud computing: when a large number of jobs are requested, they have to wait to be allocated to servers, which may increase the queue length and the waiting time. This paper presents a system design built around the Johnson scheduling algorithm, which provides the optimal job sequence; from that sequence, service times can be obtained. The waiting time and queue length can then be reduced using a multi-server, finite-capacity queuing model, which improves the job scheduling model.
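
    Johnson's rule for the two-stage flow shop, which the paper adapts, can be stated in a few lines: jobs whose first-stage time is shorter than their second-stage time go first in ascending order of first-stage time, and the rest go last in descending order of second-stage time. A minimal sketch, with made-up job data:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop.

    jobs: dict name -> (time on stage 1, time on stage 2).
    Returns the makespan-optimal processing order.
    """
    front = sorted((j for j, (a, b) in jobs.items() if a < b),
                   key=lambda j: jobs[j][0])                 # ascending stage-1 time
    back = sorted((j for j, (a, b) in jobs.items() if a >= b),
                  key=lambda j: jobs[j][1], reverse=True)    # descending stage-2 time
    return front + back

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
print(johnson_sequence(jobs))   # ['J3', 'J1', 'J4', 'J5', 'J2']
```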

  19. Reservoir computer predictions for the Three Meter magnetic field time evolution

    Science.gov (United States)

    Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.

    2017-12-01

The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir-computing machine learning algorithm. The experiment consists of a three-meter-diameter outer sphere and a one-meter-diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate at up to 4 and 14 Hz respectively, giving a Reynolds number near 10⁸. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use this magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales, showing that the behavior of such a complicated MHD system can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
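
    Reservoir computing of this kind keeps a fixed random recurrent network and trains only a linear readout. The following echo state network sketch illustrates the idea on a toy periodic signal standing in for the Hall-probe data; the reservoir size, spectral radius, and ridge parameter are arbitrary choices, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6):
    """Minimal echo state network: fixed random reservoir, ridge-regressed readout."""
    w_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
    w = rng.normal(0, 1, (n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))   # rescale to spectral radius rho
    x, states = np.zeros(n_res), []
    for u_t in u:                                     # drive the reservoir with input
        x = np.tanh(w @ x + w_in @ u_t)
        states.append(x)
    s = np.array(states)
    # ridge-regularized least-squares readout: y ~ s @ w_out
    return np.linalg.solve(s.T @ s + ridge * np.eye(n_res), s.T @ y)

sig = np.sin(np.linspace(0, 60, 2000))[:, None]   # toy stand-in for probe data
w_out = train_esn(sig[:-1], sig[1:])              # one-step-ahead prediction readout
```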

  20. A stable computational scheme for stiff time-dependent constitutive equations

    International Nuclear Information System (INIS)

    Shih, C.F.; Delorenzi, H.G.; Miller, A.K.

    1977-01-01

Viscoplasticity and creep-type constitutive equations are increasingly being employed in finite element codes for evaluating the deformation of high-temperature structural members. These constitutive equations frequently exhibit stiff regimes, which makes an analytical assessment of the structure very costly. A computational scheme for handling deformation in stiff regimes is proposed in this paper. By finite element discretization, the governing partial differential equations in the spatial (x) and time (t) variables are reduced to a system of nonlinear ordinary differential equations in the independent variable t. The constitutive equations are expanded in a Taylor series about selected values of t. The resulting system of differential equations is then integrated by an implicit scheme which employs a predictor technique to initiate the Newton-Raphson procedure. To examine the stability and accuracy of the computational scheme, a series of calculations was carried out for uniaxial specimens and thick-wall tubes subjected to mechanical and thermal loading. (Auth.)
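
    The core of such a scheme, an implicit step solved by Newton-Raphson from a predictor, can be illustrated on a stiff scalar ODE. This is a minimal sketch of the numerical idea only, not the paper's finite element implementation; the test problem and tolerances are assumptions.

```python
import numpy as np

def backward_euler(f, dfdy, y0, t0, t1, n_steps):
    """Implicit Euler with Newton-Raphson, suited to stiff scalar ODEs y' = f(t, y)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        t += h
        z = y                                # predictor: previous value
        for _ in range(20):                  # Newton iterations on g(z) = z - y - h f(t, z)
            g = z - y - h * f(t, z)
            dg = 1.0 - h * dfdy(t, z)
            z_new = z - g / dg
            if abs(z_new - z) < 1e-12:
                z = z_new
                break
            z = z_new
        y = z
    return y

# stiff test problem: y' = -1000 (y - cos t), y(0) = 0
f = lambda t, y: -1000.0 * (y - np.cos(t))
dfdy = lambda t, y: -1000.0
print(backward_euler(f, dfdy, 0.0, 0.0, 1.0, 50))   # stable even with h = 0.02
```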

  1. Decreasing Transition Times in Elementary School Classrooms: Using Computer-Assisted Instruction to Automate Intervention Components

    Science.gov (United States)

    Hine, Jeffrey F.; Ardoin, Scott P.; Foster, Tori E.

    2015-01-01

    Research suggests that students spend a substantial amount of time transitioning between classroom activities, which may reduce time spent academically engaged. This study used an ABAB design to evaluate the effects of a computer-assisted intervention that automated intervention components previously shown to decrease transition times. We examined…

  2. Minimally invasive oesophagectomy more expensive than open despite shorter length of stay.

    Science.gov (United States)

    Dhamija, Anish; Dhamija, Ankit; Hancock, Jacquelyn; McCloskey, Barbara; Kim, Anthony W; Detterbeck, Frank C; Boffa, Daniel J

    2014-05-01

The minimally invasive oesophagectomy (MIO) approach offers a number of advantages over open approaches, including reduced discomfort, shorter length of stay and a faster recovery to baseline status. On the other hand, minimally invasive procedures typically take longer and consume more disposable instrumentation, potentially resulting in a greater overall cost. The objective of this study was to compare costs associated with various oesophagectomy approaches for oesophageal cancer. An institutional Resource Information Management System (RIMS) was queried for cost data relating to hospital expenditures (as opposed to billings or collections). The RIMS was searched for patients undergoing oesophagectomy for oesophageal cancer between 2003 and 2012 via minimally invasive, open transthoracic (OTT) (including Ivor Lewis, modified McKeown or thoracoabdominal) or open transhiatal (OTH) approaches. Patients who were converted from minimally invasive to open, or who underwent hybrid procedures, were excluded. A total of 160 oesophagectomies were identified, including 61 minimally invasive, 35 open transthoracic and 64 transhiatal. Costs on the day of surgery averaged higher in the MIO group ($12 476 ± 2190) compared with the open groups, OTT ($8202 ± 2512, P < 0.0001) or OTH ($5809 ± 2575, P < 0.0001). The median costs associated with the entire hospitalization also appear to be higher in the MIO group ($25 935) compared with OTT ($24 440) and OTH ($15 248). The average length of stay was lowest in the MIO group (11 ± 9 days) compared with OTT (19 ± 18 days, P = 0.006) and OTH (18 ± 28 days, P = 0.07). Operative mortality was similar in the three groups (MIO = 3%, OTT = 9% and OTH = 3%). The operating theatre costs associated with minimally invasive oesophagectomy are significantly higher than for OTT or OTH approaches. Unfortunately, a shorter hospital stay after MIO does not consistently offset the higher surgical expense, as total hospital costs trend higher in the MIO patients.

  3. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    Science.gov (United States)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  4. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy works on top of conventional acquisition algorithms by resampling the main lobe of the received broadband signal at a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second-highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the number of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis were conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.
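
    The circular correlation at the heart of such acquisition is typically computed via FFTs. The sketch below shows a single Doppler bin with the peak-ratio threshold the paper describes; the resampling step (low-pass filtering and decimating the main lobe before correlation) is omitted, and the code length, threshold, and toy signal are illustrative assumptions.

```python
import numpy as np

def acquire(signal, code, threshold=2.0):
    """Circular-correlation acquisition (single Doppler bin), minimal sketch.

    Returns (detected, code-phase estimate) using the ratio of the highest
    to the second-highest correlation peak as the decision metric.
    """
    corr = np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))))
    peak = int(np.argmax(corr))
    first = corr[peak]
    corr[max(peak - 2, 0):peak + 3] = 0      # mask the main peak region
    second = corr.max()
    return first / second > threshold, peak

# toy example: a shifted, noisy replica of a random +/-1 spreading code
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], 4092)
signal = np.roll(code, 1234) + 0.5 * rng.normal(size=code.size)
print(acquire(signal, code))   # (True, 1234) expected
```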

  6. Artificial neuron operations and spike-timing-dependent plasticity using memristive devices for brain-inspired computing

    Science.gov (United States)

    Marukame, Takao; Nishi, Yoshifumi; Yasuda, Shin-ichi; Tanamoto, Tetsufumi

    2018-04-01

    The use of memristive devices for creating artificial neurons is promising for brain-inspired computing from the viewpoints of computation architecture and learning protocol. We present an energy-efficient multiplier accumulator based on a memristive array architecture incorporating both analog and digital circuitries. The analog circuitry is used to full advantage for neural networks, as demonstrated by the spike-timing-dependent plasticity (STDP) in fabricated AlO x /TiO x -based metal-oxide memristive devices. STDP protocols for controlling periodic analog resistance with long-range stability were experimentally verified using a variety of voltage amplitudes and spike timings.
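
    Pair-based STDP is commonly modeled with exponential windows: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A minimal sketch with made-up amplitudes and a single time constant, not the fabricated devices' measured protocol:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update.

    dt = t_post - t_pre (ms): positive (pre before post) potentiates,
    negative depresses, with exponential decay over tau.
    """
    return np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

for dt in (1.0, 10.0, -10.0):
    print(dt, float(stdp_dw(dt)))
```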

  7. Real time recording system of radioisotopes by local area network (LAN) computer system and user input processing

    International Nuclear Information System (INIS)

    Shinohara, Kunio; Ito, Atsushi; Kawaguchi, Hajime; Yanase, Makoto; Uno, Kiyoshi.

    1991-01-01

A computer-assisted real-time recording system was developed for the management of radioisotopes. The system is composed of two personal computers forming a LAN, an identification-card (ID-card) reader, and an electrically operated door lock. One computer is operated by the radiation safety staff and stores the records of radioisotopes; the users of radioisotopes are registered on this computer. The other computer is installed in front of the storage room for radioisotopes. It is activated by a registered ID-card and accepts data entered by the user; after data entry is complete, the door to the storage room is unlocked. The present system offers the following merits: the radiation safety staff can easily keep up with the current state of radioisotopes in the storage room and save much labor; radioactivity is always decay-corrected; the upper limit of radioactivity in use per day is automatically checked, and users are regulated when they enter the amounts to be used; and users can obtain storage records of radioisotopes at any time. In addition, the system is applicable to facilities which have more than two storage rooms. (author)
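
    Two of the bookkeeping operations such a system performs, decay correction of stored activity and checking a withdrawal against the daily usage limit, are simple to state. A minimal sketch with hypothetical function names; the P-32-like half-life and limits are example data only:

```python
import math

def decay_corrected_activity(a0_mbq, half_life_days, elapsed_days):
    """Current activity of a stored radioisotope: A = A0 * exp(-ln2 * t / T_half)."""
    return a0_mbq * math.exp(-math.log(2) * elapsed_days / half_life_days)

def within_daily_limit(requested_mbq, used_today_mbq, limit_mbq):
    """Check a withdrawal against the facility's daily usage limit."""
    return used_today_mbq + requested_mbq <= limit_mbq

a = decay_corrected_activity(100.0, 14.3, 7.0)   # e.g. a P-32 stock after a week
print(round(a, 1), within_daily_limit(a, 20.0, 100.0))
```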

  8. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  9. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    Science.gov (United States)

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  10. Short time ahead wind power production forecast

    International Nuclear Information System (INIS)

    Sapronova, Alla; Meissner, Catherine; Mana, Matteo

    2016-01-01

An accurate prediction of wind power output is crucial for efficient coordination of cooperative energy production from different sources. Long-time-ahead prediction (6 to 24 hours) of wind power for onshore parks can be achieved using a coupled model that bridges mesoscale weather prediction data and computational fluid dynamics. When a forecast for a shorter time horizon (less than one hour ahead) is required, the accuracy of a predictive model that utilizes hourly weather data decreases, because the higher-frequency fluctuations of the wind speed are lost when data are averaged over an hour. Since the wind speed can vary by up to 50% in magnitude over a period of 5 minutes, the higher-frequency variations of wind speed and direction have to be taken into account for an accurate short-term energy production forecast. In this work a new model for wind power production forecasts 5 to 30 minutes ahead is presented. The model is based on machine learning techniques and a categorization approach, using the historical park production time series and hourly numerical weather forecasts. (paper)
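
    As a baseline for what a 5- to 30-minute-ahead predictor consumes, here is a least-squares autoregressive sketch on 5-minute production samples; the paper's actual model uses categorization and machine learning techniques beyond this, and the lag count and toy series are assumptions.

```python
import numpy as np

def fit_lag_model(series, n_lags=6):
    """Least-squares autoregressive model on 5-minute wind power samples."""
    rows = [series[i:i + n_lags] for i in range(len(series) - n_lags)]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = series[n_lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    n_lags = len(coef) - 1
    return float(coef[0] + coef[1:] @ series[-n_lags:])

# toy 5-minute production series (MW) with a slow oscillation plus noise
t = np.arange(500)
power = 10 + 2 * np.sin(2 * np.pi * t / 48) \
        + np.random.default_rng(2).normal(0, 0.3, 500)
coef = fit_lag_model(power)
print(round(predict_next(power, coef), 2))
```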

  12. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is carefully linearized and subjected to a thorough stability analysis, then the design objectives can be achieved with a controller that is quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). A further aim is to implement this controller on a computer in a real-time environment. Therefore, a nonlinear mathematical model of the system is first derived, and the multivariable system is decoupled. Linearization and stability analysis techniques are employed to develop a linearized and mathematically sound control law; nonlinearities such as actuator saturation are also catered for. The controller is then discretized using Runge-Kutta integration. Finally, the discretized control law is programmed on a computer in a real-time environment. The program is written in RT-Linux using GNU C for real-time realization of the control scheme. Real-time processes, such as sampling and controlled actuation, and non-real-time processes, such as the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real-time and non-real-time tasks is addressed carefully. The results of this research are presented graphically. (author)
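
    The pattern of a discretized control loop, computing a saturated feedback action and stepping the plant model with classical Runge-Kutta, can be sketched briefly. The toy plant, gain matrix, and step size below are invented for illustration and bear no relation to the paper's system:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One classical Runge-Kutta step for the plant dynamics x' = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def saturate(u, u_max=1.0):
    """Model actuator saturation of the kind catered for in the paper."""
    return np.clip(u, -u_max, u_max)

f = lambda x, u: np.array([x[1], -2.0 * x[0] - 0.5 * x[1] + u[0]])  # toy plant
K = np.array([[3.0, 1.5]])                                          # feedback gain
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(500):                     # 5 s of simulated real-time control
    u = saturate(-K @ x)
    x = rk4_step(f, x, u, dt)
print(x)   # state driven toward the origin
```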

  13. Computer-based system for inspection of water chemistry regimes in WWER-type nuclear power plants

    International Nuclear Information System (INIS)

    Burcl, R.; Novak, M.; Malenka, P.

    1993-01-01

The unsatisfactory situation in water chemistry testing at nuclear power plants with WWER-type reactors is described. The testing primarily relies on laboratory analyses of manually taken samples. About 40 samples from one unit are tested per shift, comprising approximately 250 determinations of various parameters. The time between two determinations is no shorter than 4 to 6 hours, so rapid parameter changes between determinations go unmonitored. A novel system of automated chemistry monitoring, feasible for WWER-type reactors, is outlined. The system comprises 10 sets of sensors for monitoring all the relevant chemistry parameters of both the primary and secondary coolant circuits. Each sensor set has its own autonomous computer which secures its function even in case of loss of the chemical information network. The entire system is controlled by a master computer which also collects the results and provides contact with the power plant's information system. (Z.S.). 1 fig

  14. Access to Electric Light Is Associated with Shorter Sleep Duration in a Traditionally Hunter-Gatherer Community.

    Science.gov (United States)

    de la Iglesia, Horacio O; Fernández-Duque, Eduardo; Golombek, Diego A; Lanza, Norberto; Duffy, Jeanne F; Czeisler, Charles A; Valeggia, Claudia R

    2015-08-01

    Access to electric light might have shifted the ancestral timing and duration of human sleep. To test this hypothesis, we studied two communities of the historically hunter-gatherer indigenous Toba/Qom in the Argentinean Chaco. These communities share the same ethnic and sociocultural background, but one has free access to electricity while the other relies exclusively on natural light. We fitted participants in each community with wrist activity data loggers to assess their sleep-wake cycles during one week in the summer and one week in the winter. During the summer, participants with access to electricity had a tendency to a shorter daily sleep bout (43 ± 21 min) than those living under natural light conditions. This difference was due to a later daily bedtime and sleep onset in the community with electricity, but a similar sleep offset and rise time in both communities. In the winter, participants without access to electricity slept longer (56 ± 17 min) than those with access to electricity, and this was also related to earlier bedtimes and sleep onsets than participants in the community with electricity. In both communities, daily sleep duration was longer during the winter than during the summer. Our field study supports the notion that access to inexpensive sources of artificial light and the ability to create artificially lit environments must have been key factors in reducing sleep in industrialized human societies. © 2015 The Author(s).

  15. Comprehensive borehole management for shorter drilling time; Umfassendes Bohrfortschrittsmanagement zur Verkuerzung der Bohrprojektdauer

    Energy Technology Data Exchange (ETDEWEB)

    Roehrlich, M. [ExxonMobil Production Deutschland GmbH, Hannover (Germany)

    2007-09-13

In 2006, the trademarked ExxonMobil Fast Drill Process (FDP) was also introduced in the German ExxonMobil boreholes. The aim of the process is to maximize the drilling speed for every meter drilled. The process makes it possible to manage boreholes on the basis of quantitative data, in consideration of all phases that are relevant for sinking a borehole. The FDP is used worldwide in all ExxonMobil drilling departments: more than 1.35 million meters are drilled annually in many different boreholes with different geological conditions, drilling profiles and international sites. The results were similar in many cases, with a significant increase in ROP (rate of penetration) and drill bit life, and with less damage caused by vibrations. FDP was developed on the basis of real-time monitoring of the specific mechanical energy (MSE) required for drilling. MSE monitoring was found to be an effective tool for detecting inefficient functioning of the drill bit and the overall system. To make operation more efficient, the causes must be identified and measures taken accordingly, taking into account the potential risks involved in such measures. MSE monitoring is a tool, while FDP is a broad management process ensuring that MSE and many other data sources are used effectively for optimisation of the ROP. Consequent implementation of the process resulted in a significant increase of the ROP. The major elements required for achieving this goal are discussed. (orig.)

  16. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
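
    The two candidate models can be fit with ordinary least squares directly. A minimal sketch on synthetic calibration samples; in practice the guideline's MSPE criterion, log-transformations, and bias corrections govern model choice, none of which are shown here:

```python
import numpy as np

def fit_ssc_models(turbidity, streamflow, ssc):
    """Fit simple (turbidity-only) and multiple (turbidity + streamflow)
    linear regressions for suspended-sediment concentration."""
    X1 = np.column_stack([np.ones_like(turbidity), turbidity])
    X2 = np.column_stack([X1, streamflow])
    b1, *_ = np.linalg.lstsq(X1, ssc, rcond=None)   # simple linear model
    b2, *_ = np.linalg.lstsq(X2, ssc, rcond=None)   # multiple linear model
    return b1, b2

rng = np.random.default_rng(3)
turb = rng.uniform(5, 400, 60)     # turbidity (FNU), synthetic calibration data
q = rng.uniform(1, 50, 60)         # streamflow (m^3/s)
ssc = 0.9 * turb + 1.5 * q + rng.normal(0, 10, 60)   # synthetic SSC samples
b1, b2 = fit_ssc_models(turb, q, ssc)
print(b1, b2)
```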

  17. Person-related determinants of TV viewing and computer time in a cohort of young Dutch adults: Who sits the most?

    NARCIS (Netherlands)

    Uijtdewilligen, L.; Singh, A.S.; Chin A Paw, M.J.M.; Twisk, J.W.R.; van Mechelen, W.

    2015-01-01

    We aimed to assess the associations of person-related factors with leisure time television (TV) viewing and computer time among young adults. We analyzed self-reported TV viewing (h/week) and leisure computer time (h/week) from 475 Dutch young adults (47% male) who had participated in the Amsterdam

  18. Y2K issues for real time computer systems for fast breeder test reactor

    International Nuclear Information System (INIS)

    Swaminathan, P.

    1999-01-01

The presentation shows the classification of real-time systems related to operation, control and monitoring of the fast breeder test reactor. The software life cycle includes software requirement specification, software design description, coding, commissioning, operation and management. A software scheme in the supervisory computer of the fast breeder test reactor is described, drawing on twenty years of experience in the design, development, installation, commissioning, operation and maintenance of computer-based supervisory control systems for nuclear installations, with particular emphasis on solving the Y2K problem

  19. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are trapped in skew-processing, which is a known issue in Geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed Core Operation System (CoreOS) clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) scales out and scales in Docker containers to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.

  20. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

... multislice CT" or "multidetector CT," allow thinner slices to be obtained, resulting in more detail of the body, in a shorter period of time. Modern CT scanners are so fast that they can scan through large sections of the body in just a few seconds. Such speed is beneficial for ...

  1. Towards OpenVL: Improving Real-Time Performance of Computer Vision Applications

    Science.gov (United States)

    Shen, Changsong; Little, James J.; Fels, Sidney

Meeting constraints for real-time performance is a main issue for computer vision, especially for embedded computer vision systems. This chapter presents our progress on our open vision library (OpenVL), a novel software architecture to address efficiency through facilitating hardware acceleration, reusability, and scalability for computer vision systems. A logical image understanding pipeline is introduced to allow parallel processing. We also discuss progress on our middleware, the vision library utility toolkit (VLUT), which enables applications to operate transparently over a heterogeneous collection of hardware implementations. OpenVL works as a state machine, with an event-driven mechanism to provide users with application-level interaction. Various explicit or implicit synchronization and communication methods are supported among distributed processes in the logical pipelines. The intent of OpenVL is to allow users to quickly and easily recover useful information from multiple scenes, in a cross-platform, cross-language manner across various software environments and hardware platforms. To validate the critical underlying concepts of OpenVL, a human tracking system and a local positioning system are implemented and described. The novel architecture separates the specification of algorithmic details from the underlying implementation, allowing for different components to be implemented on an embedded system without recompiling code.

  2. Shorter Leukocyte Telomere Length in Relation to Presumed Nonalcoholic Fatty Liver Disease in Mexican-American Men in NHANES 1999–2002

    Directory of Open Access Journals (Sweden)

    Janet M. Wojcicki

    2017-01-01

Leukocyte telomere length is shorter in response to chronic disease processes associated with inflammation, such as diabetes mellitus and coronary artery disease. Data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2002 were used to explore the relationship between leukocyte telomere length and presumed NAFLD, as indicated by elevated serum alanine aminotransferase (ALT) levels, obesity, or abdominal obesity. Logistic regression models were used to evaluate the relationship between telomere length and presumed markers of NAFLD, adjusting for possible confounders. There was no relationship between elevated ALT levels, abdominal obesity, or obesity and telomere length in adjusted models in NHANES (OR 1.13, 95% CI 0.48-2.65; OR 1.17, 95% CI 0.52-2.62, respectively). Mexican-American men had shorter telomere length in relation to presumed NAFLD (OR 0.07, 95% CI 0.006-0.79) and using different indicators of NAFLD (OR 0.012, 95% CI 0.0006-0.24). Men of Mexican origin with presumed NAFLD had shorter telomere length than men in other population groups. Longitudinal studies are necessary to evaluate the role of telomere length as a potential predictor in assessing the pathogenesis of NAFLD in Mexicans.

  3. A Low-Cost Time-Hopping Impulse Radio System for High Data Rate Transmission

    Directory of Open Access Journals (Sweden)

    Jinyun Zhang

    2005-03-01

We present an efficient, low-cost implementation of time-hopping impulse radio that fulfills the spectral mask mandated by the FCC and is suitable for high-data-rate, short-range communications. Key features are (i) an all-baseband implementation that obviates the need for passband components, (ii) symbol-rate (not chip-rate) sampling, A/D conversion, and digital signal processing, (iii) fast acquisition due to novel search algorithms, and (iv) spectral shaping that can be adapted to accommodate different spectrum regulations and interference environments. Computer simulations show that this system can provide 110 Mbps at 7-10 m distance, as well as higher data rates at shorter distances under FCC emission limits. Due to the spreading concept of time-hopping impulse radio, the system can sustain multiple simultaneous users and can suppress narrowband interference effectively.

  4. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the 'LIST' mode, in which image data and timing signals are sequentially stored on magnetic disk or tape via a buffer memory. The second was the 'HISTOGRAM' mode, in which image data are stored in a core memory as digital images and the images are then transferred to magnetic disk or tape on the frame-timing signal. Firstly, the counting rates stored in the buffer memory were measured as a function of the display event rate of the scintillation camera for the two modes. For both modes, the stored counting rate (M) was expressed by the formula M = N(1 − Nτ), where N is the display event rate of the camera and τ is the resolving time, including the analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μs for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to external memory such as magnetic disk or tape was considered for the two modes. For the 'LIST' mode, the maximum stored counting rate from the camera is expressed in terms of the size of the buffer memory and the access time and data transfer rate of the external memory. For the 'HISTOGRAM' mode, the minimum frame time is determined by the same quantities. In our system, the maximum stored counting rate was about 17,000 counts/s with a buffer size of 2,000 words, and the minimum frame time was about 130 ms with a buffer size of 1,024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems for quantitative dynamic imaging becomes possible, and future improvements are suggested. (author)
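
    Given the paper's model M = N(1 − Nτ), the display event rate behind an observed stored rate follows from the quadratic τN² − N + M = 0. A small sketch using the roughly 10 μs resolving time reported above; the numerical example is illustrative:

```python
import math

def true_rate(m, tau):
    """Invert M = N (1 - N * tau) to recover the display event rate N.

    m: stored counting rate (counts/s); tau: resolving time (s).
    Takes the physical (smaller) root of tau*N^2 - N + M = 0.
    """
    disc = 1.0 - 4.0 * tau * m
    if disc < 0:
        raise ValueError("stored rate exceeds the model's maximum of 1/(4*tau)")
    return (1.0 - math.sqrt(disc)) / (2.0 * tau)

tau = 10e-6                              # ~10 microsecond resolving time
print(round(true_rate(16000.0, tau)))    # 20000: true rate behind 16 kcps stored
```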

  5. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    NARCIS (Netherlands)

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a

  6. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints.

    Science.gov (United States)

    Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi

    2014-08-01

This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices attached to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects placed their forearms on the desk accompanied by abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects placed only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiency (based on word counts), and fatigue levels (based on the visual analog scale, VAS). Oxygen consumption (VO2), the ratio of inspiration time to respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rate (VCO2/VE), and oxygen extraction fraction (VO2/VE) were significantly higher for the DP than for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiency was significantly higher for the DP than for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  7. An effective support system of emergency medical services with tablet computers.

    Science.gov (United States)

    Yamada, Kosuke C; Inoue, Satoshi; Sakamoto, Yuichiro

    2015-02-27

/30,709) in 2011. The system entry completion rate by the emergency personnel was 100.00% (93,110/93,110) and by the medical staff was 46.11% (14,159/30,709) to 47.57% (14,639/30,772) over a three-year period. Finally, the new system reduced operational costs by 40,000,000 yen (about US$400,000) a year. The transportation time by ambulance was shorter following the implementation of the tablet computer in the current support system of EMS in Saga Prefecture, Japan, and cloud computing reduced the cost of the EMS system.

  8. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by deterministic, time critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state of the art queueing models for studying the background processing capability of time critical computer systems is discussed and the results of a model validation study which support this application of queueing models are presented.
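
    The standard M/M/1 steady-state formulas behind such a model are compact. A sketch with made-up rates; the paper's deterministic periodic interrupts, which effectively steal capacity from the background stream, are only noted in a comment:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities for background job processing.

    lam: arrival rate, mu: service rate (jobs/s); requires lam < mu.
    """
    if lam >= mu:
        raise ValueError("queue is unstable (utilization >= 1)")
    rho = lam / mu                  # server utilization
    l = rho / (1 - rho)             # mean number in system
    w = 1.0 / (mu - lam)            # mean time in system
    wq = rho / (mu - lam)           # mean waiting time in queue
    return rho, l, w, wq

# background load only; periodic time-critical interrupts would further
# reduce the capacity effectively available to these jobs
print(mm1_metrics(lam=8.0, mu=10.0))   # (0.8, 4.0, 0.5, 0.4)
```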

  9. Prevalance of neck pain in computer users

    International Nuclear Information System (INIS)

    Sabeen, F.; Bashir, M.S.; Hussain, S.I.

    2013-01-01

Prolonged use of computers during daily work activities and recreation is often cited as a cause of neck pain. Neck pain in computer users is clearly connected to extended periods of sitting in a fixed position with no breaks to stretch the neck muscles. Prolonged computer use with the neck bent forward causes the anterior neck muscles to gradually become shorter and tighter, while the muscles at the back of the neck grow longer and weaker; these changes lead to the development of neck pain. Objectives: To find the incidence of neck pain in computer users, the association between neck pain and prolonged sitting in poor posture, the association between breaks during prolonged work and neck pain, and the association between the type of chair used during prolonged sitting and the occurrence of neck pain. Methodology: For this observational study, data were collected through questionnaires from office workers (computer users) and students. Results: Out of 50 persons, 72% of computer users had neck pain. A strong association was found between neck pain and prolonged computer use (p = 0.001). Those who took breaks during their work had less neck pain. No significant association was found between the type of chair in use and neck pain; neck pain and the type of system in use also had no significant association. Conclusion: The duration of computer use and the frequency of breaks are associated with neck pain at work. Severe neck pain was found in people who use a computer for more than 5 hours a day. (author)

  10. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  11. Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium

    Science.gov (United States)

    Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter

    2015-01-01

    Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634

  12. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    Science.gov (United States)

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensionally printed custom trays and conventional custom trays, and to prove the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized self-control clinical trial. Two custom trays were fabricated for each participant: one was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken using both custom trays, and the impressions were used to fabricate complete dentures. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent fabricating the three-dimensionally printed custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent making the final impression with the three-dimensionally printed custom trays and the manually fabricated conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in the technician fabrication time and the clinical working time between the two methods. When manufacturing custom trays by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, saving impression and model material. As for complete denture restoration, manufacturing custom trays using the FSD system is worth being

  13. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

Confabulation based real-time anomaly detection for wide-area surveillance using heterogeneous high performance computing architecture, Syracuse (contract number FA8750-12-1-0251). …processors including graphic processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable

  14. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    International Nuclear Information System (INIS)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

    2007-01-01

We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls

  15. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations, and describe its use for calculating dose distributions in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method decreases computational time by up to a factor of 15 while keeping memory consumption low, and without any modification of the transportation engine. The speedup is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material; however, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, the method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory can increase computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gains. However, simulations with GEANT4 remain slow, and further work is needed to speed up the procedure while preserving the desired accuracy

  16. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization, and efficient GPU kernels were created to speed up the previously identified parts. Benchmarking tests were performed to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU; the speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  17. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    Science.gov (United States)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach the best performance, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (on average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerate the translation of rt-fMRI BCIs from research to clinical applications.
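
    The GLM-based route to automated ROI definition can be sketched as: fit the design matrix to every voxel, compute a contrast t-map, and keep the best-scoring voxels as the ROI. The sketch below makes assumptions about shapes and omits the spatial-compactness and similarity criteria the paper adds; all names are illustrative.

```python
import numpy as np

def glm_roi(Y, X, contrast, top_k=100):
    """Define an ROI as the top_k voxels by GLM contrast t-value.
    Y: (n_timepoints, n_voxels) fMRI data; X: (n_timepoints, n_regressors)
    design matrix; contrast: (n_regressors,) vector."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (n_regressors, n_voxels)
    resid = Y - X @ beta
    dof = Y.shape[0] - np.linalg.matrix_rank(X)
    mse = (resid ** 2).sum(axis=0) / dof
    c = np.asarray(contrast, dtype=float)
    var_c = c @ np.linalg.pinv(X.T @ X) @ c        # contrast variance factor
    t = (c @ beta) / np.sqrt(mse * var_c)
    return np.argsort(t)[-top_k:]                  # voxel indices of the ROI
```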

  18. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    Science.gov (United States)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  19. ALMOD-JRC computer program

    International Nuclear Information System (INIS)

    Cacciabue, P.C.; Lisanti, B.; Tozzi, A.

    1984-01-01

    This paper discusses the details of the newly developed or modified models of the computer program ALMOD-JRC, originating from ALMOD 3/Rel 4. The most important argument for the implementation of the new models was the need to enlarge the spectrum of the simulated phenomena and to improve the simulation of experimental facilities such as LOFT or LOBI. This has led to a better formulation of the heat transfer and pressure drop correlations and to the implementation of the treatment of heat losses to structural materials. In particular, a series of test cases on real power plants, a pre-test examination of a LOBI station blackout ATWS experiment, and the post-test analysis of the L9-3 experiment show the ability of ALMOD-JRC to correctly simulate PWR incident sequences. Although the code capabilities have been expanded in ALMOD-JRC, the limitations of the original version of the program still hold concerning the treatment of the coolant thermohydraulics as homogeneous flow for two-phase conditions in the primary coolant circuit. The other interesting feature of the new code is the remarkably shorter running times obtained with the introduction of simplified numerical treatments for the solving equations, without significant loss of accuracy in the results.

  20. An Implementation of Parallel and Networked Computing Schemes for the Real-Time Image Reconstruction Based on Electrical Tomography

    International Nuclear Information System (INIS)

    Park, Sook Hee

    2001-02-01

    This thesis implements and analyzes parallel and networked computing libraries based on multiprocessor computer architectures as well as networked computers, aiming at improving the computation speed of an ET (Electrical Tomography) system, which requires enormous CPU time to reconstruct the unknown internal state of the target object. As an instance of typical tomography technology, ET partitions the cross-section of the target object into tiny elements and calculates their resistivity from signal values measured at the boundary electrodes surrounding the surface of the object after injecting a predetermined current pattern through the object. The number of elements is determined considering the trade-off between the accuracy of the reconstructed image and the computation time. As the elements become finer, their number increases and the system can obtain a better image. However, the reconstruction time increases polynomially with the number of partitioned elements, since the procedure consists of a number of time-consuming matrix operations such as multiplication, inversion, pseudo-inversion, the Jacobian, and so on. Consequently, the demand for improving computation speed via multiple processors grows. Moreover, currently released PCs can be equipped with up to 4 CPUs interconnected to shared memory, while some operating systems enable an application process to benefit from such computers by allocating threaded jobs to each CPU, resulting in concurrent processing. In addition, a networked or cluster computing environment is commonly available to almost every computer which supports a communication protocol and is connected to a local or global network. After partitioning the given job (numerical operation), each CPU or computer calculates its partial result independently, and the results are merged via common memory to produce the final result. It is desirable to adopt a commonly used library such as Matlab to
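
    The partitioning scheme described above, i.e. splitting a large matrix operation into independent blocks, computing them on separate processors, and merging through shared memory, can be sketched as follows. This is an illustrative Python analog under assumed shapes, not the thesis's library; NumPy releases the interpreter lock inside the block products, so threads give real concurrency here.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_matmul(A, B, n_workers=4):
    """Row-block parallel matrix multiply: each worker computes a block of
    output rows, and the blocks are merged in the shared `out` array."""
    blocks = np.array_split(np.arange(A.shape[0]), n_workers)
    out = np.empty((A.shape[0], B.shape[1]))
    def work(rows):
        out[rows] = A[rows] @ B
    with ThreadPoolExecutor(n_workers) as pool:
        list(pool.map(work, blocks))
    return out
```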

  1. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
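
    The allocation step, mapping a task graph onto processing elements by critical-path priority, can be sketched with a simple list scheduler. All names and tie-breaking rules below are illustrative assumptions; the paper's optimal allocation algorithm is more involved.

```python
def critical_path_schedule(tasks, deps, cost, n_pe):
    """List scheduling with critical-path (bottom-level) priorities.
    tasks: task ids; deps: task -> list of predecessors; cost: task ->
    positive execution time; n_pe: number of processing elements."""
    succ = {t: [] for t in tasks}
    for t in tasks:
        for p in deps.get(t, []):
            succ[p].append(t)
    level = {}
    def bottom_level(t):          # longest cost path from t to an exit task
        if t not in level:
            level[t] = cost[t] + max((bottom_level(s) for s in succ[t]), default=0)
        return level[t]
    ready = {pe: 0.0 for pe in range(n_pe)}
    finish, alloc = {}, {}
    # descending bottom level is a valid topological order for positive costs
    for t in sorted(tasks, key=bottom_level, reverse=True):
        pe = min(ready, key=ready.get)        # earliest-available PE
        start = max([ready[pe]] + [finish[p] for p in deps.get(t, [])])
        finish[t] = start + cost[t]
        ready[pe] = finish[t]
        alloc[t] = pe
    return alloc, finish
```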

  2. Computing Camps for Girls : A First-Time Experience at the University of Limerick

    NARCIS (Netherlands)

    McInerney, Clare; Lamprecht, A.L.; Margaria, Tiziana

    2018-01-01

    Increasing the number of females in ICT-related university courses has been a major concern for several years. In 2015, we offered a girls-only computing summer camp for the first time, as a new component in our education and outreach activities to foster students’ interest in our discipline. In

  3. Investigating the influence of eating habits, body weight and television programme preferences on television viewing time and domestic computer usage.

    Science.gov (United States)

    Raptou, Elena; Papastefanou, Georgios; Mattas, Konstadinos

    2017-01-01

    The present study explored the influence of eating habits, body weight and television programme preference on television viewing time and domestic computer usage, after adjusting for sociodemographic characteristics and home media environment indicators. In addition, potential substitution or complementarity in screen time was investigated. Individual level data were collected via questionnaires that were administered to a random sample of 2,946 Germans. The econometric analysis employed a seemingly unrelated bivariate ordered probit model to conjointly estimate television viewing time and time engaged in domestic computer usage. Television viewing and domestic computer usage represent two independent behaviours in both genders and across all age groups. Dietary habits have a significant impact on television watching with less healthy food choices associated with increasing television viewing time. Body weight is found to be positively correlated with television screen time in both men and women, and overweight individuals have a higher propensity for heavy television viewing. Similar results were obtained for age groups where an increasing body mass index (BMI) in adults over 24 years old is more likely to be positively associated with a higher duration of television watching. With respect to dietary habits of domestic computer users, participants aged over 24 years of both genders seem to adopt more healthy dietary patterns. A downward trend in the BMI of domestic computer users was observed in women and adults aged 25-60 years. On the contrary, young domestic computer users 18-24 years old have a higher body weight than non-users. Television programme preferences also affect television screen time with clear differences to be observed between genders and across different age groups. In order to reduce total screen time, health interventions should target different types of screen viewing audiences separately.

  4. A Computational Model for Real-Time Calculation of Electric Field due to Transcranial Magnetic Stimulation in Clinics

    Directory of Open Access Journals (Sweden)

    Alessandra Paffi

    2015-01-01

    The aim of this paper is to propose an approach for an accurate and fast (real-time) computation of the electric field induced inside the whole brain volume during a transcranial magnetic stimulation (TMS) procedure. The numerical solution implements the admittance method for a discretized realistic brain model derived from Magnetic Resonance Imaging (MRI). Results are in good agreement with those obtained using commercial codes and require much less computational time. An integration of the developed code with neuronavigation tools will permit real-time evaluation of the stimulated brain regions during TMS delivery, thus improving the efficacy of clinical applications.
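
    The admittance method discretizes the volume into a network of nodal admittances and solves Kirchhoff's current law for the node potentials. A deliberately tiny 2D version is sketched below; the paper works on a 3D MRI-derived head model driven by the TMS coil, and the harmonic-mean link rule and shapes here are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_admittance_2d(sigma, i_src):
    """Pixels are nodes, neighbours are linked by an admittance built from
    the local conductivities, and Kirchhoff's current law yields a sparse
    linear system for the potentials. i_src must sum to zero.
    sigma: (ny, nx) conductivity map; i_src: (ny, nx) injected currents."""
    ny, nx = sigma.shape
    G = lil_matrix((ny * nx, ny * nx))
    for y in range(ny):
        for x in range(nx):
            for dy, dx in ((0, 1), (1, 0)):
                y2, x2 = y + dy, x + dx
                if y2 < ny and x2 < nx:
                    g = 2.0 * sigma[y, x] * sigma[y2, x2] / (sigma[y, x] + sigma[y2, x2])
                    a, b = y * nx + x, y2 * nx + x2
                    G[a, a] += g; G[b, b] += g
                    G[a, b] -= g; G[b, a] -= g
    G[0, :] = 0.0
    G[0, 0] = 1.0                 # pin node 0 as the potential reference
    rhs = i_src.ravel().astype(float)
    rhs[0] = 0.0
    return spsolve(G.tocsr(), rhs).reshape(ny, nx)
```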

  5. A practical O(n log2 n) time algorithm for computing the triplet distance on binary trees

    DEFF Research Database (Denmark)

    Sand, Andreas; Pedersen, Christian Nørgaard Storm; Mailund, Thomas

    2013-01-01

    The triplet distance is a distance measure that compares two rooted trees on the same set of leaves by enumerating all subsets of three leaves and counting how often the induced topologies of the tree are equal or different. We present an algorithm that computes the triplet distance between two rooted binary trees in time O(n log^2 n). The algorithm is related to an algorithm for computing the quartet distance between two unrooted binary trees in time O(n log n). While the quartet distance algorithm has a very severe overhead in the asymptotic time complexity that makes it impractical compared…
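
    For intuition, and as a correctness check when implementing the fast algorithm, the triplet distance can be computed directly from its definition by enumerating all O(n^3) leaf triples. The child-to-parent dict encoding below is one convenient illustrative choice, not the paper's data structure.

```python
from itertools import combinations

def triplet_topology(tree, a, b, c):
    """Return the 'close' pair of {a, b, c} in a rooted tree, or None for an
    unresolved (star) topology. tree: dict mapping child -> parent."""
    def ancestors(x):
        out = []
        while x is not None:
            out.append(x)
            x = tree.get(x)
        return out
    def lca_depth(x, y):           # depth of the lowest common ancestor
        ax, ay = ancestors(x), set(ancestors(y))
        for i, node in enumerate(ax):
            if node in ay:
                return len(ax) - i
        return 0
    d = {p: lca_depth(*p) for p in combinations((a, b, c), 2)}
    deepest = [p for p, v in d.items() if v == max(d.values())]
    return tuple(sorted(deepest[0])) if len(deepest) == 1 else None

def triplet_distance(t1, t2, leaves):
    """Brute-force O(n^3) reference implementation of the triplet distance."""
    return sum(triplet_topology(t1, *trip) != triplet_topology(t2, *trip)
               for trip in combinations(leaves, 3))
```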

  6. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of the society as we are faced with streams of data coming from numerous sensors, data feeds, recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume is aimed to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  7. Is Alcohol Use Disorder Identification Test (AUDIT) or its shorter versions more useful to identify risky drinkers in a Chinese population? A diagnostic study.

    Directory of Open Access Journals (Sweden)

    Benjamin H K Yip

    To examine the diagnostic performance of shorter versions of the Alcohol Use Disorder Identification Test (AUDIT), including the Alcohol Consumption questions (AUDIT-C), in identifying risky drinkers in primary care settings, using conventional performance measures supplemented by decision curve analysis and a reclassification table. A cross-sectional study of adult males in general outpatient clinics in Hong Kong. The study included only patients who reported at least sometimes drinking alcoholic beverages. The timeline follow-back alcohol consumption assessment method was used as the reference standard. A Chinese-translated and validated 10-item AUDIT (Ch-AUDIT) was used as a screening tool for risky drinking. Of the participants, 21.7% were classified as risky drinkers. AUDIT-C had the best overall performance among the shorter versions of Ch-AUDIT. The AUC of AUDIT-C was comparable to Ch-AUDIT (0.898 vs 0.901, p-value = 0.959). Decision curve analysis revealed that when the threshold probability ranged from 15-30%, AUDIT-C had a higher net benefit than all other screens. AUDIT-C improved the reclassification of risky drinking when compared to Ch-AUDIT (net reclassification improvement = 0.167). The optimal cut-off of AUDIT-C was at ≥5. Given the rising levels of alcohol consumption in the Chinese regions, this Chinese-translated 3-item instrument provides convenient and time-efficient risky-drinking screening and may become an increasingly useful tool.
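
    Decision curve analysis, used above to compare AUDIT-C against the full Ch-AUDIT, ranks screens by their net benefit at a given threshold probability p_t. The standard Vickers-Elkin formula and a one-line implementation follow; the example counts are made-up illustrations, not the study's data.

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit of a screen at threshold probability p_t:
    NB = TP/n - (FP/n) * p_t / (1 - p_t)."""
    return tp / n - (fp / n) * p_t / (1.0 - p_t)

# illustrative numbers only: 100 true and 60 false positives in 500 screened
print(net_benefit(tp=100, fp=60, n=500, p_t=0.20))  # -> 0.17
```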

  8. Computer-Aided Software Engineering - An approach to real-time software development

    Science.gov (United States)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  9. Potential contribution of multiplanar reconstruction (MPR) to computer-aided detection of lung nodules on MDCT

    International Nuclear Information System (INIS)

    Matsumoto, Sumiaki; Ohno, Yoshiharu; Yamagata, Hitoshi; Nogami, Munenobu; Kono, Atsushi; Sugimura, Kazuro

    2012-01-01

    Purpose: To evaluate potential benefits of using multiplanar reconstruction (MPR) in computer-aided detection (CAD) of lung nodules on multidetector computed tomography (MDCT). Materials and methods: MDCT datasets of 60 patients with suspected lung nodules were retrospectively collected. Using “second-read” CAD, two radiologists (Readers 1 and 2) independently interpreted these datasets for the detection of non-calcified nodules (≥4 mm) with concomitant confidence rating. They did this task twice, first without MPR (using only axial images), and then 4 weeks later with MPR (using also coronal and sagittal MPR images), where the total reading time per dataset, including the time taken to assess the detection results of CAD software (CAD assessment time), was recorded. The total reading time and CAD assessment time without MPR and those with MPR were statistically compared for each reader. The radiologists’ performance for detecting nodules without MPR and the performance with MPR were compared using jackknife free-response receiver operating characteristic (JAFROC) analysis. Results: Compared to the CAD assessment time without MPR (mean, 69 s and 57 s for Readers 1 and 2), the CAD assessment time with MPR (mean, 46 s and 45 s for Readers 1 and 2) was significantly reduced (P < 0.001). For Reader 1, the total reading time was also significantly shorter in the case with MPR. There was no significant difference between the detection performances without MPR and with MPR. Conclusion: The use of MPR has the potential to improve the workflow in CAD of lung nodules on MDCT.

  10. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit for the brain of having such variability in short-term dynamics is. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
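
    To fix ideas, the sketch below implements one Euler step of a 1D continuous attractor network with short-term depression only (STF and SFA omitted); every parameter value is an illustrative assumption, not the paper's model.

```python
import numpy as np

def cann_step(u, p, stim, J, dt=0.1, tau=1.0, tau_d=30.0, beta=0.05, k=0.5):
    """One Euler step of a toy 1D CANN with short-term depression (STD).
    u: (n,) synaptic input; p: (n,) available synaptic resources."""
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + k * r.sum())           # divisive global inhibition
    du = (-u + J @ (p * r) + stim) / tau  # recurrent input scaled by resources
    dp = (1.0 - p) / tau_d - beta * p * r
    return u + dt * du, p + dt * dp

n = 128
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
J = 2.0 * np.exp((np.cos(x[:, None] - x[None, :]) - 1.0) / 0.3) / n
u, p = np.zeros(n), np.ones(n)
bump = 5.0 * np.exp((np.cos(x) - 1.0) / 0.3)   # transient cue at x = 0
for t in range(500):
    u, p = cann_step(u, p, bump if t < 50 else 0.0, J)
# with STD the activity bump destabilizes and can drift (a substrate of
# anticipative tracking); with p held at 1 the bump simply persists
```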

  11. Effect of correlated decay on fault-tolerant quantum computation

    Science.gov (United States)

    Lemberger, B.; Yavuz, D. D.

    2017-12-01

    We analyze noise in the circuit model of quantum computers when the qubits are coupled to a common bosonic bath and discuss the possible failure of scalability of quantum computation. Specifically, we investigate correlated (super-radiant) decay between the qubit energy levels from a two- or three-dimensional array of qubits without imposing any restrictions on the size of the sample. We first show that regardless of how the spacing between the qubits compares with the emission wavelength, correlated decay produces errors outside the applicability of the threshold theorem. This is because the sum of the norms of the two-body interaction Hamiltonians (which can be viewed as an upper bound on the single-qubit error) that decohere each qubit scales with the total number of qubits and is unbounded. We then discuss two related results: (1) We show that the actual error (instead of the upper bound) on each qubit scales with the number of qubits. As a result, in the limit of a large number of qubits in the computer, N → ∞, correlated decay causes each qubit in the computer to decohere on ever shorter time scales. (2) We find the complete eigenvalue spectrum of the exchange Hamiltonian that causes correlated decay in the same limit. We show that the spread of the eigenvalue distribution grows faster with N than the spectrum of the unperturbed system Hamiltonian. As a result, as N → ∞, quantum evolution becomes completely dominated by the noise due to correlated decay. These results argue that scalable quantum computing may not be possible in the circuit model in a two- or three-dimensional geometry when the qubits are coupled to a common bosonic bath.
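
    The growth of the exchange spectrum with qubit number can be illustrated at the single-excitation level: for all-to-all coupling J_ij = g, the coupling matrix has one collective eigenvalue g(N-1) and N-1 eigenvalues at -g, so the spectral spread grows linearly with N. The snippet below is a toy numerical check of that statement, not the paper's full open-system calculation.

```python
import numpy as np

def exchange_spectrum(n, g=1.0):
    """Eigenvalues of the all-to-all coupling matrix J_ij = g (i != j),
    a single-excitation toy model."""
    J = g * (np.ones((n, n)) - np.eye(n))
    return np.linalg.eigvalsh(J)

for n in (4, 16, 64):
    ev = exchange_spectrum(n)
    # one collective eigenvalue g*(N-1); the spread grows linearly with N
    print(n, round(ev.min(), 3), round(ev.max(), 3))
```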

  12. The comparison of bolus tracking and test bolus techniques for computed tomography thoracic angiography in healthy beagles

    Directory of Open Access Journals (Sweden)

    Nicolette Cassel

    2013-05-01

    Computed tomography thoracic angiography studies were performed on five adult beagles using the bolus tracking (BT) technique and the test bolus (TB) technique, performed at least two weeks apart. For the BT technique, 2 mL/kg of 300 mgI/mL iodinated contrast agent was injected intravenously. Scans were initiated when the contrast in the aorta reached 150 Hounsfield units (HU). For the TB technique, the dogs received a test dose of 15% of 2 mL/kg of 300 mgI/mL iodinated contrast agent, followed by a series of low-dose sequential scans. The full dose of the contrast agent was then administered and the scans were conducted at optimal times as identified from time-attenuation curves. Mean attenuation in HU was measured in the aorta (Ao) and right caudal pulmonary artery (rCPA). Additional observations included the study duration, milliampere (mA) setting, computed tomography dose index volume (CTDI[vol]), and dose length product (DLP). The attenuation in the Ao (BT = 660.52 HU ± 138.49 HU, TB = 469.82 HU ± 199.52 HU, p = 0.13) and in the rCPA (BT = 606.34 HU ± 143.37 HU, TB = 413.72 HU ± 174.99 HU, p = 0.28) did not differ significantly between the two techniques. The BT technique was conducted in a significantly shorter time period than the TB technique (p = 0.03). The mean mA for the BT technique was significantly lower than for the TB technique (p = 0.03), as was the mean CTDI(vol) (p = 0.001). The mean DLP did not differ significantly between the two techniques (p = 0.17). No preference was given to either technique when evaluating the Ao or rCPA, but the BT technique was shown to be shorter in duration and resulted in a lower DLP than the TB technique.

  13. Cost-effectiveness of longer-term versus shorter-term provision of antibiotics in patients with persistent symptoms attributed to Lyme disease

    NARCIS (Netherlands)

    Berende, A.; Nieuwenhuis, L.; Hofstede, H.J.M. ter; Vos, F.J.; Vogelaar, M.L.; Tromp, M.A.; Middendorp, H. van; Donders, A.R.T.; Evers, A.W.M.; Kullberg, B.J.; Adang, E.M.M.

    2018-01-01

    BACKGROUND: The treatment of persistent symptoms attributed to Lyme disease remains controversial. Recently, the PLEASE study did not demonstrate any additional clinical benefit of longer-term versus shorter-term antibiotic treatment. However, the economic impact of the antibiotic strategies has not

  14. The use of diffusion theory to compute invasion effects for the pulsed neutron thermal decay time log

    International Nuclear Information System (INIS)

    Tittle, C.W.

    1992-01-01

    Diffusion theory has been successfully used to model the effect of fluid invasion into the formation for neutron porosity logs and for the gamma-gamma density log. The purpose of this paper is to present results of computations using a five-group time-dependent diffusion code on invasion effects for the pulsed neutron thermal decay time log. Previous invasion studies by the author used three-dimensional three-group steady-state diffusion theory to model the dual-detector thermal neutron porosity log and the gamma-gamma density log. The five-group time-dependent code MGNDE (Multi-Group Neutron Diffusion Equation) used in this work was written by Ferguson. It has been successfully used to compute the intrinsic formation lifetime correction for pulsed neutron thermal decay time logs. This application involves the effect of fluid invasion into the formation.
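
    In one group and one dimension, the time-dependent diffusion model behind such calculations reduces to (1/v) dphi/dt = D d2phi/dx2 - Sigma_a phi; after the source pulse, the total flux decays exponentially, and the fitted decay constant (approximately v(Sigma_a + D B^2), with B the geometric buckling) is the quantity the log interprets. A minimal explicit-difference sketch follows; MGNDE is five-group and far more complete, and every constant below is illustrative.

```python
import numpy as np

def decay_curve(d=1.0, sigma_a=0.05, v=2.2e5, nx=100, dx=1.0, nt=400, dt=1e-6):
    """Explicit 1D one-group solve of (1/v) dphi/dt = D d2phi/dx2 - Sig_a phi
    with vacuum boundaries (cm, s units; stable since v*dt*D/dx^2 < 0.5)."""
    phi = np.ones(nx)
    phi[0] = phi[-1] = 0.0
    totals = []
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
        phi += v * dt * (d * lap - sigma_a * phi)
        phi[0] = phi[-1] = 0.0
        totals.append(phi.sum())
    return np.asarray(totals)

# the late-time slope of log(total flux) gives the thermal decay constant
curve = decay_curve()
```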

  15. Computer-generated versus nurse-determined strategy for incubator humidity and time to regain birthweight

    NARCIS (Netherlands)

    Helder, Onno K.; Mulder, Paul G. H.; van Goudoever, Johannes B.

    2008-01-01

    To compare effects on premature infants' weight gain of a computer-generated and a nurse-determined incubator humidity strategy. An optimal humidity protocol is thought to reduce time to regain birthweight. Prospective randomized controlled design. Level IIIC neonatal intensive care unit in the

  16. Math modeling and computer mechanization for real time simulation of rotary-wing aircraft

    Science.gov (United States)

    Howe, R. M.

    1979-01-01

    Mathematical modeling and computer mechanization for real-time simulation of rotary-wing aircraft are discussed. Error analysis in the digital simulation of dynamic systems, such as rotary-wing aircraft, is described. The method for digital simulation of nonlinearities with discontinuities, such as those that exist in typical flight control systems and rotor blade hinges, is discussed.

  17. Prevalence and correlates of problematic internet experiences and computer-using time: a two-year longitudinal study in korean school children.

    Science.gov (United States)

    Yang, Su-Jin; Stewart, Robert; Lee, Ju-Yeon; Kim, Jae-Min; Kim, Sung-Wan; Shin, Il-Seon; Yoon, Jin-Sang

    2014-01-01

    To measure the prevalence of and factors associated with online inappropriate sexual exposure, cyber-bullying victimisation, and computer-using time in early adolescence. A two-year, prospective school survey was performed with 1,173 children aged 13 at baseline. Data collected included demographic factors, bullying experience, depression, anxiety, coping strategies, self-esteem, psychopathology, attention-deficit hyperactivity disorder symptoms, and school performance. These factors were investigated in relation to problematic Internet experiences and computer-using time at age 15. The prevalence of online inappropriate sexual exposure, cyber-bullying victimisation, academic-purpose computer overuse, and game-purpose computer overuse was 31.6%, 19.2%, 8.5%, and 21.8%, respectively, at age 15. Having older siblings, more weekly pocket money, depressive symptoms, anxiety symptoms, and passive coping strategy were associated with reported online sexual harassment. Male gender, depressive symptoms, and anxiety symptoms were associated with reported cyber-bullying victimisation. Female gender was associated with academic-purpose computer overuse, while male gender, lower academic level, increased height, and having older siblings were associated with game-purpose computer-overuse. Different environmental and psychological factors predicted different aspects of problematic Internet experiences and computer-using time. This knowledge is important for framing public health interventions to educate adolescents about, and prevent, internet-derived problems.

  18. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.
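
    The essence of the DOALL transformation described above is that iterations with no cross-iteration dependences can be distributed across cores unchanged. The paper performs this on Java bytecode inside a JIT front-end; the Python analog below only illustrates the pattern, and the kernel is a made-up stand-in for hotspot code.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def hotspot_kernel(i):
    # stand-in for one independent (DOALL) iteration of hotspot code
    return math.sqrt(i) * math.sin(i)

def run_doall(n, workers=8):
    """Farm out loop iterations unchanged -- valid because a DOALL loop
    carries no dependence between iterations."""
    with ProcessPoolExecutor(workers) as pool:
        return list(pool.map(hotspot_kernel, range(n),
                             chunksize=n // workers or 1))

if __name__ == "__main__":   # guard required when using process pools
    out = run_doall(100_000)
```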

  19. Just-in-time compilation-inspired methodology for parallelization of compute intensive java code

    International Nuclear Information System (INIS)

    Mustafa, G.; Ghani, M.U.

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system. (author)

  20. Interstitial lung abnormalities in treatment-naïve advanced non-small-cell lung cancer patients are associated with shorter survival

    Energy Technology Data Exchange (ETDEWEB)

    Nishino, Mizuki, E-mail: Mizuki_Nishino@DFCI.HARVARD.EDU [Department of Radiology, Brigham and Women's Hospital, 75 Francis St., Boston, MA 02115 (United States); Department of Imaging, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215 (United States); Cardarella, Stephanie [Department of Medical Oncology, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215 (United States); Dahlberg, Suzanne E. [Department of Biostatistics and Computational Biology, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215 (United States); Araki, Tetsuro [Department of Radiology, Brigham and Women's Hospital, 75 Francis St., Boston, MA 02115 (United States); Lydon, Christine; Jackman, David M.; Rabin, Michael S. [Department of Medical Oncology, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215 (United States); Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, 75 Francis St., Boston, MA 02115 (United States); Johnson, Bruce E. [Department of Medical Oncology, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215 (United States)

    2015-05-15

    Highlights: • Interstitial lung abnormalities were present in 14% of stage IV NSCLC patients. • ILA was more common in older patients with heavier smoking history. • ILA was associated with shorter survival after adjusting for smoking and therapy. • ILA could be an additional independent marker for survival in advanced NSCLC. - Abstract: Objective: Interstitial lung diseases are associated with increased risk of lung cancer. The prevalence of ILA at diagnosis of advanced non-small-cell lung cancer (NSCLC) and its impact on overall survival (OS) remain to be investigated. Materials and methods: The study included 120 treatment-naïve stage IV NSCLC patients (53 males, 67 females). ILA was scored on CT prior to any systemic therapy using a 4-point scale [0 = no evidence of ILA, 1 = equivocal for ILA, 2 = suspicious for ILA, 3 = ILA] by a sequential reading method previously reported. ILA scores of 2 or 3 indicated the presence of ILA. Results: ILA was present in 17 patients (14%) with advanced NSCLC prior to any treatment (score 3: n = 2, score 2: n = 15). These 17 patients were significantly older (median age: 69 vs. 63, p = 0.04) and had a heavier smoking history (median: 40 vs. 15.5 pack-years, p = 0.003) than those with ILA score 0 or 1. Higher ILA scores were associated with shorter OS (p = 0.001). Median OS of the 17 patients with ILA was 7.2 months [95%CI: 2.9–9.4] compared to 14.8 months [95%CI: 11.1–18.4] in patients with ILA score 0 or 1 (p = 0.002). In a multivariate model, the presence of ILA remained significantly associated with an increased risk of death (HR = 2.09, p = 0.028) after adjusting for first-line systemic therapy (chemotherapy, p < 0.001; TKI, p < 0.001; each compared to no therapy) and pack-years of smoking (p = 0.40). Conclusion: Radiographic ILA was present in 14% of treatment-naïve advanced NSCLC patients. Higher ILA scores were associated with shorter OS, indicating that ILA could be a marker of shorter survival in advanced NSCLC.

  1. Search times and probability of detection in time-limited search

    Science.gov (United States)

    Wilson, David; Devitt, Nicole; Maurer, Tana

    2005-05-01

    When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. The relationship between detection probability and search time is developed as a differential equation. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search, and a simple mathematical relationship between these two mean times is derived.
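
    The relationship can be made concrete under the standard assumption that detection times in unlimited search are exponentially distributed with mean τ; this is an illustrative derivation, not necessarily the report's exact formulation:

```latex
P(t) = 1 - e^{-t/\tau}, \qquad
\mathbb{E}\!\left[t \mid t \le T\right]
  = \tau - \frac{T\, e^{-T/\tau}}{1 - e^{-T/\tau}} .
```

    The conditional mean, i.e. the time-limited mean over detections that occur before the cutoff T, is always below τ, behaves as T/2 for small T, and recovers τ as T → ∞.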

  2. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently use the progress of very large-scale integrated semiconductor technology with respect to increasing the reliability and performance and to decreasing the expenses especially of the external periphery. This and the increasing demands on process control systems has led the authors to generally examine the structure of such systems and to adapt it to the new surroundings. Computer systems with distributed, optical fibre-coupled microprocessors allow a very favourable problem-solving with decentralized controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A fit programming system supports these hardware properties: PEARL for multicomputer systems, dynamic loader, processor and network operating system. The necessary design principles for this are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by results of 2 PDV projects (modular operating systems, input/output colour screen system as control panel), for the purpose of testing by apllying the system for the control of 28 pit furnaces of a steel work. (orig.) [de

  3. Project Energise: Using participatory approaches and real time computer prompts to reduce occupational sitting and increase work time physical activity in office workers.

    Science.gov (United States)

    Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J

    2016-11-01

    This efficacy study assessed the added impact of real-time computer prompts on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age = 47 [11] years; BMI = 28 [5] kg/m2; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n = 24) used a chair sensor/software package (Sitting Pad) that gave real-time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time = 371 [71] min/day; longest bout spent desk sitting = 104 [43] min/day). Intervention effects were four times greater in workers who used real-time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p < 0.05). Real-time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity.

  4. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single-object lens. This map cannot be calculated analytically, and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object could be treated as a point source, in other words, when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer for which minimal error could be made for the approximation. The determined buffer was 1.5 + 5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and accuracies except the map with accuracy and separation equal to 10^-3 for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
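
    In sketch form, the approximation replaces the expensive contour integration with the single-lens (Paczyński) magnification whenever the source lies outside the buffer found above. How the buffer distance is measured here (from the lens centre of mass, in Einstein radii) is an assumption for illustration, as is the dispatch structure.

```python
import math

def point_source_magnification(u):
    """Single-lens magnification A(u); a good stand-in far from the caustics."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def magnification(y1, y2, q, full_solver):
    """Use the cheap point-source formula outside the 1.5 + 5q buffer,
    otherwise fall back to the full contour-integration solver (e.g., a
    wrapped VBBinaryLensing call)."""
    u = math.hypot(y1, y2)
    if u > 1.5 + 5.0 * q:
        return point_source_magnification(u)
    return full_solver(y1, y2)
```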

  5. Infinite possibilities: Computational structures technology

    Science.gov (United States)

    Beam, Sherilee F.

    1994-12-01

    Computational Fluid Dynamics (or CFD) methods are very familiar to the research community. Even the general public has had some exposure to CFD images, primarily through the news media. However, very little attention has been paid to CST--Computational Structures Technology. Yet, no important design can be completed without it. During the first half of this century, researchers only dreamed of designing and building structures on a computer. Today their dreams have become practical realities as computational methods are used in all phases of design, fabrication and testing of engineering systems. Increasingly complex structures can now be built in even shorter periods of time. Over the past four decades, computer technology has been developing, and early finite element methods have grown from small in-house programs to numerous commercial software programs. When coupled with advanced computing systems, they help engineers make dramatic leaps in designing and testing concepts. The goals of CST include: predicting how a structure will behave under actual operating conditions; designing and complementing other experiments conducted on a structure; investigating microstructural damage or chaotic, unpredictable behavior; helping material developers in improving material systems; and being a useful tool in design systems optimization and sensitivity techniques. Applying CST to a structure problem requires five steps: (1) observe the specific problem; (2) develop a computational model for numerical simulation; (3) develop and assemble software and hardware for running the codes; (4) post-process and interpret the results; and (5) use the model to analyze and design the actual structure. Researchers in both industry and academia continue to make significant contributions to advance this technology with improvements in software, collaborative computing environments and supercomputing systems. As these environments and systems evolve, computational structures technology will

  6. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  7. Green computing: power optimisation of VFI-based real-time multiprocessor dataflow applications (extended version)

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor
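
    Dynamic power in CMOS scales roughly as C*V^2*f, so DVFS stretches execution time while cutting energy per cycle, and DPM removes idle power by switching to a low-power state. The toy model below is an illustrative assumption, not the paper's analysis, but it makes the trade visible.

```python
def dvfs_point(cycles, f_hz, v, p_static_w=0.1, c_eff=1e-9):
    """Toy DVFS model: returns (runtime_s, energy_J) at one operating point.
    All constants are illustrative."""
    runtime = cycles / f_hz
    p_dynamic = c_eff * v * v * f_hz      # ~ C * V^2 * f
    return runtime, (p_dynamic + p_static_w) * runtime

# halving frequency (with a lower voltage) doubles runtime but saves energy:
print(dvfs_point(2e9, 2.0e9, 1.2))   # ~ (1.0 s, 3.0 J)
print(dvfs_point(2e9, 1.0e9, 0.9))   # ~ (2.0 s, 1.8 J)
```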

  8. Systemic lupus erythematosus and vitamin D deficiency are associated with shorter telomere length among African Americans: a case-control study.

    Directory of Open Access Journals (Sweden)

    Brett M Hoffecker

    Systemic lupus erythematosus (SLE) is a chronic systemic autoimmune disease that disproportionately affects African American females. The causes of SLE are unknown but postulated to be a combination of genetic predisposition and environmental triggers. Vitamin D deficiency is one of the possible environmental triggers. In this study we evaluated relationships between vitamin D status, cellular aging (telomere length), and anti-telomere antibodies among African American Gullah women with SLE. The study population included African American female SLE patients and unaffected controls from the Sea Island region of South Carolina. Serum 25-hydroxyvitamin D levels were measured using a nonchromatographic radioimmunoassay. Telomere length was measured in genomic DNA of peripheral blood mononuclear cells (PBMCs) by monochrome multiplex quantitative PCR. Anti-telomere antibody levels were measured by enzyme-linked immunosorbent assay (ELISA). Patients with SLE had significantly shorter telomeres and higher anti-telomere antibody titers compared to age- and gender-matched unaffected controls. There was a positive correlation between anti-telomere antibody levels and disease activity among patients and a significant correlation of shorter telomeres with lower 25-hydroxyvitamin D levels in both patients and controls. In follow-up examination of a subset of the patients, the patients who remained vitamin D deficient tended to have shorter telomeres than those patients whose 25-hydroxyvitamin D levels were repleted. Increasing 25-hydroxyvitamin D levels in African American patients with SLE may be beneficial in maintaining telomere length and preventing cellular aging. Moreover, anti-telomere antibody levels may be a promising biomarker of SLE status and disease activity.

  9. A State-of-the-Art Review of the Real-Time Computer-Aided Study of the Writing Process

    Science.gov (United States)

    Abdel Latif, Muhammad M.

    2008-01-01

    Writing researchers have developed various methods for investigating the writing process since the 1970s. The early 1980s saw the occurrence of the real-time computer-aided study of the writing process that relies on the protocols generated by recording the computer screen activities as writers compose using the word processor. This article…

  10. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
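
    The first two stages of that signal chain are compact enough to sketch. The shapes, the AR order, and the Yule-Walker route to the autoregressive coefficients below are illustrative choices, not the study's code; the point is that both stages are dense, regular arithmetic that maps well onto GPU kernels.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def bci_features(x, w, order=16):
    """Spatial filter (one matrix-matrix multiply), then per-channel
    autoregressive coefficients via the Yule-Walker equations.
    x: (n_samples, n_channels) raw signals; w: (n_channels, n_virtual)."""
    y = x @ w                                            # spatial filtering
    feats = []
    for ch in y.T:
        ac = np.correlate(ch, ch, mode='full')[len(ch) - 1:]  # lags 0..L-1
        a = solve_toeplitz(ac[:order], ac[1:order + 1])       # Yule-Walker
        feats.append(a)
    return np.asarray(feats)
```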

  11. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  12. Time-Domain Terahertz Computed Axial Tomography NDE System

    Science.gov (United States)

    Zimdars, David

    2012-01-01

    NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (~0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D

  13. Factors influencing the latency of simple reaction time.

    Science.gov (United States)

    Woods, David L; Wyma, John M; Yund, E William; Herron, Timothy J; Reed, Bruce

    2015-01-01

    Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly in different laboratories, in part due to timing delays introduced by the computer hardware and software used for SRT measurement. We developed a calibrated and temporally precise SRT test to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231, 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement initiation time, measured in a speeded finger tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies remained stable. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in the Victorian era, and that age-related increases in SRT latencies are due primarily to slowed motor output.

  14. Factors influencing the latency of simple reaction time

    Directory of Open Access Journals (Sweden)

    David L Woods

    2015-03-01

    Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly in different laboratories, in part due to timing delays introduced by computer hardware and software used for SRT measurement. We developed a calibrated and temporally-precise SRT paradigm to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231 ms; 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement-initiation time, measured in a speeded finger-tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies did not. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in those from the Victorian era and that age-related increases in SRT latencies are due primarily to slowed motor output.

  15. Factors influencing the latency of simple reaction time

    Science.gov (United States)

    Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce

    2015-01-01

    Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly in different laboratories, in part due to timing delays introduced by the computer hardware and software used for SRT measurement. We developed a calibrated and temporally precise SRT test to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231, 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement initiation time, measured in a speeded finger tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies remained stable. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in the Victorian era, and that age-related increases in SRT latencies are due primarily to slowed motor output. PMID:25859198

  16. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints

    Directory of Open Access Journals (Sweden)

    Shunji Sako

    2014-08-01

    Full Text Available Objectives: This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Material and Methods: Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale, VAS). Results: Oxygen consumption (VO2), the ratio of inspiration time to respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants were performing the task in the DP than those obtained in the PP. Tidal volume (VT), carbon dioxide output rates (VCO2/VE), and oxygen extraction fractions (VO2/VE) were significantly higher for the DP than they were for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than they were for the PP. Conclusions: Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer under time constraints.
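
    The respiratory measures compared above are simple ratios of measured quantities. A minimal sketch, using assumed raw values rather than study data, is:

```python
# Derived cardiorespiratory measures from this record: Ti/Ttotal, Te/Ti,
# VO2/VE, and VCO2/VE. All input values below are assumptions for illustration.

def respiratory_ratios(ti_s, te_s, vo2_ml_min, vco2_ml_min, ve_l_min):
    ttotal = ti_s + te_s
    return {
        "Ti/Ttotal": ti_s / ttotal,
        "Te/Ti": te_s / ti_s,
        "VO2/VE (mL/L)": vo2_ml_min / ve_l_min,    # oxygen extraction fraction
        "VCO2/VE (mL/L)": vco2_ml_min / ve_l_min,  # carbon dioxide output rate
    }

print(respiratory_ratios(ti_s=1.4, te_s=2.1, vo2_ml_min=280.0,
                         vco2_ml_min=240.0, ve_l_min=9.5))
```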

  17. Wake-up times following sedation with sevoflurane versus propofol after cardiac surgery.

    Science.gov (United States)

    Hellström, Jan; Öwall, Anders; Sackey, Peter V

    2012-10-01

    Intravenous sedation in the intensive care unit (ICU) may contribute to altered consciousness and prolonged mechanical ventilation. We tested the hypothesis that replacing intravenous propofol with inhaled sevoflurane for sedation after cardiac surgery would lead to shorter wake-up times, quicker patient cooperation, and less delusional memories. Following coronary artery bypass surgery with cardiopulmonary bypass, 100 patients were randomized to sedation with sevoflurane via the anesthetic conserving device or propofol. Study drugs were administered for a minimum of 2 hours until criteria for extubation were met. Primary endpoints were time from drug stop to extubation and to adequate verbal response. Secondary endpoints were adverse recovery events, memories reported in the ICU Memory Tool test, and ICU/hospital stay. Median time from drug stop to extubation (interquartile range/total range) was significantly shorter after sevoflurane compared to propofol sedation: 10 (10/100) minutes versus 25 (21/240) minutes. Sevoflurane sedation after cardiac surgery thus leads to shorter wake-up times and quicker cooperation compared to propofol. No differences were seen in ICU stay, adverse memories, or recovery events in our short-term sedation.

  18. Evolution of perturbed dynamical systems: analytical computation with time independent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Gurzadyan, A.V. [Russian-Armenian (Slavonic) University, Department of Mathematics and Mathematical Modelling, Yerevan (Armenia); Kocharyan, A.A. [Monash University, School of Physics and Astronomy, Clayton (Australia)

    2016-12-15

    An analytical method for investigating the evolution of dynamical systems with time-independent accuracy is developed for perturbed Hamiltonian systems. Error-free estimation using computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens up principal opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics. (orig.)

  19. Smoking Topography among Korean Smokers: Intensive Smoking Behavior with Larger Puff Volume and Shorter Interpuff Interval.

    Science.gov (United States)

    Kim, Sungroul; Yu, Sol

    2018-05-18

    Differences in smokers' topography have been found to be a function of many factors, including sex, personality, nicotine yield, cigarette type (i.e., flavored versus non-flavored), and ethnicity. We evaluated the puffing behaviors of Korean smokers and their association with smoking-related biomarker levels. A sample of 300 participants was randomly recruited from metropolitan areas in South Korea. Topography measures during a 24-hour period were obtained using a CReSS pocket device. Korean male smokers smoked two puffs less per cigarette compared to female smokers (median (interquartile range): 15.0 (13.0–19.0) vs. 17.5 (15.0–21.0)), but had a significantly larger puff volume (62.7 (52.7–75.5) mL vs. 53.5 (42.0–64.2) mL; p = 0.012). The interpuff interval was similar between men and women (8.9 (6.5–11.2) s vs. 8.3 (6.2–11.0) s; p = 0.122) but much shorter than other study results. A dose-response association (p = 0.0011) was observed between daily total puff volumes and urinary cotinine concentrations, after controlling for sex, age, household income level and nicotine addiction level. An understanding of these differences in topography measures, particularly the larger puff volume and shorter interpuff interval of Korean smokers, may help to overcome a potential underestimation of internal doses of hazardous byproducts of smoking.

  20. "Internet of Things" Real-Time Free Flap Monitoring.

    Science.gov (United States)

    Kim, Sang Hun; Shin, Ho Seong; Lee, Sang Hwan

    2018-01-01

    Free flaps are a common treatment option for head and neck reconstruction in plastic reconstructive surgery, and monitoring of the free flap is the most important factor for flap survival. In this study, the authors performed real-time free flap monitoring based on an implanted Doppler system and the "internet of things" (IoT) over wireless Wi-Fi, a convenient, accurate, and efficient approach for surgeons to monitor a free flap. Implanted Doppler signals were checked continuously until the patient was discharged, by the surgeon and residents using their own cellular phones or personal computers. If the surgeon decided that a revision procedure or exploration was required, the consumed time (positive signal-to-operating room time) from the first notification questioning the flap's status to the decision for revision surgery was determined by chart review. To assess the efficacy of real-time monitoring, the authors paired the same number of free flaps performed by the same surgeon and monitored using conventional methods such as physical examination. The total survival rate was greater in the real-time monitoring group (94.7% versus 89.5%). The average time for the real-time monitoring group was shorter than that for the conventional group (65 minutes versus 86 minutes). Based on this study, real-time free flap monitoring using IoT technology allows the surgeon and the reconstruction team to monitor a flap simultaneously, at any time and in any situation.

  1. hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction.

    Science.gov (United States)

    Fulcher, Ben D; Jones, Nick S

    2017-11-22

    Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
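
    hctsa itself is distributed as a MATLAB framework. The Python sketch below only illustrates the underlying idea of highly comparative feature extraction: compute many features per series, then rank the features by how well they separate two labeled classes. The feature set and data here are illustrative, not the hctsa library.

```python
# Toy illustration of massive time-series feature extraction and feature
# ranking (the concept behind hctsa, not its MATLAB API).
import numpy as np

def features(x: np.ndarray) -> dict:
    diffs = np.diff(x)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "lag1_autocorr": np.corrcoef(x[:-1], x[1:])[0, 1],
        "mean_abs_change": np.abs(diffs).mean(),
        "prop_above_mean": (x > x.mean()).mean(),
    }

rng = np.random.default_rng(0)
class_a = [np.sin(np.linspace(0, 20, 500)) + 0.3 * rng.standard_normal(500)
           for _ in range(20)]                       # noisy oscillations
class_b = [rng.standard_normal(500) for _ in range(20)]  # pure noise

names = list(features(class_a[0]))
A = np.array([[features(x)[n] for n in names] for x in class_a])
B = np.array([[features(x)[n] for n in names] for x in class_b])

# Rank features by a simple separability score (absolute t-like statistic).
score = np.abs(A.mean(0) - B.mean(0)) / np.sqrt(A.var(0) / len(A) + B.var(0) / len(B))
for n, s in sorted(zip(names, score), key=lambda t: -t[1]):
    print(f"{n:18s} separability {s:6.1f}")
```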

  2. Computer use in school: its effect on posture and discomfort in schoolchildren.

    LENUS (Irish Health Repository)

    Kelly, Grace

    2009-01-01

    The aim of the study was to investigate the posture and musculoskeletal discomfort of secondary school students while working at computers in school. Students (n = 40) were observed while working at a computer during their designated computer class. The Rapid Upper Limb Assessment Tool (RULA) was used to assess posture. A Body Discomfort Chart (BDC) and Visual Analogue Scale (VAS) were used to record the area(s) and intensity of musculoskeletal discomfort, if any, experienced by the students at the beginning and end of the computer class. None of the students\\' posture was in the acceptable range (Action Level 1) according to RULA. The majority (65%) were in Action Level 2, 30% were in Action Level 3, and 5% were in Action Level 4. There was a statistically significant increase in reported discomfort from the beginning to the end of the computer class. Longer class length (80 minutes) did not result in greater reporting of discomfort than shorter class length (40 minutes).

  3. Could We Realize the Fully Flexible System by Real-Time Computing with Thin-Film Transistors?

    Directory of Open Access Journals (Sweden)

    Qin Li

    2017-11-01

    Full Text Available Flexible electronic devices, such as thin-film transistors, are widely adopted in sensors, displays, wearable equipment, and other large-area applications because they can bend and stretch; recently, in applications such as lower-resolution data converters, there is a trend toward implementing more parts of the system with flexible devices in order to realize a fully flexible system. Nevertheless, relatively few works on computation with flexible electronic devices have been reported, because their poor carrier mobility blocks the way to fully flexible systems with a uniform manufacturing process. In this paper, a novel circuit architecture for an image-processing accelerator using oxide thin-film transistors (TFTs), which can realize real-time image pre-processing and classification in the analog domain, is proposed, exploiting the performance and fault tolerance of image signal processing. All of the computation is done in the analog signal domain and no clock signal is needed; therefore, certain weaknesses of flexible electronic devices, such as low carrier mobility, can be remedied dramatically. Simulations based on an oxide TFT device model have demonstrated that the flexible computing parts can perform a 5 × 5 Gaussian convolution operation at a speed of 3.3 MOPS with an energy efficiency of 1.83 TOPS/J, and realize image classification at a speed of 10 k fps with an energy efficiency of 5.25 GOPS/J, demonstrating the potential for real-time computation of complex algorithms with flexible electronic devices, as well as for future fully flexible systems containing sensors, data converters, energy suppliers, and real-time signal-processing modules, all built from flexible devices.
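
    For reference, the 5 × 5 Gaussian convolution that the proposed analog accelerator performs can be written down digitally in a few lines. The sketch below is a plain NumPy implementation of the operation being accelerated, not the TFT design itself.

```python
# Digital reference for a 5x5 Gaussian convolution; kernel sigma is an
# assumed example value.
import numpy as np

def gaussian_kernel5(sigma: float = 1.0) -> np.ndarray:
    ax = np.arange(-2, 3)
    g = np.exp(-(ax**2) / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()  # normalize so the kernel sums to 1

def convolve5x5(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    h, w = img.shape
    padded = np.pad(img, 2, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(5):          # accumulate shifted, weighted copies
        for dx in range(5):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

img = np.random.default_rng(1).random((64, 64))
blurred = convolve5x5(img, gaussian_kernel5())
print(blurred.shape, blurred.mean())
```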

  4. Design and development of a diversified real time computer for future FBRs

    International Nuclear Information System (INIS)

    Sujith, K.R.; Bhattacharyya, Anindya; Behera, R.P.; Murali, N.

    2014-01-01

    The current safety related computer system of the Prototype Fast Breeder Reactor (PFBR) under construction in Kalpakkam consists of two redundant Versa Module Europa (VME) bus based Real Time Computer systems with a Switch Over Logic Circuit (SOLC). Since both VME systems are identical, the dual redundant system is prone to common cause failure (CCF). The probability of CCF can be reduced by adopting diversity. Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on 'independent' generation of 'different' implementations. This paper discusses the design and development of a diversified Real Time Computer which will replace one of the computer systems in the dual redundant architecture. Compact PCI (cPCI) bus systems are widely used in safety critical applications such as avionics, railways, and defence, and use diverse electrical signaling and logical specifications; cPCI was therefore chosen for development of the diversified system. Towards the initial development, a CPU card based on an ARM-9 processor, a 16-channel Relay Output (RO) card, and a 30-channel Analog Input (AI) card were developed. All the cards mentioned support hot-swap and geographic addressing capability. In order to mitigate the component obsolescence problem, the 32-bit PCI target controller and associated glue logic for the slave I/O cards were indigenously developed using VHDL. U-Boot was selected as the boot loader and ARM Linux 2.6 as the preliminary operating system for the CPU card. Board-specific initialization code for the CPU card was written in ARM assembly language and serial port initialization was written in C. The boot loader, along with the Linux 2.6 kernel and a jffs2 file system, was flashed into the CPU card. Test applications written in C were used to test the various peripherals of the CPU card. Device drivers for the AI and RO cards were developed as Linux kernel modules, and an application library was also developed.

  5. Reemission spectra and inelastic processes at interaction of attosecond and shorter duration electromagnetic pulses with atoms

    International Nuclear Information System (INIS)

    Makarov, D.N.; Matveev, V.I.

    2017-01-01

    Inelastic processes and the reemission of attosecond and shorter electromagnetic pulses by atoms have been considered within the analytical solution of the Schrödinger equation in the sudden perturbation approximation. A method of calculations with the exact inclusion of spatial inhomogeneity of the field of an ultrashort pulse and the momenta of photons in the reemission processes has been developed. The probabilities of inelastic processes and spectra of reemission of ultrashort electromagnetic pulses by one- and many-electron atoms have been calculated. The results have been presented in the form of analytical formulas.

  6. Control bandwidth improvements in GRAVITY fringe tracker by switching to a synchronous real time computer architecture

    Science.gov (United States)

    Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than

    2016-08-01

    The new VLTI (Very Large Telescope Interferometer) instrument GRAVITY is equipped with a fringe tracker able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve, under average seeing conditions, a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real-time computer system comprising: a sensor, where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller, receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator, which combines the OPD commands with the real-time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman parameter estimator. This last stage is used to monitor current measurements over a window of a few seconds and estimate new values for the main Kalman control loop parameters. The hardware and software implementation of this design runs asynchronously and connects the four computers for data transfer via the Reflective Memory Network. With the purpose of improving the performance of the GRAVITY fringe-tracking control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real-time computers involved in the tracking loop synchronously, using Reflective Memory Interrupts as the coordination signal. This synchronous mechanism reduced the total pure delay of the loop from 3.5 ms to 2.0 ms, which translates into better stabilization of the fringes, as the bandwidth of the system is substantially improved. This paper explains in detail the real-time architecture of the fringe tracker in both its asynchronous and synchronous implementations. The achieved improvements in delay reduction via this mechanism will be presented.

  7. Prospective and retrospective time perception are related to mental time travel: evidence from Alzheimer's disease.

    Science.gov (United States)

    El Haj, Mohamad; Moroni, Christine; Samson, Séverine; Fasotti, Luciano; Allain, Philippe

    2013-10-01

    Unlike prospective time perception paradigms, in which participants are aware that they have to estimate forthcoming time, little is known about retrospective time perception in normal aging and Alzheimer's disease (AD). Our paper addresses this shortcoming by comparing prospective and retrospective time estimation in younger adults, older adults, and AD patients. In four prospective tasks (lasting 30 s, 60 s, 90 s, or 120 s) participants were asked to read a series of numbers and to provide a verbal estimation of the reading time. In four other retrospective tasks, they were not informed about time judgment until they were asked to provide a verbal estimation of four elapsed time intervals (lasting 30 s, 60 s, 90 s, or 120 s). AD participants gave shorter verbal time estimations than older adults and younger participants did, suggesting that time is perceived to pass quickly in these patients. For all participants, the duration of the retrospective tasks was underestimated as compared to the prospective tasks, and both estimations were shorter than the real time interval. Prospective time estimation was further correlated with mental time travel, as measured with the Remember/Know paradigm. Mental time travel was even more strongly correlated with retrospective time estimation. Our findings shed light on the relationship between time perception and the ability to mentally project oneself into time, two skills contributing to human memory functioning. Finally, time perception deficits, as observed in AD patients, can be interpreted in terms of dramatic changes occurring in the frontal lobes and hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design

    Science.gov (United States)

    Miller, Owen Dennis

    Photonic innovation is becoming ever more important in the modern world. Optical systems are dominating shorter and shorter communications distances, LEDs are rapidly emerging for a variety of applications, and solar cells show potential to be a mainstream technology in the energy space. The need for novel, energy-efficient photonic and optoelectronic devices will only increase. This work unites fundamental physics and a novel computational inverse design approach towards such innovation. The first half of the dissertation is devoted to the physics of high-efficiency solar cells. As solar cells approach fundamental efficiency limits, their internal physics transforms. Photonic considerations, instead of electronic ones, are the key to reaching the highest voltages and efficiencies. Proper photon management led to Alta Devices' recent dramatic increase of the solar cell efficiency record to 28.3%. Moreover, approaching the Shockley-Queisser limit for any solar cell technology will require light extraction to become a part of all future designs. The second half of the dissertation introduces inverse design as a new computational paradigm in photonics. An assortment of techniques (FDTD, FEM, etc.) has enabled quick and accurate simulation of the "forward problem" of finding fields for a given geometry. However, scientists and engineers are typically more interested in the inverse problem: for a desired functionality, what geometry is needed? Answering this question breaks from the emphasis on the forward problem and forges a new path in computational photonics. The framework of shape calculus enables one to quickly find superior, non-intuitive designs. Novel designs for optical cloaking and sub-wavelength solar cell applications are presented.

  9. Computer/Mobile Device Screen Time of Children and Their Eye Care Behavior: The Roles of Risk Perception and Parenting.

    Science.gov (United States)

    Chang, Fong-Ching; Chiu, Chiung-Hui; Chen, Ping-Hung; Miao, Nae-Fang; Chiang, Jeng-Tung; Chuang, Hung-Yi

    2018-03-01

    This study assessed the computer/mobile device screen time and eye care behavior of children and examined the roles of risk perception and parental practices. Data were obtained from a sample of 2,454 child-parent dyads recruited from 30 primary schools in Taipei city and New Taipei city, Taiwan, in 2016. Self-administered questionnaires were collected from students and parents. Fifth-grade students spend more time on new media (computer/smartphone/tablet: 16 hours a week) than on traditional media (television: 10 hours a week). The average daily screen time (3.5 hours) for these children exceeded the American Academy of Pediatrics recommendations (≤2 hours). Multivariate analysis results showed that after controlling for demographic factors, the parents with higher levels of risk perception and parental efficacy were more likely to mediate their child's eye care behavior. Children who reported lower academic performance, who were from non-intact families, reported lower levels of risk perception of mobile device use, had parents who spent more time using computers and mobile devices, and had lower levels of parental mediation were more likely to spend more time using computers and mobile devices; whereas children who reported higher academic performance, higher levels of risk perception, and higher levels of parental mediation were more likely to engage in higher levels of eye care behavior. Risk perception by children and parental practices are associated with the amount of screen time that children regularly engage in and their level of eye care behavior.

  10. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    OpenAIRE

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a large share of the decrease in the part-time pay penalty. However, the lower part-time pay penalty is also related to lower wage returns to reading and writing which are performed more intensively b...

  11. Development of a real-time monitoring system and integration of different computer system in LHD experiments using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Nakamura, Yukio; Teramachi, Yasuaki; Okumura, Haruhiko; Yamaguchi, Satarou

    2002-01-01

    There are several different computer systems in the LHD (Large Helical Device) experiment, and the coordination of these computers is therefore key to performing the experiment. A real-time monitoring system is also important because long discharges are needed in the LHD experiment. In order to achieve these two requirements, the technique of IP multicast is adopted. The authors have developed three new systems: the first is the real-time monitoring system, the second is the delivery system for the shot number, and the last is the real-time notification system for plasma data registration. The first system can deliver the real-time monitoring data to the LHD experimental LAN through the firewall of the LHD control LAN in NIFS. The other two systems are used to realize tight coordination of the different computers in the LHD plasma experiment. From this experience, we can conclude that IP multicast is very useful both in the LHD experiment and in future large plasma experiments. (author)
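
    The IP-multicast pattern this monitoring system relies on can be sketched with standard sockets; the group address, port, and payload below are arbitrary example values, not the LHD configuration.

```python
# Minimal IP-multicast sketch: one sender publishes monitoring data, any
# number of receivers join the group. Address/port/payload are examples.
import socket, struct, sys

GROUP, PORT = "239.1.1.1", 5007

def send(message: bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on local net
    s.sendto(message, (GROUP, PORT))

def receive():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = s.recvfrom(1024)
        print(f"monitor data from {addr}: {data!r}")

if __name__ == "__main__":
    receive() if "recv" in sys.argv else send(b"shot=12345 status=plasma_on")
```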

  12. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    Science.gov (United States)

    Ditz, P.; Borzi`, A.

    2008-03-01

    A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics, where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS. Catalogue identifier: ADEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 770. No. of bytes in distributed program, including test data, etc.: 7098. Distribution format: tar.gz. Programming language: MATLAB 6. Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM. Operating system: Microsoft Windows XP. Word size: 32. Classification: 4.9. Nature of problem: Quantum control. Solution method: Iterative. Running time: 60-600 sec.

  13. Use of a novel shorter minimum caliber needle for creating endoscopic tattoos for preoperative localization: a comparative ex vivo study.

    Science.gov (United States)

    Imai, Kenichiro; Hotta, Kinichi; Ito, Sayo; Yamaguchi, Yuichiro; Kawakami, Takeshi; Wada, Takuya; Igarashi, Kimihiro; Kishida, Yoshihiro; Kinugasa, Yusuke; Kawata, Noboru; Tanaka, Masaki; Kakushima, Naomi; Takizawa, Kohei; Ishiwatari, Hirotoshi; Matsubayashi, Hiroyuki; Ono, Hiroyuki

    2017-06-01

    In colorectal cancer surgery, inadvertent deep injections during endoscopic tattooing can cause India ink leakage into the peritoneum, leading to complications or to poor visualization of the surgical plane. This ex vivo animal study compared the use of novel shorter, minimum caliber needles versus conventional injection needles for endoscopic tattooing. Four endoscopists used the novel needles and conventional needles to make ten endoscopic tattoos (five tattoos/needle type/endoscopist) in harvested porcine rectum using a saline test-injection method. India ink leakage and the success of the tattoo (i.e., a visible tattoo) were assessed. India ink leakage occurred with some of the conventional needle tattoos but with none of the novel needle tattoos (P = 0.02). Tattoos created using the novel needles were more successful than those made with the conventional needles: 18/20 (90 %) vs. 11/20 (55 %); P = 0.01. The use of novel shorter minimum caliber needles may be safe and effective for endoscopic tattooing for preoperative localization prior to colorectal cancer surgery.

  14. MO-E-BRD-02: Accelerated Partial Breast Irradiation in Brachytherapy: Is Shorter Better?

    International Nuclear Information System (INIS)

    Todor, D.

    2015-01-01

    Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules, but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning objectives: (1) to review and understand the evolution and development of APBI using brachytherapy methods; (2) to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; (3) to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current

  15. MO-E-BRD-02: Accelerated Partial Breast Irradiation in Brachytherapy: Is Shorter Better?

    Energy Technology Data Exchange (ETDEWEB)

    Todor, D. [Virginia Commonwealth University (United States)

    2015-06-15

    Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules, but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning objectives: (1) to review and understand the evolution and development of APBI using brachytherapy methods; (2) to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; (3) to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current

  16. An Energy Efficient Neuromorphic Computing System Using Real Time Sensing Method

    DEFF Research Database (Denmark)

    Farkhani, Hooman; Tohidi, Mohammad; Farkhani, Sadaf

    2017-01-01

    In spintronic-based neuromorphic computing systems (NCS), the switching of magnetic moment in a magnetic tunnel junction (MTJ) is used to mimic neuron firing. However, the stochastic switching behavior of the MTJ and process-variation effects lead to extra stimulation time, and hence to extra energy consumption and delay in such NCSs. In this paper, a new real-time sensing (RTS) circuit is proposed to track the MTJ state and terminate the stimulation phase immediately after MTJ switching. This leads to a significant reduction in the energy consumption and delay of the NCS. Simulation results using a 65-nm CMOS technology and a 40-nm MTJ technology confirm that the energy consumption of an RTS-based NCS is improved by 50% in comparison with a typical NCS. Moreover, utilizing the RTS circuit improves the overall speed of an NCS by 2.75×.
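
    A back-of-envelope Monte Carlo sketch of the RTS idea (terminate stimulation as soon as the MTJ switches, instead of always applying a worst-case pulse) is shown below; the switching-time distribution and pulse length are assumptions, not the paper's device parameters.

```python
# Monte Carlo sketch: average stimulation energy with early termination vs a
# fixed worst-case pulse. Energy is taken as proportional to stimulation time.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
t_switch = rng.exponential(scale=2.0, size=n)  # assumed stochastic switching time (ns)
t_fixed = 8.0                                  # assumed worst-case fixed pulse (ns)

energy_fixed = t_fixed                              # always apply the full pulse
energy_rts = np.minimum(t_switch, t_fixed).mean()   # stop at switch (or timeout)

print(f"fixed-pulse energy/op: {energy_fixed:.2f} (a.u.)")
print(f"RTS energy/op (mean) : {energy_rts:.2f} (a.u.)")
print(f"saving               : {100 * (1 - energy_rts / energy_fixed):.0f}%")
```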

  17. TIMED: a computer program for calculating cumulated activity of a radionuclide in the organs of the human body at a given time, t, after deposition

    International Nuclear Information System (INIS)

    Watson, S.B.; Snyder, W.S.; Ford, M.R.

    1976-12-01

    TIMED is a computer program designed to calculate cumulated radioactivity in the various source organs at various times after radionuclide deposition. TIMED embodies a system of differential equations which describes activity transfer in the lungs, gastrointestinal tract, and other organs of the body. This system accounts for delay of transfer of activity between compartments of the body and for radioactive daughters.
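
    A toy version of the kind of model TIMED solves, first-order transfer between compartments plus radioactive decay integrated to give cumulated activity, might look like the following; the rates and half-life are illustrative assumptions, not TIMED's biokinetic data.

```python
# Two-compartment sketch: blood -> organ -> excretion, with radioactive decay.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.log(2) / 8.02   # decay constant for an assumed 8.02-day half-life (e.g. I-131)
k12, k20 = 0.5, 0.1      # assumed transfer rates (1/day): blood->organ, organ->excretion

def rhs(t, y):
    a1, a2 = y           # activity in blood (a1) and in the organ (a2)
    return [-(k12 + lam) * a1,
            k12 * a1 - (k20 + lam) * a2]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0], dense_output=True, rtol=1e-8)
t = np.linspace(0, 60, 601)
a1, a2 = sol.sol(t)
cumulated = np.trapz(a2, t)  # cumulated organ activity (days, per unit deposited)
print(f"cumulated organ activity: {cumulated:.3f}")
```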

  18. A characterization of persistence at short times in the WFC3/IR detector

    Science.gov (United States)

    Gennaro, M.; Bajaj, V.; Long, K.

    2018-05-01

    Persistence in the WFC3/IR detector appears to decay as a power law as a function of time elapsed since the end of a stimulus. In this report we study departures from the power law at times shorter than a few hundred seconds after the stimulus. In order to have better short-time cadence, we use the Multiaccum (.ima) files, which trace the accumulated charge in the pixels as a function of time, rather than the final pipeline products (.flt files), which instead report the electron rate estimated via a linear fit to the accumulated charge vs. time relation. We note that at short times after the stimulus, the absolute change in persistence is the strongest, thus a linear fit to the accumulated signal (the .flt values) can be a poor representation of the strongly varying persistence signal. The previously observed power-law decay of the persistence signal still holds at shorter times, with typical values of the power-law index gamma in [-1, -0.8] for stimuli that saturate the WFC3 pixels. To a good degree of approximation, a single power law is a good fit to the persistence-signal decay from 100 to 5000 seconds. We also detect a tapering-off of the power-law decay at increasingly shorter times. This change in behavior is of the order of Delta-gamma = 0.02-0.05 when comparing power-law fits to the persistence signal from 0 up to 250 seconds and from 0 up to 4000 seconds after the stimulus, indicating that persistence decays slightly more rapidly as time progresses. Our results may suggest that for even shorter times, not probed by our study, the WFC3 persistence signal might deviate from a single power-law model.
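
    The fit described here is a straight line in log-log space. A minimal sketch on synthetic data, standing in for the .ima ramp measurements, is:

```python
# Power-law fit P(t) ~ A * t**gamma via linear regression in log-log space,
# over two time windows. The data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(100, 5000, 200)   # seconds since stimulus
gamma_true, A = -0.9, 5000.0
p = A * t**gamma_true * (1 + 0.05 * rng.standard_normal(t.size))

def fit_gamma(tmin, tmax):
    m = (t >= tmin) & (t <= tmax)
    gamma, log_amp = np.polyfit(np.log(t[m]), np.log(p[m]), 1)  # slope = gamma
    return gamma

print(f"gamma (100-250 s) : {fit_gamma(100, 250):+.3f}")
print(f"gamma (100-4000 s): {fit_gamma(100, 4000):+.3f}")
```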

  19. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking. Visual stimuli consisted of eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment, CAREN) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time, and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction time was faster in the direct vision environment. This reaction-time delay should be kept in mind when designing skill-training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time changes between direct vision and virtual environments.

  20. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.
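
    GLOA itself is a population-based optimizer; the sketch below only illustrates the objective it minimizes, the makespan of independent tasks on heterogeneous nodes, using a simple greedy baseline with made-up task lengths and node speeds.

```python
# Makespan of independent tasks on heterogeneous grid nodes, with a greedy
# baseline (longest task first, to the node that finishes it earliest).
import numpy as np

rng = np.random.default_rng(3)
task_len = rng.uniform(1, 10, size=40)   # assumed task lengths (arbitrary units)
node_speed = np.array([1.0, 1.5, 2.0])   # assumed relative node speeds

def makespan(assign):
    loads = np.zeros(node_speed.size)
    for task, node in zip(task_len, assign):
        loads[node] += task / node_speed[node]
    return loads.max()  # latest node finishing time

loads = np.zeros(node_speed.size)
assign = np.empty(task_len.size, dtype=int)
for i in np.argsort(-task_len):                          # longest tasks first
    node = np.argmin(loads + task_len[i] / node_speed)   # earliest finish
    assign[i] = node
    loads[node] += task_len[i] / node_speed[node]

print(f"greedy makespan: {makespan(assign):.2f}")
```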

  1. Chemistry, physics and time: the computer modelling of glassmaking.

    Science.gov (United States)

    Martlew, David

    2003-01-01

    A decade or so ago the remains of an early flat glass furnace were discovered in St Helens. Continuous glass production only became feasible after the Siemens Brothers demonstrated their continuous tank furnace at Dresden in 1870. One manufacturer of flat glass enthusiastically adopted the new technology and secretly explored many variations on this theme during the next fifteen years. Study of the surviving furnace remains using today's computer simulation techniques showed how, in 1887, that technology was adapted to the special demands of window glass making. Heterogeneous chemical reactions at high temperatures are required to convert the mixture of granular raw materials into the homogeneous glass needed for windows. Kinetics (and therefore the economics) of glassmaking is dominated by heat transfer and chemical diffusion as refractory grains are converted to highly viscous molten glass. Removal of gas bubbles in a sufficiently short period of time is vital for profitability, but the glassmaker must achieve this in a reaction vessel which is itself being dissolved by the molten glass. Design and operational studies of today's continuous tank furnaces need to take account of these factors, and good use is made of computer simulation techniques to shed light on the way furnaces behave and how improvements may be made. This paper seeks to show how those same techniques can be used to understand how the early Siemens continuous tank furnaces were designed and operated, and how the Victorian entrepreneurs succeeded in managing the thorny problems of what was, in effect, a vulnerable high temperature continuous chemical reactor.

  2. Shorter daily dwelling time in peritoneal dialysis attenuates the epithelial-to-mesenchymal transition of mesothelial cells

    Science.gov (United States)

    2014-01-01

    Background Peritoneal dialysis (PD) therapy is known to induce morphological and functional changes in the peritoneal membrane. Long-term exposure to conventional bio-incompatible dialysate and peritonitis is the main etiology of inflammation. Consequently, the peritoneal membrane undergoes structural changes, including angiogenesis, fibrosis, and hyalinizing vasculopathy, which ultimately results in technique failure. The epithelial-to-mesenchymal transition (EMT) of mesothelial cells (MCs) plays an important role during the above process; however, the clinical parameters associated with the EMT process of MCs remain to be explored. Methods To investigate the parameters impacting EMT during PD therapy, 53 clinical stable PD patients were enrolled. EMT assessments were conducted through human peritoneal MCs cultured from dialysate effluent with one consistent standard criterion (MC morphology and the expression of an epithelial marker, cytokeratin 18). The factors potentially associated with EMT were analyzed using logistic regression analysis. Primary MCs derived from the omentum were isolated for the in vitro study. Results Forty-seven percent of the patients presented with EMT, 28% with non-EMT, and 15% with a mixed presentation. Logistic regression analysis showed that patients who received persistent PD therapy (dwelling time of 24 h/day) had significantly higher EMT tendency. These results were consistent in vitro. Conclusions Dwelling time had a significant effect on the occurrence of EMT on MCs. PMID:24555732

  3. Person-related determinants of TV viewing and computer time in a cohort of young Dutch adults: Who sits the most?

    Science.gov (United States)

    Uijtdewilligen, L; Singh, A S; Chinapaw, M J M; Twisk, J W R; van Mechelen, W

    2015-10-01

    We aimed to assess the associations of person-related factors with leisure time television (TV) viewing and computer time among young adults. We analyzed self-reported TV viewing (h/week) and leisure computer time (h/week) from 475 Dutch young adults (47% male) who had participated in the Amsterdam Growth and Health Longitudinal Study at the age of 32 and 36 years. Sociodemographic factors (i.e., marital and employment status), physical factors (i.e., skin folds, aerobic fitness, neuromotor fitness, back problems), psychological factors (i.e., problem- and emotion-focused coping, personality), lifestyle (i.e., alcohol consumption, smoking, energy intake, physical activity), and self-rated health (i.e., general health status, mild health complaints) were assessed. Univariable and multivariable generalized estimating equations were performed. Male gender, higher sum of skin folds, lower values of aerobic fitness, higher rigidity, higher self-sufficiency/recalcitrance, and smoking were positively associated with TV time. Male gender, higher sum of skin folds, higher scores on self-esteem, low energy intake, and a not so good general health status were significantly associated with higher computer time. Determinants of TV viewing and computer time were not identical, suggesting that both behaviors (a) have different at-risk populations and (b) should be targeted differently. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Real-Time Evaluation of Breast Self-Examination Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Eman Mohammadi

    2014-01-01

    Full Text Available Breast cancer is the most common cancer among women worldwide and breast self-examination (BSE) is considered as the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate the BSE performance in real-time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as masses, is detected, then this must be reported to a doctor to confirm the presence of this abnormality and proceed to perform other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.

  5. Real-time evaluation of breast self-examination using computer vision.

    Science.gov (United States)

    Mohammadi, Eman; Dadios, Elmer P; Gan Lim, Laurence A; Cabatuan, Melvin K; Naguib, Raouf N G; Avila, Jose Maria C; Oikonomou, Andreas

    2014-01-01

    Breast cancer is the most common cancer among women worldwide and breast self-examination (BSE) is considered as the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate the BSE performance in real-time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as masses, is detected, then this must be reported to a doctor to confirm the presence of this abnormality and proceed to perform other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.
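
    The third stage described above reduces to bookkeeping over a grid of breast-region blocks. A schematic sketch, with the vision stages stubbed out by simulated palpation events, is:

```python
# Coverage bookkeeping for palpated blocks; the detection pipeline itself
# (nipple tracking, palpation detection) is stubbed out here.
import numpy as np

GRID = (8, 8)  # assumed block grid over the segmented breast region
palpated = np.zeros(GRID, dtype=bool)

def mark_palpated(row: int, col: int):
    """Called whenever the vision pipeline detects palpation in a block."""
    palpated[row, col] = True

# Simulated palpation events (would come from the tracking stages in practice).
for r, c in [(0, 0), (0, 1), (1, 0), (1, 1), (7, 7)]:
    mark_palpated(r, c)

coverage = palpated.mean()
print(f"palpated {palpated.sum()} of {palpated.size} blocks ({coverage:.0%})")
if coverage < 1.0:
    rows, cols = np.where(~palpated)
    print(f"first unpalpated block: {(rows[0], cols[0])}")
```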

  6. Self-motion perception: assessment by real-time computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  7. [Construction and analysis of a monitoring system with remote real-time multiple physiological parameters based on cloud computing].

    Science.gov (United States)

    Zhu, Lingyun; Li, Lianjie; Meng, Chunyan

    2014-12-01

    There have been problems in existing multiple-physiological-parameter real-time monitoring systems, such as insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution for multiple-physiological-parameter monitoring, with clustered background data storage and processing based on cloud computing. Through our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of data streams in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of large amounts of physiological information. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems existing in traditional remote medical services, including long turnaround times, poor real-time analysis performance, and lack of extensibility. Technical support was provided to facilitate a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home health monitoring of multiple physiological parameters.

  8. Buffering Capacity of Fast-Growing Species and Curing Time of UF Resin Modified With Zinc Borate and Monoammonium Phosphate

    OpenAIRE

    Izran Kamal; Koh M. Poh; Tan Y. Eng; Xue J. Ren; Zaidon Ashaari; Faizah Abood; Guenter Beyer; Khairul Masseat

    2010-01-01

    Problem statement: Selecting a suitable hot-pressing time for particleboard fabrication is very tricky for manufacturers of wood-based panels. Longer or shorter pressing times can affect the physical and mechanical properties of the produced particleboards, which is why extra care should be given to this matter. A longer pressing time can cause the resin in a particleboard to over-cure, whereas a shorter pressing time can cause insufficient curing of the resin. Determination of hot pressing time i...

  9. Alternative promoter usage generates novel shorter MAPT mRNA transcripts in Alzheimer's disease and progressive supranuclear palsy brains.

    Science.gov (United States)

    Huin, Vincent; Buée, Luc; Behal, Hélène; Labreuche, Julien; Sablonnière, Bernard; Dhaenens, Claire-Marie

    2017-10-03

    Alternative promoter usage is an important mechanism for transcriptome diversity and the regulation of gene expression. Indeed, this alternative usage may influence tissue/subcellular specificity, protein translation and function of the proteins. The existence of an alternative promoter for MAPT gene was considered for a long time to explain differential tissue specificity and differential response to transcription and growth factors between mRNA transcripts. The alternative promoter usage could explain partly the different tau proteins expression patterns observed in tauopathies. Here, we report on our discovery of a functional alternative promoter for MAPT, located upstream of the gene's second exon (exon 1). By analyzing genome databases and brain tissue from control individuals and patients with Alzheimer's disease or progressive supranuclear palsy, we identified novel shorter transcripts derived from this alternative promoter. These transcripts are increased in patients' brain tissue as assessed by 5'RACE-PCR and qPCR. We suggest that these new MAPT isoforms can be translated into normal or amino-terminal-truncated tau proteins. We further suggest that activation of MAPT's alternative promoter under pathological conditions leads to the production of truncated proteins, changes in protein localization and function, and thus neurodegeneration.

  10. Biofeedback effectiveness to reduce upper limb muscle activity during computer work is muscle specific and time pressure dependent

    DEFF Research Database (Denmark)

    Vedsted, Pernille; Søgaard, Karen; Blangsted, Anne Katrine

    2011-01-01

    Continuous electromyographic (EMG) activity level is considered a risk factor in developing muscle disorders. EMG biofeedback is known to be useful in reducing EMG activity in working muscles during computer work. The purpose was to test the following hypotheses: (1) unilateral biofeedback from trapezius (TRA) can reduce bilateral TRA activity but not extensor digitorum communis (EDC) activity; (2) biofeedback from EDC can reduce activity in EDC but not in TRA; (3) biofeedback is more effective in the no time constraint than in the time constraint working condition. Eleven healthy women performed computer work during two different working conditions (time constraint/no time constraint) while receiving biofeedback. Biofeedback was given from the right TRA or EDC through two modes (visual/auditory) by the use of EMG or mechanomyography as the biofeedback source. During control sessions (no biofeedback), EMG

  11. Cleavage of SNAP25 and its shorter versions by the protease domain of serotype A botulinum neurotoxin.

    Directory of Open Access Journals (Sweden)

    Rahman M Mizanur

    Full Text Available Various substrates, catalysts, and assay methods are currently used to screen inhibitors for their effect on the proteolytic activity of botulinum neurotoxin. As a result, significant variation exists in the reported results. Recently, we found that one source of variation was the use of various catalysts, and have therefore evaluated its three forms. In this paper, we characterize three substrates under near-uniform reaction conditions using the most active catalytic form of the toxin. Bovine serum albumin at varying optimum concentrations stimulated enzymatic activity with all three substrates. Sodium chloride had a stimulating effect on the full-length synaptosomal-associated protein of 25 kDa (SNAP25) and its 66-mer substrates but had an inhibitory effect on the 17-mer substrate. We found that under optimum conditions, full-length SNAP25 was a better substrate than its shorter 66-mer or 17-mer forms in terms of kcat, Km, and catalytic efficiency kcat/Km. Assay times greater than 15 min introduced large variations and significantly reduced the catalytic efficiency. In addition to characterizing the three substrates, our results identify potential sources of variation in previously published results, and underscore the importance of using well-defined reaction components and assay conditions.
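
    The kinetic constants reported here (kcat, Km, kcat/Km) are typically obtained from a Michaelis-Menten fit to initial-rate data. The sketch below shows such a fit on synthetic values; the enzyme concentration, substrate concentrations, and rates are not the paper's measurements.

```python
# Michaelis-Menten fit: v = vmax * S / (Km + S), with kcat = vmax / [E]0.
import numpy as np
from scipy.optimize import curve_fit

E0 = 0.01                                           # assumed enzyme conc. (uM)
S = np.array([1, 2, 5, 10, 20, 50, 100.0])          # substrate conc. (uM)
v = np.array([0.9, 1.7, 3.4, 5.0, 6.6, 8.0, 8.6])   # synthetic initial rates (uM/min)

def mm(S, vmax, Km):
    return vmax * S / (Km + S)

(vmax, Km), _ = curve_fit(mm, S, v, p0=(10, 10))
kcat = vmax / E0
print(f"Km = {Km:.1f} uM, kcat = {kcat:.0f} /min, kcat/Km = {kcat / Km:.1f} /uM/min")
```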

  12. Usefulness of measurement of circulation time using MgSO4 : correlation with time-density curve using electron beam computed tomography

    International Nuclear Information System (INIS)

    Kim, Byung Ki; Lee, Hui Joong; Lee, Jong Min; Kim, Yong Joo; Kang, Duck Sik

    1999-01-01

    To determine the usefulness of MgSO4 for measuring the systemic circulation time. Systemic circulation time, defined as the elapsed time from the injection of MgSO4 solution to the onset of a pharyngeal burning sensation, was measured in 63 volunteers. MgSO4 was injected into a superficial vein of an upper extremity. Using dynamic electron beam computed tomography at the level of the abdominal aorta and celiac axis, a time-intensity curve was plotted, and for these two locations, maximal enhancement time was compared. For 60 of the 63 subjects, both systemic circulation time and maximal enhancement time were determined. Average systemic circulation time was 17.4 (SD: 3.6) secs, and average maximal enhancement times at the level of the abdominal aorta and celiac axis were 17.5 (SD: 3.0) secs and 18.5 (SD: 3.2) secs, respectively. The correlation coefficient between systemic circulation time and maximal enhancement time for the abdominal aorta was 0.73, a statistically significant correlation, so the circulation time measured with MgSO4 injection and the maximal enhancement time for the abdominal aorta correlate well. Thus, to determine the appropriate scanning time in contrast-enhanced radiological studies, MgSO4 can be used instead of a test bolus study.
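
    The comparison reported here is a simple paired correlation. A minimal sketch, with synthetic values drawn to mimic the reported means, is:

```python
# Pearson correlation between MgSO4-based circulation time and EBCT maximal
# enhancement time. The paired values are synthetic, not the study data.
import numpy as np

rng = np.random.default_rng(5)
circulation = rng.normal(17.4, 3.6, size=60)               # seconds
enhancement = circulation + rng.normal(0.1, 2.2, size=60)  # aortic peak, seconds

r = np.corrcoef(circulation, enhancement)[0, 1]
print(f"Pearson r = {r:.2f}")  # the study reports r = 0.73 for the aorta
```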

  13. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    In this report are described computer performance evaluations for the FACOM 230-75 computers in JAERI. The evaluations cover the following items: (1) cost/benefit analysis of timesharing terminals, (2) analysis of the response time of timesharing terminals, (3) analysis of throughput time for batch job processing, (4) estimation of current potential demands for computer time, (5) determination of the appropriate number of card readers and line printers. These evaluations are done mainly from the standpoint of cost reduction of computing facilities. The techniques adopted are very practical ones. This report will be useful for those who are concerned with the management of a computing installation. (author)

  14. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following is found: 1) there is a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)
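
    The table-look-up idea mentioned above replaces repeated exponential evaluations with interpolation in a precomputed table. A minimal sketch of the trick (grid size and range are arbitrary assumptions):

```python
import numpy as np

# Precompute exp(-x) once on a uniform grid, then answer queries by interpolation.
X_MAX, N = 20.0, 4096
grid = np.linspace(0.0, X_MAX, N)
table = np.exp(-grid)

def exp_neg_lookup(x):
    """Approximate exp(-x) for 0 <= x <= X_MAX by linear interpolation."""
    return np.interp(x, grid, table)

x = np.random.default_rng(0).uniform(0.0, X_MAX, 1_000_000)
err = np.max(np.abs(exp_neg_lookup(x) - np.exp(-x)))
print(f"max abs error: {err:.2e}")
```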

  15. Discrimination in waiting times by insurance type and financial soundness of German acute care hospitals.

    Science.gov (United States)

    Schwierz, Christoph; Wübker, Achim; Wübker, Ansgar; Kuchinke, Björn A

    2011-10-01

    This paper shows that patients with private health insurance (PHI) are offered significantly shorter waiting times than patients with statutory health insurance (SHI) in German acute hospital care. This behavior may be driven by the higher expected profitability of PHI relative to SHI holders. Further, we find that hospitals offering private insurees shorter waiting times than SHI holders have a significantly better financial performance than hospitals with less or no such discrimination.

  16. Towards shorter wavelength x-ray lasers using a high power, short pulse pump laser

    International Nuclear Information System (INIS)

    Tighe, W.; Krushelnick, K.; Valeo, E.; Suckewer, S.

    1991-05-01

    A near-terawatt KrF* laser system, focussable to power densities >10¹⁸ W/cm², has been constructed for use as a pump laser in various schemes aimed at the development of x-ray lasing below 5 nm. The laser system, along with output characteristics such as the pulse duration, the focal spot size, and the percentage of amplified spontaneous emission (ASE) emitted along with the laser pulse, will be presented. Schemes intended to lead to shorter wavelength x-ray emission will be described. The resultant requirements on the pump laser characteristics and the target design will be outlined. Results from recent solid target experiments and two-laser experiments, showing the interaction of a high-power, short pulse laser with a preformed plasma, will be presented. 13 refs., 5 figs

  17. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

    Statistical linear time-varying system models of high grazing angle sea clutter are applied to computing interference power. In the report, one of the sinc factors of the beam pattern is approximated by the Dirichlet kernel to facilitate computation of the integral in (6); the resultant autocorrelation is then found by substituting (18) into (28), and the Python code used to generate Figures 1-4 is included.
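
    As context for the sinc-to-Dirichlet step: the Dirichlet kernel D_N(x) = sin((N + 1/2)x)/sin(x/2) is the periodic analogue of a scaled sinc, and the two agree closely near x = 0. A small numerical check (N and the evaluation points are arbitrary choices, unrelated to the report's parameters):

```python
import numpy as np

def dirichlet(x, n):
    """Dirichlet kernel D_n(x) = sin((n + 1/2) x) / sin(x / 2)."""
    return np.sin((n + 0.5) * x) / np.sin(x / 2.0)

n = 8
x = np.linspace(0.05, 0.5, 5)  # small angles, avoiding the x = 0 singularity
# Near zero, D_n(x) ~ (2n + 1) * sinc((2n + 1) x / (2 pi)), where numpy's
# normalized sinc(t) = sin(pi t) / (pi t).
approx = (2 * n + 1) * np.sinc((2 * n + 1) * x / (2 * np.pi))
for xi, d, a in zip(x, dirichlet(x, n), approx):
    print(f"x={xi:.2f}  Dirichlet={d:.4f}  sinc approx={a:.4f}")
```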

  18. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II

  20. A computationally efficient electricity price forecasting model for real time energy markets

    International Nuclear Information System (INIS)

    Feijoo, Felipe; Silva, Walter; Das, Tapas K.

    2016-01-01

    Highlights: • A fast hybrid forecast model for electricity prices. • An accurate forecast model that combines K-means and machine learning techniques. • Low computational effort by elimination of feature selection techniques. • New benchmark results using market data for years 2012 and 2015. - Abstract: The increased significance of demand response and the proliferation of distributed energy resources will continue to demand faster and more accurate models for forecasting locational marginal prices. This paper presents such a model (named K-SVR). While yielding prediction accuracy comparable with the best known models in the literature, K-SVR requires a significantly reduced computational time. The computational reduction is attained by eliminating the use of a feature selection process, which is commonly used by the existing models in the literature. K-SVR is a hybrid model that combines clustering algorithms, support vector machine, and support vector regression. K-SVR is tested using Pennsylvania–New Jersey–Maryland market data from the periods 2005–6, 2011–12, and 2014–15. Market data from 2006 have been used to measure the performance of many of the existing models. The authors chose these models to compare performance and demonstrate the strengths of K-SVR. Results obtained from K-SVR using the market data from 2012 and 2015 are new, and will serve as a benchmark for future models.
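
    The hybrid idea (cluster the historical records, then train one support vector regressor per cluster and route each query to its cluster's model) can be sketched as follows. This is a generic reconstruction with scikit-learn on synthetic data, not the paper's K-SVR implementation or its feature set:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 4))   # synthetic features (e.g., load, hour)
y = X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(0, 0.1, 500)  # synthetic price

# 1) Partition the training data into K regimes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Train one SVR per cluster.
models = {c: SVR(kernel="rbf", C=10.0).fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(km.n_clusters)}

# 3) Predict: route each new point to its cluster's regressor.
X_new = rng.uniform(0, 1, size=(5, 4))
labels = km.predict(X_new)
y_hat = np.array([models[c].predict(x.reshape(1, -1))[0]
                  for c, x in zip(labels, X_new)])
print(y_hat)
```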

  1. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    Science.gov (United States)

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. © The Author(s) 2015.

  2. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    Science.gov (United States)

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim of enabling secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
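
    A rough feel for the computational overhead can be had with Python's standard hmac module by tagging a fixed-size message many times and timing it. The key and message size are arbitrary assumptions; the paper's evaluation used a kernel-level Linux implementation, not this sketch:

```python
import hmac, hashlib, os, time

key = os.urandom(32)
msg = os.urandom(64)          # e.g., a small time-triggered frame payload
N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    tag = hmac.new(key, msg, hashlib.sha256).digest()
dt = time.perf_counter() - t0
print(f"{1e6 * dt / N:.2f} us per HMAC-SHA256 tag ({len(tag)}-byte tag)")

# Verification should use a constant-time comparison:
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
```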

  3. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  4. Multislice computed tomography: angiographic emulation versus standard assessment for detection of coronary stenoses

    Energy Technology Data Exchange (ETDEWEB)

    Schnapauff, Dirk; Hamm, Bernd; Dewey, Marc [Humboldt-Universitaet zu Berlin, Department of Radiology, Charite - Universitaetsmedizin Berlin, Chariteplatz 1, P.O. Box 10098, Berlin (Germany); Duebel, Hans-Peter; Baumann, Gert [Charite - Universitaetsmedizin Berlin, Department of Cardiology, Berlin (Germany); Scholze, Juergen [Charite - Universitaetsmedizin Berlin, Charite Outpatient Centre, Berlin (Germany)

    2007-07-15

    The present study investigated angiographic emulation of multislice computed tomography (MSCT) (catheter-like visualization) as an alternative approach to analyzing and visualizing findings in comparison with standard assessment. Thirty patients (120 coronary arteries) were randomly selected from 90 prospectively investigated patients with suspected coronary artery disease who underwent MSCT (16-slice scanner, 0.5 mm collimation, 400 ms rotation time) prior to conventional coronary angiography for comparison of both approaches. Sensitivity and specificity of angiographic emulation [81% (26/32) and 93% (82/88)] were not significantly different from those of standard assessment [88% (28/32) and 99% (87/88)], while the per-case analysis time was significantly shorter for angiographic emulation than for standard assessment (3.4 ± 1.5 vs 7.0 ± 2.5 min, P < 0.001). Both interventional and referring cardiologists preferred angiographic emulation over standard curved multiplanar reformations of MSCT coronary angiography for illustration, mainly because of improved overall lucidity and depiction of side branches (P < 0.001). In conclusion, angiographic emulation of MSCT reduces analysis time, yields a diagnostic accuracy comparable to that of standard assessment, and is preferred by cardiologists for visualization of results. (orig.)

  5. Computational derivation of quantum relativist electromagnetic systems with forward-backward space-time shifts

    International Nuclear Information System (INIS)

    Dubois, Daniel M.

    2000-01-01

    This paper is a continuation of our preceding paper dealing with the computational derivation of the Klein-Gordon quantum relativist equation and the Schroedinger quantum equation with forward and backward space-time shifts. The first part introduces forward and backward derivatives for discrete and continuous systems. Generalized complex discrete and continuous derivatives are deduced. The second part deduces the Klein-Gordon equation from the space-time complex continuous derivatives. These derivatives take into account forward-backward space-time shifts related to an internal phase velocity u. The internal group velocity v is related to the speed of light by u·v = c² and to the external group and phase velocities by u·v = v_g·v_p. Without time shift, the Schroedinger equation is deduced, with a supplementary term which could represent a reference potential. The third part deduces the quantum relativist Klein-Gordon equation for a particle in an electromagnetic field

  6. Postoperative radiotherapy in squamous cell carcinoma of the oral cavity: the importance of the overall treatment time

    NARCIS (Netherlands)

    Langendijk, J. A.; de Jong, M. A.; Leemans, C. R.; de Bree, R.; Smeele, L. E.; Doornaert, P.; Slotman, B. J.

    2003-01-01

    To test the hypothesis that (1) the distinction between intermediate- and high-risk patients by clustering different prognostic factors results in a significant difference in treatment outcome and (2) a shorter interval between surgery and radiotherapy and shorter overall treatment times of

  7. Timing organization of a real-time multicore processor

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Sparsø, Jens

    2017-01-01

    Real-time systems need a time-predictable computing platform. Computation, communication, and access to shared resources need to be time-predictable. We use time division multiplexing to statically schedule all computation and communication resources, such as access to main memory or message passing over a network-on-chip. We use time-driven communication over an asynchronous network-on-chip to enable time division multiplexing even in a globally asynchronous, locally synchronous multicore architecture. Using time division multiplexing at all levels of the architecture yields a time...

  8. Continuous Positive Airway Pressure Device Time to Procurement in a Disadvantaged Population

    Directory of Open Access Journals (Sweden)

    Lourdes M. DelRosso

    2015-01-01

    Full Text Available Introduction. The management of obstructive sleep apnea (OSA) in patients who cannot afford a continuous positive airway pressure (CPAP) device is challenging. In this study we compare time to CPAP procurement in three groups of patients diagnosed with OSA: uninsured subsidized by a humanitarian grant (Group 1), uninsured unsubsidized (Group 2), and those with Medicare or Medicaid (Group 3). We evaluate follow-up and adherence in Group 1. We hypothesize that additional factors, rather than just the ability to obtain CPAP, may uniquely affect follow-up and adherence in uninsured patients. Methods. Groups 1 and 2 each comprised 30 patients; Group 3 comprised 12 patients. Time to CPAP procurement, from OSA diagnosis to CPAP initiation, was assessed in all groups. CPAP adherence data were collected for Group 1 patients at 1, 3, 6, and 9 months. Results. There were no significant differences between groups in gender, age, body mass index, or apnea hypopnea index. The mean time to procurement in Group 1 was shorter than in Group 2, but not significantly so. Compared to both Group 1 and Group 2, Group 3 patients had significantly shorter times to device procurement. Conclusion. Time to procurement of CPAP was significantly shorter in those with Medicaid/Medicare insurance compared to the uninsured.

  9. CFD (Computational Fluid Dynamics) simulators and thermal cracking of heavy oil and ultraheavy residues using microreactor

    Energy Technology Data Exchange (ETDEWEB)

    Jardini, Andre L.; Bineli, Aulus R.R.; Viadana, Adriana M.; Maciel, Maria Regina Wolf; Maciel Filho, Rubens [State University of Campinas (UNICAMP), SP (Brazil). School of Chemical Engineering; Medina, Lilian C.; Gomes, Alexandre de O. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES); Barros, Ricardo S. [University Foundation Jose Bonifacio (FUJB), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    In this paper, the design of a microreactor with microfluidic channels has been carried out in Computer Aided Design (CAD) software and the microreactor constructed in a rapid prototyping system, to be used in chemical reaction processing of heavy oil fractions. The flow pattern properties of the microreactor (fluid dynamics, mixing behavior) have been considered through CFD (computational fluid dynamics) simulations. CFD calculations are also used to study the design and specification of new microreactor developments. The potential advantages of using a microreactor include better control of reaction conditions, improved safety, and portability. A more detailed crude assay of the raw national oil, whose importance was evidenced by PETROBRAS/CENPES, allows establishing the optimum strategies and processing conditions, aiming at a maximum utilization of the heavy oil fractions towards valuable products. These residues can be processed in the microreactor, in which conventional processes such as hydrotreating and catalytic and thermal cracking may be carried out in a much more intensified fashion. The whole process development involves a prior thermal study to define the possible operating conditions for a particular task, microreactor design through computational fluid dynamics, and construction using rapid prototyping. This gives high flexibility for process development, shorter time, and customer/task-oriented process/product development. (author)

  10. Kajian dan Implementasi Real Time Operating System pada Single Board Computer Berbasis Arm

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    Full Text Available An operating system is important software in a computer system. For personal and office use, a general purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automotive braking systems (auto braking systems), which need a high level of reliability, require an operating system which operates in real time. This study assesses the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with a dual-core ARM Cortex-A9, TI OMAP 4460. The research was conducted by implementing the general purpose OS Ubuntu 12.04 OMAP4-armhf and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum value for the GPOS under full load of 17,712 µs. The lower latency demonstrates that the RTOS is much better than the GPOS at running processes within a given period of time.

  11. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  12. Discrimination of Urban Spaces with Different Level of Restorativeness Based on the Original and on a Shorter Version of Hartig et al.’s Perceived Restorativeness Scale

    Directory of Open Access Journals (Sweden)

    Fátima Negrín

    2017-10-01

    Full Text Available Restorativeness is defined as the potential of the environment to re-establish certain cognitive capacities related to human information processing. The most frequently used instrument for evaluating the restorativeness of places is the Perceived Restorativeness Scale (PRS), proposed by Hartig et al. (1991). Later on, shorter versions of the Perceived Restorativeness Scale were proposed. The aim of this work is to evaluate the discriminatory capacity of the original and of a shorter Spanish version of the PRS, considering urban settings previously selected for having different levels of restorativeness according to experts' criteria. The study involved 244 students and used a 3 × 2 mixed experimental design with two independent variables: restorativeness of a place (between-subjects), which was manipulated by showing pictures of settings selected with varying levels of restorativeness (high, medium, low), and length of the scale (within-subjects), which was manipulated by asking subjects to fill in both the original and a shorter version of the PRS. The order of presentation of the two scales was counterbalanced. Results show an appropriate reliability for both versions of the scale. Items of being-away, fascination, and coherence of the shorter scale correlate more strongly with the corresponding factor of the original scale than with the other factors. Both scales produce similar values for the perceived restorativeness of the different places, except for places with low restorativeness.

  13. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 dual-processor Intel Pentium III PCs working at 550 MHz, and has 4 gigabytes of RAM. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data over the internet and graphical applications for the continuous display of the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and the southeast flanks of this volcano. A real time continuous acquisition system has been simulated by
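
    For orientation, the core of MUSIC is an eigendecomposition of the array covariance matrix followed by a scan of steering vectors against the noise subspace. A compact sketch for a uniform linear array (the geometry, source angle, and noise level are illustrative assumptions, not the Etna deployment):

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, theta = 8, 0.5, 20.0          # sensors, spacing (wavelengths), source angle (deg)
snapshots = 200

# Simulate one narrowband plane wave plus noise on a uniform linear array.
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
x = np.outer(a, s) + 0.1 * (rng.standard_normal((M, snapshots))
                            + 1j * rng.standard_normal((M, snapshots)))

R = x @ x.conj().T / snapshots       # sample covariance matrix
w, v = np.linalg.eigh(R)             # eigenvalues in ascending order
En = v[:, :-1]                       # noise subspace (one source assumed)

angles = np.linspace(-90, 90, 721)
A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(np.radians(angles))))
p_music = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
print(f"estimated DOA: {angles[np.argmax(p_music)]:.1f} deg")
```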

  14. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    National Research Council Canada - National Science Library

    Sorgaard, Duane

    2004-01-01

    A time-to-location algorithm can successfully resolve the geographic location of a computer node using only latency information from known sites, mathematically calculating the Euclidean distance...
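
    The idea, converting latencies from known landmarks into distance estimates and then finding the point whose Euclidean distances best match, can be written as a small least-squares problem. Landmark positions, the latency-to-distance factor, and the measurements below are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Known landmark coordinates (arbitrary planar units) and measured RTTs (ms).
landmarks = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
rtt_ms = np.array([92.0, 81.0, 67.0, 50.0])

KM_PER_MS = 1.0  # assumed latency-to-distance conversion factor
dist = rtt_ms * KM_PER_MS

def residuals(p):
    # Difference between predicted Euclidean distances and latency-derived ones.
    return np.linalg.norm(landmarks - p, axis=1) - dist

sol = least_squares(residuals, x0=landmarks.mean(axis=0))
print(f"estimated node position: {sol.x}")
```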

  15. Risky family processes prospectively forecast shorter telomere length mediated through negative emotions.

    Science.gov (United States)

    Brody, Gene H; Yu, Tianyi; Shalev, Idan

    2017-05-01

    This study was designed to examine prospective associations of risky family environments with subsequent levels of negative emotions and peripheral blood mononuclear cell telomere length (TL), a marker of cellular aging. A second purpose was to determine whether negative emotions mediate the hypothesized link between risky family processes and diminished telomere length. Participants were 293 adolescents (age 17 years at the first assessment) and their primary caregivers. Caregivers provided data on risky family processes when the youths were age 17 years, youths reported their negative emotions at age 18 years, and youths' TL was assayed from a blood sample at age 22 years. The results revealed that (a) risky family processes forecast heightened negative emotions (β = .316), (b) negative emotions forecast shorter TL (β = -.187, p = .012), and (c) negative emotions served as a mediator connecting risky family processes with diminished TL (indirect effect = -0.012, 95% CI [-0.036, -0.002]). These findings are consistent with the hypothesis that risky family processes presage premature cellular aging through effects on negative emotions, with potential implications for lifelong health. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Development of a computer program for drop time and impact velocity of the rod cluster control assembly

    International Nuclear Information System (INIS)

    Choi, K.-S.; Yim, J.-S.; Kim, I.-K.; Kim, K.-T.

    1993-01-01

    In PWRs the rod cluster control assembly (RCCA) for shutdown is released upon the action of the control drive mechanism and falls down through the guide thimble under its own weight. Drop time and impact velocity of the RCCA are two key parameters with respect to reactivity insertion time and the mechanical integrity of the fuel assembly. Therefore, precise control of the drop time and impact velocity is a prerequisite for modifying the existing design features of the RCCA and guide thimble or newly designing them. During its fall into the core, the RCCA is retarded by various forces acting on it, such as flow resistance and friction caused by the RCCA movement, buoyancy, mechanical friction caused by contact with the inner surface of the guide thimble, etc. However, the complicated coupling of the various forces makes it difficult to derive an analytical dynamic equation for the drop time and impact velocity. This paper deals with the development of a computer program containing an analytical dynamic equation applicable to the Korean Fuel Assembly (KOFA) loaded in Korean nuclear power plants. The computer program is benchmarked against available single control rod drop tests. Since the predicted values are in good agreement with the test results, the computer program developed in this paper can be employed to modify the existing design features of the RCCA and guide thimble and to develop new design features for advanced nuclear reactors. (author)
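
    In spirit, such a program integrates Newton's second law for the falling assembly, with weight opposed by buoyancy and velocity-dependent retarding forces, until the travel length is reached. A toy sketch of that integration (the mass, force coefficients, and quadratic-drag form are invented placeholders, not the KOFA model):

```python
import numpy as np
from scipy.integrate import solve_ivp

M, G = 50.0, 9.81     # assembly mass (kg), gravity (m/s^2)
F_BUOY = 60.0         # buoyancy force (N), assumed constant
C_DRAG = 40.0         # lumped hydraulic drag coefficient (N s^2/m^2)
DROP_LEN = 3.6        # travel length of the assembly (m)

def rhs(t, y):
    z, v = y
    a = (M * G - F_BUOY - C_DRAG * v * abs(v)) / M
    return [v, a]

hit = lambda t, y: y[0] - DROP_LEN   # event: assembly reaches end of travel
hit.terminal, hit.direction = True, 1.0

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], events=hit, max_step=1e-3)
print(f"drop time ~ {sol.t_events[0][0]:.2f} s, "
      f"impact velocity ~ {sol.y[1, -1]:.2f} m/s")
```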

  17. Face to phase: pitfalls in time delay estimation from coherency phase

    NARCIS (Netherlands)

    Campfens, S.F.; van der Kooij, Herman; Schouten, Alfred Christiaan

    2014-01-01

    Coherency phase is often interpreted as a time delay reflecting a transmission delay between spatially separated neural populations. However, time delays estimated from corticomuscular coherency are conflicting and often shorter than expected physiologically. Recent work suggests that

  18. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Science.gov (United States)

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" base. This study explores the effect of facilitating access to…

  19. submitter A model for the accurate computation of the lateral scattering of protons in water

    CERN Document Server

    Bellinzona, EV; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-01-01

    A pencil beam model for the calculation of the lateral scattering in water of protons of any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
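
    A common quick estimate for the multiple-scattering width that such models refine is the Highland formula quoted by the Particle Data Group, theta_0 = (13.6 MeV / (beta c p)) z sqrt(x/X0) [1 + 0.038 ln(x/X0)]. A sketch for protons in water (standard constants; an order-of-magnitude tool, not the paper's model):

```python
import numpy as np

X0_WATER_CM = 36.08  # radiation length of water (~cm at unit density)
M_P = 938.272        # proton mass (MeV/c^2)

def highland_theta0(T_mev, x_cm, x0_cm=X0_WATER_CM, z=1.0):
    """Highland/PDG multiple-scattering angle (rad) for kinetic energy T."""
    E = T_mev + M_P                     # total energy (MeV)
    p = np.sqrt(E**2 - M_P**2)          # momentum (MeV/c)
    beta = p / E
    t = x_cm / x0_cm
    return 13.6 / (beta * p) * z * np.sqrt(t) * (1 + 0.038 * np.log(t))

print(f"theta0 ~ {1e3 * highland_theta0(150.0, 10.0):.1f} mrad "
      "(150 MeV protons, 10 cm water)")
```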

  20. Copyright and Computer Generated Materials – Is it Time to Reboot the Discussion About Authorship?

    Directory of Open Access Journals (Sweden)

    Anne Fitzgerald

    2013-12-01

    Full Text Available Computer generated materials are ubiquitous and we encounter them on a daily basis, even though most people are unaware that this is the case. Blockbuster movies, television weather reports and telephone directories all include material that is produced by utilising computer technologies. Copyright protection for materials generated by a programmed computer was considered by the Federal Court and Full Court of the Federal Court in Telstra Corporation Limited v Phone Directories Company Pty Ltd. The court held that the White and Yellow Pages telephone directories produced by Telstra and its subsidiary, Sensis, were not protected by copyright because they were computer-generated works which lacked the requisite human authorship. The Copyright Act 1968 (Cth) does not contain specific provisions on the subsistence of copyright in computer-generated materials. Although the issue of copyright protection for computer-generated materials has been examined in Australia on two separate occasions by independently-constituted Copyright Law Review Committees over a period of 10 years (1988 to 1998), the Committees' recommendations for legislative clarification by the enactment of specific amendments to the Copyright Act have not yet been implemented and the legal position remains unclear. In the light of the decision of the Full Federal Court in Telstra v Phone Directories it is timely to consider whether specific provisions should be enacted to clarify the position of computer-generated works under copyright law and, in particular, whether the requirement of human authorship for original works protected under Part III of the Copyright Act should now be reconceptualised to align with the realities of how copyright materials are created in the digital era.

  1. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    Science.gov (United States)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    The mobile computing device has many limitations, such as a relatively small user interface and slow computing speed. Face pose estimation, which augmented reality usually requires, can be used as an HCI and entertainment tool. For a real-time implementation of head pose estimation on relatively resource-limited mobile platforms, different constraints have to be faced while retaining sufficient face pose estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results running on a test Android mobile device delivered satisfactory real-time performance and accuracy.

  2. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  3. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    Science.gov (United States)

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particle's boundary constraints associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834

  5. The effect of tonal changes on voice onset time in Mandarin esophageal speech.

    Science.gov (United States)

    Liu, Hanjun; Ng, Manwa L; Wan, Mingxi; Wang, Supin; Zhang, Yi

    2008-03-01

    The present study investigated the effect of tonal changes on voice onset time (VOT) between normal laryngeal (NL) and superior esophageal (SE) speakers of Mandarin Chinese. VOT values were measured from the syllables /pha/, /tha/, and /kha/ produced at four tone levels by eight NL and seven SE speakers who were native speakers of Mandarin. Results indicated that Mandarin tones were associated with significantly different VOT values for NL speakers, in which the high-falling tone was associated with significantly shorter VOT values than the mid-rising tone and falling-rising tone. Regarding speaker group, SE speakers showed significantly shorter VOT values than NL speakers across all tone levels. This may be related to their use of the pharyngoesophageal (PE) segment as another sound source. SE speakers appear to take a shorter time to start PE segment vibration compared to NL speakers using the vocal folds for vibration.

  6. Review of quantum computation

    International Nuclear Information System (INIS)

    Lloyd, S.

    1992-01-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are ''universal,'' in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics

  7. Real-time management (RTM) by cloud computing system dynamics (CCSD) for risk analysis of Fukushima nuclear power plant (NPP) accident

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyo Sung [Yonsei Univ., Wonju Gangwon-do (Korea, Republic of). Dept. of Radiation Convergence Engineering; Woo, Tae Ho [Yonsei Univ., Wonju Gangwon-do (Korea, Republic of). Dept. of Radiation Convergence Engineering; The Cyber Univ. of Korea, Seoul (Korea, Republic of). Dept. of Mechanical and Control Engineering

    2017-03-15

    The earthquake- and tsunami-induced accident at the Fukushima nuclear power plant (NPP) is investigated by the real-time management (RTM) method. This non-linear logic of safety management is applied to enhance methodological confidence in NPP reliability. The case study of the earthquake is modeled for the fast reaction characteristics of RTM. System dynamics (SD) modeling simulations and cloud computing are applied for the RTM method, where the real-time simulation provides fast and effective communication for accident remediation and prevention. Current tablet computing systems can improve the safety standard of the NPP. Finally, the procedure of the cloud computing system dynamics (CCSD) modeling is constructed.

  9. Defibrillator charging before rhythm analysis significantly reduces hands-off time during resuscitation

    DEFF Research Database (Denmark)

    Hansen, L. K.; Folkestad, L.; Brabrand, M.

    2013-01-01

    BACKGROUND: Our objective was to reduce hands-off time during cardiopulmonary resuscitation, as increased hands-off time leads to higher mortality. METHODS: The European Resuscitation Council (ERC) 2005 and ERC 2010 guidelines were compared with an alternative sequence (ALT) in simulated scenarios including pulseless ventricular tachycardia. In ERC 2005 vs ALT, ... physicians were included; all had prior experience in advanced life support. Chest compressions were interrupted for shorter periods using ALT (mean, 6.7 vs 13.0 seconds). Analyzing data for ventricular tachycardia scenarios only, hands-off time was shorter using ALT (mean, 7.1 vs 18.2 seconds). In ERC 2010 vs ALT, 12 physicians were included; two physicians had no prior experience in advanced life support. Hands-off time was reduced using ALT (mean, 3.9 vs 5.6 seconds). Looking solely at ventricular tachycardia scenarios, hands-off time was shortened using ALT (mean, 4.5 vs 7.6 seconds). No significant reduction...

  10. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response; thereby the cross-dependencies of image voxels are reduced, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical
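
    For background, the ordered-subsets EM update cycles a multiplicative MLEM-type correction over partitions of the data; the proposal above interleaves TOF bins in the partitioning. A generic toy OSEM loop with interleaved subsets (a random matrix stands in for the real TOF-weighted projector, which is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_lor, n_sub = 64, 256, 4

x_true = rng.uniform(0.5, 2.0, n_vox)
A = rng.uniform(0.0, 1.0, (n_lor, n_vox))   # stand-in for the system matrix
y = rng.poisson(A @ x_true)                 # simulated counts

# Interleaved subsets (here over LOR index; the paper interleaves TOF bins too).
subsets = [np.arange(s, n_lor, n_sub) for s in range(n_sub)]

x = np.ones(n_vox)
for _ in range(10):                         # OSEM iterations
    for idx in subsets:
        As = A[idx]
        ratio = y[idx] / np.maximum(As @ x, 1e-12)   # data / forward projection
        x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```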

  11. Hard real-time quick EXAFS data acquisition with all open source software on a commodity personal computer

    International Nuclear Information System (INIS)

    So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.

    2007-01-01

    We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers. Among them: Linux running on an x86 desktop computer, RTAI for real-time response, the COMEDI driver for the data acquisition hardware, Qt and PyQt for the graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of a deterministic data-sampling period of tens of microseconds with typical timing jitter of several microseconds. At the same time, Linux is running other non-real-time processes handling the user interface. A modern Qt-based controls frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates lets the experimenters verify the quality of the data before detailed analysis. Python scripting is built in for automation. The typical data rate for continuous runs is around 10 Mbytes/min

  12. The role of real-time in biomedical science: a meta-analysis on computational complexity, delay and speedup.

    Science.gov (United States)

    Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U

    2015-03-01

    The concept of real-time is very important, as it deals with the realizability of computer based health care systems. In this paper we review biomedical real-time systems with a meta-analysis on computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time, with a discussion on CC or Sp. These papers were analyzed in terms of processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on implementation (I). Most authors compete on big O notation (O) and processing time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Effect of exposure time reduction towards sensitivity and SNR for computed radiography (CR) application in NDT

    International Nuclear Information System (INIS)

    Sapizah Rahim; Khairul Anuar Mohd Salleh; Noorhazleena Azaman; Shaharudin Sayuti; Siti Madiha Muhammad Amir; Arshad Yassin; Abdul Razak Hamzah

    2010-01-01

    Signal-to-noise ratio (SNR) and sensitivity study of a Computed Radiography (CR) system with reduced exposure time is presented. The purposes of this research are to determine the behavior of the SNR for three different thicknesses (step wedge: 5, 10 and 15 mm) and the ability of the CR system to recognize the hole-type penetrameter when the exposure time is decreased by up to 80% relative to the exposure chart (D7; ISOVOLT Titan E). It is shown that the SNR decreases with decreasing exposure time, but a high quality image is achieved up to an 80% reduction of exposure time. (author)
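
    As a reference point, image SNR in such studies is typically taken as the mean signal divided by the noise standard deviation within a uniform region of interest. A minimal sketch (synthetic image; the ROI coordinates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(1000.0, 25.0, size=(512, 512))  # synthetic uniform exposure

roi = image[200:300, 200:300]                      # uniform region of interest
snr = roi.mean() / roi.std(ddof=1)
print(f"SNR = {snr:.1f}")
```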

  14. Urdu translation and validation of shorter version of Positive Affect and Negative Affect Schedule (PANAS) on Pakistani bank employees.

    Science.gov (United States)

    Akhter, Noreen

    2017-10-01

    To translate, adapt and validate a shorter version of the positive affect and negative affect scale (PANAS) for Pakistani corporate employees. This cross-sectional study was conducted in the twin cities of Islamabad and Rawalpindi from October 2014 to December 2015. The study was completed in two independent parts. In part one, the scale was translated by forward translation. It was then pilot-tested and administered to customer services employees from commercial banks and the telecommunication sector. Data from the pilot study were analysed using exploratory factor analysis to extract the initial factor structure of the positive affect and negative affect scale. Part two comprised the main study. Commercial bank employees were included in the sample using a convenience sampling technique. Data from the main study were analysed using confirmatory factor analysis in order to establish the construct validity of the positive affect and negative affect scale. There were 145 participants in the first part of the study and 495 in the second. Results of the confirmatory factor analysis confirmed the two-factor structure of the positive affect and negative affect scale, suggesting that the scale has two distinct domains, i.e. positive affect and negative affect. The shorter version of the positive affect and negative affect scale was found to be a valid and reliable measure.

  15. A revised method to calculate the concentration time integral of atmospheric pollutants

    International Nuclear Information System (INIS)

    Voelz, E.; Schultz, H.

    1980-01-01

    It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the ''particle-in-cell'' (PIC) method. This is a numerical method in which the transport of, and the diffusion in, the plume are reproduced in such a way that particles representing the concentration are moved time step-wise in restricted regions (cells), separately with the advection velocity and the diffusion velocity. This has a systematic advantage over the steady state Gaussian plume model usually used. The fixed-point concentration time integral is calculated directly, instead of being substituted by the locally integrated concentration at a constant time as is done in the Gaussian model. In this way, inaccuracies due to the above mentioned computational techniques may be avoided for short-time emissions, as may be seen from the fact that the two integrals do not lead to the same results. Also, the PIC method makes it possible to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances from the source. This is an effect often observed in measurements. (author)
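
    The particle picture can be illustrated with a one-dimensional random-walk toy: particles are advected and diffused step-wise, and the fixed-point concentration time integral is accumulated by counting particles in a receptor cell at each step. All parameter values below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, n_steps, dt = 50_000, 400, 1.0
u, K = 2.0, 5.0                      # advection speed (m/s), diffusivity (m^2/s)
x = np.zeros(n_part)                 # all particles released at the source

cell_lo, cell_hi = 195.0, 205.0      # fixed receptor cell (m)
q_per_particle = 1.0 / n_part        # unit emission split over the particles

time_integral = 0.0
for _ in range(n_steps):
    # Step-wise transport: advection plus diffusion (random displacement).
    x += u * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(n_part)
    in_cell = np.count_nonzero((x >= cell_lo) & (x < cell_hi))
    time_integral += in_cell * q_per_particle / (cell_hi - cell_lo) * dt

print(f"concentration time integral at receptor: {time_integral:.3e} s/m")
```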

  16. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
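
    For context, the RLS algorithm mentioned here tracks adaptive autoregressive-type parameters sample by sample with a forgetting factor. A generic sketch of the standard RLS update on a synthetic signal (the model order, forgetting factor, and data are illustrative, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, lam = 1000, 4, 0.99            # samples, AR model order, forgetting factor

# Synthetic signal from a fixed, stable AR(4) process.
a_true = np.array([0.4, -0.2, 0.1, -0.05])
y = np.zeros(N)
for n in range(p, N):
    y[n] = a_true @ y[n - p:n][::-1] + 0.1 * rng.standard_normal()

w = np.zeros(p)                      # adaptive AR parameter estimates
P = 1e3 * np.eye(p)                  # inverse correlation matrix estimate
for n in range(p, N):
    x = y[n - p:n][::-1]             # regressor: the p most recent samples
    k = P @ x / (lam + x @ P @ x)    # gain vector
    e = y[n] - w @ x                 # a priori prediction error
    w += k * e
    P = (P - np.outer(k, x @ P)) / lam

print("true AR parameters:     ", a_true)
print("estimated AR parameters:", np.round(w, 2))
```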

  17. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  18. Computational time-resolved and resonant x-ray scattering of strongly correlated materials

    Energy Technology Data Exchange (ETDEWEB)

    Bansil, Arun [Northeastern Univ., Boston, MA (United States)

    2016-11-09

    Basic Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring the properties of matter at various length and time scales. The coming online of pulsed photon sources literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is therefore an urgent need to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of x-ray science. In particular, our Collaborative Research Team (CRT) focused on developing viable computational schemes for modeling x-ray scattering and photoemission spectra of strongly correlated materials in the time domain. The vast arsenal of formal/numerical techniques and approaches encompassed by the members of our CRT were brought to bear, through appropriate generalizations and extensions, to model the pumped state and the dynamics of this non-equilibrium state, and how it can be probed via x-ray absorption (XAS), emission (XES), resonant and non-resonant x-ray scattering, and photoemission processes. We explored the conceptual connections between time-domain problems and other second-order spectroscopies, such as resonant inelastic x-ray scattering (RIXS), because RIXS may be effectively thought of as a pump-probe experiment in which the incoming photon acts as the pump and the fluorescent decay is the probe. Alternatively, when the core-valence interactions are strong, one can view K-edge RIXS, for example, as the dynamic response of the material to the transient presence of a strong core-hole potential. Unlike an actual pump-probe experiment, here there is no mechanism for adjusting the time delay between the pump and the probe. However, the core hole

  19. Transmission time of a particle in the reflectionless Sech-squared potential: Quantum clock approach

    International Nuclear Information System (INIS)

    Park, Chang-Soo

    2011-01-01

    We investigate the time for a particle to pass through the reflectionless sech-squared potential. Using the Salecker-Wigner and Peres quantum clock, an average transmission time of a Gaussian wave packet representing the particle is explicitly evaluated in terms of average momentum and travel distance. The average transmission time is shown to be shorter than the time of free-particle motion and very close to the classical time for wave packets with well-localized momentum states. Since the clock measures the duration of the scattering process, the average transmission time can be interpreted as the average dwell time. -- Highlights: → We examine the scattering of a particle in the sech-squared potential. → We use a quantum clock to find an average transmission time. → It is very close to the classical time. → It is shorter than the free-particle time. → It is interpreted as the average dwell time.

  20. Computer simulation of the time evolution of a quenched model alloy in the nucleation region

    International Nuclear Information System (INIS)

    Marro, J.; Lebowitz, J.L.; Kalos, M.H.

    1979-01-01

    The time evolution of the structure function and of the cluster (or grain) distribution following quenching in a model binary alloy with a small concentration of minority atoms is obtained from computer simulations. The structure function S̄(k,t) obeys a simple scaling relation, S̄(k,t) = K^(-3) F(k/K), with K(t) ∝ t^(-a), a ≈ 0.25, during the latter and larger part of the evolution. During the same period, the mean cluster size grows approximately linearly with time

  1. Fault tolerant distributed real time computer systems for I and C of prototype fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2014-03-15

    Highlights: • Architecture of the distributed real time computer system (DRTCS) used in I and C of PFBR is explained. • Fault tolerant (hot standby) architecture, fault detection and switch over are detailed. • A scaled down model was used to study the functional and performance requirements of the DRTCS. • Quality of service parameters for the scaled down model were critically studied. - Abstract: The prototype fast breeder reactor (PFBR) is in an advanced stage of construction at Kalpakkam, India. A three-tier architecture is adopted for the instrumentation and control (I and C) of PFBR, wherein the bottom tier consists of real time computer (RTC) systems, the middle tier consists of process computers and the top tier constitutes display stations. These RTC systems are geographically distributed and networked together with the process computers and display stations. A hot standby architecture comprising dual redundant RTC systems with a switch over logic system is deployed in order to achieve fault tolerance. Fault tolerant dual redundant network connectivity is provided in each RTC system, and TCP/IP is selected as the network communication protocol. In order to assess the performance of the distributed RTC systems, a scaled down model was developed with 9 representative systems, in which nearly 15% of the I and C signals of PFBR were connected and monitored. Functional and performance testing was carried out for each RTC system, and the fault tolerant characteristics were studied by injecting various faults into the system and observing the performance. Various quality of service parameters, like connection establishment delay, priority parameter, transit delay, throughput, residual error ratio, etc., are critically studied for the network.
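
    The hot standby switch over described above can be pictured with a small watchdog sketch (hypothetical logic and timeout; the actual PFBR switch over logic system is not described in this abstract):

      import time

      HEARTBEAT_TIMEOUT = 0.5   # seconds of silence before fail-over (assumed)

      class SwitchOverLogic:
          """Promotes the hot standby RTC system when the active one goes silent."""

          def __init__(self, active, standby):
              self.active, self.standby = active, standby
              self.last_beat = time.monotonic()

          def on_heartbeat(self):
              # Called whenever the active system reports in.
              self.last_beat = time.monotonic()

          def poll(self):
              if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT:
                  # The standby has been tracking plant state all along,
                  # so it can take over without a cold restart.
                  self.active, self.standby = self.standby, self.active
                  self.last_beat = time.monotonic()
                  print(f"switch-over: {self.active} is now active")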

  2. Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles

    Directory of Open Access Journals (Sweden)

    M. Rahman

    2005-01-01

    This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by permittivity and conductivity profiles that vary only with depth. The scattering problem discussed is thus one-dimensional. The modeling tool is divided into two different schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed by a Green's function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within it can be calculated by constructing a Green's operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point. It is a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving an inverse scattering problem by reconstructing the permittivity profile of the medium. Though several algorithms could be used to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here due to the advantage that it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined by the time required by the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data by a simple deconvolution technique. The rest of the task can easily be performed by applying a numerical approach to determine the different profile parameters. Both the solvers have been found to have the

  3. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Artificial vision systems, also known as computer vision, are potent quality-inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287% and eb* = 4.314%, which ensures adequate and efficient automated application in industrial quality-control processes in the food industry sector.

  4. Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2017-01-01

    The most common approach to assess the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE) and refined MSE (RMSE) measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes, and they cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR) stochastic processes. The method makes use of linear state-space (SS) models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE) measure is first tested in simulations, both theoretically, to relate the multiscale complexity of AR processes to their dynamical properties, and over short process realizations, to assess its computational reliability in comparison with RMSE. Then, it is applied to time series of heart period, arterial pressure, and respiration measured in healthy subjects monitored at rest and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe the activity of physiological mechanisms producing biological oscillations at different temporal scales better than RMSE.
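
    As background, the standard non-parametric MSE procedure that LMSE is designed to improve upon coarse-grains the series and computes sample entropy at each scale; a minimal sketch (naive O(n²) matching, illustrative tolerance and embedding choices) is:

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """Sample entropy SampEn(m, r) of a 1-D series (naive O(n^2) sketch)."""
          x = np.asarray(x, dtype=float)
          r *= x.std()
          def count_matches(mm):
              templates = np.lib.stride_tricks.sliding_window_view(x, mm)
              c = 0
              for i in range(len(templates) - 1):
                  d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
                  c += int((d <= r).sum())
              return c
          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def multiscale_entropy(x, scales=range(1, 11)):
          """Coarse-grain x at each scale, then compute sample entropy."""
          out = []
          for s in scales:
              n = len(x) // s
              coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
              out.append(sample_entropy(coarse))
          return out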

  5. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Thomas J. Marlowe

    1998-01-01

    Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs that can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach that significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.

  6. Using real-time fMRI brain-computer interfacing to treat eating disorders.

    Science.gov (United States)

    Sokunbi, Moses O

    2018-05-15

    Real-time functional magnetic resonance imaging based brain-computer interfacing (fMRI neurofeedback) has shown encouraging outcomes in the treatment of psychiatric and behavioural disorders. However, its use in the treatment of eating disorders is very limited. Here, we give a brief overview of how to design and implement fMRI neurofeedback intervention for the treatment of eating disorders, considering the basic and essential components. We also attempt to develop potential adaptations of fMRI neurofeedback intervention for the treatment of anorexia nervosa, bulimia nervosa and binge eating disorder. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. 42 CFR 137.78 - May a Self-Governance Tribe negotiate a funding agreement for a term longer or shorter than one...

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false May a Self-Governance Tribe negotiate a funding agreement for a term longer or shorter than one year? 137.78 Section 137.78 Public Health PUBLIC HEALTH... SERVICES TRIBAL SELF-GOVERNANCE Funding General § 137.78 May a Self-Governance Tribe negotiate a funding...

  8. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems, which almost certainly have no fast solution algorithms, is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to fall in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques that can greatly reduce the effective size of a wide variety of realistic systems are explored

  9. Computer Virus and Trends

    OpenAIRE

    Tutut Handayani; Soenarto Usna,Drs.MMSI

    2004-01-01

    Since its first appearance in the mid-1980s, the computer virus has invited various controversies that last to this day. Along with the development of computer systems technology, computer viruses too find new ways to spread themselves through a variety of existing communications media. This paper discusses several topics related to computer viruses, namely: the definition and history of computer viruses; the basics of computer viruses; the state of computer viruses at this time; and ...

  10. Some selection criteria for computers in real-time systems for high energy physics

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1980-01-01

    The right choice of program source is of great importance for the organization of real-time systems, as cost and reliability are decisive factors. Some selection criteria for program sources for high energy physics multiwire chamber spectrometers (MWCS) are considered in this report. MWCSs accept bits of information from event patterns. Large and small computers, microcomputers and intelligent controllers in CAMAC crates are compared with respect to the following characteristics: data exchange speed, number of addresses for peripheral devices, cost of interfacing a peripheral device, sizes of buffer and mass memory, configuration costs, and the mean time between failures (MTBF). The results of the comparisons are shown in plots and histograms, which allow the selection of program sources according to the above criteria. (Auth.)

  11. MUSIDH, multiple use of simulated demographic histories, a novel method to reduce computation time in microsimulation models of infectious diseases.

    Science.gov (United States)

    Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F

    2008-09-01

    Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionately with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection, i.e., non-fatal infections. MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if they were the infection histories of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor 50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
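
    The core idea can be sketched in a few lines (all interfaces and rates here are hypothetical stand-ins for the real microsimulation):

      import random

      N_REUSE = 50   # infection histories per demographic history (assumed)

      def simulate_demography(rng):
          """One expensive life history: (birth_year, age_at_death). Illustrative."""
          return rng.randrange(1900, 2000), rng.uniform(0.0, 90.0)

      def simulate_infection(demo, rng):
          """One cheap infection history on top of a given demographic history."""
          _, death_age = demo
          age = rng.expovariate(1.0 / 30.0)   # age at infection (assumption)
          return None if age >= death_age else age

      rng = random.Random(0)
      population = []
      for _ in range(1000):                   # only 1000 demographic histories...
          demo = simulate_demography(rng)     # ...the costly part, run once each
          for _ in range(N_REUSE):            # ...shared by 50 "individuals"
              population.append((demo, simulate_infection(demo, rng)))
      # `population` now acts like 50,000 individuals at ~1/50 the demographic cost.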

  12. Effect of Group Exercising and Adjusting the Brace at Shorter Intervals on Cobb Angle and Quality of Life of Patients with Idiopathic Scoliosis

    Directory of Open Access Journals (Sweden)

    Zahra Hedayati

    2016-01-01

    Objective: Bracing along with exercising is the most effective protocol in patients with idiopathic scoliosis who have Cobb angles of 25 to 45 degrees. However, since the psychological aspects of scoliosis treatment may affect quality of life, and the exact timing for adjusting the pads of the Milwaukee brace is unknown, the aim of this study was to evaluate the effect of exercising in a group, with the brace adjusted at shorter intervals, compared to the routine protocol, in the treatment of idiopathic scoliosis. Materials & Methods: Thirty-four patients with idiopathic scoliosis who had Cobb angles of 15 to 50 degrees were included in this study and were divided into experimental and control groups. The patients of the two groups participated in an eleven-week treatment program that differed between the groups. Quality-of-life scores of both groups were evaluated before and after the intervention using the SRS-22 questionnaire, as were the scoliosis angles, according to the primary and secondary radiographic X-rays. Results: Statistical analysis was performed using the paired t-test within each group and the independent t-test between the two groups before and after treatment. After the intervention, the severity of the scoliosis curvature and the satisfaction domain differed significantly between the experimental and control groups (P=0.04). Moreover, regarding quality of life, the difference between patients with Cobb angles less than 30 degrees and patients with Cobb angles greater than 31 degrees was significant in the domains of self-image, satisfaction, and total score (P<0.05). Conclusion: Adjusting the brace at shorter intervals along with group exercising during the eleven weeks of treatment increased satisfaction and reduced the scoliosis Cobb angles of the patients.

  13. A Distributed Computing Framework for Real-Time Detection of Stress and of Its Propagation in a Team.

    Science.gov (United States)

    Pandey, Parul; Lee, Eun Kyung; Pompili, Dario

    2016-11-01

    Stress is one of the key factors that impact the quality of our daily life: from productivity and efficiency in production processes to the ability of (civilian and military) individuals to make rational decisions. Also, stress can propagate from one individual to others working in close proximity or toward a common goal, e.g., in a military operation or workforce. Real-time assessment of the stress of individuals alone is, however, not sufficient, as understanding its source and the direction in which it propagates in a group of people is equally, if not more, important. A continuous, near real-time, in situ personal stress monitoring system to quantify the stress level of individuals and its direction of propagation in a team is envisioned. However, stress monitoring of an individual via his/her mobile device may not always be possible for extended periods of time due to the limited battery capacity of these devices. To overcome this challenge, a novel distributed mobile computing framework is proposed to organize the resources in the vicinity and form a mobile device cloud that enables offloading of computation tasks in the stress detection algorithm from resource-constrained devices (low residual battery, limited CPU cycles) to resource-rich devices. Our framework also supports computation parallelization and workflows that define how the data and tasks are divided and assigned among the entities of the framework. The direction of propagation and magnitude of influence of stress in a group of individuals are studied by applying real-time, in situ analysis of Granger causality. Tangible benefits (in terms of energy expenditure and execution time) of the proposed framework in comparison to a centralized framework are presented via thorough simulations and real experiments.
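
    To illustrate the final analysis step, a pairwise Granger causality test between two stress time series can be run with statsmodels (signal construction and lag order are illustrative):

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)

      # Synthetic stress signals: person B lags person A by two samples,
      # so we expect A to "Granger-cause" B.
      stress_a = rng.normal(size=300).cumsum()
      stress_b = np.roll(stress_a, 2) + rng.normal(scale=0.5, size=300)

      # Column order matters: the test asks whether column 2 helps predict column 1.
      data = np.column_stack([stress_b, stress_a])
      results = grangercausalitytests(np.diff(data, axis=0), maxlag=4)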

  14. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications.

    Science.gov (United States)

    Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L

    2016-08-01

    Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.

  15. "Taller and Shorter": Human 3-D Spatial Memory Distorts Familiar Multilevel Buildings.

    Science.gov (United States)

    Brandt, Thomas; Huber, Markus; Schramm, Hannah; Kugler, Günter; Dieterich, Marianne; Glasauer, Stefan

    2015-01-01

    Animal experiments report contradictory findings on the presence of a behavioural and neuronal anisotropy in the vertical and horizontal capabilities of spatial orientation and navigation. We performed a pointing experiment in humans on the imagined 3-D direction of the location of various invisible goals that were distributed horizontally and vertically in a familiar multilevel hospital building. The 21 participants were employees who had worked for years in this building. The hypothesis was that comparison of the experimentally determined directions with the true directions would reveal systematic inaccuracy or dimensional anisotropy of the localizations. The study provides the first evidence that the internal representation of a familiar multilevel building is distorted compared to the dimensions of the true building: vertically 215% taller and horizontally 51% shorter. This was demonstrated not only in the mathematical reconstruction of the mental model based on the analysis of the pointing experiments but also in the participants' drawings of the front view and the ground plan of the building. Thus, in the mental model both planes were altered in different directions: compressed for the horizontal floor plane and stretched for the vertical column plane. This could be related to the anisotropic behavioural performance of humans in horizontal and vertical navigation in such buildings.

  16. TIA model is attainable in Wistar rats by intraluminal occlusion of the MCA for 10 min or shorter.

    Science.gov (United States)

    Durukan Tolvanen, A; Tatlisumak, E; Pedrono, E; Abo-Ramadan, U; Tatlisumak, T

    2017-05-15

    Transient ischemic attack (TIA) has received only little attention in the experimental research field. Recently, we introduced a TIA model for mice, and here we set similar principles for simulating this human condition in Wistar rats. In the model: 1) the transient nature of the event is ensured, and 2) 24 h after the event animals are free from any sensorimotor deficit and from any lesion detectable by magnetic resonance imaging (MRI). Animals experienced varying durations of ischemia (5, 10, 12.5, 15, 25, and 30 min, n = 6-8 per group) by intraluminal middle cerebral artery occlusion (MCAO). Ischemia severity and reperfusion rates were controlled by cerebral blood flow measurements. Sensorimotor neurological evaluations and MRI at 24 h differentiated between TIA and ischemic stroke. Hematoxylin and eosin staining and apoptotic cell counts revealed the pathological correlates of the event. We found that 12.5 min of ischemia was already long enough to induce ischemic stroke in Wistar rats. Durations of 10 min or shorter induced neither gross neurological deficits nor infarcts visible on MRI, but histologically caused selective neuronal necrosis. A separate group of animals with 10 min of ischemia followed up to 1 week after reperfusion remained free of infarction and of any MRI signal change. Thus, focal cerebral ischemia of 10 min or shorter induced by intraluminal MCAO in Wistar rats provides a clinically relevant TIA model in the rat. This model is useful for studying the molecular correlates of TIA. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    Science.gov (United States)

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ-specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (type m2.xlarge, containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied to evaluate the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are
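
    The fluence-map step described above, least-squares dose matching plus total-variation regularization to favor piecewise constant maps, can be sketched with a plain projected subgradient descent (matrices, weights and step sizes are illustrative assumptions, not the paper's solver):

      import numpy as np

      def tv_subgrad(x):
          """Subgradient of the 1-D total variation sum_i |x[i+1] - x[i]|."""
          s = np.sign(np.diff(x))
          g = np.zeros_like(x)
          g[:-1] -= s
          g[1:] += s
          return g

      def optimize_fluence(A, d, lam=0.1, lr=1e-3, iters=2000):
          """Minimize ||A x - d||^2 + lam * TV(x) subject to x >= 0.

          A : beamlet dose matrix (voxels x beamlets), e.g. from MC kernels
          d : prescribed dose per voxel
          """
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              grad = 2.0 * A.T @ (A @ x - d) + lam * tv_subgrad(x)
              x = np.maximum(x - lr * grad, 0.0)   # project onto x >= 0
          return x

      # Toy usage with random stand-ins for the MC-computed dose kernels.
      rng = np.random.default_rng(0)
      fluence = optimize_fluence(rng.random((50, 20)), rng.random(50))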

  1. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    Science.gov (United States)

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

    Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, to automate the acquisition of confocal image stacks, and to perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real-time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.

  2. Shorter epilepsy duration is associated with better seizure outcome in temporal lobe epilepsy surgery

    Directory of Open Access Journals (Sweden)

    Lucas Crociati Meguins

    2015-03-01

    Objective: To investigate the influence of patient age and seizure onset on the surgical outcome of temporal lobe epilepsy (TLE). Method: A retrospective observational investigation performed on a cohort of patients from 2000 to 2012. Results: A total of 229 patients were included. One hundred and eleven of 179 patients (62%) were classified as Engel I in the group < 50 years old, versus 33 of 50 (66%) in the group ≥ 50 years old (p = 0.82). Of those classified Engel I, 88 (61%) reported epilepsy duration inferior to 10 years and 56 (39%) superior to 10 years (p < 0.01). Of the patients not seizure free, 36 (42%) reported epilepsy duration inferior to 10 years and 49 (58%) superior to 10 years (p < 0.01). Conclusion: Patients with shorter duration of epilepsy before surgery had better postoperative seizure control than patients with longer duration of seizures.

  3. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    Science.gov (United States)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at several kilometers depth, it is primarily used by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data quality control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The

  4. Shorter Versus Longer Shift Durations to Mitigate Fatigue and Fatigue-Related Risks in Emergency Medical Services Personnel and Related Shift Workers: A Systematic Review

    Science.gov (United States)

    2018-01-11

    Background: This study comprehensively reviewed the literature on the impact of shorter versus longer shifts on critical and important outcomes for Emergency Medical Services (EMS) personnel and related shift worker groups. Methods: Six databases (e....

  5. Exploring the quantum speed limit with computer games

    Science.gov (United States)

    Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.

    2016-04-01

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  6. Contrast timing in computed tomography: Effect of different contrast media concentrations on bolus geometry

    International Nuclear Information System (INIS)

    Mahnken, Andreas H.; Jost, Gregor; Seidensticker, Peter; Kuhl, Christiane; Pietsch, Hubertus

    2012-01-01

    Objective: To assess the effect of low-osmolar, monomeric contrast media with different iodine concentrations on bolus shape in aortic CT angiography. Materials and methods: Repeated sequential computed tomography scanning of the descending aorta of eight beagle dogs (5 male, 12.7 ± 3.1 kg) was performed without table movement with a standardized CT scan protocol. Iopromide 300 (300 mg I/mL), iopromide 370 (370 mg I/mL) and iomeprol 400 (400 mg I/mL) were administered via a foreleg vein with an identical iodine delivery rate of 1.2 g I/s and a total iodine dose of 300 mg I/kg body weight. Time-enhancement curves were computed and analyzed. Results: Iopromide 300 showed the highest peak enhancement (445.2 ± 89.1 HU), steepest up-slope (104.2 ± 17.5 HU/s) and smallest full width at half maximum (FWHM; 5.8 ± 1.0 s). Peak enhancement, duration of FWHM, enhancement at FWHM and up-slope differed significantly between iopromide 300 and iomeprol 400 (p < 0.05). Conclusions: Low-viscosity iopromide 300 results in a better defined bolus with a significantly higher peak enhancement, steeper up-slope and smaller FWHM when compared to iomeprol 400. These characteristics potentially affect contrast timing.

  7. Time profile of type 3 bursts in decameter and hectometer range

    Science.gov (United States)

    Takakura, T.; Naito, Y.; Ohki, K.

    1973-01-01

    The following new hypothesis is proposed: the decay time of plasma waves is much shorter than the time scale of type 3 bursts, especially at low frequencies. Accordingly, the time variation of the radio flux at a given frequency merely corresponds to the flux of fast electrons passing through the corresponding plasma layer.

  8. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    Science.gov (United States)

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

    Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are done using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out whether the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement between two methods of registration. Group 1 included 128 cases performed using BrainLab versions 2.1-5.1; this version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration for improving the operating time in computer-navigated primary uncemented total hip replacements.

  9. Prenatal paracetamol exposure is associated with shorter anogenital distance in male infants

    Science.gov (United States)

    Fisher, B.G.; Thankamony, A.; Hughes, I.A.; Ong, K.K.; Dunger, D.B.; Acerini, C.L.

    2016-01-01

    STUDY QUESTION What is the relationship between maternal paracetamol intake during the masculinisation programming window (MPW, 8–14 weeks of gestation) and male infant anogenital distance (AGD), a biomarker for androgen action during the MPW? SUMMARY ANSWER Intrauterine paracetamol exposure during 8–14 weeks of gestation is associated with shorter AGD from birth to 24 months of age. WHAT IS ALREADY KNOWN The increasing prevalence of male reproductive disorders may reflect environmental influences on foetal testicular development during the MPW. Animal and human xenograft studies have demonstrated that paracetamol reduces foetal testicular testosterone production, consistent with reported epidemiological associations between prenatal paracetamol exposure and cryptorchidism. STUDY DESIGN, SIZE, DURATION Prospective cohort study (Cambridge Baby Growth Study), with recruitment of pregnant women at ~12 post-menstrual weeks of gestation from a single UK maternity unit between 2001 and 2009, and 24 months of infant follow-up. Of 2229 recruited women, 1640 continued with the infancy study after delivery, of whom 676 delivered male infants and completed a medicine consumption questionnaire. PARTICIPANTS/MATERIALS, SETTING, METHOD Mothers self-reported medicine consumption during pregnancy by a questionnaire administered during the perinatal period. Infant AGD (measured from 2006 onwards), penile length and testicular descent were assessed at 0, 3, 12, 18 and 24 months of age, and age-specific Z scores were calculated. Associations between paracetamol intake during three gestational periods (< 8, 8–14 and > 14 weeks) and these outcomes were tested by linear mixed models. Two hundred and twenty-five (33%) of six hundred and eighty-one male infants were exposed to paracetamol during pregnancy, of whom sixty-eight were reported to be exposed during 8–14 weeks. AGD measurements were available for 434 male infants. MAIN RESULTS AND THE ROLE OF CHANCE Paracetamol exposure during 8–14

  10. LUCKY-TD code for solving the time-dependent transport equation with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the time-dependent transport equation in the P_mS_n group approximation with the use of parallel computations is presented. The algorithm is implemented in the LUCKY-TD code for supercomputers employing the MPI standard for data exchange between parallel processes.
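
    As a generic illustration of the MPI pattern such a code relies on (this is not LUCKY-TD itself), a 1-D domain decomposition with ghost-cell exchange between neighbouring ranks might look like:

      # Run with, e.g.: mpiexec -n 4 python transport_halo.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                     # cells per rank (assumed)
      u = np.zeros(n_local + 2)         # local slab with one ghost cell per side
      if rank == 0:
          u[1] = 1.0                    # inject a pulse at the left boundary

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      c = 0.5                           # Courant number (assumed)
      for _ in range(1000):
          # Exchange ghost cells with neighbours before each sweep.
          comm.Sendrecv(sendbuf=u[-2:-1], dest=right,
                        recvbuf=u[0:1], source=left)
          # First-order upwind sweep for a left-to-right streaming term.
          u[1:-1] -= c * (u[1:-1] - u[0:-2])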

  11. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    recognition is simpler and requires less computational resources compared to other inputs such as facial expressions. The Berlin database of emotional ...

  12. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    Science.gov (United States)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS

  13. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    Science.gov (United States)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. Operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature, which would inform operators of the time available to execute procedure steps and warn them if they reach the time limit. Such a feature would also increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and an influence propagation rule. The predicted action time for each step is obtained from similar accident cases using support vector regression. The derived time is then processed and displayed on a CBP user interface.
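
    A minimal sketch of the regression step using scikit-learn's SVR (the feature encoding is an invented stand-in for the paper's accident-case features):

      import numpy as np
      from sklearn.svm import SVR

      # Hypothetical training data: one row per executed procedure step from
      # similar past accident cases; columns are invented features
      # (step index, number of sub-actions, plant-state severity score).
      X_train = np.array([[1, 3, 0.2], [2, 5, 0.4], [3, 2, 0.7], [4, 6, 0.5]])
      y_train = np.array([40.0, 75.0, 30.0, 90.0])   # observed durations [s]

      model = SVR(kernel="rbf", C=10.0, epsilon=1.0)
      model.fit(X_train, y_train)

      # The predicted time for the next step feeds the time remaining display.
      next_step = np.array([[5, 4, 0.6]])
      print(f"predicted duration: {model.predict(next_step)[0]:.0f} s")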

  14. Calculation of brain atrophy using computed tomography and a new atrophy measurement tool

    Science.gov (United States)

    Bin Zahid, Abdullah; Mikheev, Artem; Yang, Andrew Il; Samadani, Uzma; Rusinek, Henry

    2015-03-01

    Purpose: To determine if brain atrophy can be calculated by performing volumetric analysis on conventional computed tomography (CT) scans in spite of the relatively low contrast of this modality. Materials & Method: CTs for 73 patients from the local Veterans Affairs database were selected. Exclusion criteria: AD, NPH, tumor, and alcohol abuse. Protocol: conventional clinical acquisition (Toshiba; helical, 120 kVp, X-ray tube current 300 mA, slice thickness 3-5 mm). A locally developed automatic algorithm was used to segment the intracranial cavity (ICC) using (a) a white matter seed, (b) constrained growth, limited by the inner skull layer, and (c) topological connectivity. The ICC was further segmented into CSF and brain parenchyma using a threshold of 16 HU. Results: Age distribution: 25-95 yrs (mean 67 ± 17.5 yrs). A significant correlation was found between age and CSF/ICC (r = 0.695, p < 0.001), showing that brain atrophy can be calculated using automated software and conventional CT. Compared to MRI, CT is more widely available, cheaper, and less affected by head motion due to a ~100 times shorter scan time. Work is in progress to improve the precision of the measurements, possibly leading to assessment of longitudinal changes within the patient.
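
    The final thresholding step maps directly onto array operations; a minimal sketch (the ICC mask is assumed to come from the seed-and-growth segmentation described above) is:

      import numpy as np

      HU_CSF_MAX = 16   # threshold separating CSF from parenchyma (from the record)

      def split_icc(ct_volume, icc_mask):
          """Partition an intracranial cavity mask into CSF and parenchyma by HU."""
          csf = icc_mask & (ct_volume <= HU_CSF_MAX)
          parenchyma = icc_mask & (ct_volume > HU_CSF_MAX)
          return csf, parenchyma

      def atrophy_index(ct_volume, icc_mask):
          """CSF/ICC volume ratio, the atrophy measure correlated with age."""
          csf, _ = split_icc(ct_volume, icc_mask)
          return csf.sum() / icc_mask.sum()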

  15. Cloud Computing: A model Construct of Real-Time Monitoring for Big Dataset Analytics Using Apache Spark

    Science.gov (United States)

    Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer

    2018-01-01

    The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity on the cloud. While large-scale data processing, analysis, and storage platforms such as cloud computing were available before, their use is increasing rapidly. Today, the major challenge is to address how to monitor and control these massive amounts of data and perform analysis in real-time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real-time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The methodology ensures that the most sensible action is carried out during the procedure of fine-grained monitoring and generates the highest efficacy and cost-saving fault repair through three construction control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
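
    A minimal Spark Structured Streaming skeleton for the kind of pipeline described, ingesting monitoring events and streaming out fault alerts, might look like this (source, schema, and threshold are illustrative assumptions):

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("vm-monitor").getOrCreate()

      # Hypothetical source: one JSON metric record per line on a socket.
      lines = (spark.readStream
               .format("socket")
               .option("host", "localhost")
               .option("port", 9999)
               .load())

      schema = "vm_id STRING, ts TIMESTAMP, cpu DOUBLE, boot_failed BOOLEAN"
      events = (lines
                .select(F.from_json(F.col("value"), schema).alias("e"))
                .select("e.*"))

      # Flag VMs whose start-up failed or whose CPU saturates (assumed threshold).
      alerts = events.where(F.col("boot_failed") | (F.col("cpu") > 0.95))

      (alerts.writeStream
       .outputMode("append")
       .format("console")
       .start()
       .awaitTermination())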

  16. Feasibility of a shorter Goal Attainment Scaling method for a pediatric spasticity clinic - The 3-milestones GAS.

    Science.gov (United States)

    Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M

    2017-07-01

    Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment, and by the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in the clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. The GAS score distribution across the 5 GAS levels was examined to assess whether therapists could reliably predict outcomes and whether the 3-milestones GAS yielded distributions similar to those of the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement-quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  17. Time Synchronization Strategy Between On-Board Computer and FIMS on STSAT-1

    Directory of Open Access Journals (Sweden)

    Seong Woo Kwak

    2004-06-01

    STSAT-1 was launched in September 2003 with the main payload of the Far Ultra-violet Imaging Spectrograph (FIMS). The mission of FIMS is to observe the universe and aurora. In this paper, we suggest a simple and reliable strategy, adopted in STSAT-1, to synchronize time between the on-board computer (OBC) and FIMS. Given the characteristics of STSAT-1, this strategy is devised to maintain the reliability of the satellite system and to reduce implementation cost by using minimal electronic circuits. We suggest two methods with different synchronization resolutions to cope with unexpected faults in space. The backup method, with low resolution, can be activated when the main method has problems.

  18. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. This enables the use of standardized development tools and third-party software upgrades, as well as rapid replacement of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability compared with a custom approach at the time of deployment, as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the life cycle of the weapon system and, because modifications are simple, can migrate between weapon system variants. This paper presents a reference design using the new approach, built on an AltiVec PowerPC parallel COTS platform with a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.

  19. Prognostic value of incidental hypervascular micronodules detected on cone-beam computed tomography angiography of patients with liver metastasis

    Energy Technology Data Exchange (ETDEWEB)

    Odisio, Bruno C.; Mahvash, Armeen; Gupta, Sanjay; Tam, Alda L.; Murthy, Ravi [The University of Texas MD Anderson Cancer Center, Department of Interventional Radiology, Division of Diagnostic Imaging, Houston, TX (United States); Cox, Veronica L.; Faria, Silvana C. [The University of Texas MD Anderson Cancer Center, Diagnostic Radiology, Houston, TX (United States); Yamashita, Suguru; Vauthey, Jean-Nicolas [The University of Texas MD Anderson Cancer Center, Surgical Oncology, Houston, TX (United States); Shi, Xiao [Baylor College of Medicine, Department of Diagnostic Radiology, Houston, TX (United States); Ensor, Joe [Biostatistics of the Houston Methodist Cancer Center, Houston, TX (United States); Jones, Aaron K. [The University of Texas MD Anderson Cancer Center, Imaging Physics, Houston, TX (United States)

    2017-11-15

    To determine the clinical relevance of incidentally-found hypervascular micronodules (IHM) on cone-beam computed tomography angiography (CBCTA) in patients with liver metastasis undergoing transarterial (chemo)embolization (TACE/TAE). This was a HIPAA-compliant institutional review board-approved single-institution retrospective review of 95 non-cirrhotic patients (52 men; mean age, 60 years) who underwent CBCTA prior to (chemo)embolic delivery. IHM were defined by the presence of innumerable subcentimetre hepatic parenchymal hypervascular foci not detected on pre-TACE/TAE contrast-enhanced cross-sectional imaging. Multivariate analysis was performed to compare time to tumour progression (TTP) between patients with and without IHM. IHM were present in 21 (22%) patients. Patients with IHM had a significantly shorter intrahepatic TTP, driven by a higher frequency of developing new liver metastasis (hazard ratio [HR]: 1.99; 95% confidence interval [CI]: 1.08-3.67, P = 0.02). Patients with IHM trended towards a shorter TTP of the tumour(s) treated with TACE/TAE (HR: 1.72; 95% CI: 0.98-3.01, P = 0.056). Extrahepatic TTP was not significantly different between the two cohorts (P = 0.27). Patients with IHM on CBCTA have a worse prognosis due to a significantly higher risk of developing new hepatic tumours. Further work is needed to elucidate the underlying mechanisms of pathogenesis. (orig.)

  20. Accuracy and computational time of a hierarchy of growth rate definitions for breeder reactor fuel

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Borg, R.C.; Ott, K.O.

    1979-01-01

    For a hierarchy of four logically different definitions for calculating the asymptotic growth of fast breeder reactor fuel, an investigation is performed concerning the comparative accuracy and computational effort associated with each definition. The definition based on detailed calculation of the accumulating fuel in an expanding park of reactors asymptotically yields the most accurate value of the infinite time growth rate, γ^∞, which is used as a reference value. The computational effort involved with the park definition is very large. The definition based on the single reactor calculation of the equilibrium surplus production rate and fuel inventory gives a value for γ^∞ of comparable accuracy to the park definition and uses significantly less central processor unit (CPU) time. The third definition is based on a continuous treatment of the reactor fuel cycle for a single reactor and gives a value for γ^∞ that accurately approximates the second definition. The continuous definition requires very little CPU time. The fourth definition employs the isotopic breeding worths, w_i^*, for a projection of the asymptotic growth rate. The CPU time involved in this definition is practically nil if its calculation is based on the few-cycle depletion calculation normally performed for core design and critical enrichment evaluations. The small inaccuracy (≈1%) of the breeding-worth-based definition is well within the inaccuracy range that results unavoidably from other sources such as nuclear cross sections, group constants, and flux calculations. This fully justifies the use of this approach in routine calculations.
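
    As a rough illustration of the second definition, the growth rate follows from dividing the equilibrium surplus fissile production rate by the fissile inventory; the numbers below are hypothetical placeholders, not values from the study.

        import math

        # Second definition, sketched: asymptotic growth rate as equilibrium
        # surplus fissile production over fissile inventory (hypothetical data).
        surplus_kg_per_yr = 150.0   # net fissile material bred per reactor-year
        inventory_kg = 3000.0       # equilibrium fissile inventory, in- and ex-pile

        gamma_inf = surplus_kg_per_yr / inventory_kg   # units: 1/yr
        doubling_time = math.log(2) / gamma_inf
        print(f"gamma_inf = {gamma_inf:.3f}/yr -> fuel doubling time = {doubling_time:.1f} yr")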

  1. Call-to-balloon time dashboard in patients with ST-segment elevation myocardial infarction results in significant improvement in the logistic chain.

    Science.gov (United States)

    Hermans, Maaike P J; Velders, Matthijs A; Smeekes, Martin; Drexhage, Olivier S; Hautvast, Raymond W M; Ytsma, Timon; Schalij, Martin J; Umans, Victor A W M

    2017-08-04

    Timely reperfusion with primary percutaneous coronary intervention (pPCI) in ST-segment elevation myocardial infarction (STEMI) patients is associated with superior clinical outcomes. Aiming to reduce ischaemic time, an innovative system for home-to-hospital (H2H) time monitoring was implemented, which enabled real-time evaluation of ischaemic time intervals, regular feedback and improvements in the logistic chain. The objective of this study was to assess the results after implementation of the H2H dashboard for monitoring and evaluation of ischaemic time in STEMI patients. Ischaemic time in STEMI patients transported by emergency medical services (EMS) and treated with pPCI in the Noordwest Ziekenhuis, Alkmaar before (2008-2009; n=495) and after the implementation of the H2H dashboard (2011-2014; n=441) was compared. Median time intervals were significantly shorter in the H2H group (door-to-balloon time 32 [IQR 25-43] vs. 40 [IQR 28-55] minutes), and the H2H dashboard was independently associated with shorter time delays. Real-time monitoring and feedback on time delay with the H2H dashboard improves the logistic chain in STEMI patients, resulting in shorter ischaemic time intervals.

  2. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure of the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close…
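
    The sub-cubic algorithm itself is not reproduced in the record; for reference, the sketch below computes the quartet distance naively in O(n^4) time using the four-point condition on path lengths, returning None for the unresolved (star) quartets that arise at inner nodes of degree ≥ 3. The adjacency-dict tree representation is an assumption for illustration.

        from itertools import combinations
        from collections import deque

        def bfs_dist(adj, src):
            # Unit-length path distances from src in a tree given as adjacency dict.
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def quartet_topology(dist, a, b, c, d):
            # Four-point condition: the pairing with the strictly smallest sum
            # of path lengths is the induced topology; a tie means a star quartet.
            sums = {
                "ab|cd": dist[a][b] + dist[c][d],
                "ac|bd": dist[a][c] + dist[b][d],
                "ad|bc": dist[a][d] + dist[b][c],
            }
            best = min(sums.values())
            winners = [k for k, v in sums.items() if v == best]
            return winners[0] if len(winners) == 1 else None

        def quartet_distance(adj1, adj2, leaves):
            d1 = {x: bfs_dist(adj1, x) for x in leaves}
            d2 = {x: bfs_dist(adj2, x) for x in leaves}
            return sum(quartet_topology(d1, *q) != quartet_topology(d2, *q)
                       for q in combinations(leaves, 4))

        # Two 4-leaf trees differing in their single quartet: ab|cd vs ac|bd.
        adj1 = {"a": ["u"], "b": ["u"], "c": ["v"], "d": ["v"],
                "u": ["a", "b", "v"], "v": ["c", "d", "u"]}
        adj2 = {"a": ["u"], "c": ["u"], "b": ["v"], "d": ["v"],
                "u": ["a", "c", "v"], "v": ["b", "d", "u"]}
        print(quartet_distance(adj1, adj2, ["a", "b", "c", "d"]))   # -> 1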

  3. The application of digital computers to near-real-time processing of flutter test data

    Science.gov (United States)

    Hurley, S. R.

    1976-01-01

    Procedures used in monitoring, analyzing, and displaying flight and ground flutter test data are presented. These procedures include three digital computer programs developed to process structural response data in near real time. Qualitative and quantitative modal stability data are derived from time history response data resulting from rapid sinusoidal frequency sweep forcing functions, tuned-mode quick stops, and pilot-induced control pulses. The techniques have been applied to both fixed and rotary wing aircraft, during flight, whirl tower rotor systems tests, and wind tunnel flutter model tests. A hydraulically driven oscillatory aerodynamic vane excitation system used in the flight flutter test programs conducted during Lockheed L-1011 and S-3A development is described.

  4. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing-dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
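
    The record does not specify the plasticity rule's parameters; a common choice is the pair-based exponential STDP window sketched below, where the amplitudes and time constant are hypothetical.

        import math

        A_PLUS, A_MINUS = 0.01, 0.012   # potentiation/depression amplitudes
        TAU = 20.0                      # STDP time constant (ms)
        W_MIN, W_MAX = 0.0, 1.0         # hard weight bounds

        def stdp_update(w, t_pre, t_post):
            """Return the weight after one pre/post spike pairing."""
            dt = t_post - t_pre
            if dt > 0:      # pre fires before post -> potentiation
                w += A_PLUS * math.exp(-dt / TAU)
            elif dt < 0:    # post fires before pre -> depression
                w -= A_MINUS * math.exp(dt / TAU)
            return min(W_MAX, max(W_MIN, w))

        w = 0.5
        w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pair strengthens
        print(f"weight after causal pairing: {w:.4f}")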

  5. A Karaoke System with Real-Time Media Merging and Sharing Functions for a Cloud-Computing-Integrated Mobile Device

    Directory of Open Access Journals (Sweden)

    Her-Tyan Yeh

    2013-01-01

    Full Text Available Mobile devices such as personal digital assistants (PDAs), smartphones, and tablets have increased in popularity and are extremely efficient for work-related, social, and entertainment uses. Popular entertainment services have also attracted substantial attention. Thus, relevant industries have exerted considerable efforts in establishing a method by which mobile devices can be used to develop excellent and convenient entertainment services. Because cloud-computing technology is mature and possesses a strong computing processing capacity, integrating this technology into the entertainment service function in mobile devices can reduce the data load on a system and maintain mobile device performance. This study combines cloud computing with a mobile device to design a karaoke system that contains real-time media merging and sharing functions. This system enables users to download music videos (MVs) to their mobile device and sing and record their performance using the device. They can upload the recorded song to the cloud server, where it is merged with real-time media. Subsequently, by employing a media streaming technology, users can store their personal MVs in their mobile device or computer and instantaneously share these videos with others on the Internet. Through this process, people can instantly watch shared videos, enjoy the leisure and entertainment effects of mobile devices, and satisfy their desire for singing.

  6. CERN School of Computing enriches its portfolio of events: first thematic CSC next spring

    CERN Multimedia

    2013-01-01

    tCSC2013 is a new concept prototyped for the first time in 2013. It aims at complementing the existing portfolio of CSC events: the historical main summer school, organised since 1970, the inverted CSCs (iCSCs), organised since 2005, and the special schools, such as the one organised in Bombay in 2006. Shorter, smaller, and focused are the three distinguishing features of the thematic CSC (tCSC). But, though different from the main CSCs, the tCSCs maintain the same guiding principles: an academic dimension on an advanced topic; theory and practice; networking and socialization. The first thematic CSC will take place in Split, Croatia, from 3 to 7 June. All applicants are welcome, including former and future CSC participants in the main summer school. The theme is "Mastering state-of-the-art computing", covering: Data-oriented design: Designing for data, data-inten...

  7. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work, the time-scale problem is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times of the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
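
    The record gives no formulas, but in the acceleration-factor approach associated with this line of work, physical times are recovered by rescaling each biased time step by the exponentiated metadynamics bias. A minimal sketch under that assumption follows; the temperature and bias values are placeholders chosen for argon-like conditions.

        import numpy as np

        KB = 0.0083145   # Boltzmann constant, kJ/(mol K)

        def unbiased_time(bias_kj_mol, dt_ps, temperature=80.0):
            """Rescale biased simulation time to physical time.

            Each biased step dt is stretched by exp(beta * V_bias(t)),
            the hyperdynamics-style acceleration factor.
            """
            beta = 1.0 / (KB * temperature)
            return np.sum(dt_ps * np.exp(beta * np.asarray(bias_kj_mol)))

        bias = np.array([0.0, 2.0, 5.0, 9.0, 14.0])   # bias along trajectory (kJ/mol)
        t_phys = unbiased_time(bias, dt_ps=1.0)
        print(f"effective unbiased time: {t_phys:.3e} ps from {len(bias)} ps biased")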

  8. Nation-Scale Adoption of Shorter Breast Radiation Therapy Schedules Can Increase Survival in Resource Constrained Economies: Results From a Markov Chain Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Atif J., E-mail: atif.j.khan@rutgers.edu [Department of Radiation Oncology, Robert Wood Johnson Medical School/Cancer Institute of New Jersey, New Brunswick, New Jersey (United States); Rafique, Raza [Suleman Dawood School of Business, Lahore University of Management Sciences, Lahore (Pakistan); Zafar, Waleed [Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore (Pakistan); Shah, Chirag [Department of Radiation Oncology, Cleveland Clinic, Cleveland, Ohio (United States); Haffty, Bruce G. [Department of Radiation Oncology, Robert Wood Johnson Medical School/Cancer Institute of New Jersey, New Brunswick, New Jersey (United States); Vicini, Frank [Michigan HealthCare Professionals, Farmington Hills, Michigan (United States); Jamshed, Arif [Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore (Pakistan); Zhao, Yao [Rutgers University School of Business, Newark, New Jersey (United States)

    2017-02-01

    Purpose: Hypofractionated whole breast irradiation and accelerated partial breast irradiation (APBI) offer women options for shorter courses of breast radiation therapy. The impact of these shorter schedules on the breast cancer populations of emerging economies with limited radiation therapy resources is unknown. We hypothesized that adoption of these schedules would improve throughput in the system and, by allowing more women access to life-saving treatments, improve patient survival within the system. Methods and Materials: We designed a Markov chain model to simulate the different health states that a postlumpectomy or postmastectomy patient could enter over the course of a 20-year follow-up period. Transition rates between health states were adapted from published data on recurrence rates. We used primary data from a tertiary care hospital in Lahore, Pakistan, to populate the model with proportional use of mastectomy versus breast conservation and to estimate the proportion of patients suitable for APBI. Sensitivity analyses on the use of APBI and relative efficacy of APBI were conducted to study the impact on the population. Results: The shorter schedule resulted in more women alive and more women remaining without evidence of disease (NED) compared with the conventional schedule, with an absolute difference of about 4% and 7% at 15 years, respectively. Among women who had lumpectomies, the chance of remaining alive and with an intact breast was 62% in the hypofractionation model and 54% in the conventional fractionation model. Conclusions: Increasing throughput in the system can result in improved survival, improved chances of remaining without evidence of disease, and improved chances of remaining alive with a breast. These findings are significant and suggest that adoption of hypofractionation in emerging economies is not simply a question of efficiency and cost but one of access to care and patient survivorship.
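
    A Markov chain cohort model of this kind can be sketched in a few lines; the states and annual transition probabilities below are hypothetical placeholders, not the published rates used in the study.

        import numpy as np

        # Simplified post-treatment health states and an annual transition
        # matrix (rows sum to 1; all probabilities are hypothetical).
        states = ["NED", "recurrence", "dead"]
        P = np.array([
            [0.93, 0.05, 0.02],   # no evidence of disease (NED)
            [0.00, 0.85, 0.15],   # recurrence
            [0.00, 0.00, 1.00],   # absorbing state
        ])

        cohort = np.array([1.0, 0.0, 0.0])   # everyone starts treated and NED
        for _ in range(20):                  # 20-year follow-up horizon
            cohort = cohort @ P

        for state, frac in zip(states, cohort):
            print(f"{state:>10}: {frac:.1%} of cohort at 20 years")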

  9. Nation-Scale Adoption of Shorter Breast Radiation Therapy Schedules Can Increase Survival in Resource Constrained Economies: Results From a Markov Chain Analysis

    International Nuclear Information System (INIS)

    Khan, Atif J.; Rafique, Raza; Zafar, Waleed; Shah, Chirag; Haffty, Bruce G.; Vicini, Frank; Jamshed, Arif; Zhao, Yao

    2017-01-01

    Purpose: Hypofractionated whole breast irradiation and accelerated partial breast irradiation (APBI) offer women options for shorter courses of breast radiation therapy. The impact of these shorter schedules on the breast cancer populations of emerging economies with limited radiation therapy resources is unknown. We hypothesized that adoption of these schedules would improve throughput in the system and, by allowing more women access to life-saving treatments, improve patient survival within the system. Methods and Materials: We designed a Markov chain model to simulate the different health states that a postlumpectomy or postmastectomy patient could enter over the course of a 20-year follow-up period. Transition rates between health states were adapted from published data on recurrence rates. We used primary data from a tertiary care hospital in Lahore, Pakistan, to populate the model with proportional use of mastectomy versus breast conservation and to estimate the proportion of patients suitable for APBI. Sensitivity analyses on the use of APBI and relative efficacy of APBI were conducted to study the impact on the population. Results: The shorter schedule resulted in more women alive and more women remaining without evidence of disease (NED) compared with the conventional schedule, with an absolute difference of about 4% and 7% at 15 years, respectively. Among women who had lumpectomies, the chance of remaining alive and with an intact breast was 62% in the hypofractionation model and 54% in the conventional fractionation model. Conclusions: Increasing throughput in the system can result in improved survival, improved chances of remaining without evidence of disease, and improved chances of remaining alive with a breast. These findings are significant and suggest that adoption of hypofractionation in emerging economies is not simply a question of efficiency and cost but one of access to care and patient survivorship.

  10. Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning.

    Science.gov (United States)

    Molloy, Katharine; Moore, David R; Sohoglu, Ediz; Amitay, Sygal

    2012-01-01

    The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions, and optimized spacing between sessions.

  11. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
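
    The processing chain enumerated above can be mirrored on the CPU with NumPy/SciPy, as in the sketch below (the published implementation ran on dual GPUs); the wavelength calibration passed in as wl is a hypothetical input.

        import numpy as np
        from scipy.signal import hilbert

        def process_frame(spectra, wl):
            """Zero-filling full-range FD-OCT chain for one frame.

            spectra: (1024 lateral, 2048 spectral) real fringe data
            wl:      2048 sample wavelengths, increasing (assumed calibration)
            """
            n_lat, n_ax = spectra.shape
            n_pad = 8192
            half = n_ax // 2

            # Forward FFT, zero padding to 8192, inverse FFT back to the
            # spectral domain: a sinc-interpolated (upsampled) fringe signal.
            f = np.fft.fft(spectra, axis=1)
            padded = np.zeros((n_lat, n_pad), dtype=complex)
            padded[:, :half] = f[:, :half]
            padded[:, -half:] = f[:, -half:]
            up = np.fft.ifft(padded, axis=1).real * (n_pad / n_ax)

            # Linear interpolation from wavelength- to wavenumber-uniform grid.
            wl_up = np.linspace(wl[0], wl[-1], n_pad)
            k_up = 2 * np.pi / wl_up                        # decreasing in index
            k_uni = np.linspace(k_up[-1], k_up[0], n_pad)   # increasing
            lin = np.array([np.interp(k_uni, k_up[::-1], row[::-1]) for row in up])

            # Lateral Hilbert transform -> complex spectrum for full-range imaging.
            complex_spec = hilbert(lin, axis=0)

            # FFT along the spectral axis -> axial profiles, then log scaling.
            ascan = np.fft.fft(complex_spec, axis=1)
            return 20 * np.log10(np.abs(ascan) + 1e-12)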

  12. Effects of Relativity Lead to 'Warp Speed' Computations

    International Nuclear Information System (INIS)

    Vay, J.-L.

    2007-01-01

    A scientist at Lawrence Berkeley National Laboratory has discovered that a previously unnoticed consequence of Einstein's special theory of relativity can lead to a speedup of computer calculations by orders of magnitude when applied to the computer modeling of a certain class of physical systems. This new finding offers the possibility of tackling some problems in a much shorter time and with far more precision than was possible before, as well as studying some configurations in every detail for the first time. The basis of Einstein's theory is the principle of relativity, which states that the laws of physics are the same for all observers, whether the 'observer' is a turtle 'racing' with a rabbit, or a beam of particles moving at near light speed. From the invariance of the laws of physics, one may be tempted to infer that the complexity of a system is independent of the motion of the observer, and consequently that a computer simulation will require the same number of mathematical operations independently of the reference frame used for the calculation. Length contraction and time dilation are well-known consequences of the special theory of relativity which lead to very counterintuitive effects. An alien observing human activity through a telescope in a spaceship traveling in the vicinity of the earth near the speed of light would see everything flattened in the direction of propagation of its spaceship (for him, the earth would have the shape of a pancake), while all motions on earth would appear extremely slow, slowed almost to a standstill. Conversely, a space scientist observing the alien through a telescope based on earth would see a flattened alien, slowed almost to a standstill, in a flattened spaceship. Meanwhile, an astronaut sitting in a spaceship moving at some lower velocity than the alien spaceship with regard to earth might see both the alien spaceship and the earth flattened in the same proportion and the motion unfolding in each of them at the same
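
    Both effects described here scale with the same Lorentz factor, and a one-liner makes the magnitudes concrete:

        import math

        def lorentz_gamma(beta):
            """Lorentz factor for speed v = beta * c."""
            return 1.0 / math.sqrt(1.0 - beta ** 2)

        for beta in (0.5, 0.9, 0.99, 0.999):
            g = lorentz_gamma(beta)
            print(f"v = {beta:5.3f} c: lengths contract and clocks slow by {g:6.2f}x")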

  13. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    Science.gov (United States)

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) ⁶⁰Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. To improve computational efficiency with a view to practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.

  14. A Real-Time Computation Model of the Electromagnetic Force and Torque for a Maglev Planar Motor with the Concentric Winding

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-01-01

    Full Text Available The traditional model of the electromagnetic force and torque does not take the coil corners into account, which is a major cause of force and torque fluctuation in the motor. To reduce this fluctuation, a more accurate real-time computation model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover, and the corner respectively, are established. The first harmonic of the magnetic flux density distribution of a Halbach magnet array is taken into account in this model. The coil is divided into a straight segment and a corner segment based on its structure. For the straight segment, the traditional Lorentz force method can be used to compute the electromagnetic force and torque as a function of the mover position. For the corner segment, a numerical calculation method is used to obtain its electromagnetic force and torque. From these separate analyses, an electromagnetic model suitable for practical application is derived. Compared with the well-known harmonic model, the proposed real-time computation model is found to have lower model inaccuracy. Additionally, the real-time capability of the maglev planar motor model and the decoupled computation is validated on an NI PXI platform (Austin, TX, USA).
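
    The Lorentz-force treatment of a coil segment amounts to a line integral of I dl x B; the sketch below evaluates it numerically for a quarter-circle corner segment under a first-harmonic Halbach field. The field model, pole pitch, coil radius, and current are all hypothetical.

        import numpy as np

        B0, TAU = 1.0, 0.05    # field amplitude (T) and pole pitch (m); hypothetical
        K = np.pi / TAU        # spatial wavenumber of the first harmonic

        def halbach_B(p):
            """First harmonic of a Halbach array field at point p = (x, y, z)."""
            x, _, z = p
            decay = np.exp(-K * z)
            return np.array([B0 * decay * np.cos(K * x),
                             0.0,
                             -B0 * decay * np.sin(K * x)])

        def segment_force(path, current, n=1000):
            """Numerical line integral F = I * sum(dl x B) along a coil path."""
            t = np.linspace(0.0, 1.0, n + 1)
            pts = np.array([path(s) for s in t])
            dl = np.diff(pts, axis=0)
            mids = 0.5 * (pts[:-1] + pts[1:])
            B = np.array([halbach_B(p) for p in mids])
            return current * np.cross(dl, B).sum(axis=0)

        # Quarter-circle corner segment, radius 5 mm, at height z = 1 mm.
        def corner(s):
            return np.array([0.005 * np.cos(0.5 * np.pi * s),
                             0.005 * np.sin(0.5 * np.pi * s),
                             0.001])

        print("corner-segment force (N):", segment_force(corner, current=2.0))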

  15. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.

    Science.gov (United States)

    Christensen, Matthew A; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T; Nguyen, Kaylin T; Olgin, Jeffrey E; Pletcher, Mark J; Marcus, Gregory M

    2016-01-01

    Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that
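
    The analysis described, a linear regression of sleep metrics on 30-day average screen-time with demographic adjustment, can be sketched as follows on synthetic data; the column names, distributions, and effect sizes are invented for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 653   # analysis subset size reported in the record

        # Synthetic stand-in for the cohort (all values hypothetical).
        df = pd.DataFrame({
            "screen_min_per_hr": rng.gamma(2.0, 2.0, n),
            "age": rng.integers(18, 80, n),
        })
        # Simulate shorter sleep for heavier screen users, as the study reports.
        df["sleep_hours"] = (7.5 - 0.08 * df["screen_min_per_hr"]
                             - 0.005 * df["age"] + rng.normal(0, 0.8, n))

        X = sm.add_constant(df[["screen_min_per_hr", "age"]])
        fit = sm.OLS(df["sleep_hours"], X).fit()
        print(fit.params)   # effect estimate per minute-per-hour of screen-time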

  16. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.

    Directory of Open Access Journals (Sweden)

    Matthew A Christensen

    Full Text Available Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata

  17. Adjusting patients streaming initiated by a wait time threshold in emergency department for minimizing opportunity cost.

    Science.gov (United States)

    Kim, Byungjoon B J; Delbridge, Theodore R; Kendrick, Dawn B

    2017-07-10

    Purpose Two different systems for streaming patients were considered to improve efficiency measures such as waiting times (WTs) and length of stay (LOS) for a current emergency department (ED). A typical fast track area (FTA) and a fast track with a wait time threshold (FTW) were designed, and their effectiveness was compared from the perspective of the total opportunity cost of all patients' WTs in the ED. The paper aims to discuss these issues. Design/methodology/approach This retrospective case study used computerized ED patient arrival-to-discharge time logs (between July 1, 2009 and June 30, 2010) to build computer simulation models for the FTA and FTW systems. Various wait time thresholds were applied to stream patients of different acuity levels, with the national average wait time for each acuity level considered as a threshold. Findings The fast track with a wait time threshold (FTW) showed a statistically significantly shorter total wait time than the current system or a typical FTA system. The patient streaming management would improve the service quality of the ED as well as patients' opportunity costs by reducing the total LOS in the ED. Research limitations/implications The results of this study were based on computer simulation models with some assumptions, such as no transfer times between processes, an assumed arrival distribution of patients, and no deviation of flow pattern. Practical implications When the streaming of patient flow can be managed based on the wait time before being seen by a physician, it is possible for patients to see a physician within a tolerable wait time, resulting in a less crowded ED. Originality/value A new streaming scheme of patients' flow may improve the performance of the fast track system.
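
    A wait-time-threshold routing rule like the FTW can be illustrated with a toy one-server-per-track simulation; the arrival and service rates below are hypothetical, not the study's calibrated values.

        import random

        def simulate(threshold_min, n_patients=5000, seed=1):
            """Route a patient to fast track when the main-ED wait would
            exceed threshold_min minutes (all parameters hypothetical)."""
            random.seed(seed)
            main_free = fast_free = 0.0   # time each track is next free
            t = total_wait = 0.0
            for _ in range(n_patients):
                t += random.expovariate(1 / 8.0)   # ~8 min between arrivals
                wait_main = max(0.0, main_free - t)
                if wait_main > threshold_min:      # stream to the fast track
                    wait = max(0.0, fast_free - t)
                    fast_free = t + wait + random.expovariate(1 / 10.0)
                else:                              # stay in the main ED
                    wait = wait_main
                    main_free = t + wait + random.expovariate(1 / 20.0)
                total_wait += wait
            return total_wait / n_patients

        for thr in (15, 30, 60):
            print(f"threshold {thr:>2} min -> mean wait {simulate(thr):5.1f} min")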

  18. Development of a Computer-aided Learning System for Graphical Analysis of Continuous-Time Control Systems

    Directory of Open Access Journals (Sweden)

    J. F. Opadiji

    2010-06-01

    Full Text Available We present the development and deployment process of a computer-aided learning tool which serves as a training aid for undergraduate control engineering courses. We show the process of algorithm construction and implementation of the software, which is also aimed at teaching software development at the undergraduate level. The scope of this project is limited to the graphical analysis of continuous-time control systems.
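
    A graphical-analysis back end of this kind reduces largely to standard step- and frequency-response computations; a minimal sketch using scipy.signal follows, with a hypothetical second-order plant.

        import numpy as np
        from scipy import signal

        # Second-order plant G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2); hypothetical.
        wn, zeta = 2.0, 0.3
        G = signal.TransferFunction([wn ** 2], [1.0, 2 * zeta * wn, wn ** 2])

        # Time-domain view: step response and percent overshoot.
        t, y = signal.step(G, T=np.linspace(0, 10, 500))
        print(f"peak overshoot: {(y.max() - 1) * 100:.1f}% at t = {t[np.argmax(y)]:.2f} s")

        # Frequency-domain view: Bode magnitude data.
        w, mag, phase = signal.bode(G)
        print(f"resonant peak: {mag.max():.1f} dB")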

  19. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
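
    The record does not give the offloading criterion, but a common baseline is to offload when the estimated remote execution plus transfer time beats local execution; the sketch below applies that rule with hypothetical device, cloud, and network parameters.

        def should_offload(instr, data_mb, local_mips=2_000, cloud_mips=20_000,
                           bandwidth_mbps=10.0):
            """Offload when remote compute plus transfer beats local compute."""
            t_local = instr / (local_mips * 1e6)
            t_cloud = instr / (cloud_mips * 1e6) + data_mb * 8 / bandwidth_mbps
            return t_cloud < t_local, t_local, t_cloud

        for instr, mb in [(5e9, 1.0), (5e9, 50.0)]:
            offload, t_l, t_c = should_offload(instr, mb)
            print(f"{instr:.0e} instr, {mb:4.1f} MB -> offload={offload} "
                  f"(local {t_l:.2f} s, cloud {t_c:.2f} s)")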

  20. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.