WorldWideScience

Sample records for shorter computation time

  1. The risk of shorter fasting time for pediatric deep sedation.

    Science.gov (United States)

    Clark, Mathew; Birisci, Esma; Anderson, Jordan E; Anliker, Christina M; Bryant, Micheal A; Downs, Craig; Dalabih, Abdallah

    2016-01-01

    Current guidelines adopted by the American Academy of Pediatrics call for prolonged fasting times before performing pediatric procedural sedation and analgesia (PSA). PSA is increasingly provided to children outside of the operating theater by sedation-trained pediatric providers and does not require airway manipulation. We investigated the safety of a shorter fasting time compared to a longer, guideline-compliant fasting time, and sought to identify the association between fasting time and sedation-related complications. This is a prospective observational study that included children 2 months to 18 years of age with an American Society of Anesthesiologists physical status classification of I or II, who underwent deep sedation for elective procedures performed by pediatric critical care providers. Procedures included radiologic imaging studies, electroencephalograms, auditory brainstem response testing, echocardiograms, Botox injections, and other minor surgical procedures. Subjects were divided into two groups depending on the length of their fasting time (4-6 h and >6 h). Complication rates were calculated and compared between the two groups. In the studied group of 2487 subjects, 1007 (40.5%) had a fasting time of 4-6 h and the remaining 1480 (59.5%) subjects had fasted for >6 h. There were no statistically significant differences in any of the studied complications between the two groups. This study found no difference in complication rate with regard to fasting time in our cohort, which included only healthy children receiving elective procedures performed by sedation-trained pediatric critical care providers. This suggests that using a shorter fasting time may be safe for procedures performed outside of the operating theater that do not involve high-risk patients or airway manipulation.

  2. YAOPBM-II: extension to higher degrees and to shorter time series

    Energy Technology Data Exchange (ETDEWEB)

    Korzennik, S G [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States)], E-mail: skorzennik@cfa.harvard.edu

    2008-10-15

    In 2005, I presented a new fitting methodology (Yet AnOther Peak Bagging Method - YAOPBM), derived for very long time series (2088 days long) and applied it to low-degree modes, ℓ ≤ 25. That very long time series was also sub-divided into shorter segments (728 days long) that were each fitted over the same range of degrees, to estimate changes with solar activity levels. I present here the extension of this method in several 'directions': a) to substantially higher degrees (ℓ ≤ 125); b) to shorter time series (364 and 182 days long); and c) to additional 728-day-long segments, covering now some 10 years of observations. I discuss issues with the fitting, namely the leakage matrix and the f and p1 modes at very low frequencies, and I present some of the characteristics of the observed temporal changes.

  3. Physical activity during video capsule endoscopy correlates with shorter bowel transit time.

    Science.gov (United States)

    Stanich, Peter P; Peck, Joshua; Murphy, Christopher; Porter, Kyle M; Meyer, Marty M

    2017-09-01

    Video capsule endoscopy (VCE) is limited by reliance on bowel motility for propulsion, and lack of physical activity has been proposed as a cause of incomplete studies. Our aim was to prospectively investigate the association between physical activity and VCE bowel transit. Ambulatory outpatients receiving VCE were eligible for the study. A pedometer was attached at the time of VCE ingestion and the step count was recorded at the end of the procedure. VCE completion was assessed by logistic regression models, which included step count (500 steps as one unit). Total transit time was analyzed by Cox proportional hazards models. The hazard ratios (HR) with 95% confidence intervals (CI) indicated the "hazard" of completion, such that HRs > 1 indicated a reduced transit time. A total of 100 patients were included. VCE was completed in 93 patients (93%). The median step count was 2782 steps. Step count was not significantly associated with VCE completion (odds ratio 1.45, 95% CI 0.84, 2.49). Pedometer step count was significantly associated with shorter total, gastric, and small-bowel transit times (HR 1.09, 95% CI 1.03, 1.16; HR 1.05, 95% CI 1.00, 1.11; HR 1.07, 95% CI 1.01, 1.14, respectively). Higher body mass index (BMI) was significantly associated with VCE completion (HR 1.87, 95% CI 1.18, 2.97) and shorter bowel transit times (HR 1.05, 95% CI 1.02, 1.08). Increased physical activity during outpatient VCE was associated with shorter bowel transit times but not with study completion. In addition, BMI was a previously unreported clinical characteristic associated with VCE completion and should be included as a variable of interest in future studies.

  4. Effects of shorter versus longer storage time of transfused red blood cells in adult ICU patients

    DEFF Research Database (Denmark)

    Rygård, Sofie L; Jonsson, Andreas B; Madsen, Martin B

    2018-01-01

    on the effects of shorter versus longer storage time of transfused RBCs on outcomes in ICU patients. METHODS: We conducted a systematic review with meta-analyses and trial sequential analyses (TSA) of randomised clinical trials including adult ICU patients transfused with fresher versus older or standard issue...... blood. RESULTS: We included seven trials with a total of 18,283 randomised ICU patients; two trials of 7504 patients were judged to have low risk of bias. We observed no effects of fresher versus older blood on death (relative risk 1.04, 95% confidence interval (CI) 0.97-1.11; 7349 patients; TSA......-adjusted CI 0.93-1.15), adverse events (1.26, 0.76-2.09; 7332 patients; TSA-adjusted CI 0.16-9.87) or post-transfusion infections (1.07, 0.96-1.20; 7332 patients; TSA-adjusted CI 0.90-1.27). The results were unchanged by including trials with high risk of bias. TSA confirmed the results and the required...

  5. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

    Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018. Previously, it has been concluded that improvement in running economy (RE) might be considered a key to continued improvement in performance when no further increase in V̇O2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy together with gait characteristics was determined. Participants showed moderate to very good RE at the first (202 ± 26 ml·kg⁻¹·km⁻¹) and second (188 ± 12 ml·kg⁻¹·km⁻¹) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE at the second run (r = 0.782; p = 0.022), which represented the intensity of the anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured under everyday training settings. We provide evidence that GCT is associated with the superior RE of female Kenyan runners.

  6. How do shorter working hours affect employee wellbeing? : Shortening working time in Finland

    OpenAIRE

    Lahdenperä, Netta

    2017-01-01

    The way work is done is dramatically changing due to digital breakthroughs. Generation Y is entering the workforce with a changed attitude towards work, and organizations are increasing their focus on employee wellbeing. Organizations that adopt the new model of work and understand the importance of the wellbeing of their staff are leading the transition to a more efficient business, a better working life and a healthier planet. The thesis explores the numerous effects of shorter working...

  7. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    Science.gov (United States)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-01-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system, with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, the computer industry has since advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine whether it was possible to construct a multi-mission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, and packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the

  9. Shorter Perceived Outpatient MRI Wait Times Associated With Higher Patient Satisfaction.

    Science.gov (United States)

    Holbrook, Anna; Glenn, Harold; Mahmood, Rabia; Cai, Qingpo; Kang, Jian; Duszak, Richard

    2016-05-01

    The aim of this study was to assess differences in perceived versus actual wait times among patients undergoing outpatient MRI examinations and to correlate those times with patient satisfaction. Over 15 weeks, 190 patients presenting for outpatient MRI in a radiology department in which "patient experience" is one of the stated strategic priorities were asked to (1) estimate their wait times for various stages in the imaging process and (2) state their satisfaction with their imaging experience. Perceived times were compared with actual electronic time stamps. Perceived and actual times were compared and correlated with standardized satisfaction scores using the Kendall τ correlation. The mean actual wait time between patient arrival and examination start was 53.4 ± 33.8 min, whereas patients perceived a mean wait time of 27.8 ± 23.1 min, a statistically significant underestimation of 25.6 min. Shorter perceived wait times at all points during patient encounters were correlated with higher satisfaction scores, and shorter perceived and actual wait times were both correlated with higher satisfaction scores. As satisfaction surveys play a larger role in an environment of metric transparency and value-based payments, better understanding of such factors will be increasingly important. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  10. Why shorter half-times of repair lead to greater damage in pulsed brachytherapy

    International Nuclear Information System (INIS)

    Fowler, J.F.

    1993-01-01

    Pulsed brachytherapy consists of replacing continuous irradiation at low dose rate with a series of medium dose-rate fractions in the same overall time and to the same total dose. For example, pulses of 1 Gy given every 2 hr or 2 Gy given every 4 hr would deliver the same 70 Gy in 140 hr as continuous irradiation at 0.5 Gy/hr. If higher dose rates are used, even with gaps between the pulses, the biological effects are always greater. Provided that dose rates in the pulse do not exceed 3 Gy/hr, and provided that pulses are given as often as every 2 hr, the inevitable increases of biological effect are no larger than a few percent (of biologically effective dose or extrapolated response dose). However, these increases are more likely to exceed 10% (and thus become clinically significant) if the half-time of repair of sublethal damage is short (less than 1 hr) rather than long. This somewhat unexpected finding is explained in detail here. The rise and fall of Biologically Effective Dose (and hence of Relative Effectiveness, for a constant dose in each pulse) is calculated during and after single pulses, assuming a range of values of T1/2, the half-time of sublethal damage repair. The area under each curve is proportional to Biologically Effective Dose and therefore to log cell kill. Pulses at 3 Gy/hr do yield greater biological effect (dose × integrated Relative Effectiveness) than lower dose-rate pulses or continuous irradiation at 0.5 Gy/hr. The contrast is greater for the short T1/2 of 0.5 hr than for the longer T1/2 of 1.5 hr. More biological damage will be done (compared with traditional low dose-rate brachytherapy) in tissues with short T1/2 (0.1-1 hr) than in tissues with longer T1/2 values. 8 refs., 3 figs
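    The dose-rate dependence described above comes from the linear-quadratic model with mono-exponential repair of sublethal damage. As a minimal sketch of the arithmetic involved, the snippet below evaluates the standard continuous-irradiation BED formula (Dale's g-factor form); the α/β value and the comparison scenario are illustrative assumptions, not parameters taken from this paper.

```python
import math

def bed_continuous(dose_rate, duration_h, t_half_h, alpha_beta):
    """BED for continuous irradiation at a constant dose rate (Gy/h) under
    the linear-quadratic model with mono-exponential sublethal-damage repair
    (Dale's formula). All parameter values passed below are illustrative."""
    mu = math.log(2) / t_half_h                    # repair rate constant (1/h)
    total_dose = dose_rate * duration_h            # e.g., 0.5 Gy/h x 140 h = 70 Gy
    phi = 1 - (1 - math.exp(-mu * duration_h)) / (mu * duration_h)
    return total_dose * (1 + (2 * dose_rate / (mu * alpha_beta)) * phi)

# Continuous 0.5 Gy/h for 140 h, assumed alpha/beta = 3 Gy (late effects):
for t_half in (0.5, 1.5):                          # short vs long repair half-time
    print(f"T1/2 = {t_half} h -> BED = {bed_continuous(0.5, 140, t_half, 3.0):.1f} Gy")
```

    A full pulsed-brachytherapy calculation would additionally account for incomplete repair between successive pulses, which is where the heightened sensitivity to short T1/2 values discussed above arises.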

  11. Comprehensive borehole management for shorter drilling time; Umfassendes Bohrfortschrittsmanagement zur Verkuerzung der Bohrprojektdauer

    Energy Technology Data Exchange (ETDEWEB)

    Roehrlich, M. [ExxonMobil Production Deutschland GmbH, Hannover (Germany)

    2007-09-13

    In 2006, the trademarked ExxonMobil Fast Drill Process (FDP) was also introduced in the German ExxonMobil boreholes. The aim of the process is to maximize the drilling speed for every meter drilled. The process makes it possible to ensure borehole management on the basis of quantitative data and in consideration of all phases that are relevant for sinking a borehole. The FDP is used world-wide in all ExxonMobil drilling departments. More than 1.35 million meters are drilled annually in many different boreholes with different geological conditions, drilling profiles and international sites. The results were similar in many cases, with a significant increase in rate of penetration (ROP) and drill bit life, and with less damage caused by vibrations. FDP was developed on the basis of real-time monitoring of the mechanical specific energy (MSE) required for drilling. MSE monitoring was found to be an effective tool for detecting inefficient functioning of the drill bit and the overall system. To make operation more efficient, the causes must be identified and measures must be taken accordingly, taking into account the potential risks involved in such measures. MSE monitoring is a tool, while FDP is a broad management process ensuring that MSE and many other data sources are used effectively for optimisation of the ROP. Consequent implementation of the process resulted in a significant increase of the ROP. The major elements required for achieving this goal are discussed. (orig.)
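    For context, MSE in drilling practice is conventionally computed with Teale's equation, which divides the energy input (weight on bit plus rotary work) by the volume of rock removed. The snippet below is a minimal sketch in common field units; the function name and the example numbers are illustrative assumptions, not values from the ExxonMobil process.

```python
import math

def mechanical_specific_energy(wob_lbf, bit_diameter_in, rpm, torque_ftlbf, rop_ft_hr):
    """Teale's mechanical specific energy in psi (field units):
    MSE = WOB/A + 120*pi*RPM*T / (A*ROP), with bit area A in square inches.
    A rising MSE at constant rock strength flags an inefficient bit or system."""
    area = math.pi * bit_diameter_in ** 2 / 4.0
    return wob_lbf / area + 120.0 * math.pi * rpm * torque_ftlbf / (area * rop_ft_hr)

# Illustrative snapshot: 30 klbf WOB, 8.5-in bit, 120 rpm, 8 kft-lbf torque, 60 ft/h
print(f"MSE = {mechanical_specific_energy(30_000, 8.5, 120, 8_000, 60):.0f} psi")
```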

  12. Moderate Exercise Allows for shorter Recovery Time in Critical Limb Ischemia

    Directory of Open Access Journals (Sweden)

    Anne Lejay

    2017-07-01

    Full Text Available Whether and how moderate exercise might allow for accelerated limb recovery in chronic critical limb ischemia (CLI) remains to be determined. Chronic CLI was surgically induced in mice, and the effect of moderate exercise (training five times per week over a 3-week period) was investigated. Tissue damages and functional scores were assessed on the 4th, 6th, 10th, 20th, and 30th day after surgery. Mice were sacrificed 48 h after the last exercise session in order to assess muscle structure, mitochondrial respiration, calcium retention capacity, oxidative stress and transcript levels of genes encoding proteins controlling mitochondrial functions (PGC1α, PGC1β, NRF1) and anti-oxidant defense markers (SOD1, SOD2, catalase). CLI resulted in tissue damages and impaired functional scores. Mitochondrial respiration and calcium retention capacity were decreased in the ischemic limb of the non-exercised group (Vmax = 7.11 ± 1.14 vs. 9.86 ± 0.86 mmol O2/min/g dw, p < 0.001; CRC = 7.01 ± 0.97 vs. 11.96 ± 0.92 microM/mg dw, p < 0.001, respectively). Moderate exercise reduced tissue damages, improved functional scores, and restored mitochondrial respiration and calcium retention capacity in the ischemic limb (Vmax = 9.75 ± 1.00 vs. 9.82 ± 0.68 mmol O2/min/g dw; CRC = 11.36 ± 1.33 vs. 12.01 ± 1.24 microM/mg dw, respectively). Exercise also enhanced the transcript levels of PGC1α, PGC1β, NRF1, as well as SOD1, SOD2, and catalase. Moderate exercise restores mitochondrial respiration and calcium retention capacity, and it has beneficial functional effects in chronic CLI, likely by stimulating reactive oxygen species-induced biogenesis and anti-oxidant defenses. These data support further development of exercise therapy even in advanced peripheral arterial disease.

  13. Optimization of a shorter variable-acquisition time for legs to achieve true whole-body PET/CT images.

    Science.gov (United States)

    Umeda, Takuro; Miwa, Kenta; Murata, Taisuke; Miyaji, Noriaki; Wagatsuma, Kei; Motegi, Kazuki; Terauchi, Takashi; Koizumi, Mitsuru

    2017-12-01

    The present study aimed to qualitatively and quantitatively evaluate PET images as a function of acquisition time for various leg sizes, and to optimize a shorter variable-acquisition time protocol for legs to achieve better qualitative and quantitative accuracy of true whole-body PET/CT images. The diameters of legs to be modeled as phantoms were defined based on data derived from 53 patients. This study analyzed PET images of a NEMA phantom and three plastic bottle phantoms (diameter, 5.68, 8.54 and 10.7 cm) that simulated the human body and legs, respectively. The phantoms comprised two spheres (diameters, 10 and 17 mm) containing fluorine-18 fluorodeoxyglucose solution with sphere-to-background ratios of 4 at a background radioactivity level of 2.65 kBq/mL. All PET data were reconstructed with acquisition times ranging from 10 to 180 s, and 1200 s. We visually evaluated image quality and determined the coefficient of variance (CV) of the background, the contrast and the quantitative %error of the hot spheres, and then determined two shorter variable-acquisition protocols for legs. Lesion detectability and quantitative accuracy determined based on maximum standardized uptake values (SUVmax) in PET images of a patient using the proposed protocols were also evaluated. A larger phantom and a shorter acquisition time resulted in increased background noise on images and decreased contrast in the hot spheres. A visual score of ≥ 1.5 was obtained when the acquisition time was ≥ 30 s for the three leg phantoms, and ≥ 120 s for the NEMA phantom. The quantitative %errors of the 10- and 17-mm spheres in the leg phantoms were ± 15 and ± 10%, respectively, in PET images with a high CV. The mean SUVmax of three lesions using the current fixed-acquisition and the two proposed variable-acquisition time protocols in the clinical study were 3.1, 3.1 and 3.2, respectively, which did not significantly differ. Leg acquisition time per bed position of even 30-90

  14. The Change of the Family Life Affected by the Shorter Working Time : From the Point of View of the Home Management

    OpenAIRE

    平田, 道憲

    1994-01-01

    In Japan, working time has been decreasing. However, Japanese working people still spend more hours per year at work than those in Western countries. The policy of shorter working time is conducted by the Japanese Government in order that working people get more free time. This paper examines whether the shorter working time of working members of the family enriches the time use of the other members of the family, especially the effect of husbands' shorter working time on wives...

  15. Shorter time since inflammatory bowel disease diagnosis in children is associated with lower mental health in parents.

    Science.gov (United States)

    Werner, H; Braegger, Cp; Buehr, P; Koller, R; Nydegger, A; Spalinger, J; Heyland, K; Schibli, S; Landolt, Ma

    2015-01-01

    This study assessed the mental health of parents of children with inflammatory bowel disease (IBD), compared their mental health with age-matched and gender-matched references and examined parental and child predictors for mental health problems. A total of 125 mothers and 106 fathers of 125 children with active and inactive IBD from the Swiss IBD multicentre cohort study were included. Parental mental health was assessed by the Symptom Checklist 27 and child behaviour problems by the Strengths and Difficulties Questionnaire. Child medical data were extracted from hospital records. While the mothers reported lower mental health, the fathers' mental health was similar, or even better, than in age-matched and gender-matched community controls. In both parents, shorter time since the child's diagnosis was associated with poorer mental health. In addition, the presence of their own IBD diagnosis and child behaviour problems predicted maternal mental health problems. Parents of children with IBD may need professional support when their child is diagnosed, to mitigate distress. This, in turn, may help the child to adjust better to IBD. Particular attention should be paid to mothers who have their own IBD diagnosis and whose children display behaviour problems. ©2014 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  16. Spindle assembly checkpoint protein expression correlates with cellular proliferation and shorter time to recurrence in ovarian cancer.

    LENUS (Irish Health Repository)

    McGrogan, Barbara

    2014-07-01

    Ovarian carcinoma (OC) is the most lethal of the gynecological malignancies, often presenting at an advanced stage. Treatment is hampered by high levels of drug resistance. The taxanes are microtubule stabilizing agents, used as first-line agents in the treatment of OC that exert their apoptotic effects through the spindle assembly checkpoint. BUB1-related protein kinase (BUBR1) and mitotic arrest deficient 2 (MAD2), essential spindle assembly checkpoint components, play a key role in response to taxanes. BUBR1, MAD2, and Ki-67 were assessed on an OC tissue microarray platform representing 72 OC tumors of varying histologic subtypes. Sixty-one of these patients received paclitaxel and platinum agents combined; 11 received platinum alone. Overall survival was available for all 72 patients, whereas recurrence-free survival (RFS) was available for 66 patients. Increased BUBR1 expression was seen in serous carcinomas, compared with other histologies (P = .03). Increased BUBR1 was significantly associated with tumors of advanced stage (P = .05). Increased MAD2 and BUBR1 expression also correlated with increased cellular proliferation (P < .0002 and P = .02, respectively). Reduced MAD2 nuclear intensity was associated with a shorter RFS (P = .03), in ovarian tumors of differing histologic subtype (n = 66). In this subgroup, for those women who received paclitaxel and platinum agents combined (n = 57), reduced MAD2 intensity also identified women with a shorter RFS (P < .007). For the entire cohort of patients, irrespective of histologic subtype or treatment, MAD2 nuclear intensity retained independent significance in a multivariate model, with tumors showing reduced nuclear MAD2 intensity identifying patients with a poorer RFS (P = .05).

  17. Driving for shorter outages

    International Nuclear Information System (INIS)

    Tritch, S.

    1996-01-01

    Nuclear plant outages are necessary to complete activities that cannot be completed during the operating cycle, such as steam generator inspection and testing, refueling, installing modifications, and performing maintenance tests. The time devoted to performing outages is normally the largest contributor to plant unavailability. Similarly, outage costs are a sizable portion of the total plant budget. The scope and quality of work done during outages directly affect operating reliability and the number of unplanned outages. Improved management and planning of outages enhances the margin of safety during the outage and results in increased plant reliability. The detailed planning and in-depth preparation that have become a necessity for driving shorter outage durations have also produced safer outages and improved post-outage reliability. Short outages require both plant and vendor management to focus on all aspects of the outage. Short outage durations, such as 26 days at South Texas or 29 days at North Anna, require power plant inter-department and intra-department teamwork and communication and vendor participation. In this paper, shorter and safer outages at the 3-loop plants in the United States are described. (J.P.N.)

  18. Shorter daily dwelling time in peritoneal dialysis attenuates the epithelial-to-mesenchymal transition of mesothelial cells

    Science.gov (United States)

    2014-01-01

    Background Peritoneal dialysis (PD) therapy is known to induce morphological and functional changes in the peritoneal membrane. Long-term exposure to conventional bio-incompatible dialysate and peritonitis is the main etiology of inflammation. Consequently, the peritoneal membrane undergoes structural changes, including angiogenesis, fibrosis, and hyalinizing vasculopathy, which ultimately results in technique failure. The epithelial-to-mesenchymal transition (EMT) of mesothelial cells (MCs) plays an important role during the above process; however, the clinical parameters associated with the EMT process of MCs remain to be explored. Methods To investigate the parameters impacting EMT during PD therapy, 53 clinical stable PD patients were enrolled. EMT assessments were conducted through human peritoneal MCs cultured from dialysate effluent with one consistent standard criterion (MC morphology and the expression of an epithelial marker, cytokeratin 18). The factors potentially associated with EMT were analyzed using logistic regression analysis. Primary MCs derived from the omentum were isolated for the in vitro study. Results Forty-seven percent of the patients presented with EMT, 28% with non-EMT, and 15% with a mixed presentation. Logistic regression analysis showed that patients who received persistent PD therapy (dwelling time of 24 h/day) had significantly higher EMT tendency. These results were consistent in vitro. Conclusions Dwelling time had a significant effect on the occurrence of EMT on MCs. PMID:24555732

  19. Self-produced Time Intervals Are Perceived as More Variable and/or Shorter Depending on Temporal Context in Subsecond and Suprasecond Ranges

    Directory of Open Access Journals (Sweden)

    Keita eMitani

    2016-06-01

    Full Text Available The processing of time intervals is fundamental for sensorimotor and cognitive functions. Perceptual and motor timing are often performed concurrently (e.g., playing a musical instrument). Although previous studies have shown the influence of body movements on time perception, how we perceive self-produced time intervals has remained unclear. Furthermore, it has been suggested that the timing mechanisms are distinct for the sub- and suprasecond ranges. Here, we compared perceptual performances for self-produced and passively presented time intervals in random contexts (i.e., multiple target intervals presented in a session) across the sub- and suprasecond ranges (Experiment 1) and within the sub- (Experiment 2) and suprasecond (Experiment 3) ranges, and in a constant context (i.e., a single target interval presented in a session) in the sub- and suprasecond ranges (Experiment 4). We show that self-produced time intervals were perceived as shorter and more variable across the sub- and suprasecond ranges and within the suprasecond range, but not within the subsecond range, in a random context. In a constant context, the self-produced time intervals were perceived as more variable in the suprasecond range but not in the subsecond range. The impairing effects indicate that motor timing interferes with perceptual timing. The dependence of impairment on temporal contexts suggests multiple timing mechanisms for the subsecond and suprasecond ranges. In addition, violation of the scalar property (i.e., a constant variability-to-target-interval ratio) was observed between the sub- and suprasecond ranges. The violation was clearer for motor timing than for perceptual timing. This suggests that the multiple timing mechanisms for the sub- and suprasecond ranges overlap more for perception than for motor timing. Moreover, the central tendency effect (i.e., where shorter base intervals are overestimated and longer base intervals are underestimated) disappeared with subsecond

  20. High Numbers of Stromal Cancer-Associated Fibroblasts Are Associated With a Shorter Survival Time in Cats With Oral Squamous Cell Carcinoma.

    Science.gov (United States)

    Klobukowska, H J; Munday, J S

    2016-11-01

    Cancer-associated fibroblasts (CAFs) are fibroblastic cells that express α-smooth muscle actin and have been identified in the stroma of numerous epithelial tumors. The presence of CAFs within the tumor stroma has been associated with a poorer prognosis in some human cancers, including oral squamous cell carcinomas (SCCs). Cats frequently develop oral SCCs, and although these are generally highly aggressive neoplasms, there is currently a lack of prognostic markers for these tumors. The authors investigated the prognostic value of the presence of CAFs within the stroma of oral SCC biopsy specimens from 47 cats. In addition, several epidemiologic, clinical, and histologic variables were also assessed for prognostic significance. A CAF-positive stroma was identified in 35 of 47 SCCs (74.5%), and the median survival time (ST) of cats with CAF-positive SCCs (35 days) was significantly shorter than that of cats with CAF-negative SCCs (48.5 days) (P = .031). ST was also associated with the location of the primary tumor (P = .0018): the median ST for oropharyngeal SCCs (179 days) was significantly longer than for maxillary (43.5 days; P = .047), mandibular (42 days; P = .022), and sublingual SCCs (22.5 days; P = .0005). The median ST of sublingual SCCs was also shorter compared with maxillary SCCs (P = .0017). Furthermore, a significant association was identified between site and the presence of stromal CAFs (P = .025). On the basis of this retrospective study, evaluating the tumor stroma for CAFs in feline oral SCC biopsy specimens may be of potential prognostic value. © The Author(s) 2016.

  1. Family History of Early Infant Death Correlates with Earlier Age at Diagnosis But Not Shorter Time to Diagnosis for Severe Combined Immunodeficiency

    Directory of Open Access Journals (Sweden)

    Anderson Dik Wai Luk

    2017-07-01

    Full Text Available Background: Severe combined immunodeficiency (SCID) is fatal unless treated with hematopoietic stem cell transplant. Delay in diagnosis is common without newborn screening. A family history of infant death due to infection or known SCID (FH) has been associated with earlier diagnosis. Objective: The aim of this study was to identify the clinical features that affect age at diagnosis (AD) and time to the diagnosis of SCID. Methods: From 2005 to 2016, 147 SCID patients were referred to the Asian Primary Immunodeficiency Network. Patients with genetic diagnosis, age at presentation (AP), and AD were selected for study. Results: A total of 88 different SCID gene mutations were identified in 94 patients, including 49 IL2RG mutations, 12 RAG1 mutations, 8 RAG2 mutations, 7 JAK3 mutations, 4 DCLRE1C mutations, 4 IL7R mutations, 2 RFXANK mutations, and 2 ADA mutations. A total of 29 mutations were previously unreported. Eighty-three of the 94 patients fulfilled the selection criteria. Their median AD was 4 months, and the time to diagnosis was 2 months. The commonest SCID was X-linked (n = 57). A total of 29 patients had a positive FH. Candidiasis (n = 27) and bacillus Calmette-Guérin (BCG) vaccine infection (n = 19) were the commonest infections. The median ages at which candidiasis and BCG infection were documented were 3 months and 4 months, respectively. The median absolute lymphocyte count (ALC) was 1.05 × 10⁹/L, with over 88% of patients below 3 × 10⁹/L. Positive FH was associated with earlier AP by 1 month (p = 0.002) and earlier diagnosis by 2 months (p = 0.008), but not with shorter time to diagnosis (p = 0.494). Candidiasis was associated with later AD by 2 months (p = 0.008) and longer time to diagnosis by 0.55 months (p = 0.003). BCG infections were not associated with age or time to diagnosis. Conclusion: FH was useful to aid earlier diagnosis but was overlooked by clinicians and not by parents. Similarly, typical clinical features of

  2. In Vitro Comparison of Holmium Lasers: Evidence for Shorter Fragmentation Time and Decreased Retropulsion Using a Modern Variable-pulse Laser.

    Science.gov (United States)

    Bell, John Roger; Penniston, Kristina L; Nakada, Stephen Y

    2017-09-01

    To compare the performance of variable- and fixed-pulse lasers on stone phantoms in vitro. Seven-millimeter stone phantoms were made to simulate calcium oxalate monohydrate stones using BegoStone Plus. The in vitro setting was created with a clear polyvinyl chloride tube. For each trial, a stone phantom was placed at the open end of the tubing. The Cook Rhapsody H-30 variable-pulse laser was tested on both long- and short-pulse settings and was compared to the Dornier H-20 fixed-pulse laser; 5 trials were conducted for each trial arm. Fragmentation was accomplished with the use of a flexible ureteroscope and a 273-micron holmium laser fiber using settings of 1 J × 12 Hz. The treatment time (in minutes) for complete fragmentation was recorded, as was the total retropulsion distance (in centimeters) during treatment. Laser fibers were standardized for all repetitions. The treatment time was significantly shorter with the H-30 vs the H-20 laser (14.3 ± 2.5 vs 33.1 ± 8.9 minutes, P = .008). There was no difference between the treatment times using the long vs short pulse widths of the H-30 laser (14.4 ± 3.4 vs 14.3 ± 1.7 minutes, P = .93). Retropulsion differed by laser type and pulse width: H-30 long pulse (15.8 ± 5.7 cm), H-30 short pulse (54.8 ± 7.1 cm), and H-20 (33.2 ± 12.5 cm). The H-30 laser fragmented stone phantoms in half the time of the H-20 laser regardless of the pulse width. Retropulsion effects differed between the lasers, with the H-30 on its long-pulse setting causing the least retropulsion. Longer pulse widths result in less stone retropulsion. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with the implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  4. 12 CFR 1102.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1102.27 Section 1102.27 Banks... for Proceedings § 1102.27 Computing time. (a) General rule. In computing any period of time prescribed... time begins to run is not included. The last day so computed is included, unless it is a Saturday...
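    As a concrete illustration of this counting rule, the following hypothetical helper excludes the day of the triggering act and rolls the deadline forward when the last day falls on a weekend; the set of federal holidays is assumed to be supplied by the caller.

```python
from datetime import date, timedelta

def filing_deadline(act_date, period_days, holidays=frozenset()):
    """Hypothetical helper illustrating the general rule of Sec. 1102.27:
    the day of the act or event is not counted; the last day is counted
    unless it falls on a Saturday, Sunday, or federal holiday, in which
    case the period runs on to the next business day."""
    due = act_date + timedelta(days=period_days)   # day of the act excluded
    while due.weekday() >= 5 or due in holidays:   # 5 = Saturday, 6 = Sunday
        due += timedelta(days=1)
    return due

# A 5-day period triggered on Monday 2010-01-04 would end on Saturday
# 2010-01-09, so it runs to the next business day, Monday 2010-01-11.
print(filing_deadline(date(2010, 1, 4), 5))   # -> 2010-01-11
```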

  5. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  6. Do Japanese Work Shorter Hours than before?: Measuring Trends in Market Work and Leisure Using 1976-2006 Japanese Time-Use Survey

    OpenAIRE

    Kuroda, Sachiko

    2009-01-01

    Using Japanese time-use data from the Survey on Time Use and Leisure Activities (STULA), this paper measures trends in average hours worked (market work) and leisure for Japanese over the past three decades. OECD reports at least a 15 percent decline in market work for Japan since the 1970s. However, holding demographic changes constant, we found that market work per week increased from the 1970s until mid 1980s, and has been relatively stable for the last two decades for both male and female...

  7. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate whether 3-dimensional high-definition (3D) vision in laparoscopy offers advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation in 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D one (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency to a faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and seems to lead to a reduction in operative time. Advantages can result from the enhanced spatial perception of narrow spaces. Less operative time and more accurate surgery translate into benefits for patients and cost savings, compensating for the high costs of the 3D technology. However, more data from larger series are needed to firmly assess the advantages of 3D over 2D vision in laparoscopic HH repair.

  8. Structured syncope care pathways based on lean six sigma methodology optimises resource use with shorter time to diagnosis and increased diagnostic yield.

    Science.gov (United States)

    Martens, Leon; Goode, Grahame; Wold, Johan F H; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas

    2014-01-01

    To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with focus on geriatric populations was analysed separately from the other four. With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield.

  9. Structured syncope care pathways based on lean six sigma methodology optimises resource use with shorter time to diagnosis and increased diagnostic yield.

    Directory of Open Access Journals (Sweden)

    Leon Martens

    Full Text Available To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with focus on geriatric populations was analysed separately from the other four. With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield.

  10. A Matter of Computer Time

    Science.gov (United States)

    Celano, Donna; Neuman, Susan B.

    2010-01-01

    Many low-income children do not have the opportunity to develop the computer skills necessary to succeed in our technological economy. Their only access to computers and the Internet--school, afterschool programs, and community organizations--is woefully inadequate. Educators must work to close this knowledge gap and to ensure that low-income…

  11. 12 CFR 908.27 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 908.27 Section 908.27 Banks and... PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.27 Computing time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event...

  13. Using teaching resources to help students develop team and project skills pays off, both in terms of employability and shorter study time

    DEFF Research Database (Denmark)

    Jensen, Lars Peter

    2005-01-01

    Since Aalborg University in Denmark was started in 1974 it has been using a special educational model, where Problem Based Learning is the turning point. Each semester the students on the Engineering Educations form groups of approximately 6 persons, which use half of the study time within...... of the university many students had difficulties with practical issues such as collaboration, communication, and project management. An important aspect of the basic part of the education (first year) has therefore been the development of a course where the students get tools and tricks for good communication...... report documenting the results of their project, but also an analysis of the working process getting there. Since 1998 the teachers giving the CLP course have focused very much on these process analyses, and as they are a part of the examination the students also have focused more on how they work...

  14. 12 CFR 1780.11 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Computing time. 1780.11 Section 1780.11 Banks... time. (a) General rule. In computing any period of time prescribed or allowed by this subpart, the date of the act or event that commences the designated period of time is not included. The last day so...

  15. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely the Java 2 Platform, Enterprise Edition (J2EE) by Sun Microsystems and the Microsoft .NET (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  16. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that the general purpose computing capability needed online will be available for the SSC and LHC machines. 2 figs

  17. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  18. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  19. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but their performance does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349-taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.
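    For readers unfamiliar with the phylogenetic likelihood that AncestralAge accelerates, the sketch below shows its textbook core, Felsenstein's pruning algorithm, for a single alignment site under the Jukes-Cantor substitution model. The toy tree, branch lengths, and observed bases are invented for illustration and are unrelated to the paper's primate dataset.

```python
import numpy as np

def jc69(t):
    """Jukes-Cantor transition matrix P(t) for a branch of length t
    (expected substitutions per site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def conditional_likelihood(node, observed):
    """Felsenstein pruning: post-order recursion returning the 4-vector of
    likelihoods of the data below `node`, conditional on each base A,C,G,T."""
    if isinstance(node, str):                      # leaf: one observed base
        vec = np.zeros(4)
        vec["ACGT".index(observed[node])] = 1.0
        return vec
    (left, t_l), (right, t_r) = node               # internal node: two children
    return (jc69(t_l) @ conditional_likelihood(left, observed)) * \
           (jc69(t_r) @ conditional_likelihood(right, observed))

# Toy rooted tree ((A:0.1, B:0.1):0.05, C:0.2) and one alignment column.
tree = ((("A", 0.1), ("B", 0.1)), 0.05), ("C", 0.2)
site_L = 0.25 * conditional_likelihood(tree, {"A": "A", "B": "A", "C": "G"}).sum()
print(f"site likelihood: {site_L:.6f}")           # uniform root base frequencies
```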

  20. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
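    For context, the heart of NTP is a simple on-wire calculation: from the four timestamps of one request/response exchange, the client estimates its clock offset and the round-trip delay. The sketch below shows that textbook calculation; the example timestamps are invented.

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from one NTP exchange:
    t0 = client transmit, t1 = server receive,
    t2 = server transmit, t3 = client receive (t0, t3 on the client clock)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far the client clock lags the server
    delay = (t3 - t0) - (t2 - t1)            # time actually spent on the network path
    return offset, delay

# Invented example: client 5 ms behind the server, 20 ms each way, 1 ms of
# server processing time -> offset = +0.005 s, delay = 0.040 s.
print(ntp_offset_delay(0.000, 0.025, 0.026, 0.041))
```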

  1. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  2. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power...
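    As background for the factor-and-solve idea, the Thevenin impedance seen from bus k is the k-th diagonal entry of the bus impedance matrix Z = Y^-1, so one sparse LU factorization of the admittance matrix Y can be reused to solve for every bus of interest. The sketch below illustrates this on a tiny 3-bus system; the admittance values are invented, and the paper's algorithm adds graph reduction and parallelization on top of this basic scheme.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Invented 3-bus admittance matrix (per unit); the small shunt contribution
# on the diagonal keeps Y non-singular, as in a grounded network.
Y = csc_matrix(np.array([[10.0 - 29.9j, -5.0 + 15.0j, -5.0 + 15.0j],
                         [-5.0 + 15.0j, 10.0 - 29.9j, -5.0 + 15.0j],
                         [-5.0 + 15.0j, -5.0 + 15.0j, 10.0 - 29.9j]]))

lu = splu(Y)                                  # factor once, reuse for every bus
for k in range(3):
    e_k = np.zeros(3, dtype=complex)
    e_k[k] = 1.0
    z_col = lu.solve(e_k)                     # k-th column of Z = inv(Y)
    print(f"Thevenin impedance at bus {k}: {complex(z_col[k]):.4f} pu")
```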

  3. Time reversibility, computer simulation, algorithms, chaos

    CERN Document Server

    Hoover, William Graham

    2012-01-01

    A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...

  4. Computing Refined Buneman Trees in Cubic Time

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

    Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance-based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence in the underlying distance data. Distance-based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n⁵) and a space consumption of O(n⁴). In this paper, we present an algorithm with running time O(n³) and space consumption O(n²). The improved complexity of our …
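    The combinatorial evidence referred to here is the Buneman quartet score: the split ij|kl of a quartet is supported only if b(ij|kl) = 0.5 * (min(d_ik + d_jl, d_il + d_jk) - d_ij - d_kl) is positive, and a (refined) Buneman tree only proposes edges backed by such scores. Below is a naive enumeration of this building block, purely for illustration; the paper's O(n³) algorithm is substantially more involved.

    ```python
    from itertools import combinations

    def buneman_score(d, i, j, k, l):
        """Support for the split ij|kl in the quartet {i,j,k,l} (positive = supported)."""
        return 0.5 * (min(d[i][k] + d[j][l], d[i][l] + d[j][k]) - d[i][j] - d[k][l])

    def supported_quartet_splits(d):
        """All quartet splits with strictly positive Buneman score (naive O(n^4) scan)."""
        out = []
        for i, j, k, l in combinations(range(len(d)), 4):
            for (a, b), (c, e) in (((i, j), (k, l)), ((i, k), (j, l)), ((i, l), (j, k))):
                s = buneman_score(d, a, b, c, e)
                if s > 0:
                    out.append(((a, b), (c, e), s))
        return out

    # additive distances on a tiny tree ((0,1),(2,3)) with unit edge lengths
    d = [[0, 2, 4, 4],
         [2, 0, 4, 4],
         [4, 4, 0, 2],
         [4, 4, 2, 0]]
    print(supported_quartet_splits(d))  # only the split (0,1)|(2,3) scores > 0
    ```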

  5. Real-Time Accumulative Computation Motion Detectors

    Directory of Open Access Journals (Sweden)

    Saturnino Maldonado-Bascón

    2009-12-01

    The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in recent years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how to reach real-time performance after using a model described as a finite state machine. This paper introduces two steps towards that direction: (a) a simplification of the general AC method is performed by formally transforming it into a finite state machine; (b) a hardware implementation in FPGA of such a designed AC module, as well as of an 8-AC motion detector, provides promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.
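    At the pixel level, accumulative computation is essentially a small state machine: a pixel's "charge" jumps to saturation when frame-to-frame motion is detected and otherwise discharges in fixed steps, leaving a fading trace of recent motion. A minimal sketch of that idea follows; the constants and threshold are illustrative, not the paper's FPGA design.

    ```python
    import numpy as np

    SAT, DIS, FLOOR = 255, 32, 0   # saturation, discharge step, floor (illustrative)

    def ac_update(charge, frame, prev_frame, thresh=16):
        """One accumulative-computation step on grayscale frames."""
        motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
        return np.where(motion, SAT, np.maximum(charge - DIS, FLOOR))

    # usage: feed consecutive frames; 'charge' acts as a motion-history image
    h, w = 4, 4
    charge = np.zeros((h, w), dtype=int)
    f0 = np.zeros((h, w), dtype=np.uint8)
    f1 = f0.copy()
    f1[1, 2] = 200                 # one pixel changes between frames
    charge = ac_update(charge, f1, f0)
    print(charge)                  # high charge marks recently moving pixels
    ```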

  6. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  7. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology (traditionally used for computer video games) to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  8. 7 CFR 1.603 - How are time periods computed?

    Science.gov (United States)

    2010-01-01

    7 CFR Part 1, General Provisions, § 1.603 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) …
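    The counting rules shared by this and the similar provisions below translate directly into code: exclude the day of the triggering act, include the last day, and roll a weekend last day forward to the next working day. A sketch of just those rules (the holiday check the regulations also require is omitted):

    ```python
    from datetime import date, timedelta

    def period_end(act_day: date, days: int) -> date:
        """End of a time period: the day of the act is not counted, the last
        day is, and a weekend last day rolls forward to the next weekday."""
        end = act_day + timedelta(days=days)   # act_day excluded, 'days' days counted
        while end.weekday() >= 5:              # 5 = Saturday, 6 = Sunday
            end += timedelta(days=1)
        return end

    # a 30-day period beginning Jan 1, 2010 ends Jan 31 (a Sunday) -> Feb 1
    print(period_end(date(2010, 1, 1), 30))
    ```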

  9. 50 CFR 221.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    50 CFR Part 221, General Provisions, § 221.3 How are time periods computed? (a) General. Time periods are computed as follows: (1) The day of the act or event from which the period begins to run is not included. (2) The last day of the …

  10. 6 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    6 CFR Part 13 (Department of Homeland Security, Office of the Secretary, Program Fraud Civil Remedies), § 13.27 Computation of time. (a) In computing any period of time under this part or in an order issued …

  11. Computational and Empirical Trans-hydrogen Bond Deuterium Isotope Shifts Suggest that N1-N3 A:U Hydrogen Bonds of RNA are Shorter than those of A:T Hydrogen Bonds of DNA

    International Nuclear Information System (INIS)

    Kim, Yong-Ick; Manalo, Marlon N.; Perez, Lisa M.; LiWang, Andy

    2006-01-01

    Density functional theory calculations of isolated Watson-Crick A:U and A:T base pairs predict that adenine ¹³C2 trans-hydrogen bond deuterium isotope shifts due to isotopic substitution at the pyrimidine H3, ²ʰΔ¹³C2, are sensitive to the hydrogen-bond distance between the N1 of adenine and the N3 of uracil or thymine, which supports the notion that ²ʰΔ¹³C2 is sensitive to hydrogen-bond strength. Calculated ²ʰΔ¹³C2 values at a given N1-N3 distance are the same for isolated A:U and A:T base pairs. Replacing uridine residues in RNA with 5-methyluridine and substituting deoxythymidines in DNA with deoxyuridines do not statistically shift empirical ²ʰΔ¹³C2 values. Thus, we show experimentally and computationally that the C7 methyl group of thymine has no measurable effect on ²ʰΔ¹³C2 values. Furthermore, ²ʰΔ¹³C2 values of modified and unmodified RNA are more negative than those of modified and unmodified DNA, which supports our hypothesis that RNA hydrogen bonds are stronger than those of DNA. It is also shown here that ²ʰΔ¹³C2 is context dependent and that this dependence is similar for RNA and DNA

  12. Investigations of model polymers: Dynamics of melts and statics of a long chain in a dilute melt of shorter chains

    International Nuclear Information System (INIS)

    Bishop, M.; Ceperley, D.; Frisch, H.L.; Kalos, M.H.

    1982-01-01

    We report additional results on a simple model of polymers, namely the diffusion in concentrated polymer systems and the static properties of one long chain in a dilute melt of shorter chains. It is found, for the polymer sizes and time scales amenable to our computer calculations, that there is as yet no evidence for a 'reptation' regime in a melt. There is some indication of reptation in the case of a single chain moving through fixed obstacles. No statistically significant effect of the change, from excluded-volume behavior of the long chain to ideal behavior as the shorter chains grow, is observed.

  13. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  14. Real-time data-intensive computing

    Energy Technology Data Exchange (ETDEWEB)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  15. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very-large-scale-integration semiconductor technology with respect to increasing reliability and performance and to decreasing the expense, especially of the external periphery. This, and the increasing demands on process control systems, led the authors to re-examine the structure of such systems and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem solving, with decentralized controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and processor and network operating systems. The necessary design principles are established mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was built, supported by the results of two PDV projects (modular operating systems, input/output colour-screen system as control panel), and tested by applying the system to the control of 28 pit furnaces of a steelworks. (orig.) [de]

  16. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
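    The same idea survives in today's statistical profilers: interrupt the running program on a clock, record where the program counter is, and let the resulting histogram expose the hot spots. Below is a rough Python analogue that samples source lines rather than machine addresses (Unix-only, and an illustration of the technique rather than the paper's real-time implementation).

    ```python
    import signal
    from collections import Counter

    histogram = Counter()

    def sample(signum, frame):
        # record the "program counter" of the interrupted code: file and line
        histogram[(frame.f_code.co_filename, frame.f_lineno)] += 1

    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)   # sample every 1 ms of CPU time

    def busy():                      # workload whose hot line we want to find
        total = 0
        for i in range(5_000_000):
            total += i * i           # this line should dominate the histogram
        return total

    busy()
    signal.setitimer(signal.ITIMER_PROF, 0)              # stop sampling
    for loc, hits in histogram.most_common(3):
        print(hits, loc)
    ```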

  17. Implications of shorter cells in PEP

    International Nuclear Information System (INIS)

    Wiedemann, H.

    1975-01-01

    Further studies on the beam-stay-clear requirements in PEP led to the conclusion that the vertical aperture needed to be enlarged. There are two main reasons for that: Observations at SPEAR indicate that the aperture should be large enough for a fully coupled beam. Full coupling of the horizontal and vertical betatron oscillations occurs not only occasionally when the energy, tune or betatron function at the interaction point is changed, but also due to the beam-beam effect of two strong colliding beams. The second reason for an increased aperture requirement is the nonlinear perturbation of the particle trajectories by the sextupoles. This perturbation enlarges a fully coupled beam by another 50% to 80%. Both effects, together with a ±5 mm allowance for closed-orbit perturbation, result in a vertical beam-stay-clear in the bending magnets of ±4.8 to ±5.6 cm, compared to the present ±2.0 cm. This beam-stay-clear, together with additional space for the vacuum chamber, etc., leads to very costly bending magnets. In this note, a shorter cell length is proposed which would considerably reduce the vertical beam-stay-clear requirements in the bending magnets. 7 figs

  18. Pre-hospital electrocardiogram triage with tele-cardiology support is associated with shorter time-to-balloon and higher rates of timely reperfusion even in rural areas: data from the Bari-Barletta/Andria/Trani public emergency medical service 118 registry on primary angioplasty in ST-elevation myocardial infarction.

    Science.gov (United States)

    Brunetti, Natale Daniele; Di Pietro, Gaetano; Aquilino, Ambrogio; Bruno, Angela I; Dellegrottaglie, Giulia; Di Giuseppe, Giuseppe; Lopriore, Claudio; De Gennaro, Luisa; Lanzone, Saverio; Caldarola, Pasquale; Antonelli, Gianfranco; Di Biase, Matteo

    2014-09-01

    We report the preliminary data from a regional registry on ST-elevation myocardial infarction (STEMI) patients treated with primary angioplasty in Apulia, Italy; the region is covered by a single public health-care service, a single public emergency medical service (EMS), and a single tele-medicine service provider. Two hundred and ninety-seven consecutive patients with STEMI transferred by the regional free public EMS 1-1-8 for primary percutaneous coronary intervention (PCI) were enrolled in the study; 123 underwent pre-hospital electrocardiogram (ECG) triage with tele-cardiology support and were directly referred for primary PCI, while the remainder were simply transferred by 1-1-8 ambulances for primary PCI (diagnosis not based on tele-medicine ECG; already hospitalised patients; emergency rooms without tele-medicine support). Time from the first ECG diagnostic for STEMI to balloon was recorded; a time-to-balloon … primary PCI. Pre-hospital triage with tele-cardiology ECG in an EMS registry from an area with more than one and a half million inhabitants was associated with shorter time-to-balloon and higher rates of timely treated patients, even in 'rural' areas. © The European Society of Cardiology 2014.

  19. 29 CFR 1921.22 - Computation of time.

    Science.gov (United States)

    2010-07-01

    29 CFR Part 1921 (Occupational Safety and Health Administration, Department of Labor; … Workers' Compensation Act, Miscellaneous), § 1921.22 Computation of time: Sundays and holidays shall be …

  20. 43 CFR 45.3 - How are time periods computed?

    Science.gov (United States)

    2010-10-01

    43 CFR Part 45 (… in FERC Hydropower Licenses, General Provisions), § 45.3 How are time periods computed? (a) General. … run is not included. (2) The last day of the period is included. (i) If that day is a Saturday, Sunday …

  1. 5 CFR 890.101 - Definitions; time computations.

    Science.gov (United States)

    2010-01-01

    5 CFR § 890.101 Definitions; time computations. (a) In this part, the terms annuitant, carrier, employee, employee … in section 8901 of title 5, United States Code, and supplement the following definitions: Appropriate …

  2. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appears to contribute to selective responses.

  3. Advanced real time radioscopy and computed tomography

    International Nuclear Information System (INIS)

    Sauerwein, Ch.; Nuding, W.; Grimm, R.; Wiacker, H.

    1996-01-01

    The paper describes three x-ray inspection systems. The first is a radioscopic system designed for the inspection of castings. The second integrates a radioscopic and a tomographic mode; its radioscopy uses a high-resolution camera and a real-time image processor, with a 450 kV industrial tube and a 200 kV microfocus tube as radiation sources. The third is a tomographic system with 30 scintillation detectors for the inspection of nuclear waste containers. (author)

  4. Real time computer controlled weld skate

    Science.gov (United States)

    Wall, W. A., Jr.

    1977-01-01

    A real-time, adaptive-control, automatic welding system was developed. This system utilizes the general-case geometrical relationships between a weldment and a weld skate to precisely maintain constant weld speed and torch angle along a contoured workpiece. The system is compatible with the gas tungsten arc weld process or can be adapted to other weld processes. Heli-arc cutting and machine tool routing operations are possible applications.

  5. Recent achievements in real-time computational seismology in Taiwan

    Science.gov (United States)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

    Real-time computational seismology is now achievable; it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the centroid moment tensor (CMT) inversion technique. A real-time online earthquake simulation service is also open to researchers and for public earthquake science education (ROS). Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point-source information; ROS completes a 3D simulation in real time). For more information, welcome to visit the real-time computational seismology earthquake report webpage (RCS).

  6. Computer tomography urography assisted real-time ultrasound-guided percutaneous nephrolithotomy on renal calculus.

    Science.gov (United States)

    Fang, You-Qiang; Wu, Jie-Ying; Li, Teng-Cheng; Zheng, Hao-Feng; Liang, Guan-Can; Chen, Yan-Xiong; Hong, Xiao-Bin; Cai, Wei-Zhong; Zang, Zhi-Jun; Di, Jin-Ming

    2017-06-01

    This study aimed to assess the role of a pre-designed route on computer tomography urography (CTU) in ultrasound-guided percutaneous nephrolithotomy (PCNL) for renal calculus. From August 2013 to May 2016, a total of 100 patients diagnosed with complex renal calculus in our hospital were randomly divided into a CTU group and a control group (without CTU assistance). CTU was used to design a rational puncture route in the CTU group. Ultrasound was used in both groups to establish a working tract in the operation areas. Patients' perioperative parameters and postoperative complications were recorded. All operations were successfully performed, without conversion to open surgery. Time of channel establishment in the CTU group (6.5 ± 4.3 minutes) was shorter than in the control group (10.0 ± 6.7 minutes) (P = .002). In addition, the CTU group had shorter operation times, lower rates of blood transfusion and secondary operation, and fewer established channels. The incidence of postoperative complications, including residual stones, sepsis, severe hemorrhage, and perirenal hematoma, was lower in the CTU group than in the control group. Pre-designing the puncture route on CTU images improves puncture accuracy, reduces the number of established channels, and improves the safety of ultrasound-guided PCNL for complex renal calculus, but at the cost of increased radiation exposure.

  7. Time-of-Flight Cameras in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2010-01-01

    … Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real-time geometry …

  8. 29 CFR 4245.8 - Computation of time.

    Science.gov (United States)

    2010-07-01

    29 CFR Part 4245 (Pension Benefit Guaranty Corporation; Insolvency, Reorganization, Termination, and Other Rules Applicable to Multiemployer Plans; Notice of Insolvency), § 4245.8 Computation of …

  9. Noise-constrained switching times for heteroclinic computing

    Science.gov (United States)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for …

  10. Relativistic Photoionization Computations with the Time Dependent Dirac Equation

    Science.gov (United States)

    2016-10-12

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6795--16-9698: Relativistic Photoionization Computations with the Time Dependent Dirac Equation, by Daniel F. Gordon and Bahman Hafizi. Keywords: tunneling, photoionization. Ionization of inner shell electrons by laser …

  11. Do shorter wavelengths improve contrast in optical mammography?

    International Nuclear Information System (INIS)

    Taroni, P; Pifferi, A; Torricelli, A; Spinelli, L; Danesini, G M; Cubeddu, R

    2004-01-01

    The detection of tumours with time-resolved transmittance imaging relies essentially on blood absorption. Previous theoretical and phantom studies have shown that both the contrast and the spatial resolution of optical images are affected by the optical properties of the background medium, and high absorption and scattering are generally beneficial. Based on these observations, wavelengths shorter than those presently used (680-780 nm) could be profitable for optical mammography. A study was thus performed analysing time-resolved transmittance images at 637, 656, 683 and 785 nm obtained from 26 patients bearing 16 tumours and 15 cysts. The optical contrast proved to increase with decreasing wavelength for the detection of cancers in late-gated intensity images, with a higher gain in contrast for lesions of smaller size (<1.5 cm diameter). For cysts, either a progressive increase or decrease in contrast with wavelength was observed in scattering images

  12. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    Naval Underwater Systems Center, Newport, RI. Technical Document TD 5932: A Distributed Computing Network for Real-Time Systems (November 1980).

  13. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty-five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in a fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the …

  14. Computation of reactor control rod drop time under accident conditions

    International Nuclear Information System (INIS)

    Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan

    1998-01-01

    The computation of reactor control rod drop time under accident conditions rests mainly on establishing forced-vibration equations for the components of the control rod drive line under the action of outside forces, together with an equation of motion for the control rod moving in the vertical direction. The two kinds of equations are coupled by considering the impact effects between the control rod and its surrounding components. A finite difference method is adopted to discretize the vibration equations, and the Wilson-θ method is applied to the time-history problem. The nonlinearity caused by impact is treated iteratively with a modified Newton method. Experimental results are used to validate the correctness and reliability of the computational method. Theoretical and experimental tests show that the computer program based on this method is applicable and reliable. The program can act as an effective tool for design-by-analysis and safety analysis of the relevant components
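    The Wilson-θ method named here advances the response over an extended step θΔt (θ ≈ 1.4 for unconditional stability) under a linear-acceleration assumption. Below is a single-degree-of-freedom sketch of the integrator, without the impact nonlinearity and Newton iteration that the rod-drop problem adds; all parameters are illustrative.

    ```python
    import numpy as np

    def wilson_theta(m, c, k, load, dt, theta=1.4, n_steps=1000):
        """Wilson-theta integration of m*u'' + c*u' + k*u = load(t), from rest."""
        u = v = 0.0
        a = load(0.0) / m                      # initial acceleration
        tau = theta * dt
        k_eff = k + 6*m/tau**2 + 3*c/tau       # effective stiffness (constant)
        out = []
        for n in range(n_steps):
            t = n * dt
            p_tau = load(t) + theta * (load(t + dt) - load(t))  # load at t + tau
            p_eff = (p_tau + m*(6*u/tau**2 + 6*v/tau + 2*a)
                           + c*(3*u/tau + 2*v + tau*a/2))
            u_tau = p_eff / k_eff
            a_tau = 6*(u_tau - u)/tau**2 - 6*v/tau - 2*a
            a_new = a + (a_tau - a) / theta    # interpolate back to t + dt
            v_new = v + dt * (a + a_new) / 2
            u_new = u + dt*v + dt**2 * (2*a + a_new) / 6
            u, v, a = u_new, v_new, a_new
            out.append(u)
        return np.array(out)

    # lightly damped oscillator under a unit step load (illustrative parameters)
    resp = wilson_theta(m=1.0, c=0.1, k=40.0, load=lambda t: 1.0, dt=0.01)
    print(resp[:5], "-> oscillates about the static deflection", 1.0/40.0)
    ```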

  15. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming mortar …

  16. Continuous-Time Symmetric Hopfield Nets are Computationally Universal

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  17. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPU's), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPU's). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGA's tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGA's are coupled to other FPGA's to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPU's, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  18. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer network studies have pursued different trends in network architecture and in the protocols that govern data transfer and guarantee reliable communication among all nodes. Here, a hierarchical network structure is proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In this architecture, all computers on the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the channel. This level of the network can be considered a local area computer network (LACN), suitable for a nuclear power plant control system since such a system has geographically dispersed subsystems. Network expansion is straightforward: each added host computer (HOST) attaches to the common channel. All nodes are designed around a microprocessor chip to provide the required intelligence. Each node can be divided into two sections: a common section that interfaces with the serial data channel, and a private section that interfaces with the host computer. The latter naturally tends to have some variations in hardware details to match the requirements of individual host computers. fig 7

  19. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time-generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings 0.01-second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  20. Computing return times or return periods with rare event algorithms

    Science.gov (United States)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
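    The block-extremum estimator referred to here is easy to state: cut a long trajectory into blocks of duration T_b, and estimate the return time of level a as r(a) ~ -T_b / ln(1 - q(a)), where q(a) is the fraction of blocks whose maximum exceeds a. A sketch on a plain Ornstein-Uhlenbeck simulation (the paper's simple test case), without the rare-event algorithm that makes the method pay off for very rare events; all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ou_trajectory(n, dt=0.01, tau=1.0, sigma=1.0):
        """Euler-Maruyama simulation of dX = -(X/tau) dt + sigma dW."""
        x = np.empty(n)
        x[0] = 0.0
        noise = rng.normal(0.0, np.sqrt(dt), n - 1)
        for i in range(n - 1):
            x[i + 1] = x[i] - (x[i] / tau) * dt + sigma * noise[i]
        return x

    def return_time(traj, level, dt, block_len):
        """Block-maximum estimator r(a) = -T_b / ln(1 - q), q = P(block max > a)."""
        n_blocks = len(traj) // block_len
        maxima = traj[: n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
        q = np.mean(maxima > level)
        T_b = block_len * dt
        return np.inf if q == 0 else -T_b / np.log1p(-q)

    x = ou_trajectory(1_000_000)                   # 10,000 time units
    for a in (1.5, 2.0, 2.5):                      # rarer levels need longer runs
        print(a, return_time(x, a, dt=0.01, block_len=5_000))
    ```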

  1. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics, operating over a network connecting a parallel computing server and a client terminal, was developed. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation on the parallel computer while the computation is still running on the server. Through a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. The amount of data sent from the parallel computer to the client is therefore so much smaller than without compression that the user enjoys swift, comfortable image display. Parallelization of image data generation is based on the owner-computes rule. The GUI on the client is built as a Java applet, so real-time visualization is possible on a client PC with nothing more than a Web browser. (author)

  2. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  3. Effect of the MCNP model definition on the computation time

    International Nuclear Information System (INIS)

    Šunka, Michal

    2017-01-01

    The presented work studies the influence of the method of defining the geometry in the MCNP transport code and its impact on computational time, including the difficulty of preparing an input file that describes the given geometry. Cases using different geometric definitions, including basic 2-dimensional and 3-dimensional objects and their combinations, were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (37% in a more realistic case) for the same results and the same statistical uncertainty. (orig.)

  4. TV time but not computer time is associated with cardiometabolic risk in Dutch young adults.

    Science.gov (United States)

    Altenburg, Teatske M; de Kroon, Marlou L A; Renders, Carry M; Hirasing, Remy; Chinapaw, Mai J M

    2013-01-01

    TV time and total sedentary time have been positively related to biomarkers of cardiometabolic risk in adults. We aim to examine the association of TV time and computer time separately with cardiometabolic biomarkers in young adults. Additionally, the mediating role of waist circumference (WC) is studied. Data of 634 Dutch young adults (18-28 years; 39% male) were used. Cardiometabolic biomarkers included indicators of overweight, blood pressure, blood levels of fasting plasma insulin, cholesterol, glucose, triglycerides and a clustered cardiometabolic risk score. Linear regression analyses were used to assess the cross-sectional association of self-reported TV and computer time with cardiometabolic biomarkers, adjusting for demographic and lifestyle factors. Mediation by WC was checked using the product-of-coefficient method. TV time was significantly associated with triglycerides (B = 0.004; CI = [0.001;0.05]) and insulin (B = 0.10; CI = [0.01;0.20]). Computer time was not significantly associated with any of the cardiometabolic biomarkers. We found no evidence for WC to mediate the association of TV time or computer time with cardiometabolic biomarkers. We found a significantly positive association of TV time with cardiometabolic biomarkers. In addition, we found no evidence for WC as a mediator of this association. Our findings suggest a need to distinguish between TV time and computer time within future guidelines for screen time.

  5. Real-time computational photon-counting LiDAR

    Science.gov (United States)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.

  6. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
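    The milestone approach is the easier of the two to sketch: an iterative computation records each successive approximation, and when the deadline arrives the last recorded milestone is returned as the (imprecise) result. An illustration using Newton's iteration for a square root; this is a toy, not the Concord system itself.

    ```python
    import time

    def sqrt_with_milestones(x, deadline_s):
        """Newton iteration for sqrt(x); if the deadline arrives first, return the
        last recorded milestone as an imprecise (but usable) result."""
        t_end = time.monotonic() + deadline_s
        milestone = x / 2 or 1.0                  # starting guess doubles as milestone
        while time.monotonic() < t_end:
            improved = 0.5 * (milestone + x / milestone)
            if abs(improved - milestone) < 1e-15:
                return improved, True             # finished before the deadline
            milestone = improved                  # record the new milestone
        return milestone, False                   # deadline reached: imprecise result

    print(sqrt_with_milestones(2.0, deadline_s=1e-7))   # tight deadline -> imprecise
    print(sqrt_with_milestones(2.0, deadline_s=0.1))    # ample deadline  -> precise
    ```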

  7. 36 CFR 223.81 - Shorter advertising periods in emergencies.

    Science.gov (United States)

    2010-07-01

    36 CFR Part 223 (Sale and Disposal of National Forest System Timber; Timber Sale Contracts; Advertisement and Bids), § 223.81 Shorter advertising periods in emergencies: In emergency situations where prompt …

  8. Real-time brain computer interface using imaginary movements

    DEFF Research Database (Denmark)

    El-Madani, Ahmad; Sørensen, Helge Bjarup Dissing; Kjær, Troels W.

    2015-01-01

    Background: Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper …

  9. GRAPHIC, time-sharing magnet design computer programs at Argonne

    International Nuclear Information System (INIS)

    Lari, R.J.

    1974-01-01

    This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in time-sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)

  10. Instructional Advice, Time Advice and Learning Questions in Computer Simulations

    Science.gov (United States)

    Rey, Gunter Daniel

    2010-01-01

    Undergraduate students (N = 97) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without instructional advice) x 2 (with or without time advice) x 2…

  11. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  12. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though the microprocessors become more powerful each day. It is usual that Real Time Operating Systems for embedded systems have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks …
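    The quantity in question is classically obtained from the Joseph-Pandya fixed-point recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, for fixed-priority preemptive tasks. Below is a sketch of that standard (unreduced-cost) computation, the baseline the paper sets out to accelerate; the task set is invented.

    ```python
    from math import ceil

    def worst_case_response_times(tasks):
        """Response-time analysis for fixed-priority preemptive tasks, highest
        priority first; each task is (C, T) = (worst-case execution time, period).
        Returns the WCRT per task, or None where a deadline (= period) is missed."""
        wcrt = []
        for i, (C_i, T_i) in enumerate(tasks):
            R = C_i
            while True:
                nxt = C_i + sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
                if nxt == R:
                    break                # fixed point: R is the worst-case response
                if nxt > T_i:
                    R = None
                    break                # exceeds its deadline: unschedulable
                R = nxt
            wcrt.append(R)
        return wcrt

    print(worst_case_response_times([(1, 4), (2, 6), (3, 13)]))   # -> [1, 3, 10]
    ```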

  13. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  14. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To reap the real benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on the arriving tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long-wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational demand of solving the non-linear shallow-water equations for inundation predictions is large, it has become tractable through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the …

  15. Soft Real-Time PID Control on a VME Computer

    Science.gov (United States)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating at real-time priority, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that, within each sampling period, obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Because microPID runs as a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
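    The loop section described above is the textbook discrete PID form: read the position, compute the profile setpoint, and command a voltage once per sampling period. A sketch with stand-in driver functions and an invented first-order plant; the gains and time constants are illustrative, not microPID's.

    ```python
    def pid_loop(read_position, write_voltage, profile, dt=125e-6,
                 kp=1.0, ki=200.0, kd=0.002, n_steps=8000):
        """Discrete PID position loop: one voltage command per sampling period dt.
        read_position/write_voltage stand in for the LPB and D/A driver routines."""
        integral = 0.0
        prev_err = 0.0
        for n in range(n_steps):
            setpoint = profile(n * dt)            # ideal position from the profile
            err = setpoint - read_position()
            integral += err * dt
            derivative = (err - prev_err) / dt
            write_voltage(kp * err + ki * integral + kd * derivative)
            prev_err = err

    # toy plant: the stage follows the commanded voltage as a first-order lag
    state = {"pos": 0.0, "volt": 0.0}
    def read_position():
        state["pos"] += (state["volt"] - state["pos"]) * (125e-6 / 0.01)  # tau = 10 ms
        return state["pos"]
    def write_voltage(v):
        state["volt"] = v

    pid_loop(read_position, write_voltage, profile=lambda t: 1.0)  # step to 1.0
    print("final position:", round(state["pos"], 4))               # settles near 1.0
    ```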

  16. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.

  17. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
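    Batcher's method owes its vector-friendliness to a fixed, data-independent pattern of compare-exchange operations, which is why it competes on STAR despite its N(log N)-squared complexity. A compact recursive rendering of Batcher's odd-even mergesort for power-of-two lengths (scalar Python, purely to show the network's structure; on vector hardware each layer of comparisons would run as one vector operation):

    ```python
    def compare_exchange(a, i, j):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

    def oddeven_merge(a, lo, hi, r):
        """Merge a[lo:hi] (both halves already sorted), comparing elements r apart."""
        step = r * 2
        if step < hi - lo:
            oddeven_merge(a, lo, hi, step)          # merge the even subsequence
            oddeven_merge(a, lo + r, hi, step)      # merge the odd subsequence
            for i in range(lo + r, hi - r, step):   # data-independent compare pattern
                compare_exchange(a, i, i + r)
        else:
            compare_exchange(a, lo, lo + r)

    def oddeven_merge_sort(a, lo=0, hi=None):
        """Batcher's odd-even mergesort; len(a) must be a power of two."""
        if hi is None:
            hi = len(a)
        if hi - lo > 1:
            mid = lo + (hi - lo) // 2
            oddeven_merge_sort(a, lo, mid)
            oddeven_merge_sort(a, mid, hi)
            oddeven_merge(a, lo, hi, 1)

    data = [6, 3, 7, 1, 8, 2, 5, 4]
    oddeven_merge_sort(data)
    print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
    ```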

  18. Computational intelligence in time series forecasting theory and engineering applications

    CERN Document Server

    Palit, Ajoy K

    2005-01-01

    Foresight in an engineering enterprise can make the difference between success and failure, and can be vital to the effective control of industrial systems. Applying time series analysis in the on-line milieu of most industrial plants has been problematic owing to the time and computational effort required. The advent of soft computing tools offers a solution. The authors harness the power of intelligent technologies individually and in combination. Examples of the particular systems and processes susceptible to each technique are investigated, cultivating a comprehensive exposition of the improvements on offer in quality, model building and predictive control and the selection of appropriate tools from the plethora available. Application-oriented engineers in process control, manufacturing, production industry and research centres will find much to interest them in this book. It is suitable for industrial training purposes, as well as serving as valuable reference material for experimental researchers.

  19. The Napoleon Complex: When Shorter Men Take More.

    Science.gov (United States)

    Knapen, Jill E P; Blaker, Nancy M; Van Vugt, Mark

    2018-05-01

    Inspired by an evolutionary psychological perspective on the Napoleon complex, we hypothesized that shorter males are more likely to show indirect aggression in resource competitions with taller males. Three studies provide support for our interpretation of the Napoleon complex. Our pilot study shows that men (but not women) keep more resources for themselves when they feel small. When paired with a taller male opponent (Study 1), shorter men keep more resources to themselves in a game in which they have all the power (dictator game) versus a game in which the opponent also has some power (ultimatum game). Furthermore, shorter men are not more likely to show direct, physical aggression toward a taller opponent (Study 2). As predicted by the Napoleon complex, we conclude that (relatively) shorter men show greater behavioral flexibility in securing resources when presented with cues that they are physically less competitive. Theoretical and practical implications are discussed.

  20. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  1. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  2. Modern EMC analysis I time-domain computational schemes

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of contemporary real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, the analysis covers the theory of the finite-difference time-domain, the transmission-line matrix/modeling, and the finite i

  3. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs, which are constructed out of the sampling of the solution of a time-delay differential equation, and show their good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting actual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We tackle some problems associated with the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs.
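
    The paper's VEC-GARCH forecasting task and hardware-style time-delay reservoir are not reproduced here, but the underlying idea, a fixed random dynamical system whose state is read out by a trained linear map, can be sketched with a minimal software echo state network. Every parameter below (reservoir size, spectral radius, ridge penalty, toy series) is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy series: noisy sine; the task is one-step-ahead forecasting.
        t = np.arange(2000)
        y = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)

        # Random, fixed reservoir; only the linear readout W_out is trained.
        n = 200
        W_in = rng.uniform(-0.5, 0.5, size=n)
        W = rng.standard_normal((n, n))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

        x = np.zeros(n)
        states = []
        for u in y[:-1]:
            x = np.tanh(W @ x + W_in * u)  # drive the reservoir with the input
            states.append(x.copy())
        X = np.array(states)

        # Ridge-regression readout mapping reservoir state -> next value.
        lam = 1e-6
        W_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y[1:])
        pred = X @ W_out
        print("train MSE:", np.mean((pred - y[1:]) ** 2))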

  4. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks, mainly to solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set up a complete, scalable, compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases, from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  5. Climate Data Provenance Tracking for Just-In-Time Computation

    Science.gov (United States)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  6. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
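
    Concretely, product integration approximates the transition probability matrix as P(0, t) ~ prod over a fine grid of (I + dA(u)), where A is the cumulative hazard matrix, and average state occupation times follow by integrating the occupation probabilities. A minimal numerical sketch for a hypothetical three-state illness-death model with constant hazards (all rates invented for illustration):

        import numpy as np

        # Toy illness-death model. States: 0 = healthy, 1 = ill, 2 = dead.
        h01, h02, h12 = 0.3, 0.1, 0.4  # hypothetical hazard rates per year

        def dA(dt):
            """Increment of the cumulative hazard matrix over a small step dt."""
            return np.array([[-(h01 + h02) * dt, h01 * dt, h02 * dt],
                             [0.0,              -h12 * dt, h12 * dt],
                             [0.0,               0.0,      0.0]])

        # Product integration: P(0, T) ~ product over the grid of (I + dA).
        T, steps = 10.0, 10000
        dt = T / steps
        P = np.eye(3)
        occupation_time = np.zeros(3)  # integral of occupation probabilities
        for _ in range(steps):
            occupation_time += P[0] * dt  # starting from the healthy state
            P = P @ (np.eye(3) + dA(dt))

        print("P(0, T) from state 0:", P[0])
        print("expected years spent in each state by T:", occupation_time)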

  7. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields, from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers. Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  8. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
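
    The book itself is the reference for the method; purely as a flavor of what an FDTD update loop looks like, here is a minimal one-dimensional Yee-style sketch in normalized units, with an invented grid size, source and Courant number.

        import numpy as np

        # Minimal 1D FDTD (Yee) update loop with a soft Gaussian source and
        # simple reflecting boundaries; illustrative only.
        nx, nt = 400, 600
        ez = np.zeros(nx)  # electric field
        hy = np.zeros(nx)  # magnetic field
        S = 0.5            # Courant number (stability requires S <= 1 in 1D)

        for n in range(nt):
            hy[:-1] += S * (ez[1:] - ez[:-1])   # update H from the curl of E
            ez[1:] += S * (hy[1:] - hy[:-1])    # update E from the curl of H
            ez[nx // 4] += np.exp(-((n - 30) / 10) ** 2)  # Gaussian pulse source

        print("peak |Ez| after", nt, "steps:", np.abs(ez).max())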

  9. In this issue: Time to replace doctors’ judgement with computers

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2015-11-01

    Informaticians continue to rise to the challenge, set by the English Health Minister, of trying to replace doctors’ judgement with computers. This issue describes successes and where there are barriers. However, whilst there is progress, it tends to be incremental, and there are grand challenges to be overcome before computers can replace clinicians. These grand challenges include: (1) improving usability so it is possible to more readily incorporate technology into clinical workflow; (2) rigorous new analytic methods that make use of the mass of available data, ‘Big data’, to create real-world evidence; (3) faster ways of meeting regulatory and legal requirements including ensuring privacy; (4) provision of reimbursement models to fund innovative technology that can substitute for clinical time; and (5) recognition that innovations that improve quality also often increase cost. Informatics is more likely to support and augment clinical decision making than replace clinicians.

  10. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response-time guarantees from the servers, to decide which tasks should be offloaded to get the results in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the error made by the algorithm against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.

  11. Real time simulation of large systems on mini-computer

    International Nuclear Information System (INIS)

    Nakhle, Michel; Roux, Pierre.

    1979-01-01

    Most simulation languages accept only an explicit formulation of differential equations, and logical variables hold no special status therein. The step size of the usual integration methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the generated code for execution time and memory, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.

  12. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D J

    2009-07-21

    This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3

  13. FRANTIC: a computer code for time dependent unavailability analysis

    International Nuclear Information System (INIS)

    Vesely, W.E.; Goldberg, F.F.

    1977-03-01

    The FRANTIC computer code evaluates the time dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time dependent modeling of periodic testing which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times and periodic equations are developed for the testing and repair contributions. Human errors and common mode failures can be included by assigning an appropriate constant probability for the contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be simply performed and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process
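
    FRANTIC's detailed test modeling (overrides, detection inefficiencies, test-caused failures) is not reproduced here, but the core quantity, the time-dependent unavailability of a periodically tested component with exponential failure times, can be sketched as follows. The failure rate, test interval and downtime are invented, and the model ignores the code's finer effects.

        import numpy as np

        lam = 1e-4      # failure rate per hour (hypothetical)
        T = 720.0       # test interval, hours (hypothetical)
        downtime = 4.0  # test/repair downtime per test, hours (hypothetical)

        t = np.linspace(0, 3 * T, 3000)
        # Between tests, the component may have failed undetected since the
        # last test, so q(t) = 1 - exp(-lam * (t mod T)).
        q = 1.0 - np.exp(-lam * (t % T))
        # The component is also down while it is being tested.
        q[(t % T) < downtime] = 1.0

        print("average unavailability:", q.mean())
        print("small-lambda approximation lam*T/2 + tau/T:",
              lam * T / 2 + downtime / T)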

  14. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of ²⁴⁰Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  15. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of the society as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  16. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
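
    The PANTEX index itself is not reproduced here, but a moving-window GLCM texture measure of the kind the abstract describes can be sketched with scikit-image. The window size, grey-level count and use of the contrast statistic averaged over four directions are illustrative assumptions (the published index combines occurrence measures differently), and the random tile stands in for a satellite image.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

        def texture_contrast(image, win=16, levels=32):
            """GLCM contrast in non-overlapping windows (illustrative, not PANTEX)."""
            img = (image / image.max() * (levels - 1)).astype(np.uint8)
            h, w = img.shape
            out = np.zeros((h // win, w // win))
            for i in range(0, h - win + 1, win):
                for j in range(0, w - win + 1, win):
                    glcm = graycomatrix(img[i:i + win, j:j + win],
                                        distances=[1],
                                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                        levels=levels, symmetric=True, normed=True)
                    out[i // win, j // win] = graycoprops(glcm, "contrast").mean()
            return out

        tile = np.random.default_rng(1).random((128, 128))
        print(texture_contrast(tile).shape)  # (8, 8)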

  17. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  18. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit for the brain is in having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  19. Chemistry, physics and time: the computer modelling of glassmaking.

    Science.gov (United States)

    Martlew, David

    2003-01-01

    A decade or so ago the remains of an early flat glass furnace were discovered in St Helens. Continuous glass production only became feasible after the Siemens Brothers demonstrated their continuous tank furnace at Dresden in 1870. One manufacturer of flat glass enthusiastically adopted the new technology and secretly explored many variations on this theme during the next fifteen years. Study of the surviving furnace remains using today's computer simulation techniques showed how, in 1887, that technology was adapted to the special demands of window glass making. Heterogeneous chemical reactions at high temperatures are required to convert the mixture of granular raw materials into the homogeneous glass needed for windows. Kinetics (and therefore the economics) of glassmaking is dominated by heat transfer and chemical diffusion as refractory grains are converted to highly viscous molten glass. Removal of gas bubbles in a sufficiently short period of time is vital for profitability, but the glassmaker must achieve this in a reaction vessel which is itself being dissolved by the molten glass. Design and operational studies of today's continuous tank furnaces need to take account of these factors, and good use is made of computer simulation techniques to shed light on the way furnaces behave and how improvements may be made. This paper seeks to show how those same techniques can be used to understand how the early Siemens continuous tank furnaces were designed and operated, and how the Victorian entrepreneurs succeeded in managing the thorny problems of what was, in effect, a vulnerable high temperature continuous chemical reactor.

  20. Time-Domain Terahertz Computed Axial Tomography NDE System

    Science.gov (United States)

    Zimdars, David

    2012-01-01

    NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (~0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D

  1. Time-of-Flight Sensors in Computer Graphics

    DEFF Research Database (Denmark)

    Kolb, Andreas; Barth, Erhardt; Koch, Reinhard

    2009-01-01

    ... including Computer Graphics, Computer Vision and Man Machine Interaction (MMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become “ubiquitous real...

  2. One long chain among shorter chains : the Flory approach revisited

    OpenAIRE

    Raphaël , E.; Fredrickson , G.; Pincus , P.

    1992-01-01

    We consider the mean square end-to-end distance of a long chain immersed in a monodisperse, concentrated solution of shorter, chemically identical chains. In contrast with the earlier work of Flory, no simplifying assumption on the wave vector dependence of the effective potential between segments is made. In order to obtain a closed form expression for the dimension of the long chain, we first derive a general expression for the mean square end-to-end distance of a flexible chain with arbitr...

  3. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years arrays have been widely used in different fields of seismological research. In particular they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument, together with processing techniques that are quite time consuming, has limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 Intel Pentium-III bi-processor PCs working at 550 MHz, and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the Multiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and the southeast flanks of this volcano. A real time continuous acquisition system has been simulated by
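
    The cluster's Fortran/MPI implementation is not shown here, but the MUSIC step at its core can be sketched in a few lines for a narrowband line array. The array geometry, source and noise level below are invented, and a real seismic array would process many frequency bands rather than a single narrowband snapshot model.

        import numpy as np

        def music_spectrum(X, n_sources, steering):
            """Narrowband MUSIC: X is (sensors, snapshots); steering maps a
            candidate angle to a steering vector."""
            R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
            w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
            En = V[:, : X.shape[0] - n_sources]     # noise subspace
            angles = np.linspace(-np.pi / 2, np.pi / 2, 361)
            p = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2
                 for a in angles]
            return angles, np.array(p)

        # Toy example: 8-element half-wavelength line array, source at 20 deg.
        m, n, rng = 8, 200, np.random.default_rng(0)
        steer = lambda a: np.exp(-1j * np.pi * np.arange(m) * np.sin(a))
        s = np.exp(1j * 2 * np.pi * rng.random(n))  # random-phase narrowband source
        X = np.outer(steer(np.deg2rad(20)), s) + 0.1 * (
            rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
        angles, p = music_spectrum(X, 1, steer)
        print("estimated DOA (deg):", np.rad2deg(angles[p.argmax()]))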

  4. First-line intra-arterial versus intravenous chemotherapy in unilateral sporadic group D retinoblastoma: evidence of better visual outcomes, ocular survival and shorter time to success with intra-arterial delivery from retrospective review of 20 years of treatment.

    Science.gov (United States)

    Munier, Francis L; Mosimann, Pascal; Puccinelli, Francesco; Gaillard, Marie-Claire; Stathopoulos, Christina; Houghton, Susan; Bergin, Ciara; Beck-Popovic, Maja

    2017-08-01

    The introduction of intra-arterial chemotherapy (IAC) as salvage treatment has improved the prognosis for eye conservation in group D retinoblastoma. The aim of this study was to compare the outcomes of consecutive patients with advanced unilateral disease treated with either first-line intravenous chemotherapy (IVC) or first-line IAC. This is a retrospective mono-centric comparative review of consecutive patients: sporadic unilateral retinoblastoma group D cases treated conservatively at Jules-Gonin Eye Hospital and CHUV between 1997 and 2014. From January 1997 to August 2008, IVC, combined with focal treatments, was the primary treatment approach. From September 2008 to October 2014, IAC replaced IVC as first-line therapy. 48 patients met the inclusion criteria, receiving only either IAC or IVC as primary treatment modality. Outcomes of 23 patients treated by IVC were compared with those of 25 treated by IAC; mean follow-up was 105.3 months (range 29.2-218.6) and 41.7 months (range 19.6-89.5), respectively. Treatment duration was significantly shorter in the IAC group than with intravenous chemotherapy treatment. Despite this, the results reported here imply that eyes treated with first-line IAC will have a shorter treatment period, better ocular survival and visual acuity than first-line IVC.

  5. 12 CFR 516.10 - How does OTS compute time periods under this part?

    Science.gov (United States)

    2010-01-01

    12 CFR Part 516 (Office of Thrift Supervision, Department of the Treasury; Application Processing Procedures), § 516.10 How does OTS compute time periods under this part? In computing...

  6. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statisti...
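
    The chapter's statistical analysis is not reproduced here; a common back-of-the-envelope version of the same question fixes a confidence level and a desired half-width E for the confidence interval of a simulated mean, and solves n >= (z * sigma / E)^2 using a pilot estimate of the output standard deviation sigma. A minimal sketch, with invented pilot numbers:

        import math

        def runs_needed(sigma, half_width, confidence=0.95):
            """Replications needed so the CI of a simulated mean has the given
            half-width, assuming approximately normal output."""
            z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
            return math.ceil((z * sigma / half_width) ** 2)

        # A pilot study gave sigma ~ 12.0; we want the mean within +/- 1.5.
        print(runs_needed(12.0, 1.5))  # 246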

  7. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
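
    As a sketch of the two-phase idea (column scans first, then row-wise minimization of (x - x')^2 + g(x')^2), the code below computes a squared Euclidean distance transform. For brevity the second phase is the straightforward quadratic-per-row variant, not the linear-time lower-envelope scan of the paper.

        import numpy as np

        def edt(binary):
            """Squared Euclidean distance transform, two-phase column/row scheme."""
            h, w = binary.shape
            INF = h + w
            # Phase 1: per column, distance to nearest feature pixel above/below.
            g = np.full((h, w), INF, dtype=float)
            g[binary] = 0
            for y in range(1, h):
                g[y] = np.minimum(g[y], g[y - 1] + 1)
            for y in range(h - 2, -1, -1):
                g[y] = np.minimum(g[y], g[y + 1] + 1)
            # Phase 2: per row, minimize (x - x')^2 + g(x')^2 over all columns x'.
            xs = np.arange(w)
            dt = np.empty((h, w))
            for y in range(h):
                dt[y] = np.min((xs[None, :] - xs[:, None]) ** 2
                               + g[y][None, :] ** 2, axis=1)
            return dt

        img = np.zeros((6, 6), dtype=bool)
        img[2, 3] = True
        print(edt(img))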

  8. Quo vadis? : persuasive computing using real time queue information

    NARCIS (Netherlands)

    Meys, Wouter; Groen, Maarten

    2014-01-01

    By presenting tourists with real-time information an increase in efficiency and satisfaction of their day planning can be achieved. At the same time, real-time information services can offer the municipality the opportunity to spread the tourists throughout the city centre. An important factor for

  9. Reusability Framework for Cloud Computing

    OpenAIRE

    Singh, Sukhpal; Singh, Rishideep

    2012-01-01

    Cloud based development is a challenging task for several software engineering projects, especially for those which need development with reusability. The present era of cloud computing is enabling new professional models for software development. Cloud computing is expected to be the coming trend of computing because of its speed of application deployment, shorter time to market, and lower cost of operation. Until a Cloud Computing Reusability Model is considered a fundamen...

  10. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event based simulator to investigate the performance of parallel algorithms executed over the WAN. The event based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool to investigate the types of applications and computing resource requirements needed to provide an uninterrupted flow of processed data for real time visualization purposes. The results obtained from the simulation show concurrence with the expected performance using the L-BSP model.

  11. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    Science.gov (United States)

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  12. 5 CFR 831.703 - Computation of annuities for part-time service.

    Science.gov (United States)

    2010-01-01

    5 CFR (Administrative Personnel), § 831.703 Computation of annuities for part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  13. Storm blueprints patterns for distributed real-time computation

    CERN Document Server

    Goetz, P Taylor

    2014-01-01

    A blueprints book with 10 different projects built in 10 different chapters, which demonstrate the various use cases of Storm for both beginner and intermediate users, grounded in real-world example applications. Although the book focuses primarily on Java development with Storm, the patterns are more broadly applicable, and the tips, techniques, and approaches described in the book apply to architects, developers, and operations. Additionally, the book should provoke and inspire applications of distributed computing to other industries and domains. Hadoop enthusiasts will also find this book a go

  14. Macroprocessing is the computing design principle for the times

    CERN Multimedia

    2001-01-01

    In a keynote speech, Intel Corporation CEO Craig Barrett emphasized that "macroprocessing" provides innovative and cost effective solutions to companies that they can customize and scale to match their own data needs. Barrett showcased examples of macroprocessing implementations from business, government and the scientific community, which use the power of Intel Architecture and Oracle9i Real Application Clusters to build large complex and scalable database solutions. A testimonial from CERN explained how the need for high performance computing to perform scientific research on sub-atomic particles was accomplished by using clusters of Xeon processor-based servers.

  15. Real-time exposure fusion on a mobile computer

    CSIR Research Space (South Africa)

    Bachoo, AK

    2009-12-01

    ...information in these scenarios. An image captured using a short exposure time will not saturate bright image regions, while an image captured with a long exposure time will show more detail in the dark regions. The pixel depth provided by most camera... The auto exposure also creates strong blown-out highlights in the foreground (the grass patch). The short shutter time (Exposure 1) correctly exposes the grass, while the long shutter time (Exposure 3) is able to correctly expose the camouflaged dummy...

  16. Computer-determined assay time based on preset precision

    International Nuclear Information System (INIS)

    Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.

    1994-01-01

    Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short time sample taken at the beginning of the assay could optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory with the assay time for each segment determined by counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity
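
    The report's algorithm is not reproduced here, but the underlying counting statistics are simple: for Poisson counting at rate r, the relative uncertainty after time t is 1/sqrt(r*t), so a short pre-count that estimates r fixes the time needed for a preset precision. A minimal sketch with invented limits, ignoring background subtraction and dead time:

        import math

        def assay_time(count_rate, rel_precision, t_min=10.0, t_max=3600.0):
            """Counting time so that the relative Poisson uncertainty
            1/sqrt(rate * t) reaches the preset precision, clamped to limits."""
            t = 1.0 / (count_rate * rel_precision ** 2)
            return min(max(t, t_min), t_max)

        # A pre-count estimates 50 counts/s; we want 1 % relative precision.
        print(assay_time(50.0, 0.01))  # 200 s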

  17. Computer-controlled neutron time-of-flight spectrometer. Part II

    International Nuclear Information System (INIS)

    Merriman, S.H.

    1979-12-01

    A time-of-flight spectrometer for neutron inelastic scattering research has been interfaced to a PDP-15/30 computer. The computer is used for experimental data acquisition and analysis and for apparatus control. This report was prepared to summarize the functions of the computer and to act as a users' guide to the software system

  18. Computation and evaluation of scheduled waiting time for railway networks

    DEFF Research Database (Denmark)

    Landex, Alex

    2010-01-01

    Timetables are affected by scheduled waiting time (SWT), which prolongs the travel times for trains and thereby passengers. SWT occurs when a train hinders another train from running at its desired speed. SWT affects both the trains and the passengers in the trains. The passengers may be further affected due to longer transfer times to other trains. SWT can be estimated analytically for a given timetable or by simulation of timetables and/or plans of operation. The simulation of SWT has the benefit that it is possible to examine the entire network. This makes it possible to improve the future

  19. A computer program for the estimation of time of death

    DEFF Research Database (Denmark)

    Lynnerup, N

    1993-01-01

    In the 1960s Marshall and Hoare presented a "Standard Cooling Curve" based on their mathematical analyses of the postmortem cooling of bodies. Although fairly accurate under standard conditions, the "curve" or formula is based on the assumption that the ambient temperature is constant and that... cooling of bodies is presented. It is proposed that by having a computer program that solves the equation, giving the length of the cooling period in response to a certain rectal temperature, and which allows easy comparison of multiple solutions, the uncertainties related to ambient temperature...
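
    Lynnerup's program itself is not available here; as a sketch of the kind of computation involved, the code below numerically inverts a Henssge-style double-exponential refinement of the Marshall-Hoare cooling model to get the cooling period from a rectal temperature. The constants are the commonly quoted ones and should be treated as assumptions, and this sketch still presumes a constant ambient temperature, which is exactly the limitation the paper addresses.

        import math
        from scipy.optimize import brentq

        def q_model(t, mass):
            """Henssge-style standardized cooling Q(t) for ambient <= 23 C.
            Constants are the commonly quoted ones (assumptions here)."""
            B = -1.2815 * mass ** -0.625 + 0.0284
            return 1.25 * math.exp(B * t) - 0.25 * math.exp(5 * B * t)

        def time_since_death(t_rectal, t_ambient, mass, t0=37.2):
            # Solve Q(t) = (rectal - ambient) / (t0 - ambient) for t in hours.
            q = (t_rectal - t_ambient) / (t0 - t_ambient)
            return brentq(lambda t: q_model(t, mass) - q, 0.01, 100.0)

        # Rectal 30 C, ambient 18 C, 75 kg body -> roughly half a day.
        print(time_since_death(30.0, 18.0, 75.0), "hours")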

  20. 42 CFR 93.509 - Computation of time.

    Science.gov (United States)

    2010-10-01

    ...holiday observed by the Federal government, in which case it includes the next business day. (b) When the... required or authorized under the rules in this part to be filed for good cause shown. When time permits...

  1. Computational complexity of time-dependent density functional theory

    International Nuclear Information System (INIS)

    Whitfield, J D; Yung, M-H; Tempel, D G; Aspuru-Guzik, A; Boixo, S

    2014-01-01

    Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn–Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn–Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn-Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn–Sham potential with controllable error bounds. (paper)

  2. On some methods for improving time of reachability sets computation for the dynamic system control problem

    Science.gov (United States)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on using parallel algorithms together with the data structures of the first approach. Within the framework of this paper a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  3. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  4. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  5. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  6. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  7. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the

  8. 10 CFR 13.27 - Computation of time.

    Science.gov (United States)

    2010-01-01

    ...; and (2) By 11:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49153, Aug. 28... the calculation of additional days when a participant is not entitled to receive an entire filing... same filing and service method, the number of days for service will be determined by the presiding...

  9. 10 CFR 2.306 - Computation of time.

    Science.gov (United States)

    2010-01-01

    ...:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49151, Aug. 28, 2007] ... the calculation of additional days when a participant is not entitled to receive an entire filing... filing and service method, the number of days for service will be determined by the presiding officer...

  10. Real time operating system for a nuclear power plant computer

    International Nuclear Information System (INIS)

    Alger, L.S.; Lala, J.H.

    1986-01-01

    A quadruply redundant synchronous fault tolerant processor (FTP) is now under fabrication at the C.S. Draper Laboratory to be used initially as a trip monitor for the Experimental Breeder Reactor EBR-II operated by the Argonne National Laboratory in Idaho Falls, Idaho. The real time operating system for this processor is described

  11. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) when applying a double-precision computation of a variable parameters logistic map (VPLM) is studied. First, using the proposed method, reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
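
    The paper's exact definition of Tc is not reproduced here; a rough stand-in for the same idea is to measure how many iterations a double-precision logistic-map trajectory stays close to an eps-perturbed copy of itself, since beyond that horizon the computed orbit is dominated by rounding error. All tolerances below are invented.

        import numpy as np

        def reliable_steps(x0, r=4.0, eps=1e-15, tol=1e-3, n_max=200):
            """Steps until a trajectory and an eps-perturbed copy diverge by tol;
            a rough stand-in for a reliable-computation-time notion."""
            x, y = x0, x0 + eps
            for n in range(n_max):
                if abs(x - y) > tol:
                    return n
                x = r * x * (1 - x)
                y = r * y * (1 - y)
            return n_max

        samples = np.random.default_rng(0).uniform(0.01, 0.99, 1000)
        print("mean reliable steps:",
              np.mean([reliable_steps(x) for x in samples]))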

  12. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    22 CFR (Foreign Relations; Miscellaneous and General Requirements), § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  13. Time-ordered product expansions for computational stochastic system biology

    International Nuclear Information System (INIS)

    Mjolsness, Eric

    2013-01-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie’s stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems. (paper)
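
    As a concrete instance of the SSA that the paper derives from time-ordered products, here is a minimal Gillespie simulation of a toy birth-death network; the rates and horizon are invented for illustration.

        import numpy as np

        def gillespie(k_birth=1.0, k_death=0.1, n0=0, t_end=50.0, seed=0):
            """Gillespie SSA for the birth-death process 0 -> X (k_birth),
            X -> 0 (k_death). Returns reaction times and copy numbers."""
            rng = np.random.default_rng(seed)
            t, n = 0.0, n0
            ts, ns = [t], [n]
            while t < t_end:
                rates = np.array([k_birth, k_death * n])
                total = rates.sum()
                if total == 0:
                    break
                t += rng.exponential(1.0 / total)    # waiting time to next event
                if rng.random() < rates[0] / total:  # choose which reaction fires
                    n += 1
                else:
                    n -= 1
                ts.append(t)
                ns.append(n)
            return np.array(ts), np.array(ns)

        ts, ns = gillespie()
        print("final copy number:", ns[-1],
              "(stationary mean is k_birth/k_death = 10)")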

  14. Wake force computation in the time domain for long structures

    International Nuclear Information System (INIS)

    Bane, K.; Weiland, T.

    1983-07-01

    One is often interested in calculating the wake potentials for short bunches in long structures using TBCI. For ultra-relativistic particles it is sufficient to solve for the fields only over a window containing the bunch and moving along with it. This technique reduces both the memory and the running time required by a factor that equals the ratio of the structure length to the window length. For example, for a bunch with σz of one picosecond traversing a single SLAC cell this improvement factor is 15. It is thus possible to solve for the wakefields in very long structures: for a given problem, increasing the structure length will not change the memory required while only adding linearly to the CPU time needed.

  15. Real-time FPGA architectures for computer vision

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2000-03-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
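
    To make the memory-access pattern concrete, here is a plain-Python sketch (an illustration, not the paper's hardware design) of the line-buffer scheme such architectures typically use: each pixel is read from image memory exactly once, and the 3x3 window is assembled from two buffered rows:

        # 3x3 windowed operation with two line buffers; one memory read per pixel.
        import numpy as np

        def convolve3x3_linebuffer(img, k):
            h, w = img.shape
            out = np.zeros((h, w))                    # border rows/columns stay zero
            rows = [np.zeros(w), np.zeros(w)]         # line buffers for previous rows
            for y in range(h):
                cur = img[y].astype(float)            # the single read of row y
                if y >= 2:
                    win_rows = np.vstack([rows[0], rows[1], cur])
                    for x in range(1, w - 1):
                        window = win_rows[:, x - 1:x + 2]
                        out[y - 1, x] = np.sum(window * k)   # correlation form
                rows = [rows[1], cur]                 # shift the line buffers
            return out

        img = np.arange(49.0).reshape(7, 7)
        print(convolve3x3_linebuffer(img, np.ones((3, 3)) / 9.0))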

  16. HOPE: Just-in-time Python compiler for astrophysical computations

    Science.gov (United States)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
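
    Based on the description above, usage amounts to a single decorator. This sketch assumes the hope package is installed and that the kernel stays within the Python subset the compiler supports:

        # Hedged usage sketch: hope.jit translates the decorated function to C++.
        import numpy as np
        import hope

        @hope.jit
        def polynomial(x, out, n):
            for i in range(n):                  # plain loops are compiled
                out[i] = 1.0 + 2.0 * x[i] + 3.0 * x[i] * x[i]

        x = np.linspace(0.0, 1.0, 1000)
        out = np.empty_like(x)
        polynomial(x, out, x.size)              # first call triggers compilation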

  17. Computational micromagnetics: prediction of time dependent and thermal properties

    International Nuclear Information System (INIS)

    Schrefl, T.; Scholz, W.; Suess, Dieter; Fidler, J.

    2001-01-01

    Finite element modeling treats magnetization processes on a length scale of several nanometers and thus gives a quantitative correlation between the microstructure and the magnetic properties of ferromagnetic materials. This work presents a novel finite element/boundary element micro-magnetics solver that combines a wavelet-based matrix compression technique for magnetostatic field calculations with a BDF/GMRES method for the time integration of the Gilbert equation of motion. The simulations show that metastable energy minima and nonuniform magnetic states within the grains are important factors in the reversal dynamics at finite temperature. The numerical solution of the Gilbert equation shows how reversed domains nucleate and expand. The switching time of submicron magnetic elements depends on the shape of the elements. Elements with slanted ends decrease the overall reversal time, as a transverse demagnetizing field suppresses oscillations of the magnetization. Thermal activated processes can be included adding a random thermal field to the effective magnetic field. Thermally assisted reversal was studied for CoCrPtTa thin-film media
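
    For orientation, the equation being integrated is the Landau-Lifshitz-Gilbert (LLG) equation. The single-moment sketch below, with illustrative parameter values, shows the basic explicit time integration (the paper uses a finite element/boundary element solver with BDF/GMRES, which this does not reproduce); thermal activation would enter as a random field added to h:

        # Explicit integration of the LLG equation for one magnetic moment m.
        import numpy as np

        gamma, alpha = 2.21e5, 0.1            # gyromagnetic ratio (m/(A s)), damping

        def llg_rhs(m, h):
            pre = -gamma / (1.0 + alpha ** 2)
            return pre * (np.cross(m, h) + alpha * np.cross(m, np.cross(m, h)))

        m = np.array([1.0, 0.0, 0.0])
        h = np.array([0.0, 0.0, 1e5])         # effective field along z (A/m)
        dt = 1e-12                            # time step (s)
        for _ in range(20000):
            m += dt * llg_rhs(m, h)           # forward Euler step
            m /= np.linalg.norm(m)            # keep |m| = 1
        print(m)                              # relaxed toward the field direction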

  18. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems and current research efforts toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Research efforts are currently under way in this area

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. Homework schedule: an important factor associated with shorter sleep duration among Chinese school-aged children.

    Science.gov (United States)

    Li, Shenghui; Yang, Qian; Chen, Zhe; Jin, Xingming; Jiang, Fan; Shen, Xiaoming

    2014-09-03

    This study was designed to examine the hypothesis that homework schedule has adverse impacts on Chinese children's sleep-wake habits and sleep duration. A random sample of 19,299 children aged 5.08 to 11.99 years old participated in a large, cross-sectional survey. A parent-administered questionnaire was completed to quantify children's homework schedule and sleep behaviors. Generally, a heavier homework schedule was significantly associated with later bedtime, later wake time, and shorter sleep duration. Among all sleep variables, weekday bedtime and weekday sleep duration appeared to be the most affected, especially by the weekday homework schedule.

  1. Computation of a long-time evolution in a Schroedinger system

    International Nuclear Information System (INIS)

    Girard, R.; Kroeger, H.; Labelle, P.; Bajzer, Z.

    1988-01-01

    We compare different techniques for the computation of a long-time evolution and the S matrix in a Schroedinger system. As an application we consider a two-nucleon system interacting via the Yamaguchi potential. We suggest computation of the time evolution for a very short time using Pade approximants, the long-time evolution being obtained by iterative squaring. Within the technique of strong approximation of Moller wave operators (SAM) we compare our calculation with computation of the time evolution in the eigenrepresentation of the Hamiltonian and with the standard Lippmann-Schwinger solution for the S matrix. We find numerical agreement between these alternative methods for time-evolution computation up to half the number of digits of internal machine precision, and fairly rapid convergence of both techniques towards the Lippmann-Schwinger solution
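
    A small numerical sketch of this strategy (with a random toy Hamiltonian as a stand-in): the (1,1) Pade approximant of exp(-iH dt), i.e. the Crank-Nicolson form, is squared k times to reach the full evolution time, then checked against the eigenrepresentation:

        # Short-time Pade propagator plus iterative squaring.
        import numpy as np

        rng = np.random.default_rng(1)
        H = rng.normal(size=(6, 6)); H = (H + H.T) / 2      # toy Hermitian H

        T, k = 10.0, 20
        dt = T / 2 ** k                                     # very short step
        I = np.eye(6)
        U = np.linalg.solve(I + 0.5j * H * dt, I - 0.5j * H * dt)  # (1,1) Pade
        for _ in range(k):
            U = U @ U                                       # iterative squaring

        w, v = np.linalg.eigh(H)                            # eigenrepresentation
        U_exact = v @ np.diag(np.exp(-1j * w * T)) @ v.conj().T
        print(np.abs(U - U_exact).max())                    # small residual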

  2. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  3. Representativeness of shorter measurement sessions in long-term indoor air monitoring.

    Science.gov (United States)

    Maciejewska, M; Szczurek, A

    2015-02-01

    Indoor air quality (IAQ) considerably influences health, comfort and the overall performance of people who spend most of their lives in confined spaces. For this reason, there is a strong need to develop methods for IAQ assessment. The fundamental issue in the quantitative determination of IAQ is the duration of measurements: an inadequate choice may provide incorrect information and potentially lead to wrong conclusions. The most complete information may be acquired through long-term monitoring, but this is typically perceived as impractical due to its time and cost load. The aim of this study was to determine whether long-term monitoring can be adequately represented by a shorter measurement session. Three measurable quantities were considered: temperature, relative humidity and carbon dioxide concentration. They are commonly recognized as indicators of IAQ and may be readily monitored. Scaled Kullback-Leibler divergence, also called relative entropy, was applied as the measure of data representativeness. We considered long-term monitoring over a range from 1 to 9 months. Based on our work, representative data on CO2 concentration may be acquired with measurements covering 20% of the time dedicated to long-term monitoring. In the case of temperature and relative humidity, the respective time demand was 50%. Our results suggest that indoor air monitoring strategies could employ shorter measurement sessions while still collecting data that are representative of long-term monitoring.
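
    A sketch of this kind of comparison, using a histogram-based Kullback-Leibler divergence between a shorter session and the full record; the synthetic CO2 series, the binning, and the omission of the paper's exact scaling are all assumptions:

        # KL divergence of a short session's value distribution vs the full record.
        import numpy as np

        def kl_divergence(short, full, bins=30):
            lo, hi = full.min(), full.max()
            p, _ = np.histogram(short, bins=bins, range=(lo, hi))
            q, _ = np.histogram(full, bins=bins, range=(lo, hi))
            p = p / p.sum() + 1e-12               # normalise, avoid log(0)
            q = q / q.sum() + 1e-12
            return np.sum(p * np.log(p / q))

        rng = np.random.default_rng(0)
        t = np.linspace(0, 60, 10_000)
        co2 = 400 + 200 * np.abs(np.sin(t)) + rng.normal(0, 20, t.size)
        print(kl_divergence(co2[:2_000], co2))    # 20% session vs full record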

  4. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    Science.gov (United States)

    2015-09-13

    Topics include space-time (ST) thermo-fluid analysis of a ground vehicle and its tires, ST-SI computational analysis of a vertical-axis wind turbine, multiscale compressible-flow computation with particle tracking, and space-time VMS computation of wind-turbine rotor and tower aerodynamics.

  5. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  6. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that
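
    An illustrative run-time autotuning loop in the spirit described (not the CUSH framework): candidate configurations of a tunable kernel are timed and the fastest is kept for subsequent calls. The blocked matrix multiply and block sizes are hypothetical stand-ins for device configurations:

        # Time each candidate configuration, keep the fastest.
        import time
        import numpy as np

        def kernel(a, b, block):
            n = a.shape[0]
            out = np.zeros((n, n))
            for i in range(0, n, block):          # blocked matrix multiply
                for j in range(0, n, block):
                    for p in range(0, n, block):
                        out[i:i+block, j:j+block] += (
                            a[i:i+block, p:p+block] @ b[p:p+block, j:j+block])
            return out

        def autotune(blocks, n=256, repeats=3):
            a, b = np.random.rand(n, n), np.random.rand(n, n)
            timings = {}
            for block in blocks:
                t0 = time.perf_counter()
                for _ in range(repeats):
                    kernel(a, b, block)
                timings[block] = (time.perf_counter() - t0) / repeats
            return min(timings, key=timings.get)

        print("selected block size:", autotune([16, 32, 64, 128]))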

  7. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real time in computer games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees under blowing wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real time in computer games.

  8. Ubiquitous computing technology for just-in-time motivation of behavior change.

    Science.gov (United States)

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  9. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    Science.gov (United States)

    He, Aiguo

    2011-01-01

    Computer system is useful for improving real time and interactive distance education activities. Especially in the case that a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…

  10. 78 FR 38949 - Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response

    Science.gov (United States)

    2013-06-28

    ... exposed to various forms of cyber attack. In some cases, attacks can be thwarted through the use of...-3383-01] Computer Security Incident Coordination (CSIC): Providing Timely Cyber Incident Response... systems will be successfully attacked. When a successful attack occurs, the job of a Computer Security...

  11. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads. For I/O-intensive workloads, the traditional physical machine is recommended.

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not least by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  13. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to demonstrate key elements of feasibility for a high speed automated time domain terahertz computed axial tomography (TD-THz CT) non destructive...

  14. Time Domain Terahertz Axial Computed Tomography Non Destructive Evaluation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In this Phase 2 project, we propose to develop, construct, and deliver to NASA a computed axial tomography time-domain terahertz (CT TD-THz) non destructive...

  15. Decreasing Transition Times in Elementary School Classrooms: Using Computer-Assisted Instruction to Automate Intervention Components

    Science.gov (United States)

    Hine, Jeffrey F.; Ardoin, Scott P.; Foster, Tori E.

    2015-01-01

    Research suggests that students spend a substantial amount of time transitioning between classroom activities, which may reduce time spent academically engaged. This study used an ABAB design to evaluate the effects of a computer-assisted intervention that automated intervention components previously shown to decrease transition times. We examined…

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  17. CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series

    International Nuclear Information System (INIS)

    Antonopoulos Domis, M.

    1978-03-01

    The program CROSAT computes directly from two discrete time series auto- and cross-spectra, transfer and coherence functions, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use, the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith and, perhaps with minor modifications, with any other hardware system. (author)
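
    For a modern point of reference (CROSAT itself targets 1970s ICL and Honeywell hardware), the same quantities can be estimated with FFT-based routines from scipy.signal; the test signals below are illustrative:

        # Auto-/cross-spectra, transfer function and coherence of two series.
        import numpy as np
        from scipy import signal

        fs = 100.0                                  # sampling frequency (Hz)
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)
        y = 0.5 * np.sin(2 * np.pi * 5 * t + 0.3) + rng.normal(0, 1, t.size)

        f, Pxx = signal.welch(x, fs=fs, nperseg=1024)    # auto-spectrum
        _, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross-spectrum
        _, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
        H = Pxy / Pxx                                    # transfer function estimate
        print(f[np.argmax(Cxy)])                         # ~5 Hz shared component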

  18. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been done to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
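
    A compact sketch of the general approach (illustrative only: it omits the paper's precedence constraints and its specific adaptive operator selection): a GA assigns tasks to VMs, fitness is the makespan, and the mutation rate adapts to population diversity:

        # Adaptive GA for assigning n_tasks to n_vms, minimising makespan.
        import numpy as np

        rng = np.random.default_rng(0)
        n_tasks, n_vms, pop_size, gens = 30, 5, 40, 200
        cost = rng.uniform(1, 10, size=(n_tasks, n_vms))   # runtime of task t on VM v

        def makespan(a):
            return max(cost[a == v, v].sum() for v in range(n_vms))

        pop = rng.integers(0, n_vms, size=(pop_size, n_tasks))
        for _ in range(gens):
            fit = np.array([makespan(ind) for ind in pop])
            pop = pop[np.argsort(fit)]                     # best first (elitist)
            diversity = fit.std() / fit.mean()
            p_mut = 0.02 if diversity > 0.05 else 0.2      # adapt mutation rate
            children = []
            while len(children) < pop_size // 2:
                a, b = pop[rng.integers(0, pop_size // 2, size=2)]
                cut = rng.integers(1, n_tasks)
                child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
                mask = rng.random(n_tasks) < p_mut
                child[mask] = rng.integers(0, n_vms, mask.sum())
                children.append(child)
            pop[pop_size // 2:] = children                 # replace worst half
        print(min(makespan(ind) for ind in pop))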

  19. Time expenditure in computer aided time studies implemented for highly mechanized forest equipment

    Directory of Open Access Journals (Sweden)

    Elena Camelia Mușat

    2016-06-01

    Full Text Available Time studies are important tools used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives, with the general aim of predicting operational behavior, choosing the most adequate equipment, or eliminating useless time. There is a long tradition of collecting the needed data manually, but this approach has its limitations, and the use of professional software in such studies is likely to expand, as tools of this kind have already been implemented. However, little to no information is available on the performance of data-analysis tasks when using purpose-built professional time-study software, while the resources needed to conduct time studies, including time itself, may be quite substantial. Our study aimed to model the relations between the time needed to analyze video-recorded time study data and several measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements separated within a work cycle, the delay-free cycle time, and the software functionalities used during data analysis significantly affected the time expenditure needed to analyze the data (α=0.01, p<0.01). Under the conditions of this study, where the average duration of a work cycle was about 48 seconds and the number of separated work elements was about 14, the speed used to replay the video files significantly affected the mean time expenditure, which averaged about 273 seconds at half of real speed and about 192 seconds at real speed. We argue that different study designs as well as the parameters used within the software are likely to produce

  20. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
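
    To illustrate the interior-point scheme named above, here is a 1D MacCormack predictor-corrector step applied to the inviscid Burgers equation, a toy stand-in (VNAP2 itself solves the 2D compressible Navier-Stokes equations):

        # Unsplit MacCormack scheme for u_t + (u^2/2)_x = 0.
        import numpy as np

        nx, steps = 200, 500
        dx, dt = 1.0 / nx, 2e-4
        x = np.linspace(0.0, 1.0, nx)
        u = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # smooth initial profile

        def flux(u):
            return 0.5 * u * u

        for _ in range(steps):
            f = flux(u)
            up = u.copy()
            up[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])      # predictor (forward)
            fp = flux(up)
            u_new = u.copy()
            u_new[1:] = 0.5 * (u[1:] + up[1:]
                               - dt / dx * (fp[1:] - fp[:-1])) # corrector (backward)
            u = u_new
            u[0], u[-1] = u[1], u[-2]                          # crude boundaries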

  1. Hereditary angioedema attacks resolve faster and are shorter after early icatibant treatment.

    Directory of Open Access Journals (Sweden)

    Marcus Maurer

    Full Text Available BACKGROUND: Attacks of hereditary angioedema (HAE) are unpredictable and, if affecting the upper airway, can be lethal. Icatibant is used for physician- or patient self-administered symptomatic treatment of HAE attacks in adults. Its mode of action includes disruption of the bradykinin pathway via blockade of the bradykinin B(2) receptor. Early treatment is believed to shorten attack duration and prevent severe outcomes; however, evidence to support these benefits is lacking. OBJECTIVE: To examine the impact of timing of icatibant administration on the duration and resolution of HAE type I and II attacks. METHODS: The Icatibant Outcome Survey is an international, prospective, observational study of patients treated with icatibant. Data on timings and outcomes of icatibant treatment for HAE attacks were collected between July 2009 and February 2012. A mixed model of repeated measures was performed for 426 attacks in 136 HAE type I and II patients. RESULTS: Attack duration was significantly shorter in patients treated <1 hour after attack onset compared with those treated ≥1 hour (6.1 hours versus 16.8 hours [p<0.001]). Similar significant effects were observed for <2 hours versus ≥2 hours (7.2 hours versus 20.2 hours [p<0.001]) and <5 hours versus ≥5 hours (8.0 hours versus 23.5 hours [p<0.001]). Treatment within 1 hour of attack onset also significantly reduced time to attack resolution (5.8 hours versus 8.8 hours [p<0.05]). Self-administrators were more likely to treat early and experience shorter attacks than those treated by a healthcare professional. CONCLUSION: Early blockade of the bradykinin B(2) receptor with icatibant, particularly within the first hour of attack onset, significantly reduced attack duration and time to attack resolution.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  4. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
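
    A minimal time-stepped sketch of the kind of neuron model described, where an input-driven synaptic conductance with time constant tau_syn injects charge gradually into a leaky membrane (all constants illustrative; this is not the hardware pipeline):

        # Leaky neuron with an exponentially decaying input-driven conductance.
        dt, tau_syn, tau_m = 0.1, 5.0, 20.0   # ms
        v, g = 0.0, 0.0
        v_th, v_reset = 1.0, 0.0
        spikes_in = {50, 60, 70, 200}         # input spike arrival steps
        trace = []
        for step in range(1000):
            if step in spikes_in:
                g += 0.3                      # conductance jumps on each input spike
            g -= dt * g / tau_syn             # exponential synaptic decay
            v += dt * (-v / tau_m + g)        # gradual charge injection
            if v >= v_th:
                v = v_reset                   # output spike, then reset
            trace.append(v)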

  5. Variation in computer time with geometry prescription in monte carlo code KENO-IV

    International Nuclear Information System (INIS)

    Gopalakrishnan, C.R.

    1988-01-01

    In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional savings of computing time is highly desirable. Variation in computation time with box description in KENO for two different fast reactor fuel subassemblies of FBTR and PFBR is studied. The K-eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the K-eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)

  6. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
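
    For concreteness, a brute-force baseline restricted to vertex pairs (the paper's detour ranges over all points of the graph, and its algorithms are far faster) can be written directly from the definition:

        # Maximum detour over vertex pairs: graph distance / Euclidean distance.
        import heapq, itertools, math

        def dijkstra(adj, s):
            dist = {v: math.inf for v in adj}
            dist[s] = 0.0
            pq = [(0.0, s)]
            while pq:
                d, v = heapq.heappop(pq)
                if d > dist[v]:
                    continue
                for w, length in adj[v]:
                    if d + length < dist[w]:
                        dist[w] = d + length
                        heapq.heappush(pq, (dist[w], w))
            return dist

        pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}   # a unit 4-cycle
        adj = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
               2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0), (0, 1.0)]}

        best = max(dijkstra(adj, u)[v] / math.dist(pos[u], pos[v])
                   for u, v in itertools.combinations(pos, 2))
        print(best)       # sqrt(2), attained by opposite corners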

  7. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  8. Computation of transit times using the milestoning method with applications to polymer translocation

    Science.gov (United States)

    Hawk, Alexander T.; Konda, Sai Sriharsha M.; Makarov, Dmitrii E.

    2013-08-01

    Milestoning is an efficient approximation for computing long-time kinetics and thermodynamics of large molecular systems, which are inaccessible to brute-force molecular dynamics simulations. A common use of milestoning is to compute the mean first passage time (MFPT) for a conformational transition of interest. However, the MFPT is not always the experimentally observed timescale. In particular, the duration of the transition path, or the mean transit time, can be measured in single-molecule experiments, such as studies of polymers translocating through pores and fluorescence resonance energy transfer studies of protein folding. Here we show how to use milestoning to compute transit times and illustrate our approach by applying it to the translocation of a polymer through a narrow pore.

  9. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained equal at 82%; the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.
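
    A toy discrete-event sketch of this kind of capacity study (not the paper's model; the arrival rate, scan time and horizon are invented), using the simpy package:

        # Patients queue for one CT scanner; measure mean access time.
        import random
        import simpy

        random.seed(0)
        waits = []

        def patient(env, ct):
            arrival = env.now
            with ct.request() as req:
                yield req                        # wait for a free CT slot
                waits.append(env.now - arrival)  # access time (hours)
                yield env.timeout(0.5)           # scan duration

        def arrivals(env, ct):
            while True:
                yield env.timeout(random.expovariate(1.7))  # ~1.7 requests/hour
                env.process(patient(env, ct))

        env = simpy.Environment()
        ct = simpy.Resource(env, capacity=1)
        env.process(arrivals(env, ct))
        env.run(until=8 * 250)                   # a year of 8-hour working days
        print(sum(waits) / len(waits))           # mean CT access time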

  10. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while utilization remained equal at 82%; the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.

  11. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    Energy Technology Data Exchange (ETDEWEB)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu [Department of Chemistry, University of Washington, Seattle, Washington 98195 (United States)

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  12. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux-based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack-mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  13. Study and implementation of a real-time operating system on an ARM-based single board computer

    OpenAIRE

    A, Wiedjaja; M, Handi; L, Jonathan; Christian, Benyamin; Kristofel, Luis

    2014-01-01

    An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automatic braking systems in cars, which need a high level of reliability, require an operating system that operates in real time. The study aims to assess the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with ...

  14. Y2K issues for real time computer systems for fast breeder test reactor

    International Nuclear Information System (INIS)

    Swaminathan, P.

    1999-01-01

    The presentation shows the classification of real-time systems related to operation, control and monitoring of the fast breeder test reactor. The software life cycle includes software requirement specification, software design description, coding, commissioning, operation and management. A software scheme in the supervisory computer of the fast breeder test reactor is described, drawing on twenty years of experience in the design, development, installation, commissioning, operation and maintenance of computer-based supervisory control systems for nuclear installations, with a particular emphasis on solving the Y2K problem

  15. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing based on the compute unified device architecture (CUDA) is combined with digital holographic microscopy. Unlike other holographic microscopes, which have to perform reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields at micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, versus 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed has thus been raised by a factor of 40; in other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.
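
    The per-plane computation being accelerated is typically an FFT-based propagation. This NumPy sketch of the angular-spectrum method shows the core that a CUDA version would run once per focal plane (the choice of angular-spectrum propagation here is an assumption, not a detail taken from the paper):

        # Angular-spectrum reconstruction of a hologram at distance z.
        import numpy as np

        def angular_spectrum(holo, wavelength, dx, z):
            ny, nx = holo.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            k2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
            kz = 2j * np.pi * np.sqrt(np.maximum(k2, 0.0))  # evanescent cut-off
            return np.fft.ifft2(np.fft.fft2(holo) * np.exp(kz * z))

        # e.g. a 512x512 hologram, 532 nm light, 2 um pixels, z = 1 mm
        holo = np.random.rand(512, 512)
        field = angular_spectrum(holo, 532e-9, 2e-6, 1e-3)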

  16. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers 10 to 120 hours of computational time when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  17. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints.

    Science.gov (United States)

    Sako, Shunji; Sugiura, Hiromichi; Tanoue, Hironori; Kojima, Makoto; Kono, Mitsunobu; Inaba, Ryoichi

    2014-08-01

    This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale - VAS). Oxygen consumption (VO(2)), the ratio of inspiration time to respiration time (T(i)/T(total)), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/T(i)) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO(2)/VE), and oxygen extraction fractions (VO(2)/VE) were significantly higher for the DP than they were for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than they were for the PP. Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  18. The position of a standard optical computer mouse affects cardiorespiratory responses during the operation of a computer under time constraints

    Directory of Open Access Journals (Sweden)

    Shunji Sako

    2014-08-01

    Full Text Available Objectives: This study investigated the association between task-induced stress and fatigue by examining the cardiovascular responses of subjects using different mouse positions while operating a computer under time constraints. Material and Methods: Sixteen young, healthy men participated in the study, which examined the use of optical mouse devices affixed to laptop computers. Two mouse positions were investigated: (1) the distal position (DP), in which the subjects place their forearms on the desk accompanied by the abduction and flexion of their shoulder joints, and (2) the proximal position (PP), in which the subjects place only their wrists on the desk without using an armrest. The subjects continued each task for 16 min. We assessed differences in several characteristics according to mouse position, including expired gas values, autonomic nerve activities (based on cardiorespiratory responses), operating efficiencies (based on word counts), and fatigue levels (based on the visual analog scale – VAS). Results: Oxygen consumption (VO2), the ratio of inspiration time to respiration time (Ti/Ttotal), respiratory rate (RR), minute ventilation (VE), and the ratio of expiration to inspiration (Te/Ti) were significantly lower when the participants performed the task in the DP than in the PP. Tidal volume (VT), carbon dioxide output rates (VCO2/VE), and oxygen extraction fractions (VO2/VE) were significantly higher for the DP than they were for the PP. No significant difference in VAS was observed between the positions; however, as the task progressed, autonomic nerve activities were lower and operating efficiencies were significantly higher for the DP than they were for the PP. Conclusions: Our results suggest that the DP has fewer effects on cardiorespiratory functions, causes lower levels of sympathetic nerve activity and mental stress, and produces a higher total workload than the PP. This suggests that the DP is preferable to the PP when operating a computer.

  19. Explicit time marching methods for the time-dependent Euler computations

    International Nuclear Information System (INIS)

    Tai, C.H.; Chiang, D.C.; Su, Y.P.

    1997-01-01

    Four explicit-type time-marching methods, including one proposed by the authors, are examined. The TVD conditions of this method are analyzed with the linear conservation law as the model equation. The performance of these methods when applied to the Euler equations is numerically tested. Seven examples are tested; the main concern is the performance of the methods when discontinuities of different strengths are encountered. As the discontinuity gets stronger, spurious oscillations show up for the three existing methods, while the method proposed by the authors always gives satisfactory results. The effect of the limiter is also investigated. To compare the methods on the same basis, the same spatial discretization is used. Roe's solver is used to evaluate the fluxes at the cell interface; spatially second-order accuracy is achieved by the MUSCL reconstruction. 19 refs., 8 figs

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  5. Time to punishment: the effects of a shorter criminal procedure on crime rates

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor

    2015-01-01

    Roč. 43, August (2015), s. 134-147 ISSN 0144-8188 Institutional support: PRVOUK-P23 Keywords : criminal procedure * deterrence * law enforcement Subject RIV: AH - Economics Impact factor: 0.543, year: 2015

  6. Time to punishment: the effects of a shorter criminal procedure on crime rates

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor

    2015-01-01

    Roč. 43, August (2015), s. 134-147 ISSN 0144-8188 Institutional support: RVO:67985998 Keywords : criminal procedure * deterrence * law enforcement Subject RIV: AH - Economics Impact factor: 0.543, year: 2015

  7. Changing patterns of working time in Germany - from shorter working hours to more flexible work schedules

    OpenAIRE

    Seifert, Hartmut

    2010-01-01

    Working time reduction promises substantial employment effects. Working time reduction combined with working time flexibilisation has proved to be a good experience, because it makes it possible to reduce labour costs, improve productivity and maintain employment. Incentives on social security contributions can contribute not only to improving employment, but also to reducing the burden of public spending on unemployment. However...

  8. Accuracy of rate coding: When shorter time window and higher spontaneous activity help

    Czech Academy of Sciences Publication Activity Database

    Leváková, Marie; Tamborrino, M.; Košťál, Lubomír; Lánský, Petr

    2017-01-01

    Roč. 95, č. 2 (2017), č. článku 022310. ISSN 2470-0045 R&D Projects: GA ČR(CZ) GA15-08066S; GA MŠk(CZ) 7AMB17AT048 Institutional support: RVO:67985823 Keywords: rate coding * observation window * spontaneous activity * Fisher information * perfect integrate-and-fire model * Wiener process Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Biology (theoretical, mathematical, thermal, cryobiology, biological rhythm), Evolutionary biology Impact factor: 2.366, year: 2016

  9. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)
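
    The EGS5 deployment scripts themselves are not shown in the record; the sketch below is a minimal, hypothetical mpi4py rendering of the pattern it describes: a master node distributes histories among independent workers over the message passing interface, and the partial tallies are aggregated at the end. The simulate_histories stand-in is invented for illustration.

```python
# Minimal master/worker Monte Carlo distribution over MPI (a sketch of the
# described pattern, not the actual EGS5 tooling).
# Run with e.g.:  mpiexec -n 8 python mc_cloud.py
from mpi4py import MPI
import random

def simulate_histories(n, seed):
    """Stand-in for an EGS5 batch: here, just a trivial dose-like tally."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL = 1_000_000                                  # total particle histories
share = TOTAL // size + (1 if rank < TOTAL % size else 0)

local = simulate_histories(share, seed=rank)       # independent stream/node
tallies = comm.gather(local, root=0)               # aggregate at the master

if rank == 0:
    print("aggregate tally:", sum(tallies))
```

    Since the histories are independent, wall-clock time scales inversely with the number of worker nodes, which is the behaviour the abstract reports.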

  10. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  11. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.

  12. A computationally simple and robust method to detect determinism in a time series

    DEFF Research Database (Denmark)

    Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.

    2006-01-01

    We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than the signal, and that remains robust for time-varying signals.

  13. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  14. Ultrasonic divergent-beam scanner for time-of-flight tomography with computer evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Glover, G H

    1978-03-02

    The rotatable ultrasonic divergent-beam scanner is designed for time-of-flight tomography with computer evaluation. It can measure parameters that are important for characterizing the structure of soft tissues, e.g., time of flight as a function of the velocity distribution along a given propagation path (the method is analogous to transaxial X-ray tomography). Moreover, it permits the quantitative measurement of two-dimensional velocity distributions and may therefore be applied to serial examinations for detecting cancer of the breast. Digital memories as well as analog-digital hybrid systems are suitable as computers.

  15. Computer-games for gravitational wave science outreach: Black Hole Pong and Space Time Quest

    International Nuclear Information System (INIS)

    Carbone, L; Bond, C; Brown, D; Brückner, F; Grover, K; Lodhia, D; Mingarelli, C M F; Fulda, P; Smith, R J E; Unwin, R; Vecchio, A; Wang, M; Whalley, L; Freise, A

    2012-01-01

    We have established a program aimed at developing computer applications and web applets to be used for educational purposes as well as gravitational wave outreach activities. These applications and applets teach gravitational wave physics and technology. The computer programs are generated in collaboration with undergraduates and summer students as part of our teaching activities, and are freely distributed on a dedicated website. As part of this program, we have developed two computer-games related to gravitational wave science: 'Black Hole Pong' and 'Space Time Quest'. In this article we present an overview of our computer related outreach activities and discuss the games and their educational aspects, and report on some positive feedback received.

  16. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
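
    As background for the matrix R mentioned above: for a discrete-time QBD chain with level-up, within-level and level-down transition blocks A0, A1, A2 (the conventional names, not taken from the paper), the minimal nonnegative solution of R = A0 + R·A1 + R²·A2 can be computed by plain successive substitution, as sketched below. The paper's structural shortcuts, which cut the per-iteration multiplication count, are not reproduced.

```python
import numpy as np

def qbd_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Successive substitution for the minimal nonnegative solution of
    R = A0 + R*A1 + R^2*A2 (matrix R of a discrete-time QBD chain).
    A0, A1, A2: level-up, within-level and level-down transition blocks."""
    R = np.zeros_like(A0, dtype=float)
    for _ in range(max_iter):
        R_new = A0 + R @ A1 + R @ R @ A2
        if np.abs(R_new - R).max() < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration failed to converge")
```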

  17. Marital disruption is associated with shorter salivary telomere length in a probability sample of older adults.

    Science.gov (United States)

    Whisman, Mark A; Robustelli, Briana L; Sbarra, David A

    2016-05-01

    Marital disruption (i.e., marital separation, divorce) is associated with a wide range of poor mental and physical health outcomes, including increased risk for all-cause mortality. One biological intermediary that may help explain the association between marital disruption and poor health is accelerated cellular aging. This study examines the association between marital disruption and salivary telomere length in a United States probability sample of adults ≥50 years of age. Participants were 3526 individuals who participated in the 2008 wave of the Health and Retirement Study. Telomere length assays were performed using quantitative real-time polymerase chain reaction (qPCR) on DNA extracted from saliva samples. Health and lifestyle factors, traumatic and stressful life events, and neuroticism were assessed via self-report. Linear regression analyses were conducted to examine the associations between predictor variables and salivary telomere length. Based on their marital status data in the 2006 wave, people who were separated or divorced had shorter salivary telomeres than people who were continuously married or had never been married, and the association between marital disruption and salivary telomere length was not moderated by gender or neuroticism. Furthermore, the association between marital disruption and salivary telomere length remained statistically significant after adjusting for demographic and socioeconomic variables, neuroticism, cigarette use, body mass, traumatic life events, and other stressful life events. Additionally, results revealed that currently married adults with a history of divorce evidenced shorter salivary telomeres than people who were continuously married or never married. Accelerated cellular aging, as indexed by telomere shortening, may be one pathway through which marital disruption is associated with morbidity and mortality. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A shorter and more specific oral sensitization-based experimental model of food allergy in mice.

    Science.gov (United States)

    Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica

    2012-07-31

    Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks and a subsequent challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score to the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Television viewing, computer use and total screen time in Canadian youth.

    Science.gov (United States)

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  20. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  1. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may require excessive processing time to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
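
    To make the parallelism argument concrete, here is a minimal NumPy sketch (Python rather than the paper's MATLAB) of a Jacobi sweep for the 3D Poisson equation with zero Dirichlet boundaries: every interior point is updated from the previous iterate only, so all updates can proceed simultaneously, whereas a Gauss-Seidel sweep reuses freshly updated values and is inherently sequential. Grid sizes and the source term are illustrative.

```python
import numpy as np

def jacobi_poisson_3d(f, h, n_iter=500):
    """Jacobi iterations for the 3D Poisson equation  lap(phi) = f  on a
    uniform grid with spacing h and zero Dirichlet boundaries.  The whole
    right-hand side is evaluated from the previous iterate before the
    assignment, which is exactly why Jacobi parallelizes naturally."""
    phi = np.zeros_like(f, dtype=float)
    for _ in range(n_iter):
        phi[1:-1, 1:-1, 1:-1] = (
            phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
            phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
            phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] -
            h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
    return phi

# Illustrative use: a point source in a 32^3 grid.
f = np.zeros((32, 32, 32))
f[16, 16, 16] = -1.0
phi = jacobi_poisson_3d(f, h=1.0 / 31)
```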

  2. Math modeling and computer mechanization for real time simulation of rotary-wing aircraft

    Science.gov (United States)

    Howe, R. M.

    1979-01-01

    Mathematical modeling and computer mechanization for real time simulation of rotary wing aircraft are discussed. Error analysis in the digital simulation of dynamic systems, such as rotary wing aircraft, is described. The method for digital simulation of nonlinearities with discontinuities, such as those that exist in typical flight control systems and rotor blade hinges, is discussed.

  3. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
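
    The NTP-A* pruning itself is not reproduced here; as a reference point, the following is a generic A* shortest-path search over a road network with a straight-line travel-time heuristic, the baseline building block the paper augments with branch-and-bound discarding of inaccessible links. The graph and coordinate structures are assumptions for illustration.

```python
import heapq, math

def a_star(graph, coords, source, target, v_max=1.0):
    """Generic A* over a road network.
    graph:  {node: [(neighbor, travel_time), ...]}
    coords: {node: (x, y)}
    v_max:  fastest speed in the network, so straight-line distance / v_max
            never overestimates travel time (admissible heuristic)."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[target]
        return math.hypot(x2 - x1, y2 - y1) / v_max
    best = {source: 0.0}
    frontier = [(h(source), source)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == target:
            return best[node]
        for nbr, w in graph.get(node, []):
            g = best[node] + w
            if g < best.get(nbr, math.inf):
                best[nbr] = g
                heapq.heappush(frontier, (g + h(nbr), nbr))
    return math.inf

# Tiny illustrative network: the two-hop route A-B-C beats the direct link.
graph = {"A": [("B", 1.0), ("C", 3.0)], "B": [("C", 1.0)]}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
print(a_star(graph, coords, "A", "C"))   # -> 2.0
```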

  4. Computer-generated versus nurse-determined strategy for incubator humidity and time to regain birthweight

    NARCIS (Netherlands)

    Helder, Onno K.; Mulder, Paul G. H.; van Goudoever, Johannes B.

    2008-01-01

    To compare effects on premature infants' weight gain of a computer-generated and a nurse-determined incubator humidity strategy. An optimal humidity protocol is thought to reduce time to regain birthweight. Prospective randomized controlled design. Level IIIC neonatal intensive care unit in the

  5. Computing Camps for Girls : A First-Time Experience at the University of Limerick

    NARCIS (Netherlands)

    McInerney, Clare; Lamprecht, A.L.; Margaria, Tiziana

    2018-01-01

    Increasing the number of females in ICT-related university courses has been a major concern for several years. In 2015, we offered a girls-only computing summer camp for the first time, as a new component in our education and outreach activities to foster students’ interest in our discipline. In

  6. A Real-Time Plagiarism Detection Tool for Computer-Based Assessments

    Science.gov (United States)

    Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P.

    2018-01-01

    Aim/Purpose: The aim of this article is to develop a tool to detect plagiarism in real time amongst students being evaluated for learning in a computer-based assessment setting. Background: Cheating or copying all or part of source code of a program is a serious concern to academic institutions. Many academic institutions apply a combination of…

  7. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control and control logic for fuel handling systems. The most frequent cause of fault in safety critical real time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are the popular languages used for modeling the requirement specification. A typical safety critical real time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized

  8. Green computing: power optimisation of VFI-based real-time multiprocessor dataflow applications (extended version)

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor

  9. Green computing: power optimisation of vfi-based real-time multiprocessor dataflow applications

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor

  10. Effect of exposure time reduction towards sensitivity and SNR for computed radiography (CR) application in NDT

    International Nuclear Information System (INIS)

    Sapizah Rahim; Khairul Anuar Mohd Salleh; Noorhazleena Azaman; Shaharudin Sayuti; Siti Madiha Muhammad Amir; Arshad Yassin; Abdul Razak Hamzah

    2010-01-01

    A signal-to-noise ratio (SNR) and sensitivity study of a Computed Radiography (CR) system under reduced exposure time is presented. The purposes of this research are to determine the behavior of the SNR for three different thicknesses (step wedge: 5, 10 and 15 mm) and the ability of the CR system to resolve a hole-type penetrameter when the exposure time is decreased by up to 80 % relative to the exposure chart (D7; ISOVOLT Titan E). It is shown that the SNR decreases as the exposure time is reduced, but a high-quality image is still achieved at up to an 80 % reduction in exposure time. (author)

  11. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
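
    The Laplace-transform analysis of the abstract is not reproduced here. As a rough sanity check one can use a capacity-sharing approximation (an assumption, not the paper's method): the deterministic task occupies the CPU for a fixed window each period, and background jobs see a correspondingly slowed M/M/1 server.

```python
def background_response_time(lam, mu, period, window):
    """Capacity-sharing approximation: the time-critical task occupies the
    CPU for `window` seconds every `period` seconds, so background jobs see
    an M/M/1 server slowed to mu_eff = mu * (1 - window/period).
    (An approximation for intuition only.)"""
    mu_eff = mu * (1.0 - window / period)
    if lam >= mu_eff:
        raise ValueError("background load exceeds residual CPU capacity")
    return 1.0 / (mu_eff - lam)   # mean M/M/1 response time

# Example: 5 background jobs/s, service rate 20 jobs/s, and a 20 ms
# time-critical burst every 100 ms -> mu_eff = 16/s, mean response ~91 ms.
print(background_response_time(lam=5.0, mu=20.0, period=0.100, window=0.020))
```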

  12. A computer-based time study system for timber harvesting operations

    Science.gov (United States)

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  13. Association between TV viewing, computer use and overweight, determinants and competing activities of screen time in 4- to 13-year-old children.

    Science.gov (United States)

    de Jong, E; Visscher, T L S; HiraSing, R A; Heymans, M W; Seidell, J C; Renders, C M

    2013-01-01

    TV viewing and computer use are associated with childhood overweight, but it remains unclear how these behaviours could best be targeted. The aim of this study was to determine to what extent the association between TV viewing, computer use and overweight is explained by other determinants of overweight, to find determinants of TV viewing and computer use in the home environment, and to investigate competing activities. A cross-sectional study was carried out among 4072 children aged 4-13 years in the city of Zwolle, the Netherlands. Data collection consisted of measured height, weight and waist circumference, and a parental questionnaire on socio-demographic characteristics, the child's nutrition, physical activity (PA) and sedentary behaviour. Associations were studied with logistic regression analyses, for older and younger children, and for boys and girls separately. The odds ratio (OR) of being overweight was 1.70 (95% confidence interval (CI): 1.07-2.72) for viewing TV >1.5 h among 4- to 8-year-old children, adjusted for all potential confounders. Computer use was not significantly associated with overweight. Determinants of TV viewing were as follows: having >2 TVs in the household (OR: 2.38; 95% CI: 1.66-3.41), a TV in the child's bedroom and not having rules on TV viewing. TV viewing and computer use were both associated with shorter sleep duration and not with less PA. The association between TV viewing and overweight is not explained by socio-demographic variables, drinking sugared drinks or eating snacks. Factors in the home environment influence children's TV viewing. Parents have a central role as they determine the number of TVs, the rules and also their children's bedtime. Therefore, interventions to reduce screen time should support parents in making home environmental changes, especially when the children are young.

  14. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work with conventional acquisition algorithms by resampling the main lobe of the received broadband signals at a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease the computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With the circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites still acquired successfully.
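
    The core of such an acquisition search is a circular correlation over all code phases, computed with FFTs; the sketch below also applies the abstract's detection metric, the ratio of the highest to the second-highest correlation value. The Doppler search and the main-lobe resampling step are omitted for brevity, and the PRN stand-in code is synthetic.

```python
import numpy as np

def acquire(rx, code, threshold=2.0):
    """FFT-based circular correlation over all code phases; detection uses
    the ratio of the highest to the second-highest correlation value."""
    n = len(code)
    corr = np.abs(np.fft.ifft(np.fft.fft(rx[:n]) * np.conj(np.fft.fft(code))))
    peak = int(np.argmax(corr))
    ratio = corr[peak] / np.delete(corr, peak).max()
    return (peak if ratio > threshold else None), ratio

# Synthetic check with a stand-in PRN code delayed by 200 samples.
rng = np.random.default_rng(0)
code = np.sign(rng.standard_normal(1023))
rx = np.roll(code, 200) + 0.5 * rng.standard_normal(1023)
print(acquire(rx, code))        # -> (200, ratio well above threshold)
```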

  15. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work with conventional acquisition algorithms by resampling the main lobe of the received broadband signals at a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease the computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With the circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites still acquired successfully. PMID:29495301

  16. Computationally determining the salience of decision points for real-time wayfinding support

    Directory of Open Access Journals (Sweden)

    Makoto Takemiya

    2012-06-01

    Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.

  17. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
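
    DAIET's data-plane implementation is not shown in the record; the sketch below only illustrates, in plain Python, why aggregation functions fit in-network execution: the reduction is commutative and associative, and combining n worker updates into one cuts the traffic leaving the switch by a factor of n. The worker count and vector sizes are hypothetical (8 workers happen to give an 87.5% reduction, of the same order as the ratios reported, but this is not the paper's experiment).

```python
import numpy as np

def switch_aggregate(worker_updates):
    """What an in-network aggregation element computes: a commutative,
    associative reduction, here an elementwise sum of gradient vectors."""
    return np.sum(worker_updates, axis=0)

workers = [np.random.randn(1_000_000).astype(np.float32) for _ in range(8)]
aggregated = switch_aggregate(workers)

bytes_in = sum(w.nbytes for w in workers)   # traffic arriving at the switch
bytes_out = aggregated.nbytes               # traffic forwarded onward
print(f"data reduction: {1 - bytes_out / bytes_in:.1%}")   # 87.5% for n=8
```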

  18. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo

    2017-11-27

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  19. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and a 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to the various spatial filters. The computer code allows the number of pixels of processed images to be increased without noticeable reduction of image quality. The performance of the computer code can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects smaller than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise, we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and they are a very promising solution for the security problem.
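
    The authors' spatial filters are original to their study and are not reproduced here; the following is only a generic stand-in built with SciPy: a per-filter denoise-and-sharpen pipeline that turns one captured frame into several candidate images, mirroring the "about 20 images per input image" processing style described. Filter sizes are illustrative.

```python
import numpy as np
from scipy import ndimage

def candidate_images(img, sizes=(3, 5, 7)):
    """Produce several filtered versions of one THz frame, one per spatial
    filter (generic filters, not the authors' own)."""
    out = []
    for s in sizes:
        den = ndimage.median_filter(img, size=s)          # noise suppression
        blur = ndimage.gaussian_filter(den, sigma=s / 3.0)
        out.append(np.clip(2.0 * den - blur, 0.0, None))  # unsharp masking
    return out
```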

  20. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  2. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  3. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and status of current time-segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). This approach utilizes the tree structure and size ordering that form among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), and thereby achieves a unified integer coding scheme for multi-scale time. On this foundation, the research also studies computing methods for the time relationships of MTSIC, to support efficient calculation and querying based on time segments, and gives a preliminary discussion of the application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it achieves very high efficiency in querying and calculation.
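
    The abstract does not define the MTSIC code itself, so the following is a loose, hypothetical illustration of the general idea only: a heap-style numbering of dyadic time segments in which a single integer encodes both scale and position, and tree relations such as containment reduce to integer shifts and comparisons.

```python
def encode(level, pos):
    """One illustrative integer code for a dyadic time segment: at `level`
    the timeline splits into 2**level equal segments and `pos` indexes
    them.  (Not MTSIC's actual code; this heap-style numbering merely
    shows how tree relations map onto integer arithmetic.)"""
    return (1 << level) + pos

def level_of(code):
    return code.bit_length() - 1

def contains(a, b):
    """True if segment a contains segment b: a must be an ancestor of b
    in the implicit binary tree, found by right-shifting b's code."""
    shift = level_of(b) - level_of(a)
    return shift >= 0 and (b >> shift) == a

# The 2nd quarter of the timeline lies inside its first half:
print(contains(encode(1, 0), encode(2, 1)))   # -> True
```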

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  5. An assessment of the real-time application capabilities of the SIFT computer system

    Science.gov (United States)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  6. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Kolmanovsky Ilya

    1998-01-01

    Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
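
    The paper's algorithms for maximal invariant sets are not reproduced here. As a small related computation (an outer bound, not the maximal set), for a Schur-stable system x(k+1) = A x(k) + w(k) with a box disturbance |w_i| <= wbar_i, the minimal disturbance-invariant set, the Minkowski sum of the sets A^k W, can be bounded componentwise by the truncated sum over k of |A^k| wbar:

```python
import numpy as np

def rpi_box_bound(A, w_bar, n_terms=200):
    """Componentwise outer bound on the minimal disturbance-invariant set
    of x+ = A x + w with |w_i| <= w_bar_i, via the truncated sum of
    |A^k| w_bar.  Valid (convergent) only for Schur-stable A; this is a
    crude bound, not the paper's maximal-set algorithms."""
    bound = np.zeros_like(w_bar, dtype=float)
    Ak = np.eye(A.shape[0])
    for _ in range(n_terms):
        bound += np.abs(Ak) @ w_bar
        Ak = Ak @ A
    return bound   # the box {x : |x_i| <= bound_i} contains the minimal set

A = np.array([[0.9, 0.2], [0.0, 0.7]])
print(rpi_box_bound(A, np.array([0.1, 0.1])))
```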

  7. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results prove that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithmic generality, hardware realizability and high efficiency.
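
    The hardware architecture itself is not modeled below; the sketch only shows the kind of time-frequency-domain restoration core such architectures accelerate, here as Wiener deconvolution, whose cost is dominated by the two-dimensional FFT/IFFT and complex-valued arithmetic named in the abstract. The regularization constant is illustrative.

```python
import numpy as np

def wiener_restore(blurred, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution.  `psf` must have the same
    shape as the image and be centered; k regularizes against noise.
    The 2-D FFT/IFFT and complex multiplies here are the operations a
    TFDC hardware design would accelerate."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # move PSF center to (0, 0)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G   # Wiener filter
    return np.real(np.fft.ifft2(F_hat))
```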

  8. Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application

    Directory of Open Access Journals (Sweden)

    Molina Gary N Garcia

    2003-01-01

    Full Text Available The brain-computer interface is a growing field of interest in human-computer interaction, with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for the classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments on two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as for training with and without feedback, were also developed. The obtained results show that it is possible to obtain good classification of simple mental tasks, in view of command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.
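
    The paper's joint decorrelation transform is not public in this record, so the sketch below substitutes a generic time-frequency-space feature map (an STFT power map per EEG channel, i.e., the space dimension) feeding an off-the-shelf classifier; all data shapes, rates and names are assumptions.

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def tfs_features(trials, fs=128):
    """Joint time-frequency-space features: one STFT power map per channel
    (space), flattened into a single vector per trial.
    trials: array of shape (n_trials, n_channels, n_samples)."""
    feats = []
    for trial in trials:
        chans = [np.abs(stft(ch, fs=fs, nperseg=64)[2]) ** 2 for ch in trial]
        feats.append(np.log(np.stack(chans) + 1e-12).ravel())
    return np.array(feats)

# Hypothetical training data: X_raw (trials x channels x samples), labels y.
# clf = LinearDiscriminantAnalysis().fit(tfs_features(X_raw), y)
# prediction = clf.predict(tfs_features(new_trials))
```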

  9. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation is essentially an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  10. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

    This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then we can achieve our design objectives via a controller which will be quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). A further aim is to implement this controller via computer in a real-time environment. Therefore, first a nonlinear mathematical model of the system is derived. Careful work is done to decouple the multivariable system. Linearization and stability analysis techniques are employed for the development of a linearized and mathematically sound control law. Nonlinearities, such as saturation in the actuators, are also catered for. The controller is then discretized using Runge-Kutta integration. Finally, the discretized control law is programmed on a computer in a real-time environment. The program is written in RT-Linux using GNU C for the real-time realization of the control scheme. The real-time processes, such as sampling and controlled actuation, and the non-real-time processes, such as the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between the real-time and non-real-time tasks is addressed quite carefully. The results of this research pursuit are presented graphically. (author)
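
    As a small illustration of the discretization step mentioned above (in Python rather than the paper's GNU C), here is a classical fourth-order Runge-Kutta update for a controller state together with the actuator-saturation clamp; the controller matrices in the usage comments are hypothetical.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """Classical fourth-order Runge-Kutta step for dx/dt = f(x, u),
    the kind of discretization the abstract applies to its control law."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def saturate(u, u_max):
    """Actuator saturation, the nonlinearity catered for in the design."""
    return np.clip(u, -u_max, u_max)

# Each real-time sampling period: read sensors, advance the controller
# state, saturate, and actuate (Ac, Bc, Cc are hypothetical matrices).
# xc = rk4_step(lambda x, e: Ac @ x + Bc @ e, xc, error, dt)
# u  = saturate(Cc @ xc, u_max)
```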

  11. Shorter Fallow Cycles Affect the Availability of Noncrop Plant Resources in a Shifting Cultivation System

    Directory of Open Access Journals (Sweden)

    Sarah Paule Dalle

    2006-12-01

    Full Text Available Shifting cultivation systems, one of the most widely distributed forms of agriculture in the tropics, provide not only crops of cultural significance, but also medicinal, edible, ritual, fuel, and forage resources, which contribute to the livelihoods, health, and cultural identity of local people. In many regions across the globe, shifting cultivation systems are undergoing important changes, one of the most pervasive being a shortening of the fallow cycle. Although there has been much attention drawn to declines in crop yields in conjunction with reductions in fallow times, little if any research has focused on the dynamics of noncrop plant resources. In this paper, we use a data set of 26 fields of the same age, i.e., ~1.5 yr, but differing in the length and frequency of past fallow cycles, to examine the impact of shorter fallow periods on the availability of noncrop plant resources. The resources examined are collected in shifting cultivation fields by the Yucatec Maya in Quintana Roo, Mexico. These included firewood, which is cut from remnant trees and stumps spared at the time of felling, and 17 forage species that form part of the weed vegetation. Firewood showed an overall decrease in basal area with shorter fallow cycles, which was mostly related to the smaller diameter of the spared stumps and trees in short-fallow milpas. In contrast, forage species showed a mixed response. Species increasing in abundance in short-fallow milpas tended to be short-lived herbs and shrubs often with weedy habits, whereas those declining in abundance were predominantly pioneer trees and animal-dispersed species. Coppicing tree species showed a neutral response to fallow intensity. Within the cultural and ecological context of our study area, we expect that declines in firewood availability will be most significant for livelihoods because of the high reliance on firewood for local fuel needs and the fact that the main alternative source of firewood, forest

  12. Copyright and Computer Generated Materials – Is it Time to Reboot the Discussion About Authorship?

    Directory of Open Access Journals (Sweden)

    Anne Fitzgerald

    2013-12-01

    Full Text Available Computer generated materials are ubiquitous and we encounter them on a daily basis, even though most people are unaware that this is the case. Blockbuster movies, television weather reports and telephone directories all include material that is produced by utilising computer technologies. Copyright protection for materials generated by a programmed computer was considered by the Federal Court and Full Court of the Federal Court in Telstra Corporation Limited v Phone Directories Company Pty Ltd. The court held that the White and Yellow pages telephone directories produced by Telstra and its subsidiary, Sensis, were not protected by copyright because they were computer-generated works which lacked the requisite human authorship. The Copyright Act 1968 (Cth) does not contain specific provisions on the subsistence of copyright in computer-generated materials. Although the issue of copyright protection for computer-generated materials has been examined in Australia on two separate occasions by independently-constituted Copyright Law Review Committees over a period of 10 years (1988 to 1998), the Committees' recommendations for legislative clarification by the enactment of specific amendments to the Copyright Act have not yet been implemented and the legal position remains unclear. In the light of the decision of the Full Federal Court in Telstra v Phone Directories it is timely to consider whether specific provisions should be enacted to clarify the position of computer-generated works under copyright law and, in particular, whether the requirement of human authorship for original works protected under Part III of the Copyright Act should now be reconceptualised to align with the realities of how copyright materials are created in the digital era.

  13. Control bandwidth improvements in GRAVITY fringe tracker by switching to a synchronous real time computer architecture

    Science.gov (United States)

    Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than

    2016-08-01

    The new VLTI (Very Large Telescope Interferometer) instrument GRAVITY is equipped with a fringe tracker able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve, for average seeing conditions, a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real time computer system comprising: a sensor, where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller, receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator, which combines the OPD commands with the real time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman parameter estimator. This last stage is used to monitor current measurements over a window of a few seconds and estimate new values for the main Kalman control loop parameters. The hardware and software implementation of this design runs asynchronously and communicates between the four computers for data transfer via the Reflective Memory Network. With the purpose of improving the performance of the GRAVITY fringe tracking control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real time computers involved in the tracking loop synchronously, using the Reflective Memory interrupts as the coordination signal. This synchronous mechanism had the effect of reducing the total pure delay of the loop from 3.5 ms to 2.0 ms, which translates into a better stabilization of the fringes as the bandwidth of the system is substantially improved. This paper will explain in detail the real time architecture of the fringe tracker in both its asynchronous and synchronous implementations. The achieved improvements on reducing the delay via this mechanism will be

  14. SHORTER MENSTRUAL CYCLES ASSOCIATED WITH CHLORINATION BY-PRODUCTS IN DRINKING WATER

    Science.gov (United States)

    Shorter Menstrual Cycles Associated with Chlorination by-Products in Drinking Water. Gayle Windham, Kirsten Waller, Meredith Anderson, Laura Fenster, Pauline Mendola, Shanna Swan. California Department of Health Services.In previous studies of tap water consumption we...

  15. Shorter exposures to harder X-rays trigger early apoptotic events in Xenopus laevis embryos.

    Directory of Open Access Journals (Sweden)

    JiaJia Dong

    Full Text Available BACKGROUND: A long-standing conventional view of radiation-induced apoptosis is that increased exposure results in augmented apoptosis in a biological system, with a threshold below which radiation doses do not cause any significant increase in cell death. The consequences of this belief impact the extent to which malignant diseases and non-malignant conditions are therapeutically treated and how radiation is used in combination with other therapies. Our research challenges the current dogma of dose-dependent induction of apoptosis and establishes a new parallel paradigm to the photoelectric effect in biological systems. METHODOLOGY/PRINCIPAL FINDINGS: We explored how the energy of individual X-ray photons and exposure time, both factors that determine the total dose, influence the occurrence of cell death in early Xenopus embryo. Three different experimental scenarios were analyzed and morphological and biochemical hallmarks of apoptosis were evaluated. Initially, we examined cell death events in embryos exposed to increasing incident energies when the exposure time was preset. Then, we evaluated the embryo's response when the exposure time was augmented while the energy value remained constant. Lastly, we studied the incidence of apoptosis in embryos exposed to an equal total dose of radiation that resulted from increasing the incoming energy while lowering the exposure time. CONCLUSIONS/SIGNIFICANCE: Overall, our data establish that the energy of the incident photon is a major contributor to the outcome of the biological system. In particular, for embryos exposed under identical conditions and delivered the same absorbed dose of radiation, the response is significantly increased when shorter bursts of more energetic photons are used. These results suggest that biological organisms display properties similar to the photoelectric effect in physical systems and provide new insights into how radiation-mediated apoptosis should be understood and

  16. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  17. Real-time computing in environmental monitoring of a nuclear power plant

    International Nuclear Information System (INIS)

    Deme, S.; Lang, E.; Nagy, Gy.

    1987-06-01

    A real-time computing method is described for calculating the environmental radiation exposure due to a nuclear power plant, both during normal operation and in case of an accident. The effects of the Gaussian plume are recalculated every ten minutes based on meteorological parameters measured at heights of 20 and 120 m as well as on emission data. During normal operation the quantity of radioactive materials released through the stacks is measured and registered while, during an accident, the source strength is unknown and the calculated relative data are normalized to the values measured at the eight environmental monitoring stations. The doses due to noble gases and to dry and wet deposition, as well as the time integral of the ¹³¹I concentration, are calculated and stored by a professional personal computer for 720 points of the environment within an 11 km radius. (author)
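    The record does not give the plant's exact dispersion model; the sketch below implements the standard Gaussian plume formula with ground reflection, the kind of calculation that would be re-evaluated every ten minutes from fresh meteorological data. The emission rate, wind speed, stack height and dispersion coefficients are illustrative assumptions.

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Standard Gaussian plume chi(x, y, z) at a downwind receptor, with ground
    reflection; sigma_y, sigma_z are already evaluated at the downwind distance."""
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2.0 * sigma_z**2))))

# Illustrative values: 1e9 Bq/s release, 4 m/s wind, 100 m effective stack height,
# dispersion coefficients corresponding to some downwind distance of interest.
chi = plume_concentration(Q=1e9, u=4.0, y=0.0, z=1.5, H=100.0,
                          sigma_y=80.0, sigma_z=40.0)
print(f"ground-level activity concentration: {chi:.3e} Bq/m^3")
```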

  18. The Educator´s Approach to Media Training and Computer Games within Leisure Time of School-children

    OpenAIRE

    MORAVCOVÁ, Dagmar

    2009-01-01

    The paper describes possible ways of approaching computer games playing as part of leisure time of school-children and deals with the significance of media training in leisure time. At first it specifies the concept of leisure time and its functions, then shows some positive and negative effects of the media. It further describes classical computer games, the problem of excess computer game playing and means of prevention. The paper deals with the educator's personality and the importance of ...

  19. Computation of the Short-Time Linear Canonical Transform with Dual Window

    Directory of Open Access Journals (Sweden)

    Lei Huang

    2017-01-01

    Full Text Available The short-time linear canonical transform (STLCT), which maps the time domain signal into the joint time and frequency domain, has recently attracted some attention in the area of signal processing. However, its applications are still limited due to the fact that the selection of coefficients of the short-time linear canonical series (STLCS) is not unique, because the time and frequency elementary functions (together known as the basis functions of the STLCS) do not constitute an orthogonal basis. To solve this problem, this paper investigates a dual window solution. First, the nonorthogonality problem suffered by the original window is resolved by an orthogonality condition on the dual window. Then, based on the obtained condition, a dual window computation approach of the GT is extended to the STLCS. In addition, simulations verify the validity of the proposed condition and solutions. Furthermore, some possible applied directions are discussed.

  20. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    Speech-based emotion recognition is simpler and requires fewer computational resources than other input modalities such as facial expressions; the Berlin Database of Emotional Speech is used. (Only these fragments of the record are recoverable; the remainder is report-form and reference-list residue.)

  1. Computational model for real-time determination of tritium inventory in a detritiation installation

    International Nuclear Information System (INIS)

    Bornea, Anisia; Stefanescu, Ioan; Zamfirache, Marius; Stefan, Iuliana; Sofalca, Nicolae; Bidica, Nicolae

    2008-01-01

    Full text: At ICIT Rm. Valcea an experimental pilot plant was built with the main objective of developing a technology for the detritiation of heavy water processed in the CANDU-type reactors of the nuclear power plant at Cernavoda, Romania. Since the aspects related to safeguards and safety for such a detritiation installation are of great importance, a complex computational model has been developed. The model allows real-time calculation of the tritium inventory in a working installation. The applied detritiation technology is catalyzed isotopic exchange coupled with cryogenic distillation. Computational models for non-steady working conditions have been developed for each isotopic exchange process. By coupling these processes, the tritium inventory can be determined in real time. The computational model was developed based on the experience gained with the pilot installation. The model uses a set of parameters specific to the isotopic exchange processes. These parameters were experimentally determined in the pilot installation. The model is included in the monitoring system and uses as input data the parameters acquired in real time from the automation system of the pilot installation. A friendly interface has been created to visualize the final results as data or graphs. (authors)

  2. A real-time computational model for estimating kinematics of ankle ligaments.

    Science.gov (United States)

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan

    2016-01-01

    An accurate assessment of ankle ligament kinematics is crucial in understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. The model takes as inputs three position variables that can be measured from sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparison with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model are in agreement with those from the published studies but are sensitive to ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.

  3. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and non-polynomial-time reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
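    For contrast with Gilbert's recursion, the quantity itself can be written down by brute force: sum, over all subsets of edges, the probability that exactly those edges survive and the graph stays connected. This is Θ(2^m) in the edge count and only usable on toy graphs, which is precisely the cost that the invariant-based approach is designed to avoid. The graph below is a made-up example, not one from the paper.

```python
from itertools import combinations

def connected(nodes, edges):
    """Check connectivity of `nodes` under the surviving `edges` via DFS."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(nodes)

def all_terminal_reliability(nodes, edges, p):
    """Probability the graph stays connected when each edge survives independently with probability p."""
    m = len(edges)
    return sum(p**len(s) * (1 - p)**(m - len(s))
               for k in range(m + 1)
               for s in combinations(edges, k)
               if connected(nodes, s))

# A 4-cycle survives any single edge failure: reliability = p^4 + 4 p^3 (1-p).
nodes = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(all_terminal_reliability(nodes, edges, p=0.9))  # ≈ 0.9477
```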

  4. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  5. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
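    The core of the QR-based Gram–Schmidt procedure both records describe can be sketched on a toy system: propagate a set of orthonormal tangent vectors with the Jacobian, re-orthonormalize with a QR decomposition each step, and accumulate the logarithms of the diagonal of R. The Hénon map stands in here for the Lennard–Jones fluid, whose Jacobians are the large N²-sized matrices that motivate the GPU implementation.

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

x = np.array([0.1, 0.1])
Q = np.eye(2)                 # orthonormal tangent vectors
log_r = np.zeros(2)
n_steps = 100_000
for _ in range(n_steps):
    Q, R = np.linalg.qr(henon_jacobian(x) @ Q)   # propagate, then re-orthonormalize
    log_r += np.log(np.abs(np.diag(R)))
    x = henon(x)

print(log_r / n_steps)  # Lyapunov exponents, ≈ (0.42, -1.62) for these parameters
```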

  6. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining the two approaches, and as a result neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to modeling the river flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation, etc. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model building process.

  7. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1990-01-01

    At the first workshop in 1985 we reported on the real-time dose computing method used at the Paks Nuclear Power Plant and on the telemetric system developed for the normalization of the computed data. At present, the computing method normalized to the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. In this connection we analyzed the reliability of the results obtained in this manner. The points of the analysis were: how the results are influenced by the choice of certain parameters that cannot be determined by direct methods, and how improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (¹³¹I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. A further source of errors may be that, when determining the level of gamma radiation, the radionuclide doses in the cloud and on the ground surface are measured together by the environmental monitoring stations, whereas these doses appear separately in the computations. At the Paks NPP it is the time integral of the airborne activity concentration of ¹³¹I in vapour form which is determined. This quantity includes neither the other physical and chemical forms of ¹³¹I nor the other isotopes of radioiodine. We give numerical examples of the uncertainties due to the above factors. As a result, we arrived at the conclusion that, for decisions on accident-related measures based on the computing method, the dose uncertainties may reach one order of magnitude for points lying far from the monitoring stations. Different measures are discussed to make the uncertainties significantly lower.

  8. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking was performed. Visual stimuli included eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment, CAREN) and a direct vision environment. Subjects were required to verbally report the results of these math calculations in a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was only found for the reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program with a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time changes between direct vision and virtual environments.

  9. Computer simulation of the time evolution of a quenched model alloy in the nucleation region

    International Nuclear Information System (INIS)

    Marro, J.; Lebowitz, J.L.; Kalos, M.H.

    1979-01-01

    The time evolution of the structure function and of the cluster (or grain) distribution following quenching in a model binary alloy with a small concentration of minority atoms is obtained from computer simulations. The structure function S̄(k,t) obeys a simple scaling relation, S̄(k,t) = K^{-3} F(k/K) with K(t) ∝ t^{-a}, a ≈ 0.25, during the latter and larger part of the evolution. During the same period, the mean cluster size grows approximately linearly with time.

  10. Manual cross check of computed dose times for motorised wedged fields

    International Nuclear Information System (INIS)

    Porte, J.

    2001-01-01

    If a mass of tissue equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields, to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
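    A minimal sketch of the apportionment calculation follows, using the common first-order relation tan(θ_eff) = f·tan(θ_w), where f is the fraction of beam-on time (strictly, of dose) delivered through the wedged field; this relation and the 60° physical wedge are stated assumptions, not details from the paper. It reproduces the paper's point: a 30° effective wedge from a 60° physical wedge needs only about a third of the time wedged, not half.

```python
import math

def motorized_wedge_times(total_time, theta_eff_deg, theta_wedge_deg=60.0):
    """Split total beam-on time between open and wedged fields so that the
    combination approximates an effective wedge of angle theta_eff, using the
    first-order rule tan(theta_eff) = f * tan(theta_wedge)."""
    f = math.tan(math.radians(theta_eff_deg)) / math.tan(math.radians(theta_wedge_deg))
    if not 0.0 <= f <= 1.0:
        raise ValueError("effective angle cannot exceed the physical wedge angle")
    return (1.0 - f) * total_time, f * total_time

open_t, wedged_t = motorized_wedge_times(total_time=60.0, theta_eff_deg=30.0)
print(f"open: {open_t:.1f} s, wedged: {wedged_t:.1f} s")  # ≈ 40.0 s open, 20.0 s wedged
```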

  11. Computations of concentration of radon and its decay products against time. Computer program; Obliczanie koncentracji radonu i jego produktow rozpadu w funkcji czasu. Program komputerowy

    Energy Technology Data Exchange (ETDEWEB)

    Machaj, B. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    This research is aimed at developing a device for continuous monitoring of radon in the air by measuring the alpha activity of radon and its short-lived decay products. The influence of the variation of the alpha activity of radon and its daughters on the measured results is of importance and requires a knowledge of this variation with time. Employing the measurement of alpha radiation of radon and of its short-lived decay products requires knowledge of the variation of radon concentration and its decay products against time. A computer program in the Turbo Pascal language was therefore developed to perform the computations employing the known relations involved, the program being adapted for IBM PC computers. The presented program enables computation of the activity of ²²²Rn and its daughter products ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi and ²¹⁴Po every 1 min within the period of 0-255 min for any state of radiation equilibrium between radon and its daughter products. The program also permits computation of the alpha activity of ²²²Rn + ²¹⁸Po + ²¹⁴Po against time and the total alpha activity in a selected interval of time. The results of the computations are stored on the computer hard disk in ASCII format and are used by a graphics program, e.g. DrawPerfect, to make diagrams. Equations employed for the computation of the alpha activity of radon and its decay products, as well as a description of the program functions, are given. (author). 2 refs, 4 figs.
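    The decay-chain arithmetic such a program performs can be reproduced with the Bateman equations in matrix form: the chain ²²²Rn → ²¹⁸Po → ²¹⁴Pb → ²¹⁴Bi (→ ²¹⁴Po) is a linear ODE system dN/dt = AN solved by a matrix exponential. ²¹⁴Po (half-life ~164 µs) is treated as in instant secular equilibrium with ²¹⁴Bi. A sketch, not the original Turbo Pascal program:

```python
import numpy as np
from scipy.linalg import expm

# Half-lives in minutes: Rn-222 (3.82 d), Po-218, Pb-214, Bi-214.
half_lives = np.array([3.82 * 24 * 60, 3.05, 26.8, 19.7])
lam = np.log(2.0) / half_lives

# Chain matrix: species i decays away (-lam[i]) and feeds species i+1.
A = -np.diag(lam)
for i in range(1, len(lam)):
    A[i, i - 1] = lam[i - 1]

N0 = np.array([1.0, 0.0, 0.0, 0.0])  # pure radon, no daughters at t = 0
for t in (0, 30, 60, 120, 255):       # minutes, cf. the program's 0-255 min range
    N = expm(A * t) @ N0
    act = lam * N                      # activities of Rn-222, Po-218, Pb-214, Bi-214
    # Total alpha activity: Rn-222 + Po-218 + Po-214, with Po-214's activity
    # taken equal to Bi-214's because of its microsecond half-life.
    alpha = act[0] + act[1] + act[3]
    print(f"t = {t:3d} min  alpha activity = {alpha:.3e} (arbitrary units)")
```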

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  13. Television Viewing, Computer Use, Time Driving and All‐Cause Mortality: The SUN Cohort

    Science.gov (United States)

    Basterra‐Gortari, Francisco Javier; Bes‐Rastrollo, Maira; Gea, Alfredo; Núñez‐Córdoba, Jorge María; Toledo, Estefanía; Martínez‐González, Miguel Ángel

    2014-01-01

    Background Sedentary behaviors have been directly associated with all-cause mortality. However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all-cause mortality. Methods and Results In this prospective, dynamic cohort study (the SUN Project) 13 284 Spanish university graduates with a mean age of 37 years were followed up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All-cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥3 h/day of television viewing than for those reporting the least viewing time. Conclusions Television viewing was directly associated with all-cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation. PMID:24965030

  14. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    International Nuclear Information System (INIS)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  15. Computational derivation of quantum relativist electromagnetic systems with forward-backward space-time shifts

    International Nuclear Information System (INIS)

    Dubois, Daniel M.

    2000-01-01

    This paper is a continuation of our preceding paper dealing with the computational derivation of the Klein-Gordon quantum relativist equation and the Schroedinger quantum equation with forward and backward space-time shifts. The first part introduces forward and backward derivatives for discrete and continuous systems. Generalized complex discrete and continuous derivatives are deduced. The second part deduces the Klein-Gordon equation from the space-time complex continuous derivatives. These derivatives take into account forward-backward space-time shifts related to an internal phase velocity u. The internal group velocity v is related to the speed of light by u·v = c² and to the external group and phase velocities by u·v = v_g·v_p. Without time shift, the Schroedinger equation is deduced, with a supplementary term which could represent a reference potential. The third part deduces the quantum relativist Klein-Gordon equation for a particle in an electromagnetic field.

  16. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the "LIST" mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the "HISTOGRAM" mode, in which image data were stored in a core memory as digital images and then the images were transferred to magnetic disk or tape on the frame timing signal. Firstly, the counting-rate stored in the buffer memory was measured as a function of the display event-rate of the scintillation camera for the two modes. For both modes, the stored counting-rate (M) was expressed by the following formula: M = N(1 − Nτ), where N is the display event-rate of the camera and τ is the resolving time, including the analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μs for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the "LIST" mode, the maximum value of the stored counting-rate from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the "HISTOGRAM" mode, the minimum frame time was determined by the size of the buffer memory and the access time and transfer rate of the external memory. In our system, the maximum stored counting-rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems becomes possible for quantitative dynamic imaging, and future improvements are suggested. (author)
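    The loss model M = N(1 − Nτ) is simple enough to sketch directly, including its inversion for dead-time correction (the physically meaningful smaller root of the quadratic). The 10 µs resolving time is the value reported in the record; the event rate is an arbitrary example.

```python
import math

TAU = 10e-6  # resolving time (s): ADC conversion plus memory cycle, as reported

def stored_rate(n, tau=TAU):
    """Stored counting rate M = N (1 - N * tau) for display event rate N."""
    return n * (1.0 - n * tau)

def true_rate(m, tau=TAU):
    """Invert M = N (1 - N * tau); the smaller quadratic root is the physical one."""
    return (1.0 - math.sqrt(1.0 - 4.0 * tau * m)) / (2.0 * tau)

n = 20_000.0                      # true display event rate (counts/s)
m = stored_rate(n)                # rate actually stored through the buffer
print(f"stored: {m:.0f} c/s, corrected back: {true_rate(m):.0f} c/s")
# stored: 16000 c/s, corrected back: 20000 c/s
```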

  17. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    Science.gov (United States)

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
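    The mechanism evaluated in the paper can be mimicked with Python's standard hmac module: tag each time-triggered frame, verify on receipt, and time the per-frame cost. The 8-byte truncated tag and the key below are illustrative choices, not the paper's parameters.

```python
import hmac
import hashlib
import time

KEY = b"pre-shared-network-key"   # hypothetical key distributed to TT nodes
TAG_LEN = 8                       # truncated tag, a common bandwidth compromise

def authenticate(frame: bytes) -> bytes:
    """Append a truncated HMAC-SHA256 tag to a time-triggered frame."""
    tag = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
    return frame + tag

def verify(msg: bytes) -> bool:
    frame, tag = msg[:-TAG_LEN], msg[-TAG_LEN:]
    expected = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
    return hmac.compare_digest(tag, expected)  # constant-time comparison

frame = bytes(64)                 # a 64-byte payload, typical of small TT frames
t0 = time.perf_counter()
for _ in range(10_000):
    msg = authenticate(frame)
elapsed = time.perf_counter() - t0
assert verify(msg)
print(f"mean authentication cost: {elapsed / 10_000 * 1e6:.1f} us/frame")
```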

  18. Cloud computing platform for real-time measurement and verification of energy performance

    International Nuclear Information System (INIS)

    Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie

    2017-01-01

    Highlights: • Application of the PSO algorithm can improve the accuracy of the baseline model. • The M&V cloud platform automatically calculates energy performance. • The M&V cloud platform can be applied to all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • The M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations. This requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, the M&V period typically lasts for several months or up to a year; failure to immediately detect abnormal energy performance not only decreases energy performance but also prevents timely correction and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculation of the energy performance and access to long-term, cumulative information about the energy performance are provided via a feature that allows direct uploads of the energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested in a case study involving improvements to a cold storage system in a hypermarket. The cloud computing platform for real-time energy performance M&V is applicable to any industry and energy conservation measure. With the M&V cloud platform, real-time

  19. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    Science.gov (United States)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.

  20. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background When inferring phylogenetic trees different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...
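    For orientation, the quantity itself is easy to compute naively: enumerate all leaf quartets, find the topology each tree induces on a quartet (via the tree's bipartitions), and count disagreements. The sketch below costs at least O(n⁴), exactly the kind of expense the sub-cubic algorithm avoids; trees are written as nested tuples for brevity.

```python
from itertools import combinations

def leaf_set(t):
    return {t} if not isinstance(t, tuple) else set().union(*map(leaf_set, t))

def splits(t):
    """Leaf sets hanging below every edge of a nested-tuple tree."""
    out = []
    def walk(sub):
        if isinstance(sub, tuple):
            for child in sub:
                out.append(frozenset(leaf_set(child)))
                walk(child)
    walk(t)
    return out

def topology(tree_splits, quartet):
    """Canonical topology a tree induces on four leaves, or None if unresolved."""
    for s in tree_splits:
        pair = frozenset(x for x in quartet if x in s)
        if len(pair) == 2:
            return frozenset({pair, quartet - pair})
    return None

def quartet_distance(t1, t2):
    leaves = leaf_set(t1)
    assert leaves == leaf_set(t2), "trees must share the same leaf set"
    s1, s2 = splits(t1), splits(t2)
    return sum(topology(s1, q) != topology(s2, q)
               for q in map(frozenset, combinations(leaves, 4)))

t1 = ((("a", "b"), "c"), ("d", "e"))
t2 = ((("a", "c"), "b"), ("d", "e"))
print(quartet_distance(t1, t2))  # 2: the quartets abcd and abce disagree
```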

  1. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as e_L* = 5.001%, e_a* = 2.287%, and e_b* = 4.314%, which ensures adequate and efficient application for the automation of industrial processes in quality control in the food industry sector.

  2. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as e_L* = 5.001%, e_a* = 2.287%, and e_b* = 4.314%, which ensures adequate and efficient application for the automation of industrial processes in quality control in the food industry sector.
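    The color pipeline in both records reduces to converting camera RGB into CIE L*a*b*. A device-independent sketch using the standard sRGB/D65 constants is below; a real CVS like the one described would replace these constants with values fitted during colorimeter calibration.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIE L*a*b* under a D65 illuminant."""
    c = np.asarray(rgb, dtype=float) / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)  # linearize
    M = np.array([[0.4124, 0.3576, 0.1805],     # standard sRGB -> XYZ matrix
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (M @ c) / np.array([0.95047, 1.0, 1.08883])  # scale by D65 white point
    f = np.where(xyz > (6/29)**3, np.cbrt(xyz), xyz / (3 * (6/29)**2) + 4/29)
    return 116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])  # L*, a*, b*

print(srgb_to_lab((255, 0, 0)))  # ≈ (53.2, 80.1, 67.2) for pure red
```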

  3. A stable computational scheme for stiff time-dependent constitutive equations

    International Nuclear Information System (INIS)

    Shih, C.F.; Delorenzi, H.G.; Miller, A.K.

    1977-01-01

    Viscoplasticity and creep type constitutive equations are increasingly being employed in finite element codes for evaluating the deformation of high temperature structural members. These constitutive equations frequently exhibit stiff regimes which makes an analytical assessment of the structure very costly. A computational scheme for handling deformation in stiff regimes is proposed in this paper. By the finite element discretization, the governing partial differential equations in the spatial (x) and time (t) variables are reduced to a system of nonlinear ordinary differential equations in the independent variable t. The constitutive equations are expanded in a Taylor's series about selected values of t. The resulting system of differential equations are then integrated by an implicit scheme which employs a predictor technique to initiate the Newton-Raphson procedure. To examine the stability and accuracy of the computational scheme, a series of calculations were carried out for uniaxial specimens and thick wall tubes subjected to mechanical and thermal loading. (Auth.)
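    The flavour of scheme described, implicit integration with a predictor initiating a Newton-Raphson iteration, can be sketched on a scalar stiff equation. Backward Euler stands in here for the paper's scheme (which additionally Taylor-expands the constitutive law), and the 10⁴ rate constant is an arbitrary stand-in for a stiff creep regime.

```python
def backward_euler_newton(f, dfdx, x0, dt, steps, tol=1e-12, max_iter=50):
    """Solve dx/dt = f(x) with backward Euler: x1 = x0 + dt * f(x1),
    each step solved by Newton-Raphson started from the previous value."""
    xs = [x0]
    x = x0
    for _ in range(steps):
        y = x  # predictor: last accepted state seeds the Newton iteration
        for _ in range(max_iter):
            g = y - x - dt * f(y)              # residual of the implicit equation
            step = g / (1.0 - dt * dfdx(y))    # Newton update
            y -= step
            if abs(step) < tol:
                break
        x = y
        xs.append(x)
    return xs

# Stiff relaxation toward 1 with rate 1e4: explicit Euler needs dt < 2e-4 for
# stability, while the implicit scheme is stable even at dt = 0.1.
sol = backward_euler_newton(lambda x: 1e4 * (1.0 - x), lambda x: -1e4,
                            x0=0.0, dt=0.1, steps=5)
print(sol)  # converges to 1.0 without the explicit-scheme stability limit
```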

  4. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    Science.gov (United States)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  5. Fault tolerant distributed real time computer systems for I and C of prototype fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2014-03-15

    Highlights: • The architecture of the distributed real-time computer system (DRTCS) used in I and C of PFBR is explained. • The fault tolerant (hot standby) architecture, fault detection and switch-over are detailed. • A scaled-down model was used to study the functional and performance requirements of the DRTCS. • Quality of service parameters for the scaled-down model were critically studied. - Abstract: The prototype fast breeder reactor (PFBR) is in an advanced stage of construction at Kalpakkam, India. A three-tier architecture is adopted for the instrumentation and control (I and C) of PFBR, wherein the bottom tier consists of real-time computer (RTC) systems, the middle tier consists of process computers and the top tier constitutes display stations. These RTC systems are geographically distributed and networked together with the process computers and display stations. A hot standby architecture comprising dual redundant RTC systems with a switch-over logic system is deployed in order to achieve fault tolerance. Fault tolerant dual redundant network connectivity is provided in each RTC system, and TCP/IP is selected as the protocol for network communication. In order to assess the performance of the distributed RTC systems, a scaled-down model was developed with 9 representative systems, and nearly 15% of the I and C signals of PFBR were connected and monitored. Functional and performance testing were carried out for each RTC system, and the fault tolerant characteristics were studied by injecting various faults into the system and observing the performance. Various quality of service parameters like connection establishment delay, priority parameter, transit delay, throughput, residual error ratio, etc., were critically studied for the network.

  6. 21 CFR 10.20 - Submission of documents to Division of Dockets Management; computation of time; availability for...

    Science.gov (United States)

    2010-04-01

    ... Management; computation of time; availability for public disclosure. 10.20 Section 10.20 Food and Drugs FOOD... Management; computation of time; availability for public disclosure. (a) A submission to the Division of Dockets Management of a petition, comment, objection, notice, compilation of information, or any other...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  8. Computer modelling of structures with account of the construction stages and the time dependent material properties

    Directory of Open Access Journals (Sweden)

    Traykov Alexander

    2015-01-01

    Full Text Available Numerical studies are performed on computer models taking into account the stages of construction and time-dependent material properties defined in two forms. A 2D model of a three-storey, two-span frame is created. In the first form, the material is defined in the usual design practice way, without taking into account the time-dependent properties of the concrete. In the second form, creep and shrinkage of the concrete are taken into account. Displacements and internal forces in specific elements and sections are reported. The influence of the time-dependent material properties on the displacements and the internal forces in the main structural elements is tracked. The results corresponding to the two forms of material definition are compared with each other as well as with the results obtained by the usual design calculations. Conclusions are drawn on the influence of concrete creep and shrinkage during construction on structural behaviour.

  9. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  10. hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction.

    Science.gov (United States)

    Fulcher, Ben D; Jones, Nick S

    2017-11-22

    Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    Science.gov (United States)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. Operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time-remaining display feature, which would inform operators of the time available for them to execute procedure steps and warn them if they reach the time limit. Furthermore, such a feature would increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of the action time for each step is acquired based on similar accident cases and Support Vector Regression. The derived time is processed and then displayed on a CBP user interface.

  12. Shorter height is related to lower cardiovascular disease risk – A narrative review

    Directory of Open Access Journals (Sweden)

    Thomas T. Samaras

    2013-01-01

    Numerous Western studies have shown a negative correlation between height and cardiovascular disease. However, these correlations do not prove causation. This review provides a variety of studies showing that short people have little to no cardiovascular disease. When shorter people are compared to taller people, a number of biological mechanisms favor shorter people, including reduced telomere shortening, lower rates of atrial fibrillation, higher heart-pumping efficiency, lower DNA damage, lower risk of blood clots, lower left ventricular hypertrophy and superior blood parameters. The causes of increased heart disease among shorter people in the developed world are related to lower income, excessive weight, poor diet, lifestyle factors, catch-up growth, childhood illness and poor environmental conditions. For short people in developed countries, the data indicate that a plant-based diet, leanness and regular exercise can substantially reduce the risk of cardiovascular disease.

  13. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed T_c) of a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of T_c are also obtained, which can help us to identify the robustness of applying nonlinear time-series theory to forecasting using the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
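    The idea of a reliable computation time can be reproduced in a toy experiment: iterate the logistic map once in double precision and once at much higher precision (here via the mpmath library), and record the first step at which the two trajectories disagree beyond a tolerance. The sketch below uses the fixed-parameter map x -> r*x*(1-x); the tolerance and parameters are illustrative, not the paper's exact procedure.

```python
from mpmath import mp, mpf

def reliable_time(x0=0.2, r=4.0, tol=1e-3, max_steps=200):
    """First iteration at which float64 departs from a high-precision
    reference trajectory of the logistic map x -> r*x*(1-x)."""
    mp.dps = 100                          # 100-digit reference computation
    x_double, x_ref = x0, mpf(x0)
    for n in range(1, max_steps + 1):
        x_double = r * x_double * (1.0 - x_double)
        x_ref = mpf(r) * x_ref * (1 - x_ref)
        if abs(x_double - float(x_ref)) > tol:
            return n                      # the reliable computation time T_c
    return max_steps

# Rounding errors roughly double every step at r = 4, so T_c ~ 45 steps.
print(reliable_time())
```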

  14. Computing moment to moment BOLD activation for real-time neurofeedback

    Science.gov (United States)

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI time series, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
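    The core computation described, a GLM fit whose one-step-ahead residual is scaled into a statistic, can be sketched with ordinary least squares refreshed as each volume arrives. This is a schematic of the idea only; the published method uses an incremental estimator and fMRI-specific noise regressors that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix: intercept, linear drift, and a block task regressor.
n_t = 200
t = np.arange(n_t)
task = (np.sin(2 * np.pi * t / 40) > 0).astype(float)
X = np.column_stack([np.ones(n_t), t / n_t, task])
y = X @ np.array([100.0, 5.0, 2.0]) + rng.normal(0.0, 1.0, n_t)

feedback = []
for i in range(10, n_t):
    # Refit the GLM to everything acquired before volume i.
    beta, *_ = np.linalg.lstsq(X[:i], y[:i], rcond=None)
    resid = y[:i] - X[:i] @ beta
    sigma = resid.std(ddof=X.shape[1])
    expected = X[i] @ beta                       # predicted intensity of volume i
    feedback.append((y[i] - expected) / sigma)   # variance-scaled residual

print(np.round(feedback[-5:], 2))
```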

  15. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

    Purpose. We propose a new method for efficient particle transport in voxelized geometry for Monte Carlo simulations, and describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transport engine. The speed-up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing the same material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, the method is not suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gains. However, simulations with GEANT4 remain slow, and further work is needed to speed up the procedure while preserving the desired accuracy.
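    The source of the speed-up, skipping boundary crossings between adjacent voxels that share a material, can be illustrated in one dimension by run-length merging a row of voxel labels. This is a deliberately simplified stand-in for the segmented-volume representation, with made-up labels:

```python
from itertools import groupby

# Material labels of voxels along a particle track (synthetic example).
voxels = ["air"] * 20 + ["soft_tissue"] * 60 + ["bone"] * 15 + ["soft_tissue"] * 25

# One box per voxel: a geometry boundary at every voxel interface.
crossings_naive = len(voxels) - 1

# Segmented representation: merge adjacent voxels of identical material,
# so the particle only stops at true material boundaries.
segments = [(mat, sum(1 for _ in run)) for mat, run in groupby(voxels)]
crossings_segmented = len(segments) - 1

print(crossings_naive, crossings_segmented)   # 119 vs 3
```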

  16. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high-performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at the Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow the applications to use a GPU and to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedup of the GPU version was more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  17. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview: During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  19. Present and future aspects of PROSA - A computer program for near real time accountancy

    International Nuclear Information System (INIS)

    Beedgen, R.

    1987-01-01

    The methods of near-real-time accountancy (NRTA) for safeguarding nuclear material have received a lot of attention in recent years. PROSA 1.0 was developed as a computer program to evaluate a sequence of material balance data based on three statistical tests for a selected false-alarm probability. A new NRTA test procedure will be included, and an option for the calculation of detection probabilities of hypothetical loss patterns will be made available, in future releases of PROSA. Under a non-loss assumption, PROSA may also be used for the analysis of facility measurement models.

  20. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Thomas J. Marlowe

    1998-01-01

    Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.
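    For contrast with such structured recursions, the brute-force baseline enumerates every subset of edges, which is exponential in the edge count even before the factorial cost cited for naive use of Gilbert's formula. The stdlib-only sketch below computes all-terminal edge reliability exactly and is feasible only for tiny graphs:

```python
from itertools import combinations

def connected(nodes, edges):
    """Depth-first check that `edges` connect all `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == nodes

def edge_reliability(nodes, edges, p):
    """Probability the graph stays connected when each edge
    independently survives with probability p (brute force)."""
    total = 0.0
    for k in range(len(edges) + 1):
        for surviving in combinations(edges, k):
            if connected(nodes, surviving):
                total += p**k * (1 - p) ** (len(edges) - k)
    return total

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_reliability(range(4), square, 0.9))   # 4-cycle, p = 0.9 -> 0.9477
```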

  1. Using real-time fMRI brain-computer interfacing to treat eating disorders.

    Science.gov (United States)

    Sokunbi, Moses O

    2018-05-15

    Real-time functional magnetic resonance imaging based brain-computer interfacing (fMRI neurofeedback) has shown encouraging outcomes in the treatment of psychiatric and behavioural disorders. However, its use in the treatment of eating disorders is very limited. Here, we give a brief overview of how to design and implement fMRI neurofeedback intervention for the treatment of eating disorders, considering the basic and essential components. We also attempt to develop potential adaptations of fMRI neurofeedback intervention for the treatment of anorexia nervosa, bulimia nervosa and binge eating disorder. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Evolution of perturbed dynamical systems: analytical computation with time independent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Gurzadyan, A.V. [Russian-Armenian (Slavonic) University, Department of Mathematics and Mathematical Modelling, Yerevan (Armenia); Kocharyan, A.A. [Monash University, School of Physics and Astronomy, Clayton (Australia)

    2016-12-15

    An analytical method for investigating the evolution of dynamical systems with time-independent accuracy is developed for perturbed Hamiltonian systems. Error-free estimation using computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens up fundamental opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics. (orig.)

  3. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    Science.gov (United States)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation, and are the reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  4. Towards OpenVL: Improving Real-Time Performance of Computer Vision Applications

    Science.gov (United States)

    Shen, Changsong; Little, James J.; Fels, Sidney

    Meeting constraints for real-time performance is a main issue for computer vision, especially for embedded computer vision systems. This chapter presents our progress on our open vision library (OpenVL), a novel software architecture to address efficiency through facilitating hardware acceleration, reusability, and scalability for computer vision systems. A logical image-understanding pipeline is introduced to allow parallel processing. We also discuss progress on our middleware, the vision library utility toolkit (VLUT), which enables applications to operate transparently over a heterogeneous collection of hardware implementations. OpenVL works as a state machine, with an event-driven mechanism to provide users with application-level interaction. Various explicit or implicit synchronization and communication methods are supported among distributed processes in the logical pipelines. The intent of OpenVL is to allow users to quickly and easily recover useful information from multiple scenes, in a cross-platform, cross-language manner across various software environments and hardware platforms. To validate the critical underlying concepts of OpenVL, a human tracking system and a local positioning system are implemented and described. The novel architecture separates the specification of algorithmic details from the underlying implementation, allowing different components to be implemented on an embedded system without recompiling code.

  5. Real time analysis for atmospheric dispersions for Fukushima nuclear accident: Mobile phone based cloud computing assessment

    International Nuclear Information System (INIS)

    Woo, Tae Ho

    2014-01-01

    Highlights: • A possible nuclear accident is simulated for atmospheric contamination. • The simulation results give the relative importance of the fallout. • Cloud computing is applied successfully. • One can prepare for the possible damage of such an NPP accident. • Other variables can be considered in the modeling. - Abstract: The radioactive material dispersion is investigated by the system dynamics (SD) method. The non-linear complex algorithm can give information about hazardous material behavior in the case of a nuclear accident. The prevailing-westerlies region is modeled for the dynamic consequences of the Fukushima nuclear accident. The event sequence shows the scenario from earthquake to dispersion of the radionuclides, after which the dispersion reaches two cities in Korea. The importance of the radioactive dispersion lies in fast and reliable data processing, which can be accomplished by the cloud computing concept. The multiplied values of wind, plume concentration, and cloud computing factor are obtained. The highest value is 94.13 on the 206th day for Seoul. In Pusan, the highest value is 15.48 on the 219th day. The source is taken as the dispersion of the radionuclide multiplied by 100. Real-time safety assessment is accomplished on a mobile phone.

  6. A computationally efficient electricity price forecasting model for real time energy markets

    International Nuclear Information System (INIS)

    Feijoo, Felipe; Silva, Walter; Das, Tapas K.

    2016-01-01

    Highlights: • A fast hybrid forecast model for electricity prices. • An accurate forecast model that combines K-means and machine learning techniques. • Low computational effort through the elimination of feature selection techniques. • New benchmark results using market data for the years 2012 and 2015. - Abstract: The increased significance of demand response and the proliferation of distributed energy resources will continue to demand faster and more accurate models for forecasting locational marginal prices. This paper presents such a model (named K-SVR). While yielding prediction accuracy comparable with the best known models in the literature, K-SVR requires a significantly reduced computational time. The computational reduction is attained by eliminating the use of a feature selection process, which is commonly used by the existing models in the literature. K-SVR is a hybrid model that combines clustering algorithms, support vector machine, and support vector regression. K-SVR is tested using Pennsylvania–New Jersey–Maryland market data from the periods 2005–6, 2011–12, and 2014–15. Market data from 2006 has been used to measure the performance of many of the existing models. The authors chose these models to compare performance and demonstrate the strengths of K-SVR. Results obtained from K-SVR using the market data from 2012 and 2015 are new, and will serve as benchmarks for future models.
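    The pipeline described, clustering the history first and then training one support vector regressor per cluster so that no separate feature-selection stage is needed, can be sketched with scikit-learn. This is a generic reconstruction on synthetic data, not the authors' K-SVR implementation or their market data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for lagged-price feature vectors and next-hour price.
X = rng.uniform(0, 1, size=(500, 3))
y = 40 + 30 * X[:, 0] + 10 * np.sin(6 * X[:, 1]) + rng.normal(0, 1, 500)

# Step 1: partition the history into regimes with k-means.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: one SVR per cluster, trained only on that cluster's samples.
models = {c: SVR(C=10.0, epsilon=0.1).fit(X[km.labels_ == c],
                                          y[km.labels_ == c])
          for c in range(3)}

# Forecast: route each new observation to its cluster's regressor.
X_new = rng.uniform(0, 1, size=(5, 3))
pred = [models[c].predict(x.reshape(1, -1))[0]
        for c, x in zip(km.predict(X_new), X_new)]
print(np.round(pred, 2))
```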

  7. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    Science.gov (United States)

    Ditz, P.; Borzì, A.

    2008-03-01

    A computer package (CNMS) is presented, aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics, where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in the most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS. Catalogue identifier: ADEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 770. No. of bytes in distributed program, including test data, etc.: 7098. Distribution format: tar.gz. Programming language: MATLAB 6. Computer: AMD Athlon 64 X2 Dual, 2.21 GHz, 1.5 GB RAM. Operating system: Microsoft Windows XP. Word size: 32. Classification: 4.9. Nature of problem: Quantum control. Solution method: Iterative. Running time: 60-600 sec.

  8. MO-E-BRD-02: Accelerated Partial Breast Irradiation in Brachytherapy: Is Shorter Better?

    International Nuclear Information System (INIS)

    Todor, D.

    2015-01-01

    Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS: Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU: A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules, but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological ‘equivalence’ between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one-stop shopping best? – Bruce Libby, PhD, University of Virginia: A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current

  9. Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning.

    Science.gov (United States)

    Molloy, Katharine; Moore, David R; Sohoglu, Ediz; Amitay, Sygal

    2012-01-01

    The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions, and optimized spacing between sessions.

  10. MO-E-BRD-02: Accelerated Partial Breast Irradiation in Brachytherapy: Is Shorter Better?

    Energy Technology Data Exchange (ETDEWEB)

    Todor, D. [Virginia Commonwealth University (United States)

    2015-06-15

    Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS: Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU: A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules, but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological ‘equivalence’ between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one-stop shopping best? – Bruce Libby, PhD, University of Virginia: A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current

  11. Computer-aided detection system for chest radiography: reducing report turnaround times of examinations with abnormalities.

    Science.gov (United States)

    Kao, E-Fong; Liu, Gin-Chung; Lee, Lo-Yeh; Tsai, Huei-Yi; Jaw, Twei-Shiun

    2015-06-01

    The ability to give high priority to examinations with pathological findings could be very useful to radiologists with large work lists who wish to first evaluate the most critical studies. A computer-aided detection (CAD) system for identifying chest examinations with abnormalities has therefore been developed. To evaluate the effectiveness of a CAD system on report turnaround times of chest examinations with abnormalities. The CAD system was designed to automatically mark chest examinations with possible abnormalities in the work list of radiologists interpreting chest examinations. The system evaluation was performed in two phases: two radiologists interpreted the chest examinations without CAD in phase 1 and with CAD in phase 2. The time information recorded by the radiology information system was then used to calculate the turnaround times. All chest examinations were reviewed by two other radiologists and were divided into normal and abnormal groups. The turnaround times for the examinations with pathological findings with and without the CAD system assistance were compared. The sensitivity and specificity of the CAD for chest abnormalities were 0.790 and 0.697, respectively, and use of the CAD system decreased the turnaround time for chest examinations with abnormalities by 44%. The turnaround times required for radiologists to identify chest examinations with abnormalities could be reduced by using the CAD system. This system could be useful for radiologists with large work lists who wish to first evaluate the most critical studies. © The Foundation Acta Radiologica 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
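    Operationally, the CAD output is used to reorder the reading worklist so that flagged studies are read first. A minimal sketch of that triage step follows; the exam records and flags are invented for illustration:

```python
# Each exam: (accession number, arrival order, CAD abnormality flag).
worklist = [
    ("CXR-1001", 1, False),
    ("CXR-1002", 2, True),
    ("CXR-1003", 3, False),
    ("CXR-1004", 4, True),
]

# CAD-flagged exams first; ties broken by arrival order so the queue
# stays first-in-first-out within each priority class.
triaged = sorted(worklist, key=lambda exam: (not exam[2], exam[1]))
print([acc for acc, _, _ in triaged])
# ['CXR-1002', 'CXR-1004', 'CXR-1001', 'CXR-1003']
```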

  12. Computer-assisted assessment of ultrasound real-time elastography: initial experience in 145 breast lesions.

    Science.gov (United States)

    Zhang, Xue; Xiao, Yang; Zeng, Jie; Qiu, Weibao; Qian, Ming; Wang, Congzhi; Zheng, Rongqin; Zheng, Hairong

    2014-01-01

    To develop and evaluate a computer-assisted method of quantifying the five-point elasticity scoring system based on ultrasound real-time elastography (RTE) for classifying benign and malignant breast lesions, with pathologic results as the reference standard. Conventional ultrasonography (US) and RTE images of 145 breast lesions (67 malignant, 78 benign) were acquired in this study. Each lesion was automatically contoured on the B-mode image by the level-set method and mapped onto the RTE image. The relative elasticity value of each pixel was reconstructed and classified into hard or soft by the fuzzy c-means clustering method. According to the degree of hardness inside the lesion and its surrounding tissue, the elasticity score of the RTE image was computed in an automatic way. Visual assessments by the radiologists were used for comparing the diagnostic performance. Histopathologic examination was used as the reference standard. Student's t test and receiver operating characteristic (ROC) curve analysis were performed for statistical analysis. Considering score 4 or higher as test positive for malignancy, the diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 93.8% (136/145), 92.5% (62/67), 94.9% (74/78), 93.9% (62/66), and 93.7% (74/79) for the computer-assisted scheme, and 89.7% (130/145), 85.1% (57/67), 93.6% (73/78), 92.0% (57/62), and 88.0% (73/83) for manual assessment. The area under the ROC curve (A_z value) for the proposed method was higher than the A_z value for visual assessment (0.96 vs. 0.93). Computer-assisted quantification of the classical five-point scoring system can significantly eliminate interobserver variability and thereby improve the diagnostic confidence of classifying breast lesions to avoid unnecessary biopsy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
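    The automated score ultimately rests on classifying each pixel in the lesion as hard or soft and mapping the hard fraction onto the five-point scale. The sketch below substitutes plain two-cluster k-means for the fuzzy c-means step and uses a made-up score mapping, so it shows the shape of the computation rather than the published algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Relative elasticity values of pixels inside a (synthetic) lesion mask;
# lower values mean stiffer tissue in this toy convention.
lesion = np.concatenate([rng.normal(0.2, 0.05, 300),   # hard pixels
                         rng.normal(0.7, 0.10, 200)])  # soft pixels

# Two-cluster split into hard/soft (fuzzy c-means in the paper).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(lesion.reshape(-1, 1))
hard_label = int(np.argmin(km.cluster_centers_.ravel()))
hard_fraction = float(np.mean(km.labels_ == hard_label))

# Hypothetical mapping of the hard fraction to a 1-5 elasticity score.
score = 1 + int(min(hard_fraction, 0.999) * 5)
print(f"hard fraction {hard_fraction:.2f} -> score {score}")
```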

  13. Computer-assisted assessment of ultrasound real-time elastography: Initial experience in 145 breast lesions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xue; Xiao, Yang [Shenzhen Key Lab for Molecular Imaging, Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen (China); Zeng, Jie [Department of Medical Ultrasonics, Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou (China); Qiu, Weibao; Qian, Ming; Wang, Congzhi [Shenzhen Key Lab for Molecular Imaging, Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen (China); Zheng, Rongqin, E-mail: zhengronggin@hotmail.com [Department of Medical Ultrasonics, Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou (China); Zheng, Hairong, E-mail: hr.zheng@siat.ac.cn [Shenzhen Key Lab for Molecular Imaging, Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen (China)

    2014-01-15

    Purpose: To develop and evaluate a computer-assisted method of quantifying the five-point elasticity scoring system based on ultrasound real-time elastography (RTE) for classifying benign and malignant breast lesions, with pathologic results as the reference standard. Materials and methods: Conventional ultrasonography (US) and RTE images of 145 breast lesions (67 malignant, 78 benign) were acquired in this study. Each lesion was automatically contoured on the B-mode image by the level-set method and mapped onto the RTE image. The relative elasticity value of each pixel was reconstructed and classified into hard or soft by the fuzzy c-means clustering method. According to the degree of hardness inside the lesion and its surrounding tissue, the elasticity score of the RTE image was computed in an automatic way. Visual assessments by the radiologists were used for comparing the diagnostic performance. Histopathologic examination was used as the reference standard. Student's t test and receiver operating characteristic (ROC) curve analysis were performed for statistical analysis. Results: Considering score 4 or higher as test positive for malignancy, the diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 93.8% (136/145), 92.5% (62/67), 94.9% (74/78), 93.9% (62/66), and 93.7% (74/79) for the computer-assisted scheme, and 89.7% (130/145), 85.1% (57/67), 93.6% (73/78), 92.0% (57/62), and 88.0% (73/83) for manual assessment. The area under the ROC curve (A_z value) for the proposed method was higher than the A_z value for visual assessment (0.96 vs. 0.93). Conclusion: Computer-assisted quantification of the classical five-point scoring system can significantly eliminate interobserver variability and thereby improve the diagnostic confidence of classifying breast lesions to avoid unnecessary biopsy.

  14. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in computer science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of, e.g., new massively parallel computers, parallel languages or runtime environments

  15. TIMED: a computer program for calculating cumulated activity of a radionuclide in the organs of the human body at a given time, t, after deposition

    International Nuclear Information System (INIS)

    Watson, S.B.; Snyder, W.S.; Ford, M.R.

    1976-12-01

    TIMED is a computer program designed to calculate cumulated radioactivity in the various source organs at various times after radionuclide deposition. TIMED embodies a system of differential equations which describes activity transfer in the lungs, gastrointestinal tract, and other organs of the body. This system accounts for the delay of activity transfer between compartments of the body and for radioactive daughters.
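    The structure TIMED embodies is a linear compartment model: each compartment's activity decays radioactively and transfers to downstream compartments at first-order rates, and the quantity of interest is the time-integral (cumulated activity) of each compartment. A minimal two-compartment sketch of that structure follows; the rate constants are illustrative, not values used by TIMED:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

lam = np.log(2) / 8.0   # radioactive decay constant, 8-day half-life [1/day]
k12 = 0.5               # transfer rate, compartment 1 -> 2 [1/day]
k2x = 0.1               # excretion rate out of compartment 2 [1/day]

def rhs(t, a):
    """Linear compartment model: decay plus first-order transfer."""
    a1, a2 = a
    return [-(lam + k12) * a1,             # e.g. GI tract
            k12 * a1 - (lam + k2x) * a2]   # e.g. a systemic organ

t_end = 60.0
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], dense_output=True, rtol=1e-8)

# Cumulated activity = time-integral of the activity in each compartment.
t = np.linspace(0.0, t_end, 2000)
a = sol.sol(t)
print(trapezoid(a, t, axis=1))
```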

  16. The application of digital computers to near-real-time processing of flutter test data

    Science.gov (United States)

    Hurley, S. R.

    1976-01-01

    Procedures used in monitoring, analyzing, and displaying flight and ground flutter test data are presented. These procedures include three digital computer programs developed to process structural response data in near real time. Qualitative and quantitative modal stability data are derived from time history response data resulting from rapid sinusoidal frequency sweep forcing functions, tuned-mode quick stops, and pilot induced control pulses. The techniques have been applied to both fixed and rotary wing aircraft, during flight, whirl tower rotor systems tests, and wind tunnel flutter model tests. An hydraulically driven oscillatory aerodynamic vane excitation system utilized during the flight flutter test programs accomplished during Lockheed L-1011 and S-3A development is described.

  17. An Energy Efficient Neuromorphic Computing System Using Real Time Sensing Method

    DEFF Research Database (Denmark)

    Farkhani, Hooman; Tohidi, Mohammad; Farkhani, Sadaf

    2017-01-01

    In spintronic-based neuromorphic computing systems (NCS), the switching of the magnetic moment in a magnetic tunnel junction (MTJ) is used to mimic neuron firing. However, the stochastic switching behavior of the MTJ and the effect of process variations lead to extra stimulation time, which results in extra energy consumption and delay in such NCSs. In this paper, a new real-time sensing (RTS) circuit is proposed to track the MTJ state and terminate the stimulation phase immediately after MTJ switching. This leads to a significant reduction in the energy consumption and delay of the NCS. The simulation results, using a 65-nm CMOS technology and a 40-nm MTJ technology, confirm that the energy consumption of an RTS-based NCS is improved by 50% in comparison with a typical NCS. Moreover, utilizing the RTS circuit improves the overall speed of an NCS by 2.75×.

  18. Some selection criteria for computers in real-time systems for high energy physics

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1980-01-01

    The right choice of computer is of great importance for the organization of real-time systems, as cost and reliability are decisive factors. Some selection criteria for computers for high-energy physics multiwire chamber spectrometers (MWCS) are considered in this report. MWCSs accept bits of information from event patterns. Large and small computers, microcomputers and intelligent controllers in CAMAC crates are compared with respect to the following characteristics: data exchange speed, number of addresses for peripheral devices, cost of interfacing a peripheral device, sizes of buffer and mass memory, configuration costs, and the mean time between failures (MTBF). The results of the comparisons are shown in plots and histograms which allow the selection of computers according to the above criteria. (Auth.)

  19. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications.

    Science.gov (United States)

    Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L

    2016-08-01

    Virtual heart models have been proposed for closed-loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGAs have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA, which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior, and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed-loop testing of medical devices.
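    A hybrid automaton couples continuous dynamics (a voltage ODE in each mode) with discrete mode switches triggered by guards on the state. The fragment below is a generic three-mode caricature of a cardiac action potential using a fixed-step update, the kind of loop that maps well onto FPGA hardware; it is not the model developed in the paper:

```python
# Generic HA sketch: modes with simple voltage dynamics and guard-based
# switching, integrated with a fixed-step Euler loop (FPGA-friendly).
REST, UPSTROKE, REPOLARIZE = range(3)

def step(mode, v, stim, dt=0.01):
    if mode == REST:
        dv = -0.1 * v + stim
        if v > 1.0:                   # threshold guard: start upstroke
            mode = UPSTROKE
    elif mode == UPSTROKE:
        dv = 8.0 * (30.0 - v)         # fast depolarization toward ~30
        if v > 28.0:
            mode = REPOLARIZE
    else:
        dv = -0.5 * v                 # recovery back toward rest
        if v < 0.5:
            mode = REST
    return mode, v + dt * dv

mode, v = REST, 0.0
trace = []
for n in range(4000):
    stim = 5.0 if n < 50 else 0.0     # brief stimulus current
    mode, v = step(mode, v, stim)
    trace.append(v)
print(max(trace), trace[-1])          # peak near 28-30, back near rest
```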

  20. Computer-Aided Software Engineering - An approach to real-time software development

    Science.gov (United States)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  1. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  2. Reservoir computer predictions for the Three Meter magnetic field time evolution

    Science.gov (United States)

    Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.

    2017-12-01

    The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir computer deep learning algorithm. The experiment consists of a three-meter diameter outer sphere and a one-meter diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate at up to 4 and 14 Hz respectively, giving a Reynolds number near 10^8. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measure the resulting induced fields. We use this magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that such a complicated MHD system's behavior can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
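    A reservoir computer of the kind used here is essentially an echo state network: a fixed random recurrent layer driven by the measured signal, with only a linear readout trained (here by ridge regression) to predict the next sample. The following self-contained toy version runs on a synthetic oscillatory signal, not the experiment's Hall-probe data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for one magnetic-probe channel: two incommensurate tones.
t = np.arange(3000) * 0.02
u = np.sin(t) + 0.5 * np.sin(2.7 * t)

# Fixed random reservoir with spectral radius scaled below 1.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x = np.zeros(n_res)
states = []
for u_t in u[:-1]:                    # drive the reservoir with the signal
    x = np.tanh(W @ x + W_in * u_t)
    states.append(x.copy())
S = np.array(states)

# Train only the linear readout (ridge regression) to predict u[t+1].
beta = 1e-6
w_out = np.linalg.solve(S.T @ S + beta * np.eye(n_res), S.T @ u[1:])

pred = S @ w_out
print(np.sqrt(np.mean((pred[-500:] - u[1:][-500:]) ** 2)))   # small RMSE
```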

  3. Study and Implementation of a Real-Time Operating System on an ARM-Based Single Board Computer

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automotive braking systems (auto braking systems), which need a high level of reliability, require an operating system that operates in real time. This study aims to assess the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with a dual-core ARM Cortex-A9 (TI OMAP 4460). The research was conducted by implementing the general-purpose OS Ubuntu 12.04 OMAP4-armhf and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum latency of the GPOS under full load, 17,712 µs. The lower latency demonstrates that the RTOS is much better than the GPOS at running processes within a given period of time.
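    The latency being compared is the gap between when a task asked to be woken and when it actually ran. That measurement itself is a few lines of code; the sketch below reports the worst-case wake-up overshoot on whatever system it runs, so reproducing the Pandaboard numbers would require the same hardware and kernels:

```python
import time

def wakeup_latency_us(period_s=0.001, samples=2000):
    """Request a sleep of `period_s` and record the wake-up overshoot."""
    worst = 0.0
    for _ in range(samples):
        t0 = time.monotonic()
        time.sleep(period_s)
        overshoot = time.monotonic() - t0 - period_s
        worst = max(worst, overshoot)
    return worst * 1e6

print(f"worst-case wake-up latency: {wakeup_latency_us():.0f} us")
```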

  4. A neural computational model for animal's time-to-collision estimation.

    Science.gov (United States)

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretical formula for TTC is 1/τ ≈ θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simple computational model: 1/τ ≈ Mθ − P/(θ + Q) + N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronally implemented estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
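    Whether the proposed form can stand in for the exact expression is easy to check numerically: sample 1/τ = θ'/sin θ for a looming object and fit the constants M, P, Q, N of the WSVAM form by least squares. The geometry and the fitted constants below are synthetic illustrations, not the values derived in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def wsvam(theta, M, P, Q, N):
    """Proposed model: 1/tau ~ M*theta - P/(theta + Q) + N."""
    return M * theta - P / (theta + Q) + N

# Looming object of half-size R approaching at speed v; exact TTC is d/v.
R, v = 0.1, 1.0
d = np.linspace(0.5, 5.0, 200)               # distances [m]
theta = 2 * np.arctan(R / d)                 # visual angle [rad]
theta_dot = 2 * R * v / (d**2 + R**2)        # its rate of change [rad/s]
inv_tau_exact = theta_dot / np.sin(theta)    # theoretical 1/TTC (= v/d here)

# Keep Q positive so the optimizer stays away from the pole at theta = -Q.
params, _ = curve_fit(wsvam, theta, inv_tau_exact, p0=[1.0, 1.0, 1.0, 0.0],
                      bounds=([-np.inf, -np.inf, 1e-3, -np.inf], np.inf))
fit = wsvam(theta, *params)
print("max relative error:", np.max(np.abs(fit - inv_tau_exact) / inv_tau_exact))
```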

  5. Contrast timing in computed tomography: Effect of different contrast media concentrations on bolus geometry

    International Nuclear Information System (INIS)

    Mahnken, Andreas H.; Jost, Gregor; Seidensticker, Peter; Kuhl, Christiane; Pietsch, Hubertus

    2012-01-01

    Objective: To assess the effect of low-osmolar, monomeric contrast media with different iodine concentrations on bolus shape in aortic CT angiography. Materials and methods: Repeated sequential computed tomography scanning of the descending aorta of eight beagle dogs (5 male, 12.7 ± 3.1 kg) was performed without table movement with a standardized CT scan protocol. Iopromide 300 (300 mg I/mL), iopromide 370 (370 mg I/mL) and iomeprol 400 (400 mg I/mL) were administered via a foreleg vein with an identical iodine delivery rate of 1.2 g I/s and a total iodine dose of 300 mg I/kg body weight. Time-enhancement curves were computed and analyzed. Results: Iopromide 300 showed the highest peak enhancement (445.2 ± 89.1 HU), steepest up-slope (104.2 ± 17.5 HU/s) and smallest full width at half maximum (FWHM; 5.8 ± 1.0 s). Peak enhancement, duration of FWHM, enhancement at FWHM and up-slope differed significantly between iopromide 300 and iomeprol 400 (p < 0.05). Conclusions: Low-viscosity iopromide 300 results in a better-defined bolus with a significantly higher peak enhancement, steeper up-slope and smaller FWHM when compared to iomeprol 400. These characteristics potentially affect contrast timing.
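    All three bolus-geometry measures compared here (peak enhancement, up-slope, and FWHM) are derived from the time-enhancement curve. A sketch of their computation on a synthetic gamma-variate-shaped curve, with illustrative numbers only:

```python
import numpy as np

# Synthetic time-enhancement curve (gamma-variate-like bolus shape).
t = np.linspace(0, 40, 401)                        # time [s]
e = 450 * (t / 12) ** 3 * np.exp(3 * (1 - t / 12))  # enhancement [HU]

peak = e.max()
t_peak = t[e.argmax()]
upslope = np.max(np.gradient(e, t))                # steepest rise [HU/s]

# Full width at half maximum from the two half-peak crossings.
above = np.where(e >= peak / 2)[0]
fwhm = t[above[-1]] - t[above[0]]

print(f"peak {peak:.0f} HU at {t_peak:.1f} s, "
      f"up-slope {upslope:.1f} HU/s, FWHM {fwhm:.1f} s")
```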

  6. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated in the front-end of a JIT compiler to parallelize sequential code just before native translation; however, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6× on an 8-core system.
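    The essential transformation (profile to find the hotspot loop, check that its iterations are independent DOALL work, then farm them out to the cores) can be mimicked outside a JVM. The sketch below does so in Python with a process pool; it illustrates the outcome of the methodology rather than the JIT-integrated Java implementation the paper describes:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def hotspot_iteration(i):
    """One independent (DOALL) iteration of the hotspot loop."""
    s = 0.0
    for k in range(1, 200_000):
        s += (i % 7 + 1) / k
    return s

def main():
    n = 64
    t0 = time.perf_counter()
    serial = [hotspot_iteration(i) for i in range(n)]
    t1 = time.perf_counter()
    with ProcessPoolExecutor() as pool:        # one worker per core
        parallel = list(pool.map(hotspot_iteration, range(n)))
    t2 = time.perf_counter()
    assert serial == parallel                  # DOALL: same result
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")

if __name__ == "__main__":                     # required on Windows/macOS
    main()
```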

  7. Design considerations for computationally constrained two-way real-time video communication

    Science.gov (United States)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  8. Just-in-time compilation-inspired methodology for parallelization of compute intensive java code

    International Nuclear Information System (INIS)

    Mustafa, G.; Ghani, M.U.

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated in the front-end of a JIT compiler to parallelize sequential code just before native translation; however, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6× on an 8-core system. (author)

  9. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  10. Alternative majority-voting methods for real-time computing systems

    Science.gov (United States)

    Shin, Kang G.; Dolter, James W.

    1989-01-01

    Two techniques that provide a compromise between the high time overhead of maintaining synchronous voting and the difficulty of combining results in asynchronous voting are proposed. These techniques are specifically suited for real-time applications with a single-source/single-sink structure that need instantaneous error masking. They provide a compromise between a tightly synchronized system, in which the synchronization overhead can be quite high, and an asynchronous system, which lacks suitable algorithms for combining the output data. Both quorum-majority voting (QMV) and compare-majority voting (CMV) are most applicable to distributed real-time systems with single-source/single-sink tasks. All real-time systems eventually have to resolve their outputs into a single action at some stage. The development of the advanced information processing system (AIPS) and other similar systems serves to emphasize the importance of these techniques. Time bounds suggest that it is possible to reduce the overhead for quorum-majority voting to below that of synchronous voting. All the bounds assume that the computation phase is nonpreemptive and that there is no multitasking.
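    The voting step itself reduces to releasing a value as soon as some quorum of the replicated outputs agree, which masks a minority of faulty replicas without waiting for full synchronization. A small sketch of that decision rule, generic rather than the exact QMV/CMV timing machinery:

```python
from collections import Counter

def quorum_vote(outputs, quorum):
    """Return the first value reaching `quorum` agreeing replicas,
    or None if no value has reached a quorum yet."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= quorum else None

# Five replicated channels, one faulty; a quorum of 3 masks the fault.
print(quorum_vote([42, 42, 41, 42, 42], quorum=3))   # -> 42
print(quorum_vote([42, 41], quorum=3))               # -> None (keep waiting)
```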

  11. Accuracy and computational time of a hierarchy of growth rate definitions for breeder reactor fuel

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Borg, R.C.; Ott, K.O.

    1979-01-01

    For a hierarchy of four logically different definitions for calculating the asymptotic growth of fast breeder reactor fuel, an investigation is performed concerning the comparative accuracy and computational effort associated with each definition. The definition based on detailed calculation of the accumulating fuel in an expanding park of reactors asymptotically yields the most accurate value of the infinite-time growth rate, γ^∞, which is used as a reference value. The computational effort involved with the park definition is very large. The definition based on the single-reactor calculation of the equilibrium surplus production rate and fuel inventory gives a value for γ^∞ of comparable accuracy to the park definition and uses significantly less central processor unit (CPU) time. The third definition is based on a continuous treatment of the reactor fuel cycle for a single reactor and gives a value for γ^∞ that accurately approximates the second definition. The continuous definition requires very little CPU time. The fourth definition employs the isotopic breeding worths, w_i^*, for a projection of the asymptotic growth rate. The CPU time involved in this definition is practically nil if its calculation is based on the few-cycle depletion calculation normally performed for core design and critical enrichment evaluations. The small inaccuracy (≈1%) of the breeding-worth-based definition is well within the inaccuracy range that results unavoidably from other sources such as nuclear cross sections, group constants, and flux calculations. This fully justifies the use of this approach in routine calculations.

  12. Seeding the cloud: Financial bootstrapping in the computer software sector

    OpenAIRE

    Mac An Bhaird, Ciarán; Lynn, Theo

    2015-01-01

    This study investigates resourcing of computer software companies that have adopted cloud computing for the development and delivery of application software. Use of this innovative technology potentially impacts firm financing because the initial infrastructure investment requirement is much lower than for packaged software, lead time to market is shorter, and cloud computing supports instant scalability. We test these predictions by conducting in-depth interviews with founders of 18 independ...

  13. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study is presented in which standard PC hardware and Real-Time Linux are applied to the real-time computer simulation of software for a nuclear simulator. The feasibility prototype was established with the existing software in the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we have identified that the approach enables computer-based predictive simulation, owing both to the remarkable improvement in real-time performance and to the reduced effort of real-time implementation under standard PC hardware and Real-Time Linux environments.

  14. Design and development of a diversified real time computer for future FBRs

    International Nuclear Information System (INIS)

    Sujith, K.R.; Bhattacharyya, Anindya; Behera, R.P.; Murali, N.

    2014-01-01

    The current safety-related computer system of the Prototype Fast Breeder Reactor (PFBR) under construction in Kalpakkam consists of two redundant Versa Module Europa (VME) bus based Real Time Computer systems with a Switch Over Logic Circuit (SOLC). Since both VME systems are identical, the dual redundant system is prone to common cause failure (CCF). The probability of CCF can be reduced by adopting diversity. Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on 'independent' generation of 'different' implementations. This paper discusses the design and development of a diversified Real Time Computer which will replace one of the computer systems in the dual redundant architecture. Compact PCI (cPCI) bus systems are widely used in safety-critical applications such as avionics, railways and defence, and use diverse electrical signaling and logical specifications; cPCI was therefore chosen for development of the diversified system. Towards the initial development, a CPU card based on an ARM-9 processor, a 16-channel Relay Output (RO) card and a 30-channel Analog Input (AI) card were developed. All the cards mentioned support hot-swap and geographic addressing capability. In order to mitigate the component obsolescence problem, the 32-bit PCI target controller and associated glue logic for the slave I/O cards were indigenously developed using VHDL. U-Boot was selected as the boot loader and ARM Linux 2.6 as the preliminary operating system for the CPU card. Board-specific initialization code for the CPU card was written in ARM assembly language and serial port initialization was written in C. The boot loader, together with the Linux 2.6 kernel and a jffs2 file system, was flashed into the CPU card. Test applications written in C were used to test the various peripherals of the CPU card. Device drivers for the AI and RO cards were developed as Linux kernel modules, and an application library was also developed.

  15. Time and temperature dependence of cascade induced defect production in in situ experiments and computer simulation

    International Nuclear Information System (INIS)

    Ishino, Shiori

    1993-01-01

    Understanding of the defect production and annihilation processes in a cascade is important in the modelling of radiation damage for establishing irradiation correlation. In situ observation of heavy ion radiation damage holds great promise in this respect. The time and temperature dependence of the formation and annihilation of vacancy clusters in a cascade has been studied with a time resolution of 30 ms, using a facility which comprises a heavy ion accelerator and an electron microscope. Formation and annihilation rates of defect clusters have been separately measured by this technique. The observed processes have been analysed by simple kinetic equations, taking into account the sink effect of the surface and of the defect clusters themselves, together with the annihilation process due to thermal emission of vacancies from the defect clusters. Another tool to study the time and temperature dependence of defect production in a cascade is computer simulation. Recent results of molecular dynamics calculations on the temperature dependence of cascade evolution are presented, including the directional and temperature dependence of the lengths of replacement collision sequences, the temperature dependence of the process to reach thermal equilibrium, and so on. These results are discussed within the general time frame of radiation damage evolution covering from 10^-15 to 10^9 s, and several important issues for the general understanding have been identified. (orig.)
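
    A minimal sketch of the kind of kinetic rate equation described above, with a single cluster density N(t), a formation rate G, losses to the surface sink and to the clusters themselves, and thermally activated vacancy emission; all parameter values are illustrative assumptions, not data from the experiments:

        import numpy as np
        from scipy.integrate import solve_ivp

        kB = 8.617e-5  # Boltzmann constant, eV/K

        def rhs(t, N, G, k_s, k_c, nu, E_b, T):
            # dN/dt = G - k_s*N - k_c*N^2 - e(T)*N, with e(T) an Arrhenius term
            emission = nu * np.exp(-E_b / (kB * T))
            return G - k_s * N - k_c * N**2 - emission * N

        G, k_s, k_c, nu, E_b, T = 1.0, 0.05, 1e-3, 1e13, 1.8, 600.0
        sol = solve_ivp(rhs, (0.0, 200.0), [0.0], args=(G, k_s, k_c, nu, E_b, T))
        print(f"quasi-steady cluster density ~ {sol.y[0, -1]:.1f} (arb. units)")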

  16. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    Science.gov (United States)

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. © The Author(s) 2015.

  17. Use of time space Green's functions in the computation of transient eddy current fields

    International Nuclear Information System (INIS)

    Davey, K.; Turner, L.

    1988-01-01

    The utility of integral equations to solve eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper attempts to examine the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is employed to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiments (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used basically only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin.
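
    The two quadrature rules named above are standard; a minimal sketch demonstrating them on toy integrands (not the eddy current kernels of the paper):

        import numpy as np

        # Temporal: integral_0^inf e^(-t) f(t) dt via Gauss-Laguerre quadrature.
        t, wt = np.polynomial.laguerre.laggauss(16)
        temporal = np.sum(wt * np.cos(t))       # exact value is 0.5
        print(f"Gauss-Laguerre: {temporal:.6f} (exact 0.5)")

        # Spatial: integral_-1^1 g(x) dx via Gauss-Legendre quadrature.
        x, wx = np.polynomial.legendre.leggauss(8)
        spatial = np.sum(wx / (1.0 + x**2))     # exact value is pi/2
        print(f"Gauss-Legendre: {spatial:.6f} (exact {np.pi / 2:.6f})")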

  18. Whole-body computed tomography in trauma patients: optimization of the patient scanning position significantly shortens examination time while maintaining diagnostic image quality

    Directory of Open Access Journals (Sweden)

    Hickethier T

    2018-05-01

    Tilman Hickethier,1,* Kamal Mammadov,1,* Bettina Baeßler,1 Thorsten Lichtenstein,1 Jochen Hinkelbein,2 Lucy Smith,3 Patrick Sven Plum,4 Seung-Hun Chon,4 David Maintz,1 De-Hua Chang1 (1Department of Radiology, University Hospital of Cologne, Cologne, Germany; 2Department of Anesthesiology and Intensive Care Medicine, University Hospital of Cologne, Cologne, Germany; 3Faculty of Medicine, Memorial University of Newfoundland, St. John’s, Canada; 4Department of General, Visceral and Cancer Surgery, University Hospital of Cologne, Cologne, Germany). *These authors contributed equally to this work.

    Background: The study was conducted to compare examination time and artifact vulnerability of whole-body computed tomographies (wbCTs) for trauma patients using conventional or optimized patient positioning.

    Patients and methods: Examination time was measured in 100 patients scanned with the conventional protocol (Group A: arms positioned alongside the body for head and neck imaging and over the head for trunk imaging) and 100 patients scanned with the optimized protocol (Group B: arms flexed on a chest pillow without repositioning). Additionally, the influence of the two scanning protocols on image quality in the most relevant body regions was assessed by two blinded readers.

    Results: Total wbCT duration was about 35% or 3:46 min shorter in B than in A. Artifacts in the aorta (27 vs 6%), liver (40 vs 8%) and spleen (27 vs 5%) occurred significantly more often in B than in A. No incident of non-diagnostic image quality was reported, and no significant differences were found for the lungs and spine.

    Conclusion: An optimized wbCT positioning protocol for trauma patients allows a significant reduction of examination time while still maintaining diagnostic image quality.

    Keywords: CT scan, polytrauma, acute care, time requirement, positioning

  19. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    Science.gov (United States)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS

  20. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single object lens. This map cannot be calculated analytically, and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other method is based on an application of Green’s theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object could be treated as a point source, or in other words, when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer for which minimal error could be made for the approximation. The determined buffer was 1.5+5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and different accuracies, except the map with accuracy and separation equal to 10^-3 for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
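
    A minimal sketch of the buffer criterion reported above: the source is treated as a point source only when its distance from the caustic exceeds 1.5 + 5q. The distance computation itself is a placeholder here, and the units and normalization of the buffer are assumptions; a full code such as VBBinaryLensing computes caustics and finite-source magnifications internally:

        def use_point_source(distance_to_caustic, q):
            """q is the binary mass ratio; returns True when the cheap
            point-source approximation is judged safe."""
            return distance_to_caustic > 1.5 + 5.0 * q

        q = 0.01
        print(use_point_source(2.0, q))   # True  -> point-source approximation
        print(use_point_source(0.5, q))   # False -> full contour integration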

  1. Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration.

    Directory of Open Access Journals (Sweden)

    Bartlomiej Pycinski

    A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, seems to be crucial for the development of computer-aided diagnosis and therapy. The advancement of surface tracking systems based on optical trackers already plays an important role in surgical procedure planning. However, new modalities, like time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become a part of the computer-aided surgery set-up. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed.

    We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology contains: the optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and the registration technique. The data pre-processing yields a surface in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point (ICP) algorithm.

    The experiments validate the registration of each pair of modalities/sensors involving phantoms of four different human organs in terms of Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, whereas the worst was for experiments involving the ToF-camera. The obtained accuracies encourage further development of multi-sensor systems. The presented substantive discussion concerning the system's limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers.
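
    For orientation, a minimal sketch of one rigid ICP pass (nearest-neighbor correspondences followed by the SVD-based Kabsch solution), the registration technique named above; a production multi-sensor pipeline adds outlier rejection, unit calibration and convergence tests:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, dst):
            nn = cKDTree(dst).query(src)[1]          # nearest dst point per src point
            matched = dst[nn]
            mu_s, mu_d = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_d)    # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                       # best rotation (Kabsch)
            t = mu_d - R @ mu_s                      # best translation
            return src @ R.T + t

        rng = np.random.default_rng(0)
        dst = rng.normal(size=(200, 3))              # toy "CT surface" cloud
        src = dst + np.array([0.3, -0.1, 0.2])       # translated "sensor" cloud
        for _ in range(10):                          # iterate to convergence
            src = icp_step(src, dst)
        print(f"residual RMS: {np.sqrt(((src - dst)**2).sum(1)).mean():.2e}")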

  2. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Florent Bocquelet

    2016-11-01

    Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the position of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained, as assessed by perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open the way to future speech BCI applications using such an articulatory-based speech synthesizer.

  3. Predictors for cecal insertion time: the impact of abdominal visceral fat measured by computed tomography.

    Science.gov (United States)

    Nagata, Naoyoshi; Sakamoto, Kayo; Arai, Tomohiro; Niikura, Ryota; Shimbo, Takuro; Shinozaki, Masafumi; Noda, Mitsuhiko; Uemura, Naomi

    2014-10-01

    Several factors affect the risk for longer cecal insertion time. The aim of this study was to identify the predictors of longer insertion time and to evaluate the effect of visceral fat measured by CT. This is a retrospective observational study. Outpatients for colorectal cancer screening who underwent colonoscopies and CT were enrolled. Computed tomography was performed in individuals who requested cancer screening and in those with GI bleeding. Information on obesity indices (BMI, visceral adipose tissue, and subcutaneous adipose tissue area), constipation score, history of abdominal surgery, poor preparation, fellow involvement, diverticulosis, patient discomfort, and the amount of sedation used was collected. The cecal insertion rate was 95.2% (899/944), and 899 patients were analyzed. Multiple regression analysis showed that female sex, lower BMI, lower visceral adipose tissue area, lower subcutaneous adipose tissue area, higher constipation score, history of surgery, poor bowel preparation, and fellow involvement were independently associated with longer insertion time. When obesity indices were considered simultaneously, smaller subcutaneous adipose tissue area (p = 0.038), but not lower BMI (p = 0.802) or smaller visceral adipose tissue area (p = 0.856), was associated with longer insertion time; the other aforementioned factors remained associated with longer insertion time. In the subanalysis of normal-weight patients, lower abdominal fat, female sex, constipation, history of abdominal surgery, poor preparation, and fellow involvement were predictors of longer cecal insertion time. Among the obesity indices, high subcutaneous fat accumulation was the best predictive factor for easier passage of the colonoscope, even when body weight was normal.

  4. A State-of-the-Art Review of the Real-Time Computer-Aided Study of the Writing Process

    Science.gov (United States)

    Abdel Latif, Muhammad M.

    2008-01-01

    Writing researchers have developed various methods for investigating the writing process since the 1970s. The early 1980s saw the occurrence of the real-time computer-aided study of the writing process that relies on the protocols generated by recording the computer screen activities as writers compose using the word processor. This article…

  5. Extending 3D near-cloud corrections from shorter to longer wavelengths

    International Nuclear Information System (INIS)

    Marshak, Alexander; Evans, K. Frank; Várnai, Tamás; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model [18]. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov [9] proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  7. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named the "importance function". It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  8. Optimal dose reduction in computed tomography methodologies predicted from real-time dosimetry

    Science.gov (United States)

    Tien, Christopher Jason

    Over the past two decades, computed tomography (CT) has become an increasingly common and useful medical imaging technique. CT is a noninvasive imaging modality with three-dimensional volumetric viewing abilities, all in sub-millimeter resolution. Recent national scrutiny on radiation dose from medical exams has spearheaded an initiative to reduce dose in CT. This work concentrates on dose reduction of individual exams through two recently-innovated dose reduction techniques: organ dose modulation (ODM) and tube current modulation (TCM). ODM and TCM tailor the phase and amplitude of x-ray current, respectively, used by the CT scanner during the scan. These techniques are unique because they can be used to achieve patient dose reduction without any appreciable loss in image quality. This work details the development of the tools and methods featuring real-time dosimetry which were used to provide pioneering measurements of ODM or TCM in dose reduction for CT.

  9. Real-time data acquisition and computation for the SSC using optical and electronic technologies

    International Nuclear Information System (INIS)

    Cantrell, C.D.; Fenyves, E.J.; Wallace, B.

    1990-01-01

    The authors discuss combinations of optical and electronic technologies that may be able to address major data-filtering and data-analysis problems at the SSC. Novel scintillation detectors and optical readout may permit the use of optical processing techniques for trigger decisions and particle tracking. Very-high-speed fiberoptic local-area networks will be necessary to pipeline data from the detectors to the triggers and from the triggers to computers. High-speed, few-processor MIMD supercomputers with advanced fiberoptic I/O technology offer a usable, cost-effective alternative to the microprocessor farms currently proposed for event selection and analysis for the SSC. The use of a real-time operating system that provides standard programming tools will facilitate all tasks, from reprogramming the detectors' event-selection criteria to detector simulation and event analysis. 34 refs., 1 fig., 1 tab

  10. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    Science.gov (United States)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
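
    A minimal sketch of the running diffusion coefficient used in such test-particle studies, kappa_xx(t) = <(x(t) - x(0))^2> / (2t); the "trajectories" here are a toy random walk rather than orbits in simulated turbulence:

        import numpy as np

        rng = np.random.default_rng(1)
        n_particles, n_steps, dt = 2000, 1000, 0.1
        steps = rng.normal(scale=np.sqrt(dt), size=(n_particles, n_steps))
        x = np.cumsum(steps, axis=1)                # displacements x(t) - x(0)
        t = dt * np.arange(1, n_steps + 1)
        kappa_xx = (x**2).mean(axis=0) / (2.0 * t)  # running diffusion coefficient
        print(f"late-time kappa_xx ~ {kappa_xx[-1]:.3f} (expected 0.5 here)")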

  11. Time Synchronization Strategy Between On-Board Computer and FIMS on STSAT-1

    Directory of Open Access Journals (Sweden)

    Seong Woo Kwak

    2004-06-01

    STSAT-1 was launched in September 2003 with the main payload of the Far Ultra-violet Imaging Spectrograph (FIMS). The mission of FIMS is to observe the universe and aurora. In this paper, we suggest a simple and reliable strategy adopted in STSAT-1 to synchronize time between the On-board Computer (OBC) and FIMS. Reflecting the characteristics of STSAT-1, this strategy is devised to maintain the reliability of the satellite system and to reduce implementation cost by using minimized electronic circuits. We suggest two methods with different synchronization resolutions to cope with unexpected faults in space; the backup method, with lower resolution, can be activated when the main method has problems.

  12. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    Directory of Open Access Journals (Sweden)

    Ju-Chi Liu

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.

  13. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    Science.gov (United States)

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.
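
    A minimal sketch of the time-shift correlation idea described in these two records: slide a P300 template across an EEG epoch and collect the correlation at each lag as the feature vector for the ANN classifier. The template shape, sampling rate and lag range are illustrative assumptions:

        import numpy as np

        def time_shift_correlation(epoch, template, max_shift):
            feats = [np.corrcoef(epoch, np.roll(template, s))[0, 1]
                     for s in range(-max_shift, max_shift + 1)]
            return np.array(feats)

        fs = 250                                    # sampling rate in Hz (assumed)
        t = np.arange(0, 0.6, 1.0 / fs)
        template = np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))  # P300-like bump
        rng = np.random.default_rng(2)
        epoch = np.roll(template, 8) + 0.3 * rng.normal(size=t.size)
        feats = time_shift_correlation(epoch, template, max_shift=12)
        print(f"best lag = {feats.argmax() - 12} samples")  # recovers the shift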

  14. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2014-02-20

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.
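
    A minimal sketch of a Lorentz-force limiter of the kind discussed above: rescale the magnetic force wherever it exceeds the hydrodynamic force by more than a chosen factor, which effectively caps the Alfvén wave speed. The threshold and force values are illustrative, not the paper's settings:

        import numpy as np

        def limit_lorentz(F_lorentz, F_hydro, max_ratio=100.0):
            ratio = np.abs(F_lorentz) / np.maximum(np.abs(F_hydro), 1e-30)
            scale = np.where(ratio > max_ratio, max_ratio / ratio, 1.0)
            return F_lorentz * scale

        F_hydro = np.array([1.0, 1.0, 1.0])
        F_lorentz = np.array([10.0, 500.0, 5e4])
        print(limit_lorentz(F_lorentz, F_hydro))   # -> [ 10. 100. 100.]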

  15. Shorter Hospital Stays and Lower Costs for Rivaroxaban Compared With Warfarin for Venous Thrombosis Admissions.

    Science.gov (United States)

    Margolis, Jay M; Deitelzweig, Steven; Kline, Jeffrey; Tran, Oth; Smith, David M; Bookhart, Brahim; Crivera, Concetta; Schein, Jeff

    2016-10-06

    Venous thromboembolism, including deep vein thrombosis and pulmonary embolism, results in a substantial healthcare system burden. This retrospective observational study compared hospital length of stay (LOS) and hospitalization costs for patients with venous thromboembolism treated with rivaroxaban versus those treated with warfarin. Hospitalizations for adult patients with a primary diagnosis of deep vein thrombosis or pulmonary embolism who were initiated on rivaroxaban or warfarin were selected from MarketScan's Hospital Drug Database between November 1, 2012, and December 31, 2013. Patients treated with warfarin were matched 1:1 to patients treated with rivaroxaban using exact and propensity score matching. Hospital LOS, time from first dose to discharge, and hospitalization costs were reported descriptively and with generalized linear models (GLMs). The final study cohorts each included 1223 patients (751 with pulmonary embolism and 472 with deep vein thrombosis). Cohorts were well matched for demographic and clinical characteristics. Mean (±SD) LOS was 3.7±3.1 days for patients taking rivaroxaban and 5.2±3.7 days for patients taking warfarin, confirmed by GLM-adjusted results (rivaroxaban 3.7 days, warfarin 5.3 days, P<0.001). Patients with provoked venous thromboembolism admissions showed longer LOSs (rivaroxaban 5.1±4.5 days, warfarin 6.5±5.6 days, P<0.001) than those with unprovoked venous thromboembolism (rivaroxaban 3.3±2.4 days, warfarin 4.8±2.8 days, P<0.001). Days from first dose to discharge were 2.4±1.7 for patients treated with rivaroxaban and 3.9±3.7 for patients treated with warfarin when initiated with parenteral anticoagulants (P<0.001), and 2.7±1.7 and 3.7±2.1, respectively, when initiated without parenteral anticoagulants (P<0.001). Patients initiated on rivaroxaban incurred significantly lower mean total hospitalization costs ($8688±$9927 versus $9823±$9319, P=0.004), confirmed by modeling (rivaroxaban $8387 [95

  16. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
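
    Point (b) above linearizes the per-step water and energy allocation so that each local sub-problem is a fast linear program. A minimal sketch of one such single-step allocation with scipy, using purely illustrative coefficients:

        import numpy as np
        from scipy.optimize import linprog

        benefit = np.array([3.0, 2.0])       # benefit per unit released to each use
        available = 100.0                    # water available in this time step
        caps = [(0, 70.0), (0, 60.0)]        # per-use demand/capacity ceilings

        res = linprog(c=-benefit,            # linprog minimizes, so negate
                      A_ub=np.ones((1, 2)), b_ub=[available],
                      bounds=caps, method="highs")
        print(f"releases: {res.x}, total benefit: {-res.fun:.1f}")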

  17. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

    Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making effective and efficient decisions on the temperature control plan in advance. Thermal properties of the concrete, water cooling parameters and a construction parameter are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, water cooling parameters and a construction parameter, with the help of two soft computing techniques, namely Genetic Programming (GP, using the software “Eureqa”) and Artificial Neural Networks (ANN). Relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The value of R for the relationship between the predicted and real cooling time is 0.8822 for the GP model and 0.9146 for the ANN model. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory as compared to the measured data.

  18. ADAPTATION OF JOHNSON SEQUENCING ALGORITHM FOR JOB SCHEDULING TO MINIMISE THE AVERAGE WAITING TIME IN CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    SOUVIK PAL

    2016-09-01

    Cloud computing is an emerging paradigm of Internet-centric business computing where Cloud Service Providers (CSPs) provide services to customers according to their needs. The key idea behind cloud computing is on-demand sharing of resources available in the resource pool provided by the CSP, which implies a new emerging business model. The resources are provisioned when jobs arrive. Job scheduling and the minimization of waiting time are challenging issues in cloud computing. When a large number of jobs are requested, they have to wait to be allocated to the servers, which in turn may increase the queue length and also the waiting time. This paper includes a system design for implementation that is concerned with the Johnson Scheduling Algorithm, which provides the optimal sequence. With that sequence, service times can be obtained. The waiting time and queue length can be reduced using a queuing model with multiple servers and finite capacity, which improves the job scheduling model.
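
    For reference, a minimal sketch of the classical Johnson two-machine sequencing rule that the paper adapts: jobs whose first-stage time does not exceed their second-stage time go to the front in increasing first-stage order, the rest to the back in decreasing second-stage order. Job names and times are illustrative:

        def johnson_sequence(jobs):
            """jobs: list of (name, t_stage1, t_stage2) tuples."""
            front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
            back = sorted((j for j in jobs if j[1] > j[2]),
                          key=lambda j: j[2], reverse=True)
            return [j[0] for j in front + back]

        jobs = [("J1", 3, 6), ("J2", 8, 2), ("J3", 4, 4), ("J4", 7, 5)]
        print(johnson_sequence(jobs))   # -> ['J1', 'J3', 'J4', 'J2']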

  19. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    National Research Council Canada - National Science Library

    Sorgaard, Duane

    2004-01-01

    .... A time-to-location algorithm can successfully resolve a geographic location of a computer node using only latency information from known sites and mathematically calculating the Euclidean distance...
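
    A minimal sketch of the general latency-based geolocation idea (not the specific algorithm of this thesis): convert round-trip times from landmark hosts into rough distances and solve for the position that best fits them. The planar coordinates and the km-per-ms conversion factor are simplifying assumptions:

        import numpy as np
        from scipy.optimize import least_squares

        landmarks = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 600.0]])  # km
        rtt_ms = np.array([7.2, 10.8, 10.0])      # measured round-trip times
        km_per_ms = 100.0                         # assumed propagation factor
        dist = km_per_ms * rtt_ms / 2.0           # one-way distance estimates

        def residuals(p):
            return np.linalg.norm(landmarks - p, axis=1) - dist

        sol = least_squares(residuals, x0=landmarks.mean(axis=0))
        print(f"estimated node position: {sol.x.round(0)} km")   # ~[300, 200]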

  20. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    NARCIS (Netherlands)

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a

  1. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

    Excerpts of the report: it presents a statistical linear time-varying system model of high grazing angle sea clutter for computing interference power. In the derivation, one of the sinc factors is approximated using the Dirichlet kernel to facilitate computation of the integral in (6), and the resultant autocorrelation is then found by substituting (18) into (28). The Python code used to generate Figures 1-4 is included in the report.

  2. N-Terminal Domains in Two-Domain Proteins Are Biased to Be Shorter and Predicted to Fold Faster Than Their C-Terminal Counterparts

    Directory of Open Access Journals (Sweden)

    Etai Jacob

    2013-04-01

    Computational analysis of proteomes in all kingdoms of life reveals a strong tendency for N-terminal domains in two-domain proteins to have shorter sequences than their neighboring C-terminal domains. Given that folding rates are affected by chain length, we asked whether the tendency for N-terminal domains to be shorter than their neighboring C-terminal domains reflects selection for faster-folding N-terminal domains. Calculations of absolute contact order, another predictor of folding rate, provide additional evidence that N-terminal domains tend to fold faster than their neighboring C-terminal domains. A possible explanation for this bias, which is more pronounced in prokaryotes than in eukaryotes, is that faster folding of N-terminal domains reduces the risk for protein aggregation during folding by preventing formation of nonnative interdomain interactions. This explanation is supported by our finding that two-domain proteins with a shorter N-terminal domain are much more abundant than those with a shorter C-terminal domain.
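
    A minimal sketch of absolute contact order (ACO), the folding-rate predictor used above: the average sequence separation |i - j| over all native residue contacts. The contact list here is a toy; real calculations derive contacts from atomic coordinates with a distance cutoff:

        def absolute_contact_order(contacts):
            """contacts: iterable of (i, j) residue-index pairs in contact."""
            seps = [abs(i - j) for i, j in contacts]
            return sum(seps) / len(seps)

        toy_contacts = [(1, 5), (2, 9), (3, 20), (10, 30), (12, 40)]
        print(f"ACO = {absolute_contact_order(toy_contacts):.1f} residues")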

  3. Is equity confined to the shorter term projects - and if not, what does it need?

    International Nuclear Information System (INIS)

    Cryan, T.

    1996-01-01

    There are two types of equity investor generally found in shorter term energy projects: energy project developers or sponsors, who view a given project as buying or building a business; and financial investors, who view an investment as buying a stream of cash flows. This article examines the objectives and needs of these two investor groups, and discusses the principal issues which govern their respective decision-making processes. (author)

  4. Computational time-resolved and resonant x-ray scattering of strongly correlated materials

    Energy Technology Data Exchange (ETDEWEB)

    Bansil, Arun [Northeastern Univ., Boston, MA (United States)

    2016-11-09

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source, literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of x-ray science. In particular, our Collaborative Research Team (CRT) focused on developing viable computational schemes for modeling x-ray scattering and photoemission spectra of strongly correlated materials in the time-domain. The vast arsenal of formal/numerical techniques and approaches encompassed by the members of our CRT were brought to bear through appropriate generalizations and extensions to model the pumped state and the dynamics of this non-equilibrium state, and how it can be probed via x-ray absorption (XAS), emission (XES), resonant and non-resonant x-ray scattering, and photoemission processes. We explored the conceptual connections between the time-domain problems and other second-order spectroscopies, such as resonant inelastic x-ray scattering (RIXS) because RIXS may be effectively thought of as a pump-probe experiment in which the incoming photon acts as the pump, and the fluorescent decay is the probe. Alternatively, when the core-valence interactions are strong, one can view K-edge RIXS for example, as the dynamic response of the material to the transient presence of a strong core-hole potential. Unlike an actual pump-probe experiment, here there is no mechanism for adjusting the time-delay between the pump and the probe. However, the core hole

  5. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales, ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike-timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints on polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
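
    A minimal sketch of a pairwise STDP weight update of the kind used in such models: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, both decaying exponentially with the spike-time difference. Amplitudes and time constants are illustrative:

        import numpy as np

        def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
            """dt_ms = t_post - t_pre; returns the synaptic weight change."""
            if dt_ms >= 0:
                return a_plus * np.exp(-dt_ms / tau_ms)   # pre before post: LTP
            return -a_minus * np.exp(dt_ms / tau_ms)      # post before pre: LTD

        for dt in (-40, -10, 5, 25):
            print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.4f}")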

  6. Analog Integrated Circuit Design for Spike Time Dependent Encoder and Reservoir in Reservoir Computing Processors

    Science.gov (United States)

    2018-01-01

    This multidisciplinary effort bridged high-performance computing, nanotechnology, and integrated circuits and systems. Subject terms: neuromorphic computing, neuron design, spike...

  7. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy Ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office [Figure 6: Transfers from all sites in the last 90 days] For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identifying possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  9. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  10. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites. [Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.] [Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.] [Figure 3: The volume of data moved between CMS sites in the last six months.] The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  12. Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles

    Directory of Open Access Journals (Sweden)

    M. Rahman

    2005-01-01

    This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by a permittivity as well as a conductivity profile which vary only with depth; the scattering problem discussed is thus one-dimensional. The modeling tool is divided into two different schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed by a Green’s function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within the medium can be calculated by constructing a Green’s operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point. It is nothing but a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving the inverse scattering problem by reconstructing the permittivity profile of the medium. Though it is possible to use several algorithms to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here due to the advantage that it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined by the time required by the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data by simply using a deconvolution technique. The rest of the task can easily be performed by applying a numerical approach to determine the different profile parameters. Both the solvers have been found to have the...
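
    A minimal sketch of the first inversion step named above, retrieving the reflection kernel by deconvolution: with reflected data modeled as the convolution of the incident pulse with the kernel, a regularized division in the frequency domain recovers the kernel. The pulse, reflector positions and regularization constant are synthetic toys:

        import numpy as np

        n = 256
        i = np.arange(n)
        v = np.exp(-0.5 * ((i - 20) / 4.0) ** 2)              # incident pulse
        R_true = np.zeros(n)
        R_true[40], R_true[90] = 0.5, -0.3                    # two reflectors
        r = np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(R_true)))  # data

        V = np.fft.fft(v)
        eps = 1e-4 * np.max(np.abs(V)) ** 2                   # Tikhonov term
        R_est = np.real(np.fft.ifft(
            np.fft.fft(r) * np.conj(V) / (np.abs(V) ** 2 + eps)))

        peaks = [k for k in range(1, n - 1)
                 if abs(R_est[k]) > 0.04
                 and abs(R_est[k]) >= abs(R_est[k - 1])
                 and abs(R_est[k]) >= abs(R_est[k + 1])]
        print("recovered reflector positions:", peaks)        # -> [40, 90]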

  13. At least 10% shorter C–H bonds in cryogenic protein crystal structures than in current AMBER forcefields

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Yuan-Ping, E-mail: pang@mayo.edu

    2015-03-06

    High resolution protein crystal structures resolved with X-ray diffraction data at cryogenic temperature are commonly used as experimental data to refine forcefields and evaluate protein folding simulations. However, it has been unclear hitherto whether the C–H bond lengths in cryogenic protein structures are significantly different from those defined in forcefields to affect protein folding simulations. This article reports the finding that the C–H bonds in high resolution cryogenic protein structures are 10–14% shorter than those defined in current AMBER forcefields, according to 3709 C–H bonds in the cryogenic protein structures with resolutions of 0.62–0.79 Å. Also, 20 all-atom, isothermal–isobaric, 0.5-μs molecular dynamics simulations showed that chignolin folded from a fully-extended backbone formation to the native β-hairpin conformation in the simulations using AMBER forcefield FF12SB at 300 K with an aggregated native state population including standard error of 10 ± 4%. However, the aggregated native state population with standard error reduced to 3 ± 2% in the same simulations except that C–H bonds were shortened by 10–14%. Furthermore, the aggregated native state populations with standard errors increased to 35 ± 3% and 26 ± 3% when using FF12MC, which is based on AMBER forcefield FF99, with and without the shortened C–H bonds, respectively. These results show that the 10–14% bond length differences can significantly affect protein folding simulations and suggest that re-parameterization of C–H bonds according to the cryogenic structures could improve the ability of a forcefield to fold proteins in molecular dynamics simulations. - Highlights: • Cryogenic crystal structures are commonly used in computational studies of proteins. • C–H bonds in the cryogenic structures are shorter than those defined in forcefields. • A survey of 3709 C–H bonds shows that the cryogenic bonds are 10–14% shorter. • The
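
    For readers who want to reproduce the flavor of the bond-length survey, the minimal numpy sketch below measures C–H distances from atomic coordinates and compares their mean with a nominal forcefield bond length. The coordinates are synthetic and 1.090 Å is used only as an illustrative AMBER-like reference value, not a quote of any specific forcefield parameter.

```python
import numpy as np

def ch_bond_lengths(carbons, hydrogens, cutoff=1.2):
    """Collect all C-H distances below `cutoff` Angstrom as covalent bonds."""
    d = np.linalg.norm(carbons[:, None, :] - hydrogens[None, :, :], axis=-1)
    return d[d < cutoff]

# Synthetic stand-in for crystallographic coordinates: carbons on a 5 A grid,
# the first 32 each carrying one hydrogen at ~0.97 A (a cryogenic-like value).
grid = np.arange(0.0, 20.0, 5.0)
carbons = np.stack(np.meshgrid(grid, grid, grid), axis=-1).reshape(-1, 3)
rng = np.random.default_rng(0)
hydrogens = carbons[:32] + np.array([0.97, 0.0, 0.0]) + rng.normal(0, 0.01, (32, 3))

lengths = ch_bond_lengths(carbons, hydrogens)
reference = 1.090                     # illustrative AMBER-like C-H bond length
print(f"n={lengths.size}, mean={lengths.mean():.3f} A, "
      f"{100 * (1 - lengths.mean() / reference):.1f}% shorter than {reference} A")
```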

  14. Real-Time Evaluation of Breast Self-Examination Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Eman Mohammadi

    2014-01-01

    Full Text Available Breast cancer is the most common cancer among women worldwide and breast self-examination (BSE) is considered as the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate the BSE performance in real-time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as masses, is detected, then this must be reported to a doctor to confirm the presence of this abnormality and proceed to perform other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.

  15. Real-time evaluation of breast self-examination using computer vision.

    Science.gov (United States)

    Mohammadi, Eman; Dadios, Elmer P; Gan Lim, Laurence A; Cabatuan, Melvin K; Naguib, Raouf N G; Avila, Jose Maria C; Oikonomou, Andreas

    2014-01-01

    Breast cancer is the most common cancer among women worldwide and breast self-examination (BSE) is considered as the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate the BSE performance in real-time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as masses, is detected, then this must be reported to a doctor to confirm the presence of this abnormality and proceed to perform other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.
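
    As a rough illustration of the third stage (detecting palpated blocks), the toy sketch below marks grid blocks as palpated when the mean absolute difference between consecutive grayscale frames exceeds a threshold. It is a generic frame-differencing stand-in, not the authors' algorithm; the block size, threshold and frames are all illustrative.

```python
import numpy as np

def update_palpated_blocks(prev_frame, frame, covered, block=16, thresh=12.0):
    """Mark grid blocks as palpated where the mean absolute frame
    difference exceeds `thresh` (a stand-in for palpation detection)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if diff[i:i + block, j:j + block].mean() > thresh:
                covered[i // block, j // block] = True
    return covered

# Toy run on synthetic grayscale frames.
rng = np.random.default_rng(1)
f0 = rng.integers(0, 255, (128, 128), dtype=np.uint8)
f1 = f0.copy()
f1[32:48, 32:48] = 255                    # "palpation" motion in one block
covered = np.zeros((8, 8), dtype=bool)    # 8 x 8 coverage grid
covered = update_palpated_blocks(f0, f1, covered)
print(f"palpated blocks: {covered.sum()} / {covered.size}")
```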

  16. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    International Nuclear Information System (INIS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G

    2005-01-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs
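
    The continuous-distribution fit lends itself to a compact sketch: expand the washout curve over a fixed grid of candidate time constants and solve for non-negative weights. The code below uses scipy's non-negative least squares on synthetic data; the grid, noise level and curve are illustrative, not the study's data or exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Recover a distribution of time constants from y(t) = sum_k a_k exp(-t/tau_k).
t = np.linspace(0, 10, 200)                      # s
y_true = 0.7 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 3.0)
y = y_true + np.random.default_rng(2).normal(0, 0.005, t.size)

taus = np.logspace(-1, 1, 40)                    # candidate TCs, 0.1-10 s
A = np.exp(-t[:, None] / taus[None, :])          # design matrix
weights, residual = nnls(A, y)                   # non-negative least squares
for tau, w in zip(taus, weights):
    if w > 0.02:                                 # print the dominant TCs
        print(f"tau = {tau:5.2f} s, weight = {w:.2f}")
```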

  17. [Fluoroscopy dose reduction of computed tomography guided chest interventional radiology using real-time iterative reconstruction].

    Science.gov (United States)

    Hasegawa, Hiroaki; Mihara, Yoshiyuki; Ino, Kenji; Sato, Jiro

    2014-11-01

    The purpose of this study was to evaluate the radiation dose reduction to patients and radiologists in computed tomography (CT) guided examinations for the thoracic region using CT fluoroscopy. Image quality evaluation of the real-time filtered back-projection (RT-FBP) images and the real-time adaptive iterative dose reduction (RT-AIDR) images was carried out on noise and artifacts that were considered to affect the CT fluoroscopy. The image standard deviation was improved in the fluoroscopy setting with less than 30 mA on 120 kV. With regard to the evaluation of artifact visibility and the amount generated by the needle attached to the chest phantom, there was no significant difference between the RT-FBP images with 120 kV, 20 mA and the RT-AIDR images with low-dose conditions (greater than 80 kV, 30 mA and less than 120 kV, 20 mA). The results suggest that it is possible to reduce the radiation dose by approximately 34% at the maximum using RT-AIDR while maintaining image quality equivalent to the RT-FBP images with 120 kV, 20 mA.

  18. Self-motion perception: assessment by real-time computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  19. Computing the Quartet Distance Between Evolutionary Trees in Time O(n log n)

    DEFF Research Database (Denmark)

    Brodal, Gerth Sølfting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm

    2003-01-01

    Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris, and Meacham. The quartet distance between two unrooted evolutionary trees is the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. In this paper we present an algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n²).
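
    For contrast with the O(n log n) algorithm, a brute-force quartet distance is easy to state: for every set of four leaves, the induced topology in a tree with unit edge lengths follows from the four-point condition (the pairing with the smallest sum of path distances is the split). The sketch below implements this naive version on adjacency-list trees; it is illustrative only, exponentially slower than the paper's method, and hopeless for large n.

```python
from itertools import combinations
from collections import deque

def leaf_distances(adj, leaves):
    """BFS path lengths between all pairs of leaves in an unweighted tree."""
    dist = {}
    for s in leaves:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        for t in leaves:
            dist[s, t] = d[t]
    return dist

def quartet_topology(d, a, b, c, e):
    """Four-point condition: the pairing with the smallest sum is the split."""
    sums = {(a, b): d[a, b] + d[c, e],
            (a, c): d[a, c] + d[b, e],
            (a, e): d[a, e] + d[b, c]}
    return min(sums, key=sums.get)

def quartet_distance(adj1, adj2, leaves):
    d1, d2 = leaf_distances(adj1, leaves), leaf_distances(adj2, leaves)
    return sum(quartet_topology(d1, *q) != quartet_topology(d2, *q)
               for q in combinations(leaves, 4))

# Two 5-leaf trees differing by swapping leaves 2 and 3.
t1 = {1: ['x'], 2: ['x'], 3: ['y'], 4: ['z'], 5: ['z'],
      'x': [1, 2, 'y'], 'y': [3, 'x', 'z'], 'z': [4, 5, 'y']}
t2 = {1: ['x'], 3: ['x'], 2: ['y'], 4: ['z'], 5: ['z'],
      'x': [1, 3, 'y'], 'y': [2, 'x', 'z'], 'z': [4, 5, 'y']}
print(quartet_distance(t1, t2, [1, 2, 3, 4, 5]))   # prints 2
```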

  20. Real-time radiography, digital radiography, and computed tomography for nonintrusive waste drum characterization

    International Nuclear Information System (INIS)

    Martz, H.E.; Schneberk, D.J.; Roberson, G.P.

    1994-07-01

    We are investigating and developing the application of x-ray nondestructive evaluation (NDE) and gamma-ray nondestructive assay (NDA) methods to nonintrusively characterize 208-liter (55-gallon) mixed waste drums. Mixed wastes contain both hazardous and radioactive materials. We are investigating the use of x-ray NDE methods to verify the content of documented waste drums and determine if they can be used to identify hazardous and nonconforming materials. These NDE methods are also being used to help waste certification and hazardous waste management personnel at LLNL to verify/confirm and/or determine the contents of waste. The gamma-ray NDA method is used to identify the intrinsic radioactive source(s) and to accurately quantify its strength. The NDA method may also be able to identify some hazardous materials such as heavy metals. Also, we are exploring techniques to combine both NDE and NDA data sets to yield the maximum information from these nonintrusive, waste-drum characterization methods. In this paper, we report on our x-ray NDE R&D activities, while our gamma-ray NDA activities are reported elsewhere in the proceedings. We have developed a data acquisition scanner for x-ray NDE real-time radiography (RTR), as well as digital radiography (DR) and transmission computed tomography (TCT), along with associated computational techniques for image reconstruction, analysis, and display. We are using this scanner with real-waste drums at Lawrence Livermore National Laboratory (LLNL). In this paper, we discuss some issues associated with x-ray imaging, describe the design and construction of an inexpensive NDE drum scanner, provide representative DR and TCT results of both mock- and real-waste drums, and end with a summary of our efforts and future directions. The results of these scans reveal that RTR, DR, and CT imaging techniques can be used in concert to provide valuable information about the interior of low-level-, transuranic-, and mock-waste drums without

  1. Time evolution of a quenched binary alloy: computer simulation of a three-dimensional model system

    International Nuclear Information System (INIS)

    Marro, J.; Bortz, A.B.; Kalos, M.H.; Lebowitz, J.L.; Sur, A.

    1976-01-01

    Results are presented of computer simulation of the time evolution for a model of a binary alloy, such as ZnAl, following quenching. The model system is a simple cubic lattice the sites of which are occupied either by A or B particles. There is a nearest neighbor interaction favoring segregation into an A rich and a B rich phase at low temperatures, T < T_c. Starting from a random configuration, T >> T_c, the system is quenched to and evolves at a temperature T < T_c. The evolution takes place through exchanges between A and B atoms on nearest neighbor sites. The probability of such an exchange is assumed proportional to exp(-βΔU)[1 + exp(-βΔU)]^(-1), where β = (k_B T)^(-1) and ΔU is the change in energy resulting from the exchange. In the simulations either a 30 x 30 x 30 or a 50 x 50 x 50 lattice is used with various fractions of the sites occupied by A particles. The evolution of the Fourier transform of the spherically averaged structure function S(k,t), the energy, and the cluster distribution were computed. Comparison is made with various theories of this process and with some experiments. It is found in particular that the results disagree with the predictions of the linearized Cahn-Hilliard theory of spinodal decomposition. The qualitative form of the results appears to be unaffected if the change in the positions of the atoms takes place via a vacancy mechanism rather than through direct exchanges
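
    The quoted exchange rule is straightforward to animate. Below is a minimal Kawasaki-exchange Monte Carlo sketch on a small periodic cubic lattice using the stated heat-bath acceptance probability exp(-βΔU)/[1 + exp(-βΔU)]; the lattice size, coupling and step count are illustrative, and the vacancy mechanism is not included.

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, beta = 10, 1.0, 1.0                    # lattice size, coupling, 1/(k_B T)
spins = rng.choice([-1, 1], size=(L, L, L))  # +1 = A atom, -1 = B atom

def site_energy(s, i, j, k):
    """Energy of the bonds touching site (i,j,k): -J per aligned neighbor,
    so J > 0 favors segregation into A-rich and B-rich phases."""
    nb = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k] + s[i, (j+1) % L, k] +
          s[i, (j-1) % L, k] + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
    return -J * s[i, j, k] * nb

for step in range(50_000):
    i, j, k = rng.integers(0, L, 3)
    p = [i, j, k]
    p[rng.integers(0, 3)] = (p[rng.integers(0, 3)] + 1) % L if False else p[0]
    # pick a nearest neighbor along a random axis
    axis = rng.integers(0, 3)
    p = [i, j, k]
    p[axis] = (p[axis] + 1) % L
    a, b = (i, j, k), tuple(p)
    if spins[a] == spins[b]:
        continue                             # exchanging identical atoms: no-op
    e_old = site_energy(spins, *a) + site_energy(spins, *b)
    spins[a], spins[b] = spins[b], spins[a]
    # The double-counted a-b bond cancels in dU (its product is unchanged).
    dU = site_energy(spins, *a) + site_energy(spins, *b) - e_old
    # Heat-bath acceptance: exp(-beta*dU) / (1 + exp(-beta*dU)).
    if rng.random() >= np.exp(-beta * dU) / (1 + np.exp(-beta * dU)):
        spins[a], spins[b] = spins[b], spins[a]   # reject: undo the exchange

print(f"fraction of A sites: {(spins == 1).mean():.2f}")
```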

  2. Computational Fluid Dynamics Study on the Effects of RATO Timing on the Scale Model Acoustic Test

    Science.gov (United States)

    Nielsen, Tanner; Williams, B.; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The SLS lift off configuration consists of four RS-25 liquid thrusters on the core stage, with two solid boosters connected to each side. Past experience with scale model testing at MSFC (in ER42) has shown that there is a delay in the ignition of the Rocket Assisted Take Off (RATO) motor, which is used as the 5% scale analog of the solid boosters, after the signal to ignite is given. This delay can range from 0 to 16.5 ms. While such a small delay may be insignificant in the case of the full scale SLS, it can significantly alter the data obtained during the SMAT due to the much smaller geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs during full scale. However, the SMAT geometry is much smaller, allowing the pressure waves to move down the exhaust duct, through the trench, and impact the vehicle model much faster than occurs at full scale. To better understand the effect of the RATO timing simultaneity on the SMAT IOP test data, a computational fluid dynamics (CFD) analysis was performed using the Loci/CHEM CFD software program. Five different timing offsets, based on RATO ignition delay statistics, were simulated. A variety of results and comparisons will be given, assessing the overall effect of RATO timing simultaneity on the SMAT overpressure environment.

  3. A Computational Model for Real-Time Calculation of Electric Field due to Transcranial Magnetic Stimulation in Clinics

    Directory of Open Access Journals (Sweden)

    Alessandra Paffi

    2015-01-01

    Full Text Available The aim of this paper is to propose an approach for an accurate and fast (real-time) computation of the electric field induced inside the whole brain volume during a transcranial magnetic stimulation (TMS) procedure. The numerical solution implements the admittance method for a discretized realistic brain model derived from Magnetic Resonance Imaging (MRI). Results are in a good agreement with those obtained using commercial codes and require much less computational time. An integration of the developed code with neuronavigation tools will permit real-time evaluation of the stimulated brain regions during the TMS delivery, thus improving the efficacy of clinical applications.
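
    The admittance method reduces the field problem to a large sparse linear system over a discretized conductivity map. The 2D toy below is only a structural sketch, and not the paper's solver (which is 3D, MRI-derived, and driven by the TMS coil's induced electromotive force rather than fixed boundary potentials): it assembles a nodal conductance matrix for a grid with a low-conductivity inclusion and solves it with scipy's sparse solver.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

n = 30                                        # 30 x 30 node grid
sigma = np.ones((n, n))
sigma[10:20, 10:20] = 0.2                     # low-conductivity inclusion
idx = lambda i, j: i * n + j

G = lil_matrix((n * n, n * n))
b = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        u = idx(i, j)
        if j == 0 or j == n - 1:              # fixed potentials, left/right edges
            G[u, u] = 1.0
            b[u] = 1.0 if j == 0 else 0.0
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                # Edge admittance: harmonic mean of node conductivities.
                g = 2.0 / (1 / sigma[i, j] + 1 / sigma[ii, jj])
                G[u, u] += g
                G[u, idx(ii, jj)] -= g

v = spsolve(csr_matrix(G), b).reshape(n, n)   # nodal potentials
Ei, Ej = np.gradient(-v)                      # E = -grad(V) on the unit grid
print(f"mean |E| inside inclusion: {np.hypot(Ei, Ej)[12:18, 12:18].mean():.3f}")
```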

  4. A practical O(n log2 n) time algorithm for computing the triplet distance on binary trees

    DEFF Research Database (Denmark)

    Sand, Andreas; Pedersen, Christian Nørgaard Storm; Mailund, Thomas

    2013-01-01

    The triplet distance is a distance measure that compares two rooted trees on the same set of leaves by enumerating all sub-sets of three leaves and counting how often the induced topologies of the tree are equal or different. We present an algorithm that computes the triplet distance between two rooted binary trees in time O(n log² n). The algorithm is related to an algorithm for computing the quartet distance between two unrooted binary trees in time O(n log n). While the quartet distance algorithm has a very severe overhead in the asymptotic time complexity that makes it impractical compared...

  5. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    Science.gov (United States)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed, to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  6. Real Time Computer for Plugging Indicator Control of Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Manimaran, M.; Manoj, P.; Shanmugam, A.; Murali, N.; Satya Murty, S.A.V.

    2013-06-01

    Prototype Fast Breeder Reactor (PFBR) is in the advanced stage of construction at Kalpakkam, India. Liquid sodium is used as the coolant to transfer the heat produced in the reactor core to the steam-water circuit. Impurities present in the sodium are removed using a purification circuit. The plugging indicator is a device used to measure the purity of the sodium. A Versa Module Europa bus based Real Time Computer (RTC) system is used for plugging indicator control. A hot-standby architecture consisting of a dual redundant RTC system with a switch-over logic system is adopted to achieve fault tolerance. The plugging indicator can be controlled in two modes, namely continuous and discontinuous. Software-based Proportional-Integral-Derivative (PID) algorithms are developed for plugging indicator control, wherein the set point changes dynamically for every scan interval of the RTC system. Set points and PID constants are kept configurable at runtime in order to control the process very efficiently, which calls for reliable communication between the RTC system and the control station; hence the TCP/IP protocol is adopted. Performance of the RTC system for plugging indicator control was thoroughly studied in the laboratory by simulating the inputs and monitoring the control outputs. The control outputs were also monitored for different PID constants. Continuous and discontinuous mode plots were generated. (authors)
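
    The software PID loop described here is conventional. A minimal sketch of a discrete PID whose set point changes every scan interval could look like the following; the gains, plant model and numbers are all illustrative, not the PFBR values.

```python
class PID:
    """Discrete PID with runtime-reconfigurable gains and set point."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, set_point, measurement):
        error = set_point - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy scan loop: the set point is ramped down, one step per scan interval.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
temperature = 150.0
for scan in range(10):
    set_point = 150.0 - 5.0 * scan        # set point changes every scan
    actuator = pid.update(set_point, temperature)
    temperature += 0.1 * actuator         # crude first-order plant response
    print(f"scan {scan}: sp={set_point:.0f} T={temperature:.1f}")
```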

  7. Time sequential single photon emission computed tomography studies in brain tumour using thallium-201

    International Nuclear Information System (INIS)

    Ueda, Takashi; Kaji, Yasuhiro; Wakisaka, Shinichiro; Watanabe, Katsushi; Hoshi, Hiroaki; Jinnouchi, Seishi; Futami, Shigemi

    1993-01-01

    Time sequential single photon emission computed tomography (SPECT) studies using thallium-201 were performed in 25 patients with brain tumours to evaluate the kinetics of thallium in the tumour and the biological malignancy grade preoperatively. After acquisition and reconstruction of SPECT data from 1 min post injection to 48 h (1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 15-20 min, followed by 4-6, 24 and 48 h), the thallium uptake ratio in the tumour versus the homologous contralateral area of the brain was calculated and compared with findings of X-ray CT, magnetic resonance imaging, cerebral angiography and histological investigations. Early uptake of thallium in tumours was related to tumour vascularity and the disruption of the blood-brain barrier. High and rapid uptake and slow reduction of thallium indicated a hypervascular malignant tumour; however, high and rapid uptake but rapid reduction of thallium indicated a hypervascular benign tumour, such as meningioma. Hypovascular and benign tumours tended to show low uptake and slow reduction of thallium. Long-lasting retention or uptake of thallium indicates tumour malignancy. (orig.)

  8. Anisakiasis presenting to the ED: clinical manifestations, time course, hematologic tests, computed tomographic findings, and treatment.

    Science.gov (United States)

    Takabayashi, Takeshi; Mochizuki, Toshiaki; Otani, Norio; Nishiyama, Kei; Ishimatsu, Shinichi

    2014-12-01

    The prevalence of anisakiasis is rare in the United States and Europe compared with that in Japan, with few reports of its presentation in the emergency department (ED). This study describes the clinical, hematologic, and computed tomographic (CT) characteristics, and the treatment, of gastric and small intestinal anisakiasis patients in the ED. We retrospectively reviewed the data of 83 consecutive anisakiasis presentations in our ED between 2003 and 2012. Gastric anisakiasis was endoscopically diagnosed with the Anisakis polypide. Small intestinal anisakiasis was diagnosed based on both hematologic (Anisakis antibody) and CT findings. Of the 83 cases, 39 had gastric anisakiasis and 44 had small intestinal anisakiasis based on our diagnostic criteria. Although all patients had abdominal pain, the gastric anisakiasis group developed symptoms significantly earlier (peaking within 6 hours) than the small intestinal anisakiasis group (peaking within 48 hours), and fewer patients with gastric anisakiasis needed admission therapy (5% vs 57%). CT findings revealed edematous wall thickening in all patients, and ascites and phlegmon of the mesenteric fat were more frequently observed in the small intestinal anisakiasis group. In the ED, early and accurate diagnosis of anisakiasis is important for treatment and patient counseling, and diagnosis can be facilitated by a history of raw seafood ingestion, evaluation of the time-to-symptom development, and classic CT findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Microscopic description of fission dynamics: Toward a 3D computation of the time dependent GCM equation

    Directory of Open Access Journals (Sweden)

    Regnier D.

    2017-01-01

    Full Text Available Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). However, the computational cost of this method makes it difficult to perform calculations with more than two collective degrees of freedom. Meanwhile, it is well-known from both semi-phenomenological and fully microscopic approaches that at least four or five dimensions may play a role in the dynamics of fission. To overcome this limitation, we develop the code FELIX, which aims to solve the TDGCM+GOA equation for an arbitrary number of collective variables. In this talk, we report the recent progress toward this enriched description of fission dynamics. We will briefly present the numerical methods adopted as well as the status of the latest version of FELIX. Finally, we will discuss fragment yields obtained within this approach for the low energy fission of major actinides.

  10. Microscopic description of fission dynamics: Toward a 3D computation of the time dependent GCM equation

    Science.gov (United States)

    Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.

    2017-09-01

    Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). However, the computational cost of this method makes it difficult to perform calculations with more than two collective degrees of freedom. Meanwhile, it is well-known from both semi-phenomenological and fully microscopic approaches that at least four or five dimensions may play a role in the dynamics of fission. To overcome this limitation, we develop the code FELIX, which aims to solve the TDGCM+GOA equation for an arbitrary number of collective variables. In this talk, we report the recent progress toward this enriched description of fission dynamics. We will briefly present the numerical methods adopted as well as the status of the latest version of FELIX. Finally, we will discuss fragment yields obtained within this approach for the low energy fission of major actinides.

  11. Time series analysis of reference crop evapotranspiration using soft computing techniques for Ganjam District, Odisha, India

    Science.gov (United States)

    Patra, S. R.

    2017-12-01

    Evapotranspiration (ET0) influences water resources and is considered a vital process in arid hydrologic frameworks. It is one of the most important measures for identifying drought conditions. Therefore, time series forecasting of evapotranspiration is very important in order to help decision makers and water system managers build up proper systems to sustain and manage water resources. Time series analysis assumes that history repeats itself, hence by analysing past values, better choices, or forecasts, can be made for the future. Ten years of ET0 data were used as part of this study to ensure a satisfactory forecast of monthly values. In this study, three models are presented: an ARIMA mathematical model, an artificial neural network model, and a support vector machine model. These three models are used for forecasting monthly reference crop evapotranspiration based on ten years of past historical records (1991-2001) of measured evaporation at the Ganjam region, Odisha, India, without considering climate data. The developed models will allow water resource managers to predict up to 12 months ahead, making these predictions very useful for optimizing the resources needed for effective water resources management. In this study multistep-ahead prediction is performed, which is more complex and troublesome than one-step-ahead prediction. Our investigation suggested that nonlinear relationships may exist among the monthly indices, so that the ARIMA model might not be able to effectively extract the full relationship hidden in the historical data. Support vector machines are potentially helpful time series forecasting strategies on account of their strong nonlinear mapping capability and resistance to complexity in forecasting data. SVMs have great learning capability in time series modelling compared to ANNs. For instance, SVMs implement the structural risk minimization principle, which allows better generalization as compared to neural networks that use the empirical risk
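
    The ARIMA branch of such a comparison takes only a few lines with statsmodels. The sketch below fits a seasonal ARIMA to synthetic monthly ET0 and produces the 12-month multistep-ahead forecast discussed in the abstract; the model order, seasonal order and data are illustrative, not the study's fitted model.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for ten years of monthly ET0 (the study used measured
# records for 1991-2001): a 12-month seasonal cycle plus noise, in mm/day.
rng = np.random.default_rng(4)
months = np.arange(120)
et0 = 5 + 2 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.3, 120)

model = ARIMA(et0, order=(2, 0, 1), seasonal_order=(1, 0, 0, 12))
fit = model.fit()
forecast = fit.forecast(steps=12)          # 12-month multistep-ahead forecast
print(np.round(forecast, 2))
```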

  12. The return trip is felt shorter only postdictively: A psychophysiological study of the return trip effect [corrected].

    Directory of Open Access Journals (Sweden)

    Ryosuke Ozawa

    Full Text Available The return trip often seems shorter than the outward trip even when the distance and actual time are identical. To date, studies on the return trip effect have failed to confirm its existence in a situation that is ecologically valid in terms of environment and duration. In addition, physiological influences as part of fundamental timing mechanisms in daily activities have not been investigated in the time perception literature. The present study compared round-trip and non-round-trip conditions in an ecological situation. Time estimation in real time and postdictive estimation were used to clarify the situations where the return trip effect occurs. Autonomic nervous system activity was evaluated from the electrocardiogram using the Lorenz plot to demonstrate the relationship between time perception and physiological indices. The results suggest that the return trip effect is caused only postdictively. Electrocardiographic analysis revealed that the two experimental conditions induced different responses in the autonomic nervous system, particularly in sympathetic nervous function, and that parasympathetic function correlated with postdictive timing. To account for the main findings, the discrepancy between the two time estimates is discussed in the light of timing strategies, i.e., prospective and retrospective timing, which reflect different emphasis on attention and memory processes. Also each timing method, i.e., the verbal estimation, production or comparative judgment, has different characteristics such as the quantification of duration in time units or knowledge of the target duration, which may be responsible for the discrepancy. The relationship between postdictive time estimation and the parasympathetic nervous system is also discussed.

  13. Comparative evaluation of image quality in computed radiology systems using imaging plates with different usage time

    International Nuclear Information System (INIS)

    Lazzaro, M.V.; Luz, R.M. da; Capaverde, A.S.; Silva, A.M. Marques da

    2015-01-01

    Computed Radiology (CR) systems use imaging plates (IPs) for latent image acquisition. From the standpoint of quality control (QC) of these systems, the usable lifetime of imaging plates is undetermined. Different recommendations and publications on the subject suggest tests to evaluate these systems. The objective of this study is to compare the image quality of the IPs of a CR system in a mammography service, considering usage time and consistency of the assessments. Eight IPs were used, divided into two groups: the first group included 4 IPs with 3 years of use (Group A); the second group consisted of 4 new IPs with no previous exposure (Group B). The tests used to assess IP quality were: Uniformity, Differential Signal to Noise Ratio (SDNR), Ghost Effect and Figure of Merit (FOM). Statistical results show that the proposed tests are efficient in assessing the image quality obtained with CR systems in mammography and can be used as determining factors for the replacement of IPs. Moreover, comparison of the two sets of IPs led to the replacement of the entire set of IPs with 3 years of use. This work demonstrates the importance of efficient quality control, not only with regard to the quality of the IPs used, but for the acquisition system as a whole. Following this work, these tests will be conducted on an annual basis, with future work already targeted at monitoring the wear of the Group B IPs and creating a baseline for analysis and future replacements. (author)

  14. Reemission spectra and inelastic processes at interaction of attosecond and shorter duration electromagnetic pulses with atoms

    International Nuclear Information System (INIS)

    Makarov, D.N.; Matveev, V.I.

    2017-01-01

    Inelastic processes and the reemission of attosecond and shorter electromagnetic pulses by atoms have been considered within the analytical solution of the Schrödinger equation in the sudden perturbation approximation. A method of calculations with the exact inclusion of spatial inhomogeneity of the field of an ultrashort pulse and the momenta of photons in the reemission processes has been developed. The probabilities of inelastic processes and spectra of reemission of ultrashort electromagnetic pulses by one- and many-electron atoms have been calculated. The results have been presented in the form of analytical formulas.

  15. Investigating the influence of eating habits, body weight and television programme preferences on television viewing time and domestic computer usage.

    Science.gov (United States)

    Raptou, Elena; Papastefanou, Georgios; Mattas, Konstadinos

    2017-01-01

    The present study explored the influence of eating habits, body weight and television programme preference on television viewing time and domestic computer usage, after adjusting for sociodemographic characteristics and home media environment indicators. In addition, potential substitution or complementarity in screen time was investigated. Individual level data were collected via questionnaires that were administered to a random sample of 2,946 Germans. The econometric analysis employed a seemingly unrelated bivariate ordered probit model to conjointly estimate television viewing time and time engaged in domestic computer usage. Television viewing and domestic computer usage represent two independent behaviours in both genders and across all age groups. Dietary habits have a significant impact on television watching with less healthy food choices associated with increasing television viewing time. Body weight is found to be positively correlated with television screen time in both men and women, and overweight individuals have a higher propensity for heavy television viewing. Similar results were obtained for age groups where an increasing body mass index (BMI) in adults over 24 years old is more likely to be positively associated with a higher duration of television watching. With respect to dietary habits of domestic computer users, participants aged over 24 years of both genders seem to adopt more healthy dietary patterns. A downward trend in the BMI of domestic computer users was observed in women and adults aged 25-60 years. On the contrary, young domestic computer users 18-24 years old have a higher body weight than non-users. Television programme preferences also affect television screen time with clear differences to be observed between genders and across different age groups. In order to reduce total screen time, health interventions should target different types of screen viewing audiences separately.

  16. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  17. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Full Text Available Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
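
    A stripped-down RTM-style sampling loop is easy to prototype with psutil as a stand-in for the paper's library instrumentation and hardware performance counters; the thresholds and the reconfiguration hook below are illustrative only.

```python
import psutil

# Sample system-wide and per-process metrics once per second and flag
# overload so a scheduler could adapt the computing configuration.
proc = psutil.Process()                       # monitor this process
for _ in range(5):
    cpu_total = psutil.cpu_percent(interval=1.0)   # % over the last second
    per_core = psutil.cpu_percent(percpu=True)     # per-core utilization
    rss_mb = proc.memory_info().rss / 1e6          # resident memory, MB
    if cpu_total > 90 or rss_mb > 4000:            # illustrative thresholds
        print("overloaded -> request reconfiguration")
    print(f"cpu={cpu_total:5.1f}% cores={per_core} rss={rss_mb:.0f} MB")
```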

  18. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    Confabulation based real-time anomaly detection for wide-area surveillance using heterogeneous high performance computing architecture (Syracuse; contract number FA8750-12-1-0251). The approach uses heterogeneous processors including graphic processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable...

  19. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    Science.gov (United States)

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. In order to become computationally efficient enough for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.

  20. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
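
    The interval-arithmetic ingredient can be illustrated in a few lines. The toy class below bounds a multivariate polynomial over a box; unlike the rigorous extension used in the paper, it does not round outward, and it exhibits the usual dependency-problem overestimation (x*x is enclosed by [-2, 4] rather than the true [0, 4] on [-1, 2]).

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # The product interval is spanned by the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# Enclose f(x, y) = x*y + x*x over the box [-1, 2] x [0, 1].
x, y = Interval(-1.0, 2.0), Interval(0.0, 1.0)
print(x * y + x * x)    # guaranteed to contain the true range of f on the box
```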

  1. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    NARCIS (Netherlands)

    Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the

  2. Towards a real time computation of the dose in a phantom segmented into homogeneous meshes

    International Nuclear Information System (INIS)

    Blanpain, B.

    2009-10-01

    Automatic radiation therapy treatment planning necessitates a very fast computation of the dose delivered to the patient. We propose to compute the dose by segmenting the patient's phantom into homogeneous meshes and by associating, with the meshes, projections to dose distributions pre-computed in homogeneous phantoms, along with weights managing heterogeneities. The dose computation is divided into two steps. The first step operates on the meshes: projections and weights are set according to physical and geometrical criteria. The second step operates on the voxels: the dose is computed by evaluating the functions previously associated with their mesh. This method is very fast, in particular when there are few points of interest (several hundred). In this case, results are obtained in less than one second. With such performance, automatic treatment planning becomes feasible in practice. (author)
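
    A toy version of the two-step scheme makes the idea concrete: step 1 attaches to each homogeneous mesh a dose curve pre-computed in a homogeneous phantom plus a heterogeneity weight; step 2 evaluates that function per voxel. Every name and number below is illustrative, not the thesis' actual projections or weights.

```python
import numpy as np

depth = np.linspace(0.0, 20.0, 201)                     # cm
reference_dose = np.exp(-0.06 * depth) * (depth > 0.5)  # crude homogeneous curve

meshes = {  # mesh id -> (entry depth offset in cm, heterogeneity weight)
    "soft_tissue": (0.0, 1.00),
    "lung":        (5.0, 1.35),   # low density: deeper effective penetration
    "bone":        (9.0, 0.80),
}

def voxel_dose(mesh_id, depth_in_mesh):
    """Step 2: evaluate the function attached to the voxel's mesh."""
    offset, weight = meshes[mesh_id]
    return weight * np.interp(offset + depth_in_mesh, depth, reference_dose)

print(f"dose 2 cm into the 'lung' mesh: {voxel_dose('lung', 2.0):.3f} (a.u.)")
```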

  3. Shorter preschool, leukocyte telomere length is associated with obesity at age 9 in Latino children.

    Science.gov (United States)

    Kjaer, T W; Faurholt-Jepsen, D; Mehta, K M; Christensen, V B; Epel, E; Lin, J; Blackburn, E; Wojcicki, J M

    2018-04-01

    The aim of this study was to determine the potential role of leukocyte telomere length as a biomarker for development of childhood obesity in a low-income Latino population. A birth cohort of Latino children (N = 201) in San Francisco (recruited May 2006-May 2007) was followed until age 9 and assessed annually for obesity and dietary intake. Leukocyte telomere length was measured at 4 and 5 years (n = 102) and assessed as a predictor for obesity at age 9, adjusting for known risk factors. Furthermore, leukocyte telomere length at age 4 and 5 was evaluated as a possible mediator of the relationship between excessive sugar-sweetened beverage consumption and obesity at age 9. Shorter leukocyte telomere length in preschoolers was associated with obesity at age 9 (adjusted odds ratio 0.35, 95% confidence interval 0.13-0.94) after adjustment for known risk factors. Telomere length mediated 11% of the relationship between excessive sugar-sweetened beverage consumption and obesity. Shorter leukocyte telomere length may be an indicator of future obesity risk in high-risk populations as it is particularly sensitive to damage from oxidative stress exposure, including those from sugar-sweetened beverages. © 2017 World Obesity Federation.

  4. Applicability of the shorter ‘Bangladesh regimen’ in high multidrug-resistant tuberculosis settings

    Directory of Open Access Journals (Sweden)

    Giovanni Sotgiu

    2017-03-01

    Full Text Available In spite of the recent introduction of two new drugs (delamanid and bedaquiline) and a few repurposed compounds to treat multidrug-resistant and extensively drug-resistant tuberculosis (MDR- and XDR-TB), clinicians are facing increasing problems in designing effective regimens in severe cases. Recently a 9 to 12-month regimen (known as the 'Bangladesh regimen') proved to be effective in treating MDR-TB cases. It included an initial phase of 4 to 6 months of kanamycin, moxifloxacin, prothionamide, clofazimine, pyrazinamide, high-dose isoniazid, and ethambutol, followed by 5 months of moxifloxacin, clofazimine, pyrazinamide, and ethambutol. However, recent evidence from Europe and Latin America identified prevalences of resistance to the first-line drugs in this regimen (ethambutol and pyrazinamide) exceeding 60%, and of prothionamide exceeding 50%. Furthermore, the proportions of resistance to the two most important pillars of the regimen – quinolones and kanamycin – were higher than 40%. Overall, only 14 out of 348 adult patients (4.0%) were susceptible to all of the drugs composing the regimen, and were therefore potentially suitable for the 'shorter regimen'. A shorter, cheaper, and well-tolerated MDR-TB regimen is likely to impact the number of patients treated and improve adherence if prescribed to the right patients through the systematic use of rapid MTBDRsl testing.

  5. Are Shorter Versions of the Positive and Negative Syndrome Scale (PANSS) Doable? A Critical Review.

    Science.gov (United States)

    Lindenmayer, Jean-Pierre

    2017-12-01

    The Positive and Negative Syndrome Scale (PANSS) is a well-established assessment tool for measuring symptom severity in schizophrenia. Researchers and clinicians have been interested in the development of a short version of the PANSS that could reduce the burden of its administration for patients and raters. The author presents a comprehensive overview of existing brief PANSS measures, including their strengths and limitations, and discusses some possible next steps. There are two available scales that offer a reduced number of original PANSS items: PANSS-14 and PANSS-19; and two shorter versions that include six items: Brief PANSS and PANSS-6. The PANSS-6 has been tested quite extensively in established trials and appears to demonstrate high sensitivity to change and an established cut off definition for remission. Prospective testing in new antipsychotic treatment trials is still required for these shorter versions of PANSS. In addition, they need to be supplemented with interview guides, as well as provide conversion formulas to translate total scores from the short PANSS versions to the PANSS-30. Both short versions of the PANSS are essentially designed to evaluate response to antipsychotic treatment. Future PANSS scale development needs to address specific measurement of treatment-responsive positive symptoms by including treatment-sensitive items, as well as illness-phase specific PANSS tools.

  6. Dual-time-point Imaging and Delayed-time-point Fluorodeoxyglucose-PET/Computed Tomography Imaging in Various Clinical Settings

    DEFF Research Database (Denmark)

    Houshmand, Sina; Salavati, Ali; Antonsen Segtnan, Eivind

    2016-01-01

    The techniques of dual-time-point imaging (DTPI) and delayed-time-point imaging, which are mostly used to distinguish between inflammatory and malignant diseases, have increased the specificity of fluorodeoxyglucose (FDG)-PET for diagnosis and prognosis of certain diseases. A gradually incr...

  7. Computing Conditional VaR using Time-varying Copulas

    Directory of Open Access Journals (Sweden)

    Beatriz Vaz de Melo Mendes

    2005-12-01

    Full Text Available The use of Value-at-Risk (VaR) as a canonical measure of risk is now widespread. Most accurate VaR measures make use of some volatility model such as GARCH-type models. However, the pattern of volatility dynamics of a portfolio follows from the (univariate) behavior of the risky assets, as well as from the type and strength of the associations among them. Moreover, the dependence structure among the components may change conditionally on past observations. Some papers have attempted to model this characteristic by assuming a multivariate GARCH model, or by considering the conditional correlation coefficient, or by incorporating some possibility for switches in regimes. In this paper we address this problem using time-varying copulas. Our modeling strategy allows the margins to follow some FIGARCH-type model while the copula dependence structure changes over time.
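
    A compact sketch of such a pipeline is given below on synthetic data, with EWMA volatilities and an EWMA copula correlation standing in for the paper's FIGARCH margins and time-varying copula dynamics, and Gaussian margins in place of fitted ones; all parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T, lam, true_rho = 1000, 0.94, 0.6
z = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], T)
returns = 0.01 * z                               # two synthetic return series

# EWMA recursions for the margins' variances and the dependence.
var1 = var2 = 1e-4
cov = 0.5e-4
for r1, r2 in returns:
    var1 = lam * var1 + (1 - lam) * r1 * r1
    var2 = lam * var2 + (1 - lam) * r2 * r2
    cov = lam * cov + (1 - lam) * r1 * r2
rho_t = cov / np.sqrt(var1 * var2)               # today's copula correlation

# Simulate tomorrow's portfolio return through the Gaussian copula.
sims = rng.multivariate_normal([0, 0], [[1, rho_t], [rho_t, 1]], 100_000)
u = stats.norm.cdf(sims)                         # copula sample in [0,1]^2
r1 = stats.norm.ppf(u[:, 0], scale=np.sqrt(var1))   # plug in the margins
r2 = stats.norm.ppf(u[:, 1], scale=np.sqrt(var2))
portfolio = 0.5 * r1 + 0.5 * r2
print(f"rho_t = {rho_t:.2f}, 99% one-day VaR = {-np.percentile(portfolio, 1):.4f}")
```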

  8. MUSIDH, multiple use of simulated demographic histories, a novel method to reduce computation time in microsimulation models of infectious diseases.

    Science.gov (United States)

    Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F

    2008-09-01

    Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionally with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection, thus non-fatal infections. MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if being the infection history of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor 50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
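
    The idea reduces to a small pattern: generate the expensive demographic history once, then run several cheap infection histories against it as if they belonged to separate individuals. The toy below shows only this structure; the rates, the lifespan model and the 50x reuse factor are illustrative, not the paper's leprosy model.

```python
import numpy as np

rng = np.random.default_rng(6)

def demographic_history():
    """Stand-in for the computationally costly demographic simulation."""
    return rng.exponential(70.0)                  # lifespan in years

def infection_history(lifespan, hazard=0.002):
    """Cheap infection history for a rare, non-fatal infection."""
    t = rng.exponential(1.0 / hazard)             # age at infection
    return t if t < lifespan else None            # None: never infected

n_demo, k = 10_000, 50                            # 10k lifespans, reused 50x each
lifespans = [demographic_history() for _ in range(n_demo)]
infections = sum(infection_history(ls) is not None
                 for ls in lifespans for _ in range(k))
print(f"simulated {n_demo * k} individuals from {n_demo} demographic "
      f"histories; {infections} infections")
```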

  9. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  10. The relationship between TV/computer time and adolescents' health-promoting behavior: a secondary data analysis.

    Science.gov (United States)

    Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee

    2008-03-01

    Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to poorer health status or unhealthy behavior among adolescents. However, studies of the relationships between watching TV/playing computer games and adolescents' adoption of health-promoting behavior are limited. This study aimed to discover the relationship between time spent watching TV and on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis from part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends, respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends, respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being boys, being overweight, living in a rural area, and being middle-school students were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.

  11. Shorter telomeres in peripheral blood mononuclear cells from older persons with sarcopenia: results from an exploratory study

    Directory of Open Access Journals (Sweden)

    Emanuele eMarzetti

    2014-08-01

    Full Text Available Background. Telomere shortening in peripheral blood mononuclear cells (PBMCs) has been associated with biological age and several chronic degenerative diseases. However, the relationship between telomere length and sarcopenia, a hallmark of the aging process, is unknown. The aim of the present study was therefore to determine whether PBMC telomeres obtained from sarcopenic older persons were shorter relative to non-sarcopenic peers. We further explored if PBMC telomere length was associated with frailty, a major clinical correlate of sarcopenia. Methods. Analyses were conducted in 142 persons aged ≥ 65 years referred to a geriatric outpatient clinic (University Hospital). The presence of sarcopenia was established according to the European Working Group on Sarcopenia in Older People criteria, with bioelectrical impedance analysis used for muscle mass estimation. The frailty status was determined by both Fried's criteria (physical frailty, PF) and a modified Rockwood's frailty index (FI). Telomere length was measured in PBMCs by quantitative real-time polymerase chain reaction according to the Telomere/Single copy gene ratio (T/S) method. Results. Among 142 outpatients (mean age 75.0 ± 6.5 years, 59.2% women), sarcopenia was diagnosed in 23 individuals (19.3%). The PF phenotype was detected in 74 participants (52.1%). The average FI score was 0.46 ± 0.17. PBMC telomeres were shorter in sarcopenic subjects (T/S = 0.21; 95% CI: 0.18–0.24) relative to non-sarcopenic individuals (T/S = 0.26; 95% CI: 0.24–0.28; p = 0.01), independent of age, gender, smoking habit, or comorbidity. No significant associations were determined between telomere length and either PF or FI. Conclusion. PBMC telomere length, expressed as T/S values, is shorter in older outpatients with sarcopenia. The cross-sectional assessment of PBMC telomere length is not sufficient at capturing the complex, multidimensional syndrome of frailty.

  12. Computer/Mobile Device Screen Time of Children and Their Eye Care Behavior: The Roles of Risk Perception and Parenting.

    Science.gov (United States)

    Chang, Fong-Ching; Chiu, Chiung-Hui; Chen, Ping-Hung; Miao, Nae-Fang; Chiang, Jeng-Tung; Chuang, Hung-Yi

    2018-03-01

    This study assessed the computer/mobile device screen time and eye care behavior of children and examined the roles of risk perception and parental practices. Data were obtained from a sample of 2,454 child-parent dyads recruited from 30 primary schools in Taipei city and New Taipei city, Taiwan, in 2016. Self-administered questionnaires were collected from students and parents. Fifth-grade students spent more time on new media (computer/smartphone/tablet: 16 hours a week) than on traditional media (television: 10 hours a week). The average daily screen time (3.5 hours) for these children exceeded the American Academy of Pediatrics recommendation (≤2 hours). Multivariate analysis showed that, after controlling for demographic factors, parents with higher levels of risk perception and parental efficacy were more likely to mediate their child's eye care behavior. Children who reported lower academic performance, came from non-intact families, reported lower levels of risk perception of mobile device use, had parents who spent more time using computers and mobile devices, and experienced lower levels of parental mediation were more likely to spend more time using computers and mobile devices; children who reported higher academic performance, higher levels of risk perception, and higher levels of parental mediation were more likely to engage in higher levels of eye care behavior. Risk perception by children and parental practices are associated with the amount of screen time that children regularly engage in and their level of eye care behavior.

  13. Project Energise: Using participatory approaches and real time computer prompts to reduce occupational sitting and increase work time physical activity in office workers.

    Science.gov (United States)

    Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J

    2016-11-01

    This efficacy study assessed the added impact of real time computer prompts on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]; age=47 [11] years; BMI=28 [5] kg/m^2; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]; total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p < 0.05). Computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  15. Hydrologic Response to Climate Change: Missing Precipitation Data Matters for Computed Timing Trends

    Science.gov (United States)

    Daniels, B.

    2016-12-01

    This work demonstrates the derivation of climate timing statistics and their application in determining the resulting hydroclimate impacts. Long-term daily precipitation observations from 50 California stations were used to compute climate trends of precipitation event Intensity, event Duration and Pause between events. Each precipitation event trend was then applied as input to a PRMS hydrology model, which showed hydrology changes to recharge, baseflow, streamflow, etc. An important concern was precipitation uncertainty induced by missing observation values, which causes errors in the quantification of precipitation trends. Many standard statistical techniques such as ARIMA and simple endogenous or even exogenous imputation were applied but failed to resolve these uncertainties; what did resolve them was the use of multiple imputation techniques. This involved fitting Weibull probability distributions to multiple imputed values for the three precipitation trends. Permutation resampling techniques using Monte Carlo processing were then applied to the multiple imputation values to derive significance p-values for each trend. Significance at the 95% level was found for Intensity at 11 of the 50 stations, for Duration at 16, and for Pause at 19 (of which 12 were significant at the 99% level). The significance-weighted trends for California are Intensity -4.61% per decade, Duration +3.49% per decade, and Pause +3.58% per decade. Two California basins with PRMS hydrologic models were studied: the Feather River in the northern Sierra Nevada mountains and the central coast Soquel-Aptos. Each local trend was changed without changing the other trends or the total precipitation. The Feather River Basin's critical supply to Lake Oroville and the State Water Project benefited from a total streamflow increase of 1.5%. The Soquel-Aptos Basin water supply was impacted by a total groundwater recharge decrease of -7.5% and a streamflow decrease of -3.2%.
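
    The permutation-resampling step described above can be illustrated in miniature: the significance of a fitted linear trend is judged against the slopes obtained after repeatedly shuffling the series. A sketch with hypothetical data, not the California station records:

```python
import numpy as np

def permutation_trend_pvalue(years, values, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a linear trend (slope)."""
    rng = np.random.default_rng(seed)
    observed = np.polyfit(years, values, 1)[0]          # observed slope
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(values)              # shuffling destroys any trend
        if abs(np.polyfit(years, shuffled, 1)[0]) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)                   # add-one smoothing

# Hypothetical station series: a weak intensity trend plus noise over 50 years
years = np.arange(1960, 2010)
values = 0.02 * (years - 1960) + np.random.default_rng(1).normal(0, 0.5, 50)
print(permutation_trend_pvalue(years, values))
```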

  16. Quality Saving Mechanisms of Mitochondria during Aging in a Fully Time-Dependent Computational Biophysical Model.

    Directory of Open Access Journals (Sweden)

    Daniel Mellem

    Full Text Available Mitochondria are essential for the energy production of eukaryotic cells. During aging, mitochondria run through various processes which change their quality in terms of activity, health and metabolic supply. In recent years, many of these processes, such as fission and fusion of mitochondria, mitophagy, mitochondrial biogenesis and energy consumption, have been the subject of research. Based on numerous experimental insights, it was possible to qualify mitochondrial behaviour in computational simulations. Here, we present a new biophysical model based on the approach of Figge et al. (2012). We introduce exponential decay and growth laws for each mitochondrial process to derive its time-dependent probability during the aging of cells. All mitochondrial processes of the original model are mathematically and biophysically redefined and additional processes are implemented: mitochondrial fission and fusion is separated into a metabolic outer-membrane part and a protein-related inner-membrane part, a quality-dependent threshold for mitophagy and mitochondrial biogenesis is introduced, and processes for activity-dependent internal oxidative stress as well as mitochondrial repair mechanisms are newly included. Our findings reveal a decrease of mitochondrial quality and a fragmentation of the mitochondrial network during aging. Additionally, the model discloses a quality-increasing mechanism due to the interplay of the mitophagy and biogenesis cycle and the fission and fusion cycle of mitochondria. It is revealed that decreased mitochondrial repair can be a quality-saving process in aged cells. Furthermore, the model finds strategies to sustain the quality of the mitochondrial network in cells with high production rates of reactive oxygen species due to large energy demands. Hence, the model adds new insights to biophysical mechanisms of mitochondrial aging and provides novel understandings of the interdependency of mitochondrial processes.

  17. On the Correctness of Real-Time Modular Computer Systems Modeling with Stopwatch Automata Networks

    Directory of Open Access Journals (Sweden)

    Alevtina B. Glonina

    2018-01-01

    Full Text Available In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all the jobs finish within their deadlines. The authors propose a stopwatch automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA), and it can be built automatically using the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements for models of system components and for the whole system model was derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e. for a given configuration, the trace generated by the corresponding model run is uniquely determined). The model determinism implies that any model run can be used for schedulability analysis. This fact is crucial for the approach's efficiency, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements for models of system components can be checked automatically by a verifier using the observer automata approach. The authors proved, using the UPPAAL verifier, that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.

  18. Phased searching with NEAT in a time-scaled framework: experiments on a computer-aided detection system for lung nodules.

    Science.gov (United States)

    Tan, Maxine; Deklerck, Rudi; Cornelis, Jan; Jansen, Bart

    2013-11-01

    In the field of computer-aided detection (CAD) systems for lung nodules in computed tomography (CT) scans, many image features are presented and many artificial neural network (ANN) classifiers with various structural topologies are analyzed; frequently, the classifier topologies are selected by trial-and-error experiments. To avoid these trial and error approaches, we present a novel classifier that evolves ANNs using genetic algorithms, called "Phased Searching with NEAT in a Time or Generation-Scaled Framework", integrating feature selection with the classification task. We analyzed our method's performance on 360 CT scans from the public Lung Image Database Consortium database. We compare our method's performance with other more-established classifiers, namely regular NEAT, Feature-Deselective NEAT (FD-NEAT), fixed-topology ANNs, and support vector machines (SVMs), using ten-fold cross-validation experiments of all 360 scans. The results show that the proposed "Phased Searching" method performs better and faster than regular NEAT, better than FD-NEAT, and achieves sensitivities at 3 and 4 false positives (FP) per scan that are comparable with the fixed-topology ANN and SVM classifiers, but with fewer input features. It achieves a detection sensitivity of 83.0±9.7% with an average of 4 FP/scan, for nodules with a diameter greater than or equal to 3 mm. It also evolves networks with shorter evolution times and with lower complexities than regular NEAT (p = 0.026 and p < 0.001, respectively). Analysis of the networks evolved by regular NEAT and by our approach shows that our approach searches for good solutions in lower dimensional search spaces, and evolves networks without superfluous structure. We have presented a novel approach that combines feature selection with the evolution of ANN topology and weights. Compared with the original threshold-based Phased Searching method of Green, our method requires fewer parameters and converges to the optimal network complexity required for the classification task at hand. The results of the

  19. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning the HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of the development of a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communications and so on, for various computers including workstation-based systems and VME board computers

  20. Artificial neuron operations and spike-timing-dependent plasticity using memristive devices for brain-inspired computing

    Science.gov (United States)

    Marukame, Takao; Nishi, Yoshifumi; Yasuda, Shin-ichi; Tanamoto, Tetsufumi

    2018-04-01

    The use of memristive devices for creating artificial neurons is promising for brain-inspired computing from the viewpoints of computation architecture and learning protocol. We present an energy-efficient multiplier accumulator based on a memristive array architecture incorporating both analog and digital circuitries. The analog circuitry is used to full advantage for neural networks, as demonstrated by the spike-timing-dependent plasticity (STDP) in fabricated AlOx/TiOx-based metal-oxide memristive devices. STDP protocols for controlling periodic analog resistance with long-range stability were experimentally verified using a variety of voltage amplitudes and spike timings.
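
    The STDP protocol verified in these devices follows the familiar pair-based exponential timing window; a minimal sketch of that learning rule, with illustrative parameters rather than the fabricated devices' measured values:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change.

    dt = t_post - t_pre (ms): pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying exponentially
    with the spike-timing difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

for dt in (-40, -10, 10, 40):   # spike-timing differences in ms
    print(dt, round(stdp_dw(dt), 5))
```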

  1. Timing of computed tomography-based postimplant assessment following permanent transperineal prostate brachytherapy

    International Nuclear Information System (INIS)

    Prestidge, Bradley R.; Bice, William S.; Kiefer, Eric J.; Prete, James J.

    1998-01-01

    Purpose: To establish the rate of resolution of prostatic edema following transperineal interstitial permanent prostate brachytherapy, and to determine the results and impact of timing of the postimplant assessment on the dose-volume relationship. Methods and Materials: A series of 19 consecutive patients with early-stage adenocarcinoma of the prostate receiving transperineal interstitial permanent prostate brachytherapy were enrolled in this study. Twelve received 125I and seven received 103Pd. Postoperative assessment included a computed tomographic (CT) scan on postoperative days 1, 8, 30, 90, and 180. On each occasion, CT scans were performed on a GE helical unit at 3-mm abutting slices, 15-cm field of view. Prostate volumes were outlined on CT scans by a single clinician. Following digitization of the volumes and radioactive sources, volumes and dose-volume histograms were calculated. The prostate volume encompassed by the 80% and 100% reference isodose volumes was calculated. Results: Preimplant transrectal ultrasound determined volumes varied from 17.5 to 38.6 cc (median 27.9 cc). Prostate volumes previously defined on 40 randomly selected postimplant CT scans were compared in a blinded fashion to a second CT-derived volume and ranged from -32% to +24%. The Pearson correlation coefficient for prostate CT volume reproducibility was 0.77 (p < 0.03). The CT scan-determined volume on postoperative day 1 was an average of 41.4% greater than the volume determined by preimplant ultrasound. Significant decreases in average volume were seen during the first month postoperatively: average volume decreased 14% from day 1 to day 8, 10% from day 8 to day 30, 3% from day 30 to day 90, and 2% thereafter. Coverage of the prostate volume by the 80% isodose volume increased from 85.6% on postoperative day 1 to 92.2% on postoperative day 180. The corresponding increase in the 100% reference dose coverage of the prostate volume ranged from 73.1% to 83.3% between

  2. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    OpenAIRE

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a large share of the decrease in the part-time pay penalty. However, the lower part-time pay penalty is also related to lower wage returns to reading and writing which are performed more intensively b...

  3. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    Science.gov (United States)

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printing custom trays and conventional custom trays, and to prove the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this study, a prospective, single-blind, randomized, self-controlled clinical trial. Two custom trays were fabricated for each participant: one was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken using both custom trays, and the final impressions were used to fabricate complete dentures respectively. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent fabricating the three-dimensional printing custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent making the final impression with the three-dimensional printing custom trays and the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in the technician fabrication time and the clinical working time between the three-dimensional printing custom trays and the conventional custom trays fabricated manually (P < 0.05). When using the FSD system to manufacture custom trays by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression; therefore, it can save impression material and model material. As to complete denture restoration, manufacturing custom trays using the FSD system is worth being

  4. The use of diffusion theory to compute invasion effects for the pulsed neutron thermal decay time log

    International Nuclear Information System (INIS)

    Tittle, C.W.

    1992-01-01

    Diffusion theory has been successfully used to model the effect of fluid invasion into the formation for neutron porosity logs and for the gamma-gamma density log. The purpose of this paper is to present results of computations using a five-group time-dependent diffusion code on invasion effects for the pulsed neutron thermal decay time log. Previous invasion studies by the author involved the use of a three-dimensional three-group steady-state diffusion theory to model the dual-detector thermal neutron porosity log and the gamma-gamma density log. The five-group time-dependent code MGNDE (Multi-Group Neutron Diffusion Equation) used in this work was written by Ferguson. It has been successfully used to compute the intrinsic formation life-time correction for pulsed neutron thermal decay time logs. This application involves the effect of fluid invasion into the formation
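
    The observable that the thermal decay time log ultimately extracts is the exponential die-away constant of the thermal neutron population; a minimal illustration of recovering it from synthetic gated count data (not output of the MGNDE code; the relation of Sigma to tau in the comment is the commonly quoted rule of thumb):

```python
import numpy as np

# Synthetic die-away curve: N(t) = N0 * exp(-t / tau), tau in microseconds
tau_true, n0 = 200.0, 1.0e5
t = np.linspace(50.0, 1000.0, 40)              # gate times (us)
counts = n0 * np.exp(-t / tau_true)

# Log-linear least squares recovers the decay constant
slope, intercept = np.polyfit(t, np.log(counts), 1)
tau_fit = -1.0 / slope
print(f"fitted tau = {tau_fit:.1f} us")        # ~200 us
# The formation capture cross-section Sigma (capture units) is often
# taken as roughly 4550 / tau(us) before invasion/diffusion corrections.
```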

  5. Real time recording system of radioisotopes by local area network (LAN) computer system and user input processing

    International Nuclear Information System (INIS)

    Shinohara, Kunio; Ito, Atsushi; Kawaguchi, Hajime; Yanase, Makoto; Uno, Kiyoshi.

    1991-01-01

    A computer-assisted real time recording system was developed for the management of radioisotopes. The system is composed of two personal computers forming a LAN, an identification-card (ID-card) reader, and an electrically operated door-lock. One computer is operated by the radiation safety staff and stores the records of radioisotopes; the users of radioisotopes are registered in this computer. The other computer is installed in front of the storage room for radioisotopes. This computer is made ready for operation by a registered ID-card, and data are then entered by the user. After the completion of data input, the door to the storage room is unlocked. The present system offers the following merits: radiation safety staff can easily keep track of the current state of radioisotopes in the storage room and save much labor; radioactivity is always corrected; the upper limit of radioactivity in use per day is automatically checked, and users are regulated when they input the amounts to be used; and users can obtain storage records of radioisotopes at any time. In addition, the system is applicable to facilities which have more than two storage rooms. (author)
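
    The automatic radioactivity correction mentioned above is presumably standard decay correction; a minimal sketch (the isotope and numbers are illustrative):

```python
import math

def decay_corrected_activity(a0_mbq, half_life_days, elapsed_days):
    """Activity remaining after elapsed time: A = A0 * exp(-lambda * t)."""
    decay_const = math.log(2) / half_life_days   # lambda = ln(2) / T_half
    return a0_mbq * math.exp(-decay_const * elapsed_days)

# Example: 10 MBq of P-32 (half-life ~14.3 d) after 30 days
print(f"{decay_corrected_activity(10.0, 14.3, 30.0):.2f} MBq")  # ~2.34 MBq
```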

  6. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar; Heide, Felix; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design

  7. Comparison of the Amount of Time Spent on Computer Games and Aggressive Behavior in Male Middle School Students of Tehran

    Directory of Open Access Journals (Sweden)

    Mehrangiz Shoaa Kazemi

    2016-12-01

    Full Text Available Background and Objectives: Modern technologies have a prominent role in adolescents' daily life. These technologies embody specific cultural and moral patterns, which can strongly influence adolescents. This research aimed at comparing the amount of time spent on computer games and aggressive behavior in male middle school students of Tehran. Materials and Methods: This study had a descriptive design. The study population included all male middle school students of Tehran, and the sample included 120 male students, of which 60 were dependent on computer games with aggressive behavior and 60 were non-dependent on computer games with normal behavior; the sample was randomly selected from the regions of Tehran (south, north, west, and east) with random multi-stage sampling. Data were gathered using questionnaires, including the Aggressive Questionnaire (AGQ) and a researcher-made questionnaire consisting of 10 multiple-choice questions that measure the use or non-use of computer games. Data were analyzed using SPSS-19 statistical software; for data analysis, Pearson correlation and the t test were used. Results: The results showed that there was a significant relationship between computer gaming and aggressive behavior, and also between the duration of using computer games and aggressive behaviors (P < 0.05). Conclusions: According to the results, it seems that children could be kept safe from the adverse effects of computer games by controlling the duration and the type of the games that they play.

  8. EDUCATIONAL COMPUTER SIMULATION EXPERIMENT «REAL-TIME SINGLE-MOLECULE IMAGING OF QUANTUM INTERFERENCE»

    Directory of Open Access Journals (Sweden)

    Alexander V. Baranov

    2015-01-01

    Full Text Available Taking part in organized project activities, students of the technical university create virtual physics laboratories. The article gives an example of such a student project: computer modeling and visualization of one of the most remarkable manifestations of reality, the quantum interference of particles. A real experiment with heavy organic fluorescent molecules is used as the prototype for this computer simulation. The student's software product can be used in the informational space of the system of open education.

  9. The association between post-traumatic stress disorder and shorter telomere length: A systematic review and meta-analysis.

    Science.gov (United States)

    Li, Xuemei; Wang, Jiang; Zhou, Jianghua; Huang, Pan; Li, Jiping

    2017-08-15

    Post-traumatic stress disorder (PTSD) is a common psychiatric disorder, which may accelerate aging. Many studies have investigated the association between telomere length and PTSD, but results from published studies are contradictory. Therefore, a meta-analysis was conducted to give a more precise estimate of the relationship between telomere length and PTSD. We systematically reviewed the PUBMED, PsycINFO, Medline (Ovid SP) and EMBASE databases for all articles on the association between telomere length and PTSD. Data were summarized using random-effects models in the meta-analysis. The heterogeneity among studies was examined using Cochran's Q statistic and I-squared. Five eligible studies containing 3851 participants were included in our meta-analysis. Shorter telomere length was significantly associated with PTSD, with a mean difference of -0.19 (95% CI: -0.27, -0.01; P < 0.001) and an I-squared of 96%. The results from subgroup analysis demonstrated that shorter telomere length was significantly associated with PTSD across all gender groups, with a mean difference of -0.15 (95% CI: -0.29, -0.01; P = 0.04) for women and -0.17 (95% CI: -0.19, -0.15; P < 0.001) for men. Meanwhile, shorter telomere length was significantly associated with sexual assault (mean difference = -0.15, 95% CI: -0.29, -0.01) and childhood trauma (mean difference = -0.08, 95% CI: -0.19, -0.07), but not combat (mean difference = -0.39, 95% CI: -0.83, 0.05). Compared to individuals without PTSD, individuals with PTSD have shorter telomere length, which has implications for early intervention and timely treatment to prevent future adverse health outcomes. Copyright © 2017. Published by Elsevier B.V.
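
    The random-effects pooling with Cochran's Q and I-squared described above can be sketched as a DerSimonian-Laird computation; the study-level values below are invented for illustration, not the five included studies:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled mean difference, with Cochran's Q and I^2 (%)."""
    w = 1.0 / variances                                   # fixed-effect weights
    fe_pooled = np.sum(w * effects) / np.sum(w)
    q = float(np.sum(w * (effects - fe_pooled) ** 2))     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1.0 / (variances + tau2)                       # random-effects weights
    pooled = float(np.sum(w_re * effects) / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

effects = np.array([-0.21, -0.15, -0.10, -0.25, -0.18])   # mean differences
variances = np.array([0.004, 0.006, 0.003, 0.010, 0.005])
print(dersimonian_laird(effects, variances))
```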

  10. Usefulness of measurement of circulation time using MgSO4: correlation with time-density curve using electron beam computed tomography

    International Nuclear Information System (INIS)

    Kim, Byung Ki; Lee, Hui Joong; Lee, Jong Min; Kim, Yong Joo; Kang, Duck Sik

    1999-01-01

    To determine the usefulness of MgSO4 for measuring the systemic circulation time. Systemic circulation time, defined as the elapsed time from the injection of MgSO4 solution to the onset of a pharyngeal burning sensation, was measured in 63 volunteers. MgSO4 was injected into a superficial vein of an upper extremity. Using dynamic electron beam computed tomography at the level of the abdominal aorta and celiac axis, a time-intensity curve was plotted, and maximal enhancement time was compared for these two locations. For 60 of the 63 subjects, both systemic circulation time and maximal enhancement time were determined. Average systemic circulation time was 17.4 (SD: 3.6) secs, and average maximal enhancement times at the level of the abdominal aorta and celiac axis were 17.5 (SD: 3.0) secs and 18.5 (SD: 3.2) secs, respectively. The correlation coefficient between systemic circulation time and maximal enhancement time for the abdominal aorta was 0.73; the elapsed time from MgSO4 injection and the maximal enhancement time for the abdominal aorta thus showed a significant correlation. To determine the appropriate scanning time in contrast-enhanced radiological studies, MgSO4 can therefore be used instead of a test bolus study

  11. Towards shorter wavelength x-ray lasers using a high power, short pulse pump laser

    International Nuclear Information System (INIS)

    Tighe, W.; Krushelnick, K.; Valeo, E.; Suckewer, S.

    1991-05-01

    A near-terawatt KrF* laser system, focussable to power densities >10^18 W/cm^2, has been constructed for use as a pump laser in various schemes aimed at the development of x-ray lasing below 5 nm. The laser system will be presented along with output characteristics such as the pulse duration, the focal spot size, and the percentage of amplified spontaneous emission (ASE) emitted along with the laser pulse. Schemes intended to lead to shorter wavelength x-ray emission will be described, and the resultant requirements on the pump laser characteristics and the target design will be outlined. Results from recent solid target experiments and two-laser experiments, showing the interaction of a high-power, short pulse laser with a preformed plasma, will be presented. 13 refs., 5 figs

  12. Shorter epilepsy duration is associated with better seizure outcome in temporal lobe epilepsy surgery

    Directory of Open Access Journals (Sweden)

    Lucas Crociati Meguins

    2015-03-01

    Full Text Available Objective To investigate the influence of patient age and seizure onset on the surgical outcome of temporal lobe epilepsy (TLE). Method A retrospective observational investigation performed on a cohort of patients from 2000 to 2012. Results A total of 229 patients were included. One hundred and eleven of 179 patients (62%) were classified as Engel I in the group aged < 50 years, versus 33 of 50 (66%) in the group aged ≥ 50 years (p = 0.82). Of those classified as Engel I, 88 (61%) reported epilepsy duration of less than 10 years and 56 (39%) of more than 10 years (p < 0.01). Of the patients who were not seizure free, 36 (42%) reported epilepsy duration of less than 10 years and 49 (58%) of more than 10 years (p < 0.01). Conclusion Patients with a shorter duration of epilepsy before surgery had better postoperative seizure control than patients with a longer duration of seizures.

  13. Association of mutations in the hemochromatosis gene with shorter life expectancy

    DEFF Research Database (Denmark)

    Bathum, L; Christiansen, L; Nybo, H

    2001-01-01

    BACKGROUND: To investigate whether the frequency of carriers of mutations in the HFE gene associated with hereditary hemochromatosis diminishes with age, as an indication that HFE mutations are associated with increased mortality. It is of value in the debate concerning screening for hereditary hemochromatosis to determine the significance of heterozygosity. METHODS: Genotyping for mutations in exons 2 and 4 of the HFE gene using denaturing gradient gel electrophoresis in 1784 participants aged 45 to 100 years from 4 population-based studies: all 183 centenarians from the Danish Centenarian Study, 601... in the distribution of mutations in exon 2 in the different age groups. CONCLUSIONS: In a high-carrier-frequency population like Denmark, mutations in HFE show an age-related reduction in the frequency of heterozygotes for C282Y, which suggests that carrier status is associated with shorter life expectancy.

  14. Shorter Decentralized Attribute-Based Encryption via Extended Dual System Groups

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-01-01

    Full Text Available Decentralized attribute-based encryption (ABE) is a special form of multiauthority ABE system, in which no central authority or global coordination is required other than the creation of the common reference parameters. In this paper, we propose a new decentralized ABE in prime-order groups by using extended dual system groups. We formulate some assumptions used to prove the security of our scheme. Our proposed scheme is fully secure under the standard k-Lin assumption in the random oracle model and can support any monotone access structure. Compared with existing fully secure decentralized ABE systems, our construction has shorter ciphertexts and secret keys. Moreover, fast decryption is achieved in our system, in which ciphertexts can be decrypted with a constant number of pairings.

  15. Shorter preschool, leukocyte telomere length is associated with obesity at age 9 in Latino children

    DEFF Research Database (Denmark)

    Kjaer, Thora Wesenberg; Faurholt-Jepsen, D; Mehta, K M

    2018-01-01

    The aim of this study was to determine the potential role of leukocyte telomere length as a biomarker for the development of childhood obesity in a low-income Latino population. A birth cohort of Latino children (N = 201) in San Francisco (recruited May 2006-May 2007) was followed until age 9 and assessed annually for obesity and dietary intake. Leukocyte telomere length was measured at 4 and 5 years (n = 102) and assessed as a predictor for obesity at age 9, adjusting for known risk factors. Furthermore, leukocyte telomere length at age 4 and 5 was evaluated as a possible mediator of the relationship between excessive sugar-sweetened beverage consumption and obesity at age 9. Shorter leukocyte telomere length in preschoolers was associated with obesity at age 9 (adjusted odds ratio 0.35, 95% confidence interval 0.13-0.94) after adjustment for known risk factors. Telomere length mediated 11

  16. Smoking Topography among Korean Smokers: Intensive Smoking Behavior with Larger Puff Volume and Shorter Interpuff Interval.

    Science.gov (United States)

    Kim, Sungroul; Yu, Sol

    2018-05-18

    The difference in smokers' topography has been found to be a function of many factors, including sex, personality, nicotine yield, cigarette type (i.e., flavored versus non-flavored) and ethnicity. We evaluated the puffing behaviors of Korean smokers and their association with smoking-related biomarker levels. A sample of 300 participants was randomly recruited from metropolitan areas in South Korea. Topography measures during a 24-hour period were obtained using a CReSS pocket device. Korean male smokers smoked two puffs less per cigarette compared to female smokers (median (interquartile range): 15.0 (13.0-19.0) vs. 17.5 (15.0-21.0)), but had a significantly larger puff volume (62.7 (52.7-75.5) mL vs. 53.5 (42.0-64.2) mL; p = 0.012). The interpuff interval was similar between men and women (8.9 (6.5-11.2) s vs. 8.3 (6.2-11.0) s; p = 0.122) but much shorter than in other studies' results. A dose-response association (p = 0.0011) was observed between daily total puff volume and urinary cotinine concentration, after controlling for sex, age, household income level and nicotine addiction level. An understanding of the differences in topography measures, particularly the larger puff volume and shorter interpuff interval of Korean smokers, may help to overcome a potential underestimation of internal doses of hazardous byproducts of smoking.

  17. ATM/RB1 mutations predict shorter overall survival in urothelial cancer.

    Science.gov (United States)

    Yin, Ming; Grivas, Petros; Emamekhoo, Hamid; Mendiratta, Prateek; Ali, Siraj; Hsu, JoAnn; Vasekar, Monali; Drabick, Joseph J; Pal, Sumanta; Joshi, Monika

    2018-03-30

    Mutations of DNA repair genes, e.g. ATM/RB1, are frequently found in urothelial cancer (UC) and have been associated with better response to cisplatin-based chemotherapy. Further external validation of the prognostic value of ATM/RB1 mutations in UC can inform clinical decision making and trial designs. In the discovery dataset, ATM/RB1 mutations were present in 24% of patients and were associated with shorter OS (adjusted HR 2.67, 95% CI 1.45-4.92, p = 0.002). There was a higher mutation load in patients carrying ATM/RB1 mutations (median mutation load: 6.7 versus 5.5 per Mb, p = 0.072). In the validation dataset, ATM/RB1 mutations were present in 22.2% of patients and were non-significantly associated with shorter OS (adjusted HR 1.87, 95% CI 0.97-3.59, p = 0.06) and higher mutation load (median mutation load: 8.1 versus 7.2 per Mb, p = 0.126). Exome sequencing data of 130 bladder UC patients from The Cancer Genome Atlas (TCGA) dataset were analyzed as a discovery cohort to determine the prognostic value of ATM/RB1 mutations; results were validated in an independent cohort of 81 advanced UC patients. Cox proportional hazard regression analysis was performed to calculate the hazard ratio (HR) and 95% confidence interval (CI) to compare overall survival (OS). ATM/RB1 mutations may be a biomarker of poor prognosis in unselected UC patients and may correlate with higher mutational load. Further studies are required to determine factors that can further stratify prognosis and to evaluate the predictive role of ATM/RB1 mutation status for immunotherapy and platinum-based chemotherapy.

  18. [Construction and analysis of a monitoring system with remote real-time multiple physiological parameters based on cloud computing].

    Science.gov (United States)

    Zhu, Lingyun; Li, Lianjie; Meng, Chunyan

    2014-12-01

    Existing multiple-physiological-parameter real-time monitoring systems suffer from problems such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution with multiple physiological parameters and clustered background data storage and processing based on cloud computing. Through our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The process included the resource virtualization of the IaaS layer of the cloud platform, the construction of a real-time computing platform at the PaaS layer, the reception and analysis of data streams at the SaaS layer, and the bottleneck problem of multi-parameter data transmission, etc. The result was real-time transmission, storage and analysis of large amounts of physiological information. The simulation test results showed that the remote multiple physiological parameter monitoring system based on a cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems including long turnaround time, poor performance of real-time analysis, and lack of extensibility, which exist in traditional remote medical services. Technical support was thereby provided to facilitate a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home health monitoring for multiple-physiological-parameter wireless monitoring.

  19. Biofeedback effectiveness to reduce upper limb muscle activity during computer work is muscle specific and time pressure dependent

    DEFF Research Database (Denmark)

    Vedsted, Pernille; Søgaard, Karen; Blangsted, Anne Katrine

    2011-01-01

    Continuous electromyographic (EMG) activity level is considered a risk factor in developing muscle disorders. EMG biofeedback is known to be useful in reducing EMG activity in working muscles during computer work. The purpose was to test the following hypotheses: (1) unilateral biofeedback from trapezius (TRA) can reduce bilateral TRA activity but not extensor digitorum communis (EDC) activity; (2) biofeedback from EDC can reduce activity in EDC but not in TRA; (3) biofeedback is more effective in the no time constraint than in the time constraint working condition. Eleven healthy women performed computer work during two different working conditions (time constraint/no time constraint) while receiving biofeedback. Biofeedback was given from the right TRA or EDC through two modes (visual/auditory), with EMG or mechanomyography as the biofeedback source. During control sessions (no biofeedback), EMG

  20. ZOCO V - a computer code for the calculation of time-dependent spatial pressure distribution in reactor containments

    International Nuclear Information System (INIS)

    Mansfeld, G.; Schally, P.

    1978-06-01

    ZOCO V is a computer code which can calculate the time- and space-dependent pressure distribution in containments of water-cooled nuclear power reactors (both full pressure containments and pressure suppression systems) following a loss-of-coolant accident caused by the rupture of a main coolant or steam pipe

  1. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Science.gov (United States)

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…

  2. LUCKY-TD code for solving the time-dependent transport equation with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the time-dependent transport equation in the P_mS_n group approximation with the use of parallel computations is presented. The algorithm is implemented in the LUCKY-TD code for supercomputers employing the MPI standard for the data exchange between parallel processes.

  3. Dyslexics' faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation.

    Science.gov (United States)

    Jaffe-Dax, Sagi; Frenkel, Or; Ahissar, Merav

    2017-01-24

    Dyslexia is a prevalent reading disability whose underlying mechanisms are still disputed. We studied the neural mechanisms underlying dyslexia using a simple frequency-discrimination task. Though participants were asked to compare the two tones in each trial, implicit memory of previous trials affected their responses. We hypothesized that implicit memory decays faster among dyslexics. We tested this by increasing the temporal intervals between consecutive trials, and by measuring the behavioral impact and ERP responses from the auditory cortex. Dyslexics showed a faster decay of implicit memory effects on both measures, with similar time constants. Finally, faster decay of implicit memory also characterized the impact of sound regularities in benefitting dyslexics' oral reading rate. Their benefit decreased faster as a function of the time interval from the previous reading of the same non-word. We propose that dyslexics' shorter neural adaptation paradoxically accounts for their longer reading times, since it reduces their temporal window of integration of past stimuli, resulting in noisier and less reliable predictions for both simple and complex stimuli. Less reliable predictions limit their acquisition of reading expertise.

  4. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    Science.gov (United States)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power-density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
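
    A minimal modern analogue of that PDS computation, using an FFT-based periodogram (the ACSL interface itself is not reproduced):

```python
import numpy as np

def power_density_spectrum(x, dt):
    """One-sided power-density spectrum of a uniformly sampled time history."""
    n = len(x)
    spec = np.fft.rfft(x - np.mean(x))          # remove mean, transform
    psd = (np.abs(spec) ** 2) * dt / n          # scale to density units
    psd[1:-1] *= 2.0                            # fold in negative frequencies
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, psd

# Example: a 5 Hz sine sampled at 100 Hz shows a spectral peak at 5 Hz
dt = 0.01
t = np.arange(0, 10, dt)
f, p = power_density_spectrum(np.sin(2 * np.pi * 5 * t), dt)
print(f[np.argmax(p)])   # ~5.0
```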

  5. Computer simulation of different designs of pseudo-random time-of-flight velocity analysers for molecular beam scattering experiments

    International Nuclear Information System (INIS)

    Rotzoll, G.

    1982-01-01

    After a brief summary of the pseudo-random time-of-flight (TOF) method, the design criteria for construction of a pseudo-random TOF disc are considered and complemented by computer simulations. The question of resolution and the choice of the sequence length and number of time channels per element are discussed. Moreover, the stability requirements of the chopper motor frequency are investigated. (author)

  6. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Ilya Kolmanovsky

    1998-01-01

    One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
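
    For intuition, the backward iteration that yields a maximal disturbance-invariant set can be written out for the simplest scalar case; a sketch assuming x+ = a*x + w with |x| <= X and |w| <= W (an illustration only, not the paper's general polyhedral algorithm):

```python
def maximal_invariant_interval(a, x_max, w_max, tol=1e-9, max_iter=1000):
    """Largest s <= x_max such that |x| <= s implies |a*x + w| <= s for |w| <= w_max.

    Iterates O_{k+1} = O_k intersect {x : a*x + w in O_k for all |w| <= w_max},
    i.e. s <- min(s, (s - w_max) / |a|), starting from the state constraint set.
    """
    s = x_max
    for _ in range(max_iter):
        # Pontryagin difference with [-w_max, w_max], then pre-image under x -> a*x
        s_new = min(s, (s - w_max) / abs(a))
        if s_new < 0:
            return None                  # no invariant set inside the constraints
        if s - s_new < tol:
            return s_new                 # converged: maximal invariant interval
        s = s_new
    return s

# a = 0.5, |x| <= 1, |w| <= 0.2: the full constraint interval is already invariant
print(maximal_invariant_interval(a=0.5, x_max=1.0, w_max=0.2))   # 1.0
```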

  7. Investigating the Quality of Time Kindergarten Children Spend with Television, Computer, Books, and Toys

    Directory of Open Access Journals (Sweden)

    Ali ÇAKMAK

    2015-12-01

    Full Text Available The purpose of this study is to understand the place of four stimuli (television, computer, books and toys) in the lives of children attending early childhood education. In the present study, data obtained from children's drawings and interviews were analyzed. Fifty-one children between the ages of 5 and 6, attending three private kindergartens, participated in the study. First, the children were asked to draw themselves with a television, computer, books and toys; then they were interviewed to learn about their use of each. The pictures and interview transcripts were subsequently analyzed and coding categories were determined via content analysis. The findings indicate that children mention watching cartoons most and draw themselves playing with popular cartoon characters. Children have positive feelings towards all of the stimuli; however, they gave more powerful and detailed explanations of their feelings towards books and toys

  8. Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time

    DEFF Research Database (Denmark)

    Tsirogiannis, Constantinos; Sandel, Brody Steven

    2014-01-01

    The phylogenetic Mean Pairwise Distance (MPD) is one of the most popular measures for computing the phylogenetic distance between a given group of species. More specifically, for a phylogenetic tree T and for a set of species R represented by a subset of the leaf nodes of T, the MPD of R is equal to the average cost of all possible simple paths in T that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...
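
    A direct quadratic-time evaluation of the MPD, useful as a reference against which a linear-time algorithm can be checked, assuming the pairwise path costs are already available:

```python
from itertools import combinations

def mean_pairwise_distance(dist, sample):
    """Naive MPD: average path cost over all unordered pairs in `sample`.

    dist   : dict keyed by frozenset({u, v}) -> path cost in the tree
    sample : list of leaf labels (the species set R)
    """
    pairs = list(combinations(sample, 2))
    return sum(dist[frozenset(p)] for p in pairs) / len(pairs)

# Toy tree with three leaves a, b, c and known pairwise path costs
dist = {frozenset({"a", "b"}): 2.0,
        frozenset({"a", "c"}): 3.0,
        frozenset({"b", "c"}): 5.0}
print(mean_pairwise_distance(dist, ["a", "b", "c"]))  # (2+3+5)/3 = 3.33...
```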

  9. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide on the basis of statistical considerations if, in a given sequence of materials balance periods, a loss of material might have occurred or not. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.) [de
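
    A classical example of the kind of sequential test applied to a materials-balance (MUF) sequence is a one-sided CUSUM; the sketch below is generic, not PROSA's exact statistics, and the balance values are hypothetical:

```python
def cusum_alarm(muf_sequence, sigma, k=0.5, h=5.0):
    """One-sided CUSUM over standardized material balances.

    Alarms when the cumulative excess over a slack of k (in units of the
    balance standard deviation sigma) exceeds the threshold h.
    Returns the 1-based period of the first alarm, or None.
    """
    s = 0.0
    for period, muf in enumerate(muf_sequence, start=1):
        s = max(0.0, s + muf / sigma - k)   # accumulate standardized excess
        if s > h:
            return period
    return None

# Hypothetical balances (kg): small noise, then a sustained loss pattern
muf = [0.1, -0.2, 0.0, 0.3, 1.2, 1.0, 1.4, 1.1, 1.3]
print(cusum_alarm(muf, sigma=0.4))   # 7: first period at which the test alarms
```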

  10. Monte Carlo computation of correlation times of independent relaxation modes at criticality

    NARCIS (Netherlands)

    Bloete, H.W.J.; Nightingale, M.P.

    2000-01-01

    We investigate aspects of universality of Glauber critical dynamics in two dimensions. We compute the critical exponent $z$ and numerically corroborate its universality for three different models in the static Ising universality class and for five independent relaxation modes. We also present

  11. Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory

    Science.gov (United States)

    Westera, Wim

    2018-01-01

    This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, which is guided by cognitive flow theory and various concepts from learning sciences. It extends a basic, existing model by exposing…

  12. Computation of short-time diffusion using the particle simulation method

    International Nuclear Information System (INIS)

    Janicke, L.

    1983-01-01

    The method of particle simulation allows a correct description of turbulent diffusion even in areas near the source, and the computation of overall average values (expected values). The model is suitable for dealing with complex situations. It is derived from the K-model, which describes the dispersion of noxious matter using the diffusion equation. (DG) [de
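
    The particle-simulation idea amounts to moving marked particles with an advective step plus a random displacement whose variance is 2*K*dt; a minimal one-dimensional sketch:

```python
import numpy as np

def particle_diffusion(n_particles, k_diff, dt, n_steps, u=0.0, seed=0):
    """1-D random-walk particle model: x += u*dt + sqrt(2*K*dt)*N(0,1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                 # all particles start at the source
    for _ in range(n_steps):
        x += u * dt + np.sqrt(2.0 * k_diff * dt) * rng.standard_normal(n_particles)
    return x

x = particle_diffusion(n_particles=100_000, k_diff=1.0, dt=0.1, n_steps=100)
print(x.std() ** 2)   # ~2*K*t = 2*1*10 = 20, matching the diffusion solution
```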

  13. Using the CPU and GPU for real-time video enhancement on a mobile computer

    CSIR Research Space (South Africa)

    Bachoo, AK

    2010-09-01

    Full Text Available In this paper, the current advances in mobile CPU and GPU hardware are used to implement video enhancement algorithms in a new way on a mobile computer. Both the CPU and GPU are used effectively to achieve real-time performance for complex image enhancement...

  14. Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time

    DEFF Research Database (Denmark)

    Tsirogiannis, Constantinos; Sandel, Brody Steven

    2013-01-01

    to the average cost of all possible simple paths in T that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...

  15. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    Science.gov (United States)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  16. Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions

    Science.gov (United States)

    Nijdam, Justin J.

    2013-01-01

    A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…

  17. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...
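
    The quantity being maximized can be stated directly in code; a naive quadratic-time version restricted to vertex pairs (the paper's subquadratic algorithm handles all points of the graph, not just vertices), using networkx for the shortest paths:

```python
import math
from itertools import combinations
import networkx as nx

def vertex_max_detour(g):
    """Max over vertex pairs of (graph distance) / (Euclidean distance).

    Nodes are (x, y) tuples; each edge weight is its segment length.
    """
    d = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
    return max(d[p][q] / math.dist(p, q) for p, q in combinations(g.nodes, 2))

# Unit square: opposite corners have path length 2 vs straight-line sqrt(2)
g = nx.Graph()
for u, v in [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]:
    g.add_edge(u, v, weight=math.dist(u, v))
print(vertex_max_detour(g))   # 2 / sqrt(2) -> ~1.414
```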

  18. A prototype system for real time computer animation of slow traffic in a driving simulator

    NARCIS (Netherlands)

    Roerdink, JBTM; van Delden, MJB; Hin, AJS; van Wolffelaar, PC; Thalmann, NM; Skala,

    1997-01-01

    The Traffic Research Centre (TRC) of the University of Groningen in the Netherlands has developed a driving simulator with 'intelligent' computer-controlled traffic, consisting at the moment only of saloon cars. The range of possible applications would be greatly enhanced if other traffic

  19. A Prototype System for Real Time Computer Animation of Slow Traffic in a Driving Simulator

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Delden, Mattijs J.B. van; Hin, Andrea J.S.; Wolffelaar, Peter C. van

    1997-01-01

    The Traffic Research Centre (TRC) of the University of Groningen in the Netherlands has developed a driving simulator with ‘intelligent’ computer-controlled traffic, consisting at the moment only of saloon cars. The range of possible applications would be greatly enhanced if other traffic

  20. Real-Time Computer Animation of Bicyclists and Pedestrians in a Driving Simulator

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Delden, Mattijs J.B. van; Hin, Andrea J.S.; Wolffelaar, Peter C. van

    1996-01-01

    The Traffic Research Centre (TRC) of the University of Groningen in the Netherlands has developed a driving simulator with ‘intelligent’ computer-controlled traffic, consisting at the moment only of saloon cars. The range of possible applications would be greatly enhanced if other traffic

  1. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computational intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have

  2. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    Science.gov (United States)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Face pose estimation, often required for augmented reality, can be used as an HCI and entertainment tool. A real-time implementation of head pose estimation on relatively resource-limited mobile platforms must satisfy these constraints while retaining sufficient estimation accuracy. The proposed face pose estimation method met this objective. Experimental results on a test Android mobile device showed satisfactory real-time performance and accuracy.

  3. Computational biology approaches to plant metabolism and photosynthesis: applications for corals in times of climate change and environmental stress.

    Science.gov (United States)

    Crabbe, M James C

    2010-08-01

    Knowledge of factors that are important in reef resilience helps us to understand how reef ecosystems react following major anthropogenic and environmental disturbances. The symbiotic relationship between the photosynthetic zooxanthellae algal cells and corals is that the zooxanthellae provide the coral with carbon, while the coral provides protection and access to enough light for the zooxanthellae to photosynthesise. This article reviews some recent advances in computational biology relevant to photosynthetic organisms, including Bayesian approaches to kinetics, computational methods for flux balances in metabolic processes, and the determination of clades of zooxanthellae. Application of these systems will be important in the conservation of coral reefs in times of climate change and environmental stress.
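
    As one concrete instance of the flux-balance methods mentioned above, here is a minimal sketch on a hypothetical three-reaction toy network (not a coral or zooxanthellae model): maximize an export flux subject to the steady-state constraint S·v = 0.

    ```python
    # Minimal flux-balance analysis sketch. The network and capacity bounds
    # are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix: rows = metabolites A, B; columns = reactions
    # R1: -> A, R2: A -> B, R3: B ->  (uptake, conversion, export/objective)
    S = np.array([[1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])
    bounds = [(0, 10), (0, 5), (0, None)]   # flux capacity limits (assumed)
    c = np.array([0.0, 0.0, -1.0])          # linprog minimizes, so negate R3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)  # optimal flux distribution; here capped by R2 at 5.0
    ```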

  4. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 1, Numerical methods and input instructions

    International Nuclear Information System (INIS)

    Trent, D.S.; Eyler, L.L.; Budden, M.J.

    1983-09-01

    This document describes the numerical methods, current capabilities, and the use of the TEMPEST (Version L, MOD 2) computer program. TEMPEST is a transient, three-dimensional, hydrothermal computer program that is designed to analyze a broad range of coupled fluid dynamic and heat transfer systems of particular interest to the Fast Breeder Reactor thermal-hydraulic design community. The full three-dimensional, time-dependent equations of motion, continuity, and heat transport are solved for either laminar or turbulent fluid flow, including heat diffusion and generation in both solid and liquid materials. 10 refs., 22 figs., 2 tabs

  5. Minimally invasive oesophagectomy more expensive than open despite shorter length of stay.

    Science.gov (United States)

    Dhamija, Anish; Dhamija, Ankit; Hancock, Jacquelyn; McCloskey, Barbara; Kim, Anthony W; Detterbeck, Frank C; Boffa, Daniel J

    2014-05-01

    The minimally invasive oesophagectomy (MIO) approach offers a number of advantages over open approaches, including reduced discomfort, shorter length of stay and a faster recovery to baseline status. On the other hand, minimally invasive procedures typically are longer and consume more disposable instrumentation, potentially resulting in a greater overall cost. The objective of this study was to compare costs associated with various oesophagectomy approaches for oesophageal cancer. An institutional Resource Information Management System (RIMS) was queried for cost data relating to hospital expenditures (as opposed to billings or collections). The RIMS was searched for patients undergoing oesophagectomy for oesophageal cancer between 2003 and 2012 via minimally invasive, open transthoracic (OTT) (including Ivor Lewis, modified McKeown or thoracoabdominal) or open transhiatal (OTH) approaches. Patients who were converted from minimally invasive to open, or who underwent hybrid procedures, were excluded. A total of 160 oesophagectomies were identified, including 61 minimally invasive, 35 open transthoracic and 64 transhiatal. Costs on the day of surgery averaged higher in the MIO group ($12 476 ± 2190) than in the open groups, OTT ($8202 ± 2512, P < 0.0001) or OTH ($5809 ± 2575, P < 0.0001). The median costs associated with the entire hospitalization also appear to be higher in the MIO group ($25 935) compared with OTT ($24 440) and OTH ($15 248). The average length of stay was lowest in the MIO group (11 ± 9 days) compared with OTT (19 ± 18 days, P = 0.006) and OTH (18 ± 28 days, P = 0.07). The operative mortality was similar in the three groups (MIO = 3%, OTT = 9% and OTH = 3%). The operating theatre costs associated with minimally invasive oesophagectomy are significantly higher than with OTT or OTH approaches. Unfortunately, a shorter hospital stay after MIO does not consistently offset the higher surgical expense, as total hospital costs trend higher in MIO patients.

  6. Access to Electric Light Is Associated with Shorter Sleep Duration in a Traditionally Hunter-Gatherer Community.

    Science.gov (United States)

    de la Iglesia, Horacio O; Fernández-Duque, Eduardo; Golombek, Diego A; Lanza, Norberto; Duffy, Jeanne F; Czeisler, Charles A; Valeggia, Claudia R

    2015-08-01

    Access to electric light might have shifted the ancestral timing and duration of human sleep. To test this hypothesis, we studied two communities of the historically hunter-gatherer indigenous Toba/Qom in the Argentinean Chaco. These communities share the same ethnic and sociocultural background, but one has free access to electricity while the other relies exclusively on natural light. We fitted participants in each community with wrist activity data loggers to assess their sleep-wake cycles during one week in the summer and one week in the winter. During the summer, participants with access to electricity had a tendency toward a shorter daily sleep bout (by 43 ± 21 min) than those living under natural light conditions. This difference was due to a later daily bedtime and sleep onset in the community with electricity, but a similar sleep offset and rise time in both communities. In the winter, participants without access to electricity slept longer (by 56 ± 17 min) than those with access to electricity, and this was also related to earlier bedtimes and sleep onsets than in the community with electricity. In both communities, daily sleep duration was longer during the winter than during the summer. Our field study supports the notion that access to inexpensive sources of artificial light and the ability to create artificially lit environments must have been key factors in reducing sleep in industrialized human societies. © 2015 The Author(s).

  7. Contributing to the design of run-time systems dedicated to high performance computing

    International Nuclear Information System (INIS)

    Perache, M.

    2006-10-01

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  8. "Taller and Shorter": Human 3-D Spatial Memory Distorts Familiar Multilevel Buildings.

    Directory of Open Access Journals (Sweden)

    Thomas Brandt

    Full Text Available Animal experiments report contradictory findings on the presence of a behavioural and neuronal anisotropy exhibited in vertical and horizontal capabilities of spatial orientation and navigation. We performed a pointing experiment in humans on the imagined 3-D direction of the location of various invisible goals that were distributed horizontally and vertically in a familiar multilevel hospital building. The 21 participants were employees who had worked for years in this building. The hypothesis was that comparison of the experimentally determined directions and the true directions would reveal systematic inaccuracy or dimensional anisotropy of the localizations. The study provides first evidence that the internal representation of a familiar multilevel building was distorted compared to the dimensions of the true building: vertically 215% taller and horizontally 51% shorter. This was not only demonstrated in the mathematical reconstruction of the mental model based on the analysis of the pointing experiments but also by the participants' drawings of the front view and the ground plan of the building. Thus, in the mental model both planes were altered in different directions: compressed for the horizontal floor plane and stretched for the vertical column plane. This could be related to human anisotropic behavioural performance of horizontal and vertical navigation in such buildings.

  9. Risky family processes prospectively forecast shorter telomere length mediated through negative emotions.

    Science.gov (United States)

    Brody, Gene H; Yu, Tianyi; Shalev, Idan

    2017-05-01

    This study was designed to examine prospective associations of risky family environments with subsequent levels of negative emotions and peripheral blood mononuclear cell telomere length (TL), a marker of cellular aging. A second purpose was to determine whether negative emotions mediate the hypothesized link between risky family processes and diminished telomere length. Participants were 293 adolescents (age 17 years at the first assessment) and their primary caregivers. Caregivers provided data on risky family processes when the youths were age 17 years, youths reported their negative emotions at age 18 years, and youths' TL was assayed from a blood sample at age 22 years. The results revealed that (a) risky family processes forecast heightened negative emotions (β = .316, p < .001), (b) negative emotions forecast shorter TL (β = -.187, p = .012), and (c) negative emotions served as a mediator connecting risky family processes with diminished TL (indirect effect = -0.012, 95% CI [-0.036, -0.002]). These findings are consistent with the hypothesis that risky family processes presage premature cellular aging through effects on negative emotions, with potential implications for lifelong health. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Gain of chromosome arm 1q in atypical meningioma correlates with shorter progression-free survival.

    LENUS (Irish Health Repository)

    2012-02-01

    Aims: Atypical (WHO grade II) meningiomas have moderately high recurrence rates; even for completely resected tumours, approximately one-third will recur. Post-operative radiotherapy (RT) may aid local control and improve survival, but carries the risk of side effects. More accurate prediction of recurrence risk is therefore needed for patients with atypical meningioma. Previously, we used high-resolution array CGH to identify genetic variations in 47 primary atypical meningiomas and found that approximately 60% of tumors show gain of 1q at 1q25.1 and 1q25.3 to 1q32.1 and that 1q gain appeared to correlate with shorter progression-free survival. This study aimed to validate and extend these findings in an independent sample. Methods: 86 completely resected atypical meningiomas (with 25 recurrences) from two neurosurgical centres in Ireland were identified and clinical follow up was obtained. Utilizing a dual-colour interphase FISH assay, 1q gain was assessed using BAC probes directed against 1q25.1 and 1q32.1. Results: The results confirm the high prevalence of 1q gain at these loci in atypical meningiomas. We further show that gain at 1q32.1 and age each correlate with progression-free survival in patients who have undergone complete surgical resection of atypical meningiomas. Conclusions: These independent findings suggest that assessment of 1q copy number status can add clinically useful information for the management of patients with atypical meningiomas.

  11. Greater reproductive investment, but shorter lifespan, in agrosystem than in natural-habitat toads

    Directory of Open Access Journals (Sweden)

    Francisco Javier Zamora-Camacho

    2017-09-01

    Full Text Available Global amphibian decline is due to several factors: habitat loss, anthropization, pollution, emerging diseases, and global warming. Amphibians, with complex life cycles, are particularly susceptible to habitat alterations, and their survival may be impaired in anthropized habitats. Increased mortality is a well-known consequence of anthropization. Life-history theory predicts higher reproductive investment when mortality is increased. In this work, we compared age, body size, and different indicators of reproductive investment, as well as prey availability, in natterjack toads (Epidalea calamita) from agrosystems and adjacent natural pine groves in Southwestern Spain. Mean age was lower in agrosystems than in pine groves, possibly as a consequence of increased mortality due to agrosystem environmental stressors. Remarkably, agrosystem toads were larger despite being younger, suggesting an accelerated growth rate. Although we detected no differences in prey availability between habitats, artificial irrigation could shorten aestivation in agrosystems, thus increasing energy intake. Moreover, agrosystem toads exhibited increased indicators of reproductive investment. In the light of life-history theory, agrosystem toads might compensate for fewer reproductive events—due to shorter lives—with a higher reproductive investment in each attempt. Our results show that agrosystems may alter demography, which may have complex consequences for both individual fitness and population stability.

  12. "Taller and Shorter": Human 3-D Spatial Memory Distorts Familiar Multilevel Buildings.

    Science.gov (United States)

    Brandt, Thomas; Huber, Markus; Schramm, Hannah; Kugler, Günter; Dieterich, Marianne; Glasauer, Stefan

    2015-01-01

    Animal experiments report contradictory findings on the presence of a behavioural and neuronal anisotropy exhibited in vertical and horizontal capabilities of spatial orientation and navigation. We performed a pointing experiment in humans on the imagined 3-D direction of the location of various invisible goals that were distributed horizontally and vertically in a familiar multilevel hospital building. The 21 participants were employees who had worked for years in this building. The hypothesis was that comparison of the experimentally determined directions and the true directions would reveal systematic inaccuracy or dimensional anisotropy of the localizations. The study provides first evidence that the internal representation of a familiar multilevel building was distorted compared to the dimensions of the true building: vertically 215% taller and horizontally 51% shorter. This was not only demonstrated in the mathematical reconstruction of the mental model based on the analysis of the pointing experiments but also by the participants' drawings of the front view and the ground plan of the building. Thus, in the mental model both planes were altered in different directions: compressed for the horizontal floor plane and stretched for the vertical column plane. This could be related to human anisotropic behavioural performance of horizontal and vertical navigation in such buildings.

  13. Polynomial-time Algorithms for Computing Distances of Fuzzy Transition Systems

    OpenAIRE

    Chen, Taolue; Han, Tingting; Cao, Yongzhi

    2017-01-01

    Behaviour distances to measure the resemblance of two states in a (nondeterministic) fuzzy transition system have been proposed recently in the literature. Such a distance, defined as a pseudo-ultrametric over the state space of the model, provides a quantitative analogue of bisimilarity. In this paper, we focus on the problem of computing these distances. We first extend the definition of the pseudo-ultrametric by introducing discount such that the discounting factor being equal to 1 capture...

  14. Investigation of accuracy and computation time of a hierarchy of growth rate definitions

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Borg, R.C.; Ott, K.O.

    1977-07-01

    A numerical illustration of the hierarchy of four logically different procedures for the calculation of the asymptotic growth of fast breeder fuel is presented. Each hierarchy level is analyzed in terms of accuracy and computational effort. Using the first procedure as reference, the fourth procedure, which incorporates the isotopic breeding-worth vector w*, requires a minimum amount of effort with a negligible decrease in accuracy

  15. Detection of advance item knowledge using response times in computer adaptive testing

    NARCIS (Netherlands)

    Meijer, R.R.; Sotaridona, Leonardo

    2006-01-01

    We propose a new method for detecting item preknowledge in a CAT based on an estimate of “effective response time” for each item. Effective response time is defined as the time required for an individual examinee to answer an item correctly. An unusually short response time relative to the expected
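
    The record's method estimates an "effective response time" per item and flags answers that are much faster than expected. A toy thresholding sketch of that idea follows (the actual estimation is embedded in an IRT model not reproduced here; all numbers are invented):

    ```python
    # Flag items answered implausibly fast relative to the examinee's
    # predicted (effective) response time, on a log-time scale.
    import numpy as np

    def flag_short_times(log_times, pred_log_times, sigma, z_cut=-2.0):
        """Return a boolean mask of items answered suspiciously fast."""
        z = (np.asarray(log_times) - np.asarray(pred_log_times)) / sigma
        return z < z_cut

    obs = np.log([12.0, 3.0, 45.0])      # observed seconds per item
    pred = np.log([30.0, 35.0, 40.0])    # expected time to answer correctly
    print(flag_short_times(obs, pred, sigma=0.5))  # only item 2 is flagged
    ```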

  16. ESPRIT-Tree: hierarchical clustering analysis of millions of 16S rRNA pyrosequences in quasilinear computational time.

    Science.gov (United States)

    Cai, Yunpeng; Sun, Yijun

    2011-08-01

    Taxonomy-independent analysis plays an essential role in microbial community analysis. Hierarchical clustering is one of the most widely employed approaches to finding operational taxonomic units, the basis for many downstream analyses. Most existing algorithms have quadratic space and computational complexities, and thus can be used only for small or medium-scale problems. We propose a new online learning-based algorithm that simultaneously addresses the space and computational issues of prior work. The basic idea is to partition a sequence space into a set of subspaces using a partition tree constructed using a pseudometric, then recursively refine a clustering structure in these subspaces. The technique relies on new methods for fast closest-pair searching and efficient dynamic insertion and deletion of tree nodes. To avoid exhaustive computation of pairwise distances between clusters, we represent each cluster of sequences as a probabilistic sequence, and define a set of operations to align these probabilistic sequences and compute genetic distances between them. We present analyses of space and computational complexity, and demonstrate the effectiveness of our new algorithm using a human gut microbiota data set with over one million sequences. The new algorithm exhibits a quasilinear time and space complexity comparable to greedy heuristic clustering algorithms, while achieving a similar accuracy to the standard hierarchical clustering algorithm.
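
    One plausible reading of the probabilistic-sequence idea above is sketched below: each cluster is summarized as a per-position base distribution, and the genetic distance between two aligned clusters is the expected mismatch fraction. ESPRIT-Tree's actual alignment and pseudometric machinery is more involved than this.

    ```python
    # Toy distance between two aligned "probabilistic sequences": the expected
    # per-position mismatch probability (an interpretation, not the paper's code).
    import numpy as np

    def expected_mismatch(p: np.ndarray, q: np.ndarray) -> float:
        """p, q: (L, 4) arrays of per-position probabilities over A, C, G, T."""
        match_prob = np.sum(p * q, axis=1)       # P(same base) at each position
        return float(np.mean(1.0 - match_prob))  # expected mismatch fraction

    p = np.array([[1, 0, 0, 0], [0.5, 0.5, 0, 0]], dtype=float)
    q = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
    print(expected_mismatch(p, q))  # 0.25: pos 1 matches, pos 2 half the time
    ```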

  17. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

    Full Text Available Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are hampered by data skew, a known issue in Geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed Core Operation System (CoreOS) clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) scales out and scales in Docker containers to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.

  18. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentration and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
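
    A minimal sketch of the computational core of this approach, with assumed calibration values (not the report's coefficients or bias corrections): fit a log-log regression of concentration on turbidity from paired samples, then turn a continuous sensor record into concentration and load time series.

    ```python
    # Fit log10(SSC) = a + b*log10(turbidity) on paired calibration samples,
    # then compute concentration and load series from sensor data.
    import numpy as np

    turb = np.array([10.0, 40.0, 120.0, 300.0])   # turbidity, FNU (assumed)
    ssc = np.array([15.0, 55.0, 160.0, 380.0])    # sampled SSC, mg/L (assumed)
    b, a = np.polyfit(np.log10(turb), np.log10(ssc), 1)  # slope, intercept

    turb_series = np.array([20.0, 80.0, 250.0])   # continuous record, FNU
    flow_series = np.array([5.0, 12.0, 30.0])     # streamflow, m^3/s
    conc = 10 ** (a + b * np.log10(turb_series))  # mg/L
    load = conc * flow_series * 1e-3              # mg/L * m^3/s = g/s; /1000 -> kg/s
    print(conc, load)
    ```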

  19. Development of a real-time monitoring system and integration of different computer system in LHD experiments using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Nakamura, Yukio; Teramachi, Yasuaki; Okumura, Haruhiko; Yamaguchi, Satarou

    2002-01-01

    There are several different computer systems in the LHD (Large Helical Device) experiment, and the coordination of these computers is therefore key to performing the experiment. A real-time monitoring system is also important because long discharges are needed in the LHD experiment. In order to meet these two requirements, the technique of IP multicast is adopted. The authors have developed three new systems: the first is the real-time monitoring system, the second is the delivery system for the shot number, and the last is the real-time notification system for plasma data registration. The first system can deliver the real-time monitoring data to the LHD experimental LAN through the firewall of the LHD control LAN in NIFS. The other two systems are used to achieve tight coordination of the different computers in the LHD plasma experiment. From this experience, we conclude that IP multicast is very useful both for the LHD experiment and for future large plasma experiments. (author)
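
    A generic Python sketch of IP-multicast delivery of monitoring data (the group address and port are placeholders, not the LHD configuration):

    ```python
    # Standard-library UDP multicast sender and receiver.
    import socket, struct

    GROUP, PORT = "239.1.1.1", 5007   # placeholder multicast group/port

    def send(message: bytes) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        sock.sendto(message, (GROUP, PORT))

    def receive() -> bytes:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the multicast group on all interfaces
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, _addr = sock.recvfrom(4096)
        return data

    # e.g. one process calls receive() while another calls send(b"shot 12345")
    ```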

  20. Development of a computer program for drop time and impact velocity of the rod cluster control assembly

    International Nuclear Information System (INIS)

    Choi, K.-S.; Yim, J.-S.; Kim, I.-K.; Kim, K.-T.

    1993-01-01

    In PWRs the rod cluster control assembly (RCCA) for shutdown is released upon the action of the control rod drive mechanism and falls down through the guide thimble under its own weight. Drop time and impact velocity of the RCCA are two key parameters with respect to reactivity insertion time and the mechanical integrity of the fuel assembly. Therefore, precise control of the drop time and impact velocity is a prerequisite to modifying the existing design features of the RCCA and guide thimble or newly designing them. During its fall into the core, the RCCA is retarded by various forces acting on it, such as flow resistance and friction caused by the RCCA movement, buoyancy, and mechanical friction caused by contact with the inner surface of the guide thimble. However, the complicated coupling of these various forces makes it difficult to derive an analytical dynamic equation for the drop time and impact velocity. This paper deals with the development of a computer program containing an analytical dynamic equation applicable to the Korean Fuel Assembly (KOFA) loaded in the Korean nuclear power plants. The computer program is benchmarked against available single control rod drop tests. Since the predicted values are in good agreement with the test results, the computer program developed in this paper can be employed to modify the existing design features of the RCCA and guide thimble and to develop new design features for advanced nuclear reactors. (author)
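
    The paper's dynamic equation is not given in the record; the sketch below integrates a simple stand-in force balance (gravity minus buoyancy minus quadratic hydraulic drag) with forward Euler to obtain a drop time and impact velocity. All coefficients are hypothetical, not KOFA design values.

    ```python
    # Stand-in drop-time/impact-velocity calculation for a falling assembly.
    g = 9.81          # m/s^2
    mass = 60.0       # kg, RCCA mass (assumed)
    buoyancy = 80.0   # N (assumed)
    drag_coef = 25.0  # N/(m/s)^2, lumped flow resistance + friction (assumed)
    stroke = 3.7      # m, travel before the dashpot region (assumed)

    t, x, v, dt = 0.0, 0.0, 0.0, 1e-4
    while x < stroke:
        a = (mass * g - buoyancy - drag_coef * v * abs(v)) / mass
        v += a * dt
        x += v * dt
        t += dt

    print(f"drop time = {t:.2f} s, impact velocity = {v:.2f} m/s")
    ```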

  1. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
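
    A CPU sketch of the two steps that were offloaded to the GPU: spatial filtering as a matrix-matrix multiply, then per-channel autoregressive PSD estimation via the Yule-Walker equations. Swapping numpy for cupy would be one GPU analogue; the dimensions and filter matrix below are assumptions.

    ```python
    # Spatial filter (matrix multiply) followed by AR (Yule-Walker) PSD.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def ar_psd(x: np.ndarray, order: int = 6, nfreq: int = 64) -> np.ndarray:
        x = x - x.mean()
        r = np.correlate(x, x, "full")[x.size - 1: x.size + order] / x.size
        a = solve_toeplitz(r[:-1], r[1:])        # Yule-Walker AR coefficients
        sigma2 = r[0] - a @ r[1:]                # innovation variance
        freqs = np.linspace(0, 0.5, nfreq)
        e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
        return sigma2 / np.abs(1 - e @ a) ** 2   # AR power spectrum

    channels, samples = 32, 250
    data = np.random.randn(channels, samples)    # one buffer of neural data
    W = np.random.randn(channels, channels)      # spatial filter (assumed)
    filtered = W @ data                          # step 1: matrix-matrix multiply
    psd = np.array([ar_psd(ch) for ch in filtered])  # step 2: per-channel PSD
    print(psd.shape)                             # (32, 64)
    ```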

  2. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  3. A Karaoke System with Real-Time Media Merging and Sharing Functions for a Cloud-Computing-Integrated Mobile Device

    Directory of Open Access Journals (Sweden)

    Her-Tyan Yeh

    2013-01-01

    Full Text Available Mobile devices such as personal digital assistants (PDAs), smartphones, and tablets have increased in popularity and are extremely efficient for work-related, social, and entertainment uses. Popular entertainment services have also attracted substantial attention. Thus, relevant industries have exerted considerable efforts in establishing a method by which mobile devices can be used to develop excellent and convenient entertainment services. Because cloud-computing technology is mature and possesses a strong computing processing capacity, integrating this technology into the entertainment service function in mobile devices can reduce the data load on a system and maintain mobile device performance. This study combines cloud computing with a mobile device to design a karaoke system that contains real-time media merging and sharing functions. This system enables users to download music videos (MVs) to their mobile device and sing and record their singing by using the device. They can upload the recorded song to the cloud server where it is merged with real-time media. Subsequently, by employing a media streaming technology, users can store their personal MVs in their mobile device or computer and instantaneously share these videos with others on the Internet. Through this process, people can instantly watch shared videos, enjoy the leisure and entertainment effects of mobile devices, and satisfy their desire for singing.

  4. Computer Breakdown as a Stress Factor during Task Completion under Time Pressure: Identifying Gender Differences Based on Skin Conductance

    Directory of Open Access Journals (Sweden)

    René Riedl

    2013-01-01

    Full Text Available In today’s society, as computers, the Internet, and mobile phones pervade almost every corner of life, the impact of Information and Communication Technologies (ICT) on humans is dramatic. The use of ICT, however, may also have a negative side. Human interaction with technology may lead to notable stress perceptions, a phenomenon referred to as technostress. An investigation of the literature reveals that computer users’ gender has largely been ignored in technostress research, treating users as “gender-neutral.” To close this significant research gap, we conducted a laboratory experiment in which we investigated users’ physiological reaction to the malfunctioning of technology. Based on theories which explain that men, in contrast to women, are more sensitive to “achievement stress,” we predicted that male users would exhibit higher levels of stress than women in cases of system breakdown during the execution of a human-computer interaction task under time pressure, if compared to a breakdown situation without time pressure. Using skin conductance as a stress indicator, the hypothesis was confirmed. Thus, this study shows that user gender is crucial to better understanding the influence of stress factors such as computer malfunctions on physiological stress reactions.

  5. Person-related determinants of TV viewing and computer time in a cohort of young Dutch adults: Who sits the most?

    NARCIS (Netherlands)

    Uijtdewilligen, L.; Singh, A.S.; Chin A Paw, M.J.M.; Twisk, J.W.R.; van Mechelen, W.

    2015-01-01

    We aimed to assess the associations of person-related factors with leisure time television (TV) viewing and computer time among young adults. We analyzed self-reported TV viewing (h/week) and leisure computer time (h/week) from 475 Dutch young adults (47% male) who had participated in the Amsterdam

  6. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1989-07-01

    At present, the computing method normalized for the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. The reliability of the results, however, is influenced by the choice of certain parameters that cannot be determined by direct methods. Improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (¹³¹I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. Numerical examples of the uncertainties due to the above factors are analyzed. (author) 4 refs.; 14 figs.

  7. Cone-beam computed tomography: Time to move from ALARA to ALADA

    Energy Technology Data Exchange (ETDEWEB)

    Jaju, Prashant P.; Jaju, Sushma P. [Rishiraj College of Dental Sciences and Research Centre, Bhopal (India)]

    2015-12-15

    Cone-beam computed tomography (CBCT) is routinely recommended for dental diagnosis and treatment planning. CBCT exposes patients to less radiation than does conventional CT. Still, lack of proper education among dentists and specialists is resulting in improper referral for CBCT. In addition, aiming to generate high-quality images, operators may increase the radiation dose, which can expose the patient to unnecessary risk. This letter advocates appropriate radiation dosing during CBCT to the benefit of both patients and dentists, and supports moving from the concept of 'as low as reasonably achievable' (ALARA) to 'as low as diagnostically acceptable' (ALADA).

  8. Generic Cospark of a Matrix Can Be Computed in Polynomial Time

    OpenAIRE

    Zhong, Sichen; Zhao, Yue

    2017-01-01

    The cospark of a matrix is the cardinality of the sparsest vector in the column space of the matrix. Computing the cospark of a matrix is well known to be an NP hard problem. Given the sparsity pattern (i.e., the locations of the non-zero entries) of a matrix, if the non-zero entries are drawn from independently distributed continuous probability distributions, we prove that the cospark of the matrix equals, with probability one, to a particular number termed the generic cospark of the matrix...
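
    The definition can be brute-forced for small matrices: for a generic A, a sparsest vector y = Ax is pinned down (up to scale) by forcing some n-1 rows of Ax to zero, so enumerating row subsets suffices. This is exponential; the paper's point is that the generic value can instead be computed in polynomial time from the sparsity pattern alone.

    ```python
    # Brute-force cospark of a small generic matrix: for each choice of n-1
    # rows, take x in the nullspace of that submatrix and count nonzeros of Ax.
    from itertools import combinations
    import numpy as np

    def cospark_bruteforce(A: np.ndarray) -> int:
        m, n = A.shape
        best = m
        for rows in combinations(range(m), n - 1):
            _, _, vt = np.linalg.svd(A[list(rows)])
            y = A @ vt[-1]                     # zero (numerically) on `rows`
            nonzeros = int(np.sum(np.abs(y) > 1e-9))
            if nonzeros > 0:
                best = min(best, nonzeros)
        return best

    A = np.random.randn(6, 3)      # fully dense generic matrix
    print(cospark_bruteforce(A))   # 4 = m - (n - 1) for dense generic A
    ```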

  9. Fine Output Voltage Control Method considering Time-Delay of Digital Inverter System for X-ray Computed Tomography

    Science.gov (United States)

    Shibata, Junji; Kaneko, Kazuhide; Ohishi, Kiyoshi; Ando, Itaru; Ogawa, Mina; Takano, Hiroshi

    This paper proposes a new output voltage control method for an inverter system with a time-delay and a nonlinear load. In next-generation medical X-ray computed tomography (X-ray CT) systems that use the contactless power transfer method, the feedback signal often contains a time-delay due to AD/DA conversion and error detection/correction time. When the PID controller of the inverter system suffers the adverse effects of this time-delay, it often exhibits overshoot and an oscillatory response. In order to overcome this problem, this paper proposes a compensation method based on the Smith predictor for an inverter system having a time-delay and nonlinear loads, namely the diode bridge rectifier and the X-ray tube. The proposed compensation method consists of a hybrid Smith predictor system based on an equivalent analog circuit and a DSP. The experimental results confirm the validity of the proposed system.
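
    A minimal discrete-time sketch of the compensation idea: the PI controller closes its loop around a delay-free internal model, while the mismatch between the delayed measurement and the delayed model output corrects for disturbances. The first-order plant and gains below are hypothetical, not the X-ray CT inverter model.

    ```python
    # Smith predictor around a first-order plant with a pure transport delay.
    from collections import deque

    a, b = 0.9, 0.1            # plant: y[k+1] = a*y[k] + b*u[k]
    d = 10                     # measurement delay in samples (assumed)
    kp, ki = 2.0, 0.15         # PI gains (assumed)

    y = ym = integ = 0.0       # plant output, model output, integrator
    pipe = deque([0.0] * d)    # transport delay on the real measurement
    pipe_m = deque([0.0] * d)  # matching delay inside the predictor
    setpoint = 1.0

    for k in range(300):
        y_meas = pipe[0]                    # delayed plant measurement
        ym_delayed = pipe_m[0]
        # Predictor feedback: delay-free model + (measured - delayed model)
        feedback = ym + (y_meas - ym_delayed)
        err = setpoint - feedback
        integ += ki * err
        u = kp * err + integ                # PI control law
        y = a * y + b * u                   # advance real plant
        ym = a * ym + b * u                 # advance internal model
        pipe.append(y); pipe.popleft()
        pipe_m.append(ym); pipe_m.popleft()

    print(f"measured output after 300 steps: {y_meas:.3f}")  # near setpoint
    ```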

  10. Prenatal paracetamol exposure is associated with shorter anogenital distance in male infants

    Science.gov (United States)

    Fisher, B.G.; Thankamony, A.; Hughes, I.A.; Ong, K.K.; Dunger, D.B.; Acerini, C.L.

    2016-01-01

    STUDY QUESTION What is the relationship between maternal paracetamol intake during the masculinisation programming window (MPW, 8–14 weeks of gestation) and male infant anogenital distance (AGD), a biomarker for androgen action during the MPW? SUMMARY ANSWER Intrauterine paracetamol exposure during 8–14 weeks of gestation is associated with shorter AGD from birth to 24 months of age. WHAT IS ALREADY KNOWN The increasing prevalence of male reproductive disorders may reflect environmental influences on foetal testicular development during the MPW. Animal and human xenograft studies have demonstrated that paracetamol reduces foetal testicular testosterone production, consistent with reported epidemiological associations between prenatal paracetamol exposure and cryptorchidism. STUDY DESIGN, SIZE, DURATION Prospective cohort study (Cambridge Baby Growth Study), with recruitment of pregnant women at ~12 post-menstrual weeks of gestation from a single UK maternity unit between 2001 and 2009, and 24 months of infant follow-up. Of 2229 recruited women, 1640 continued with the infancy study after delivery, of whom 676 delivered male infants and completed a medicine consumption questionnaire. PARTICIPANTS/MATERIALS, SETTING, METHOD Mothers self-reported medicine consumption during pregnancy by a questionnaire administered during the perinatal period. Infant AGD (measured from 2006 onwards), penile length and testicular descent were assessed at 0, 3, 12, 18 and 24 months of age, and age-specific Z scores were calculated. Associations between paracetamol intake during three gestational periods (<8, 8–14 and >14 weeks) and these outcomes were tested by linear mixed models. Two hundred and twenty-five (33%) of six hundred and eighty-one male infants were exposed to paracetamol during pregnancy, of whom sixty-eight were reported to be exposed during 8–14 weeks. AGD measurements were available for 434 male infants. MAIN RESULTS AND THE ROLE OF CHANCE Paracetamol exposure during 8–14

  11. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response; thereby the cross-dependencies of image voxels are reduced, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical
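
    The interleaved TOF-bin subsetization can be pictured with a few lines of indexing (bin and subset counts below are assumed, not scanner values):

    ```python
    # Subset s takes every n_subsets-th TOF bin, decoupling adjacent bins.
    n_tof_bins, n_tof_subsets = 13, 4
    tof_subsets = [list(range(s, n_tof_bins, n_tof_subsets))
                   for s in range(n_tof_subsets)]
    print(tof_subsets)
    # [[0, 4, 8, 12], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
    # An OSEM-style iteration would sweep these subsets (possibly crossed with
    # azimuthal-angle subsets), updating the image after each subset.
    ```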

  12. Hard real-time quick EXAFS data acquisition with all open source software on a commodity personal computer

    International Nuclear Information System (INIS)

    So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.

    2007-01-01

    We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers. Among them are Linux running on an x86 desktop computer, RTAI for real-time response, the COMEDI driver for the data acquisition hardware, Qt and PyQt for the graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of a deterministic data-sampling period of tens of microseconds with typical timing-jitter of several microseconds. At the same time, Linux runs other non-real-time processes handling the user interface. A modern Qt-based controls frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates let the experimenters verify the quality of the data before detailed analysis. Python scripting is built in for automation. The typical data rate for continuous runs is around 10 Mbytes/min

  13. The role of real-time in biomedical science: a meta-analysis on computational complexity, delay and speedup.

    Science.gov (United States)

    Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U

    2015-03-01

    The concept of real-time is very important, as it deals with the realizability of computer based health care systems. In this paper we review biomedical real-time systems with a meta-analysis on computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time, with a discussion on CC or Sp. These papers were analyzed in terms of processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on implementation (I). Most authors compete on big O notation (O) and processing time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    Science.gov (United States)

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the Simple Storage Service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are

  15. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    International Nuclear Information System (INIS)

    Na, Yong Hum; Kapp, Daniel S; Xing, Lei; Suh, Tae-Suk

    2013-01-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the Simple Storage Service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are

  16. The Effect of Shorter Treatment Regimens for Hepatitis C on Population Health and Under Fixed Budgets.

    Science.gov (United States)

    Morgan, Jake R; Kim, Arthur Y; Naggie, Susanna; Linas, Benjamin P

    2018-01-01

    Direct-acting antiviral hepatitis C virus (HCV) therapies are highly effective but costly. Wider adoption of an 8-week ledipasvir/sofosbuvir treatment regimen could result in significant savings, but may be less efficacious compared with a 12-week regimen. We evaluated outcomes under a constrained budget and the cost-effectiveness of 8 vs 12 weeks of therapy in treatment-naïve, noncirrhotic, genotype 1 HCV-infected black and nonblack individuals and considered scenarios of IL28B and NS5A resistance testing to determine treatment duration in sensitivity analyses. We developed a decision tree to use in conjunction with Monte Carlo simulation to investigate the cost-effectiveness of recommended treatment durations and the population health effect of these strategies given a constrained budget. Outcomes included the total number of individuals treated and attaining sustained virologic response (SVR) given a constrained budget and incremental cost-effectiveness ratios. We found that treating eligible (treatment-naïve, noncirrhotic, HCV-RNA <6 million IU/mL) individuals with the 8-week regimen increased the number treated and attaining SVR under a constrained budget among both black and nonblack individuals, and our results suggested that NS5A resistance testing is cost-effective. Eight-week therapy provides good value, and wider adoption of shorter treatment could allow more individuals to attain SVR on the population level given a constrained budget. This analysis provides an evidence base to justify movement of the 8-week regimen to the preferred regimen list for appropriate patients in the HCV treatment guidelines and suggests expanding that recommendation to black patients in settings where cost and relapse trade-offs are considered.
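
    The fixed-budget trade-off at the heart of the analysis can be sketched with simple arithmetic (all costs and SVR rates below are illustrative stand-ins, not the paper's model inputs): cheaper 8-week therapy treats more people, so total cures can rise even if per-patient efficacy is slightly lower.

    ```python
    # Compare patients treated and SVRs attained under one fixed drug budget.
    budget = 10_000_000.0                       # fixed budget, USD (assumed)

    def cures(cost_per_course, svr_rate):
        treated = int(budget // cost_per_course)
        return treated, int(treated * svr_rate)

    for label, cost, svr in [("12-week", 63_000.0, 0.97),
                             ("8-week", 42_000.0, 0.95)]:
        treated, cured = cures(cost, svr)
        print(f"{label}: treated={treated}, attaining SVR={cured}")
    # 12-week: 158 treated, 153 SVR; 8-week: 238 treated, 226 SVR
    ```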

  17. Are Shorter Article Titles More Attractive for Citations? Cross-sectional Study of 22 Scientific Journals

    Science.gov (United States)

    Habibzadeh, Farrokh; Yadollahie, Mahboobeh

    2010-01-01

    Aim To investigate the correlation between the length of the title of a scientific article and the number of citations it receives, in view of the common editorial call for shorter titles. Methods Title and the number of citations to all articles published in 2005 in 22 arbitrarily chosen English-language journals (n = 9031) were retrieved from the citation database Scopus. The 2008 journal impact factors of these 22 journals were also retrieved from Thomson Reuters’ Journal Citation Report (JCR). Assuming the article title length as the independent variable, and the number of citations to the article as the dependent variable, a linear regression model was applied. Results The slope of the regression line for some journals (n = 6 when titles were measured in characters, but 7 when titles were measured in words) was negative – none was significantly different from 0. The overall slope for all journals was 0.140 (when titles were measured in characters) and 0.778 (when titles were measured in words), significantly different from 0 (P < 0.001). Overall, articles with longer titles received more citations – Spearman ρ = 0.266 when titles were measured in characters, and ρ = 0.244 when titles were measured in words (P < 0.001 for both). The correlation was significant for journals with impact factor >10 and for 2 out of 14 journals with impact factor <10 (P < 0.001, Fisher exact test). Conclusion Longer titles seem to be associated with higher citation rates. This association is more pronounced for journals with high impact factors. Editors who insist on brief and concise titles should perhaps update the guidelines for authors of their journals and have more flexibility regarding the length of the title. PMID:20401960
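
    The study's core computation is a linear regression of citations on title length plus a rank correlation; a minimal sketch with made-up data (not the Scopus sample):

    ```python
    # Regress citation counts on title length and report Spearman rho.
    import numpy as np
    from scipy.stats import linregress, spearmanr

    title_len = np.array([45, 60, 72, 88, 95, 110, 130])  # characters (assumed)
    citations = np.array([3, 5, 4, 9, 8, 14, 12])          # citations (assumed)

    fit = linregress(title_len, citations)     # slope of citations vs. length
    rho, p = spearmanr(title_len, citations)   # rank correlation
    print(f"slope={fit.slope:.3f} (p={fit.pvalue:.3f}), rho={rho:.2f} (p={p:.3f})")
    ```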

  18. Hypermetabolism in ALS is associated with greater functional decline and shorter survival.

    Science.gov (United States)

    Steyn, Frederik J; Ioannides, Zara A; van Eijk, Ruben P A; Heggie, Susan; Thorpe, Kathryn A; Ceslis, Amelia; Heshmat, Saman; Henders, Anjali K; Wray, Naomi R; van den Berg, Leonard H; Henderson, Robert D; McCombe, Pamela A; Ngo, Shyuan T

    2018-04-29

    To determine the prevalence of hypermetabolism, relative to body composition, in amyotrophic lateral sclerosis (ALS) and its relationship with clinical features of disease and survival. Fifty-eight patients with clinically definite or probable ALS as defined by the El Escorial criteria, and 58 age- and sex-matched control participants underwent assessment of energy expenditure. Our primary outcome was the prevalence of hypermetabolism in cases and controls. Longitudinal changes in clinical parameters between hypermetabolic and normometabolic patients with ALS were determined for up to 12 months following metabolic assessment. Survival was monitored over a 30-month period following metabolic assessment. Hypermetabolism was more prevalent in patients with ALS than controls (41% vs 12%, adjusted OR = 5.4; p < 0.001). Mean lower motor neuron score (SD) was greater in hypermetabolic patients when compared with normometabolic patients (4 (0.3) vs 3 (0.7); p=0.04). In the 12 months following metabolic assessment, there was a greater change in Revised ALS Functional Rating Scale score in hypermetabolic patients when compared with normometabolic patients (-0.68 points/month vs -0.39 points/month; p=0.01). Hypermetabolism was inversely associated with survival. Overall, hypermetabolism increased the risk of death during follow-up by 220% (HR 3.2, 95% CI 1.1 to 9.4, p=0.03). Hypermetabolic patients with ALS have a greater level of lower motor neuron involvement, a faster rate of functional decline and shorter survival. The metabolic index could be important for informing prognosis in ALS. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Alternative promoter usage generates novel shorter MAPT mRNA transcripts in Alzheimer's disease and progressive supranuclear palsy brains.

    Science.gov (United States)

    Huin, Vincent; Buée, Luc; Behal, Hélène; Labreuche, Julien; Sablonnière, Bernard; Dhaenens, Claire-Marie

    2017-10-03

    Alternative promoter usage is an important mechanism for transcriptome diversity and the regulation of gene expression. Indeed, this alternative usage may influence tissue/subcellular specificity, protein translation, and the function of the resulting proteins. The existence of an alternative promoter for the MAPT gene was considered for a long time to explain the differential tissue specificity and differential response to transcription and growth factors between mRNA transcripts. Alternative promoter usage could partly explain the different tau protein expression patterns observed in tauopathies. Here, we report on our discovery of a functional alternative promoter for MAPT, located upstream of the gene's second exon (exon 1). By analyzing genome databases and brain tissue from control individuals and patients with Alzheimer's disease or progressive supranuclear palsy, we identified novel shorter transcripts derived from this alternative promoter. These transcripts are increased in patients' brain tissue as assessed by 5'RACE-PCR and qPCR. We suggest that these new MAPT isoforms can be translated into normal or amino-terminal-truncated tau proteins. We further suggest that activation of MAPT's alternative promoter under pathological conditions leads to the production of truncated proteins, changes in protein localization and function, and thus neurodegeneration.

  20. Cleavage of SNAP25 and its shorter versions by the protease domain of serotype A botulinum neurotoxin.

    Directory of Open Access Journals (Sweden)

    Rahman M Mizanur

    Full Text Available Various substrates, catalysts, and assay methods are currently used to screen inhibitors for their effect on the proteolytic activity of botulinum neurotoxin. As a result, significant variation exists in the reported results. Recently, we found that one source of variation was the use of various catalysts, and we have therefore evaluated its three forms. In this paper, we characterize three substrates under near-uniform reaction conditions using the most active catalytic form of the toxin. Bovine serum albumin at varying optimum concentrations stimulated enzymatic activity with all three substrates. Sodium chloride had a stimulating effect on the full-length synaptosomal-associated protein of 25 kDa (SNAP25) and its 66-mer substrates but had an inhibitory effect on the 17-mer substrate. We found that under optimum conditions, full-length SNAP25 was a better substrate than its shorter 66-mer or 17-mer forms in terms of kcat, Km, and catalytic efficiency (kcat/Km). Assay times greater than 15 min introduced large variations and significantly reduced the catalytic efficiency. In addition to characterizing the three substrates, our results identify potential sources of variation in previously published results, and underscore the importance of using well-defined reaction components and assay conditions.
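
    The kinetic comparison rests on estimating kcat and Km from initial-rate data; here is a sketch of a Michaelis-Menten fit with illustrative numbers (not the paper's BoNT/A-SNAP25 measurements):

    ```python
    # Fit v = kcat*[E]*[S] / (Km + [S]) to initial-rate data, then report
    # the catalytic efficiency kcat/Km.
    import numpy as np
    from scipy.optimize import curve_fit

    E0 = 0.01  # enzyme concentration, uM (assumed)

    def mm_rate(S, kcat, Km):
        return kcat * E0 * S / (Km + S)

    S = np.array([5, 10, 25, 50, 100, 200.0])     # substrate, uM (assumed)
    v = np.array([0.8, 1.4, 2.4, 3.1, 3.6, 3.9])  # initial rate, uM/min

    (kcat, Km), _ = curve_fit(mm_rate, S, v, p0=(500.0, 50.0))
    print(f"kcat={kcat:.0f} /min, Km={Km:.0f} uM, kcat/Km={kcat/Km:.1f} /min/uM")
    ```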