WorldWideScience

Sample records for previous computational results

  1. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of significant concern to patients and operators. A simple and cost-effective means of performing the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (i.e., to deliver less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise, and the noise will propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a previously acquired normal-dose diagnostic CT image may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, the nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further explores ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of the nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
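    The core idea can be made concrete with a toy nonlocal means step in which the similarity weights are computed against the previous normal-dose image rather than against the noisy image itself. The Python sketch below illustrates the general ndiNLM idea only; the patch size, search window, Gaussian kernel, and smoothing parameter h are illustrative assumptions, not the paper's optimized weight design or its adaptive parameter estimation.

        import numpy as np

        def ndi_nlm_voxel(low, prior, i, j, f=1, s=5, h=0.05):
            # Restore one voxel of a low-dose image by averaging intensities
            # of a registered previous normal-dose scan, weighted by patch
            # similarity between the two images (toy ndiNLM-style step).
            p0 = low[i - f:i + f + 1, j - f:j + f + 1]   # patch in low-dose image
            num = den = 0.0
            for x in range(i - s, i + s + 1):            # search window in prior
                for y in range(j - s, j + s + 1):
                    q = prior[x - f:x + f + 1, y - f:y + f + 1]
                    d2 = np.mean((p0 - q) ** 2)          # patch dissimilarity
                    w = np.exp(-d2 / h ** 2)             # nonlocal weight
                    num += w * prior[x, y]
                    den += w
            return num / den

        rng = np.random.default_rng(0)
        prior = rng.random((32, 32))                       # stand-in normal-dose image
        low = prior + 0.1 * rng.standard_normal((32, 32))  # simulated low-dose image
        print(ndi_nlm_voxel(low, prior, 16, 16))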

  2. The Importance of Business Model Factors for Cloud Computing Adoption: Role of Previous Experiences

    Directory of Open Access Journals (Sweden)

    Bogataj Habjan Kristina

    2017-08-01

Background and Purpose: Bringing several opportunities for more effective and efficient IT governance and service exploitation, cloud computing is expected to impact the European and global economies significantly. Market data show that, despite many advantages and promised benefits, the adoption of cloud computing is not as fast and widespread as foreseen. This situation shows the need for further exploration of the potential of cloud computing and its implementation on the market. The purpose of this research was to identify the individual business model factors with the highest impact on cloud computing adoption. In addition, the aim was to identify differences in opinion regarding the importance of business model factors for cloud computing adoption according to companies’ previous experience with cloud computing services.

  3. Value of computed tomography pelvimetry in patients with a previous cesarean section

    International Nuclear Information System (INIS)

    Yamani, Tarik Y.; Rouzi, Abdulrahim A.

    1998-01-01

A case-control study was conducted at the Department of Obstetrics and Gynaecology, King Abdulaziz University Hospital, Jeddah, Saudi Arabia, to determine the value of computed tomography pelvimetry in patients with a previous cesarean section. Between January 1993 and December 1995, 219 pregnant women with one previous cesarean section were studied: 100 had antenatal CT pelvimetry for assessment of the pelvis, while 119 did not have CT pelvimetry and served as controls. Fifty-one women (51%) in the CT pelvimetry group were delivered by cesarean section: 23 (23%) underwent elective cesarean section for a contracted pelvis based upon the findings of CT pelvimetry, and 28 (28%) underwent emergency cesarean section after a trial of labor. In the group that did not have CT pelvimetry, 26 women (21.8%) underwent emergency cesarean section. This was a statistically significant difference (P=0.02). There were no statistically significant differences in birthweight or Apgar scores in either group. There was no perinatal or maternal mortality in this study. Computed tomography pelvimetry increased the rate of cesarean delivery without any benefit in the immediate delivery outcomes. Therefore, the practice of documenting the adequacy of the pelvis by CT pelvimetry before vaginal birth after cesarean should be abandoned. (author)

  4. Automated, computer interpreted radioimmunoassay results

    International Nuclear Information System (INIS)

    Hill, J.C.; Nagle, C.E.; Dworkin, H.J.; Fink-Bennett, D.; Freitas, J.E.; Wetzel, R.; Sawyer, N.; Ferry, D.; Hershberger, D.

    1984-01-01

Ninety thousand radioimmunoassay results have been interpreted and transcribed automatically using software developed for a Hewlett-Packard Model 1000 minicomputer system with conventional dot matrix printers. The computer program correlates the results of a combination of assays, interprets them, and prints a report ready for physician review and signature within minutes of completion of the assay. The authors designed and wrote a computer program to query their patient database for radioassay laboratory results and to produce a computer-generated interpretation of these results using an algorithm that produces normal and abnormal interpretives. Their laboratory assays 50,000 patient samples each year using 28 different radioassays; of these, 85% have been interpreted using the computer program. Allowances are made for drug and patient history, and individualized reports are generated with regard to the patient's age and sex. Finalization of reports is still subject to change by the nuclear physician at the time of final review. Automated, computerized interpretations have realized cost savings through reduced personnel and personnel time and have provided uniformity of the interpretations among the five physicians. Prior to computerization of interpretations, all radioassay results had to be dictated and reviewed for signing by one of the resident or staff physicians, and turnaround times for reports generally were two to three days, whereas the computerized interpretation system allows reports to be issued, in general, on the day assays are completed.
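    As a rough illustration of what rule-based interpretation of this kind involves, the sketch below compares an assay value against an age- and sex-specific reference range and emits a normal/abnormal interpretive line. The assay name, reference ranges, and report wording are hypothetical; the program's actual algorithm, drug-history allowances, and report formats are not described here in enough detail to reproduce.

        # Hypothetical reference ranges keyed by (assay, sex, age band).
        REF = {
            ("T4", "F", "adult"): (5.0, 12.0),   # ug/dL, illustrative values only
            ("T4", "M", "adult"): (4.5, 11.5),
        }

        def interpret(assay, value, sex, age_band="adult"):
            lo, hi = REF[(assay, sex, age_band)]
            if value < lo:
                return f"{assay} = {value}: below reference range ({lo}-{hi})"
            if value > hi:
                return f"{assay} = {value}: above reference range ({lo}-{hi})"
            return f"{assay} = {value}: within reference range ({lo}-{hi})"

        # Draft interpretive line, still subject to physician review.
        print(interpret("T4", 13.2, "F"))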

  5. Computations for a condenser. Experimental results

    International Nuclear Information System (INIS)

    Walden, Jean.

    1975-01-01

Computations for condensers are presented together with experimental results. The computations concern the steam flux at the condenser input and inside the tube bundle. Experimental results are given for the flux inside the condenser sleeve and the flow passing through the tube bundle. [fr]

  6. Emphysema and bronchiectasis in COPD patients with previous pulmonary tuberculosis: computed tomography features and clinical implications

    Directory of Open Access Journals (Sweden)

    Jin J

    2018-01-01

Jianmin Jin,1 Shuling Li,2 Wenling Yu,2 Xiaofang Liu,1 Yongchang Sun1,3 1Department of Respiratory and Critical Care Medicine, Beijing Tongren Hospital, Capital Medical University, Beijing, 2Department of Radiology, Beijing Tongren Hospital, Capital Medical University, Beijing, 3Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing, China. Background: Pulmonary tuberculosis (PTB) is a risk factor for COPD, but the clinical characteristics and the chest imaging features (emphysema and bronchiectasis) of COPD with previous PTB have not been studied well. Methods: The presence, distribution, and severity of emphysema and bronchiectasis in COPD patients with and without previous PTB were evaluated by high-resolution computed tomography (HRCT) and compared. Demographic data, respiratory symptoms, lung function, and sputum culture of Pseudomonas aeruginosa were also compared between patients with and without previous PTB. Results: A total of 231 COPD patients (82.2% ex- or current smokers, 67.5% male) were consecutively enrolled. Patients with previous PTB (45.0%) had more severe (p=0.045) and longer history (p=0.008) of dyspnea, more exacerbations in the previous year (p=0.011), and more positive cultures of P. aeruginosa (p=0.001), compared with those without PTB. Patients with previous PTB showed a higher prevalence of bronchiectasis (p<0.001), which was more significant in lungs with tuberculosis (TB) lesions, and a higher percentage of more severe bronchiectasis (Bhalla score ≥2, p=0.031), compared with those without previous PTB. The overall prevalence of emphysema was not different between patients with and without previous PTB, but in those with previous PTB, a higher number of subjects with middle (p=0.001) and lower (p=0.019) lobe emphysema, a higher severity score (p=0.028), a higher prevalence of panlobular emphysema (p=0.013), and more extensive centrilobular emphysema (p=0.039) were observed. Notably, in patients with

  7. Charged-particle thermonuclear reaction rates: IV. Comparison to previous work

    International Nuclear Information System (INIS)

    Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.

    2010-01-01

    We compare our Monte Carlo reaction rates (see Paper II of this issue) to previous results that were obtained by using the classical method of computing thermonuclear reaction rates. For each reaction, the comparison is presented using two types of graphs: the first shows the change in reaction rate uncertainties, while the second displays our new results normalized to the previously recommended reaction rate. We find that the rates have changed significantly for almost all reactions considered here. The changes are caused by (i) our new Monte Carlo method of computing reaction rates (see Paper I of this issue), and (ii) newly available nuclear physics information (see Paper III of this issue).

  8. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

The article is devoted to the statistical analysis of the results of computer-based testing for evaluation of the educational achievements of students. The issue is relevant because computer-based testing in Russian universities has become an important method for evaluating the educational achievements of students and the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of the results of computer-based testing and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database; formation of queries; and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience, and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.
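    The item-level statistics such a program computes from a matrix of answers can be sketched in a few lines. Item difficulty and point-biserial discrimination below are standard classical-test-theory quantities chosen for illustration; the answer matrix is made up, and StatInfo's actual outputs are not public.

        import numpy as np

        # Rows = students, columns = test items; 1 = correct, 0 = incorrect.
        answers = np.array([[1, 1, 0, 1],
                            [1, 0, 0, 1],
                            [0, 1, 1, 1],
                            [1, 1, 0, 0],
                            [1, 0, 1, 1]])

        difficulty = answers.mean(axis=0)   # proportion correct per item
        totals = answers.sum(axis=1)        # each student's total score
        # Point-biserial discrimination: correlation of each item with totals.
        discrimination = np.array([np.corrcoef(answers[:, k], totals)[0, 1]
                                   for k in range(answers.shape[1])])
        print(difficulty, discrimination)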

  9. Computer usage and national energy consumption: Results from a field-metering study

    Energy Technology Data Exchange (ETDEWEB)

Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery; Pratt, Stacy; Willem, Henry; Claybaugh, Erin; Beraki, Bereket; Nagaraju, Mythri; Price, Sarah; Young, Scott [all: Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States), Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division]

    2014-12-01

The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean of 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimates, since laptops can be charged from other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflecting laptops drawing more power in On mode as well as greater market penetration. This result for laptops, however, carries relatively higher uncertainty than for desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
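    The roll-up from per-unit AEC to a national figure is simple arithmetic, sketched below. The installed-base number is a placeholder chosen only to show how a measured 194 kWh/yr per desktop scales to roughly 20 TWh; it is not a figure taken from the report.

        # National energy = per-unit annual energy consumption x installed stock.
        desktop_aec_kwh = 194        # measured mean AEC per desktop (kWh/yr)
        desktop_stock = 103e6        # hypothetical installed base (units)

        national_twh = desktop_aec_kwh * desktop_stock / 1e9  # kWh -> TWh
        print(f"{national_twh:.1f} TWh")  # ~20 TWh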

  10. Air Space Proportion in Pterosaur Limb Bones Using Computed Tomography and Its Implications for Previous Estimates of Pneumaticity

    Science.gov (United States)

    Martin, Elizabeth G.; Palmer, Colin

    2014-01-01

    Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than the shaft. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than those found in birds and most sauropod dinosaurs, giving them among the highest ASP values of animals studied so far, supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight. PMID:24817312
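    For the conventional diameter-based estimate that the CT results are compared against, an idealized tubular cross-section gives ASP directly as a squared diameter ratio, since the areas of concentric circles scale with the square of their diameters. The function below is a minimal sketch of that textbook estimate, not of the CT segmentation workflow.

        def asp_tubular(inner_diameter, outer_diameter):
            # Air Space Proportion of an idealized circular, tubular bone:
            # internal (air) cross-sectional area over total area.
            return (inner_diameter / outer_diameter) ** 2

        print(asp_tubular(8.4, 10.0))  # -> 0.7056, i.e. about 71% air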

  11. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
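    The milestone approach is easy to make concrete: refine a result in a loop, record each refinement, and return the last recorded value when the deadline arrives. The sketch below uses Newton iteration for a square root as a stand-in for any monotonically improving computation; it illustrates the concept only, not the Concord system's mechanism.

        import time

        def sqrt_with_milestones(a, deadline_s):
            start = time.monotonic()
            x = a if a > 1 else 1.0       # initial guess
            milestone = x                 # last recorded (imprecise) result
            while time.monotonic() - start < deadline_s:
                x = 0.5 * (x + a / x)     # one refinement step
                milestone = x             # record the milestone
                if abs(x * x - a) < 1e-12:
                    break                 # converged: the result is precise
            return milestone              # imprecise if the deadline hit first

        print(sqrt_with_milestones(2.0, 0.001))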

  12. A randomised clinical trial of intrapartum fetal monitoring with computer analysis and alerts versus previously available monitoring

    Directory of Open Access Journals (Sweden)

    Santos Cristina

    2010-10-01

Abstract. Background: Intrapartum fetal hypoxia remains an important cause of death and permanent handicap, and in a significant proportion of cases there is evidence of suboptimal care related to fetal surveillance. Cardiotocographic (CTG) monitoring remains the basis of intrapartum surveillance, but its interpretation by healthcare professionals lacks reproducibility, and the technology has not been shown to improve clinically important outcomes. The addition of fetal electrocardiogram analysis has increased the potential to avoid adverse outcomes, but CTG interpretation remains its main weakness. A program for computerised analysis of intrapartum fetal signals, incorporating real-time alerts for healthcare professionals, has recently been developed. There is a need to determine whether this technology can result in better perinatal outcomes. Methods/design: This is a multicentre randomised clinical trial. Inclusion criteria are: women aged ≥ 16 years, able to provide written informed consent, singleton pregnancies ≥ 36 weeks, cephalic presentation, no known major fetal malformations, in labour but excluding active second stage, planned for continuous CTG monitoring, and no known contraindication for vaginal delivery. Eligible women will be randomised using a computer-generated randomisation sequence to one of the two arms: continuous computer analysis of fetal monitoring signals with real-time alerts (intervention arm) or continuous CTG monitoring as previously performed (control arm). Electrocardiographic monitoring and fetal scalp blood sampling will be available in both arms. The primary outcome measure is the incidence of fetal metabolic acidosis (umbilical artery pH < 7.05, BDecf > 12 mmol/L). Secondary outcome measures include: caesarean section and instrumental vaginal delivery rates, use of fetal blood sampling, and 5-minute Apgar score < 7. Discussion: This study will provide evidence of the impact of intrapartum monitoring with computer analysis and real-time alerts.

  13. Membrane computing: brief introduction, recent results and applications.

    Science.gov (United States)

    Păun, Gheorghe; Pérez-Jiménez, Mario J

    2006-07-01

    The internal organization and functioning of living cells, as well as their cooperation in tissues and higher order structures, can be a rich source of inspiration for computer science, not fully exploited at the present date. Membrane computing is an answer to this challenge, well developed at the theoretical (mathematical and computability theory) level, already having several applications (via usual computers), but without having yet a bio-lab implementation. After briefly discussing some general issues related to natural computing, this paper provides an informal introduction to membrane computing, focused on the main ideas, the main classes of results and of applications. Then, three recent achievements, of three different types, are briefly presented, with emphasis on the usefulness of membrane computing as a framework for devising models of interest for biological and medical research.

  14. Computer-aided detection system performance on current and previous digital mammograms in patients with contralateral metachronous breast cancer

    International Nuclear Information System (INIS)

    Kim, Seung Ja; Moon, Woo Kyung; Cho, Nariya; Chang, Jung Min

    2012-01-01

    Background: The computer-aided detection (CAD) system is widely used for screening mammography. The performance of the CAD system for contralateral breast cancer has not been reported for women with a history of breast cancer. Purpose: To retrospectively evaluate the performance of a CAD system on current and previous mammograms in patients with contralateral metachronous breast cancer. Material and Methods: During a 3-year period, 4945 postoperative patients had follow-up examinations, from whom we selected 55 women with contralateral breast cancers. Among them, 38 had visible malignant signs on the current mammograms. We analyzed the sensitivity and false-positive marks of the system on the current and previous mammograms according to lesion type and breast density. Results: The total visible lesion components on the current mammograms included 27 masses and 14 calcifications in 38 patients. The case-based sensitivity for all lesion types was 63.2% (24/38) with false-positive marks of 0.71 per patient. The lesion-based sensitivity for masses and calcifications was 59.3% (16/27) and 71.4% (10/14), respectively. The lesion-based sensitivity for masses in fatty and dense breasts was 68.8% (11/16) and 45.5% (5/11), respectively. The lesion-based sensitivity for calcifications in fatty and dense breasts was 100.0% (3/3) and 63.6% (7/11), respectively. The total visible lesion components on the previous mammograms included 13 masses and three calcifications in 16 patients, and the sensitivity for all lesion types was 31.3% (5/16) with false-positive marks of 0.81 per patient. On these mammograms, the sensitivity for masses and calcifications was 30.8% (4/13) and 33.3% (1/3), respectively. The sensitivity in fatty and dense breasts was 28.6% (2/7) and 33.3% (3/9), respectively. Conclusion: In the women with a history of breast cancer, the sensitivity of the CAD system in visible contralateral breast cancer was lower than in most previous reports using the same CAD
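    For readers unfamiliar with the two denominators: case-based sensitivity counts a patient as detected if the system marks any of her malignant lesions, while lesion-based sensitivity counts each lesion separately. The abstract's own current-mammogram numbers reproduce as follows.

        # Case-based: a case is "detected" if CAD marks any of its lesions.
        case_sens = 24 / 38    # 63.2% of the 38 patients
        # Lesion-based: each lesion component counted separately, by type.
        mass_sens = 16 / 27    # 59.3% of masses
        calc_sens = 10 / 14    # 71.4% of calcifications
        print(f"{case_sens:.1%}, {mass_sens:.1%}, {calc_sens:.1%}")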

  15. Re-Computation of Numerical Results Contained in NACA Report No. 496

    Science.gov (United States)

    Perry, Boyd, III

    2015-01-01

An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  16. Determination of the Boltzmann constant with cylindrical acoustic gas thermometry: new and previous results combined

    Science.gov (United States)

    Feng, X. J.; Zhang, J. T.; Lin, H.; Gillis, K. A.; Mehl, J. B.; Moldover, M. R.; Zhang, K.; Duan, Y. N.

    2017-10-01

We report a new determination of the Boltzmann constant k_B using a cylindrical acoustic gas thermometer. We determined the length of the copper cavity from measurements of its microwave resonance frequencies. This contrasts with our previous work (Zhang et al 2011 Int. J. Thermophys. 32 1297, Lin et al 2013 Metrologia 50 417, Feng et al 2015 Metrologia 52 S343) that determined the length of a different cavity using two-color optical interferometry. In this new study, the half-widths of the acoustic resonances are closer to their theoretical values than in our previous work. Despite significant changes in resonator design and the way in which the cylinder length is determined, the value of k_B is substantially unchanged. We combined this result with our four previous results to calculate a global weighted mean of our k_B determinations. The calculation follows CODATA’s method (Mohr and Taylor 2000 Rev. Mod. Phys. 72 351) for obtaining the weighted mean value of k_B that accounts for the correlations among the measured quantities in this work and in our four previous determinations of k_B. The weighted mean k̂_B is 1.380 6484(28) × 10⁻²³ J K⁻¹ with a relative standard uncertainty of 2.0 × 10⁻⁶. The corresponding value of the universal gas constant is 8.314 459(17) J K⁻¹ mol⁻¹ with a relative standard uncertainty of 2.0 × 10⁻⁶.
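    The correlation-aware weighted mean used here is the generalized least-squares estimator k̂_B = (1ᵀ C⁻¹ x) / (1ᵀ C⁻¹ 1) with u²(k̂_B) = 1 / (1ᵀ C⁻¹ 1), where x holds the individual determinations and C their covariance matrix. The sketch below shows only the mechanics: the five values, their uncertainties, and the single common correlation coefficient are placeholders, not the paper's actual inputs.

        import numpy as np

        x = np.array([1.3806513e-23, 1.3806505e-23, 1.3806497e-23,
                      1.3806509e-23, 1.3806484e-23])        # placeholder k_B values (J/K)
        u = np.array([3.9, 3.7, 2.4, 2.8, 2.0]) * 1e-6 * x  # standard uncertainties
        r = 0.3                                             # assumed common correlation
        C = np.outer(u, u) * (r + (1 - r) * np.eye(len(x))) # covariance matrix

        Cinv = np.linalg.inv(C)
        ones = np.ones_like(x)
        k_hat = ones @ Cinv @ x / (ones @ Cinv @ ones)      # weighted mean
        u_k = np.sqrt(1.0 / (ones @ Cinv @ ones))           # its standard uncertainty
        print(k_hat, u_k)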

  17. [Usage patterns of internet and computer games: Results of an observational study of Tyrolean adolescents].

    Science.gov (United States)

    Riedl, David; Stöckl, Andrea; Nussbaumer, Charlotte; Rumpold, Gerhard; Sevecke, Kathrin; Fuchs, Martin

    2016-12-01

The use of digital media such as the internet and computer games has greatly increased. In the western world, almost all young people regularly use these technologies. Against this background, forms of use with possible negative consequences for young people have been recognized and scientifically examined. The aim of our study was therefore to investigate the prevalence of pathological use of these technologies in a sample of young Tyrolean people. 398 students (average age 15.2 years, SD ± 2.3 years, 34.2% female) were interviewed by means of the structured questionnaires CIUS (internet use), CSV-S (computer gaming), and SWE (self-efficacy); additionally, sociodemographic data were collected. In line with previous studies, 7.7% of the adolescents in our sample met criteria for problematic internet use and 3.3% for pathological internet use; 5.4% of the sample reported pathological computer game usage. The most important influence on our results was the gender of the subjects: intensive users of the internet and computer games were more often young men, while young women showed significantly fewer signs of pathological computer game use. A significant percentage of Tyrolean adolescents showed difficulties in the development of competent media use, indicating the growing significance of prevention measures such as media education. In a follow-up project, a sample of adolescents with mental disorders will be examined concerning their media use and compared with our school sample.

  18. Computing nucleon EDM on a lattice

    Science.gov (United States)

    Abramczyk, Michael; Aoki, Sinya; Blum, Tom; Izubuchi, Taku; Ohki, Hiroshi; Syritsyn, Sergey

    2018-03-01

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  19. Computing nucleon EDM on a lattice

    Energy Technology Data Exchange (ETDEWEB)

    Abramczyk, Michael; Izubuchi, Taku

    2017-06-18

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  20. Surgical Results of Trabeculectomy and Ahmed Valve Implantation Following a Previous Failed Trabeculectomy in Primary Congenital Glaucoma Patients

    OpenAIRE

    Lee, Naeun; Ma, Kyoung Tak; Bae, Hyoung Won; Hong, Samin; Seong, Gong Je; Hong, Young Jae; Kim, Chan Yun

    2015-01-01

Purpose: To compare the surgical results of trabeculectomy and Ahmed glaucoma valve implantation after a previous failed trabeculectomy. Methods: A retrospective comparative case series review was performed on 31 eye surgeries in 20 patients with primary congenital glaucoma who underwent trabeculectomy or Ahmed glaucoma valve implantation after a previous failed trabeculectomy with mitomycin C. Results: The preoperative mean intraocular pressure was 25.5 mmHg in the trabeculectomy group and 26.9...

  1. Initial water quantification results using neutron computed tomography

    Science.gov (United States)

    Heller, A. K.; Shi, L.; Brenizer, J. S.; Mench, M. M.

    2009-06-01

    Neutron computed tomography is an important imaging tool in the field of non-destructive testing and in fundamental research for many engineering applications. Contrary to X-rays, neutrons can be attenuated by some light materials, such as hydrogen, but can penetrate many heavy materials. Thus, neutron computed tomography is useful in obtaining important three-dimensional information about a sample's interior structure and material properties that other traditional methods cannot provide. The neutron computed tomography system at the Pennsylvania State University's Radiation Science and Engineering Center is being utilized to develop a water quantification technique for investigation of water distribution in fuel cells under normal conditions. A hollow aluminum cylinder test sample filled with a known volume of water was constructed for purposes of testing the quantification technique. Transmission images of the test sample at different angles were easily acquired through the synthesis of a dedicated image acquisition computer driving a rotary table controller and an in-house developed synchronization software package. After data acquisition, Octopus (version 8.2) and VGStudio Max (version 1.2) were used to perform cross-sectional and three-dimensional reconstructions of the sample, respectively. The initial reconstructions and water quantification results are presented.
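    Water quantification from neutron transmission ultimately rests on Beer-Lambert attenuation, I = I0·exp(−μt), inverted per pixel for the water thickness t. The sketch below assumes a single attenuating material, no scatter or beam-hardening correction, and an illustrative thermal-neutron attenuation coefficient for water of roughly 3.5 cm⁻¹; the actual technique is calibrated against the known water volume in the test sample.

        import numpy as np

        def water_thickness_cm(I, I0, mu_water=3.5):
            # Invert Beer-Lambert attenuation per pixel: t = ln(I0 / I) / mu.
            return np.log(I0 / I) / mu_water

        I0 = np.full((4, 4), 1000.0)          # open-beam (unattenuated) image
        I = I0 * np.exp(-3.5 * 0.2)           # simulated 2 mm water layer
        print(water_thickness_cm(I, I0))      # ~0.2 cm everywhere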

  2. Computer and compiler effects on code results: status report

    International Nuclear Information System (INIS)

    1996-01-01

    Within the framework of the international effort on the assessment of computer codes, which are designed to describe the overall reactor coolant system (RCS) thermalhydraulic response, core damage progression, and fission product release and transport during severe accidents, there has been a continuous debate as to whether the code results are influenced by different code users or by different computers or compilers. The first aspect, the 'Code User Effect', has been investigated already. In this paper the other aspects will be discussed and proposals are given how to make large system codes insensitive to different computers and compilers. Hardware errors and memory problems are not considered in this report. The codes investigated herein are integrated code systems (e. g. ESTER, MELCOR) and thermalhydraulic system codes with extensions for severe accident simulation (e. g. SCDAP/RELAP, ICARE/CATHARE, ATHLET-CD), and codes to simulate fission product transport (e. g. TRAPMELT, SOPHAEROS). Since all of these codes are programmed in Fortran 77, the discussion herein is based on this programming language although some remarks are made about Fortran 90. Some observations about different code results by using different computers are reported and possible reasons for this unexpected behaviour are listed. Then methods are discussed how to avoid portability problems

  3. Computational fluid dynamics simulations and validations of results

    CSIR Research Space (South Africa)

    Sitek, MA

    2013-09-01

Wind flow influence on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper the computational model used in simulations is described and the results, which were...

  4. BaBar computing - From collisions to physics results

    CERN Multimedia

    CERN. Geneva

    2004-01-01

The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as their evolution with the ever higher luminosity being delivered by PEP-II. This will include data acquisition and software triggering in a high-availability, low-deadtime online environment; a prompt, automated calibration pass through the data at SLAC; and the full reconstruction of the data that takes place at INFN-Padova within 24 hours. Monte Carlo production takes place in a highly automated fashion in 25+ sites. The resulting real and simulated data is distributed and made available at SLAC and other computing centers. For analysis a much more sophisticated skimming pass has been introduced in the past year, ...

  5. Verification of SACI-2 computer code comparing with experimental results of BIBLIS-A and LOOP-7 computer code

    International Nuclear Information System (INIS)

    Soares, P.A.; Sirimarco, L.F.

    1984-01-01

SACI-2 is a computer code created to study the dynamic behaviour of a PWR nuclear power plant. To evaluate the quality of its results, SACI-2 was used to recalculate commissioning tests done at the BIBLIS-A nuclear power plant and to calculate postulated transients for the Angra-2 reactor. The SACI-2 results showed as good an agreement with the BIBLIS-A measurements as with those calculated for Angra-2 with the KWU LOOP-7 computer code. (E.G.) [pt]

  6. Initial water quantification results using neutron computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.K. [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States)], E-mail: axh174@psu.edu; Shi, L.; Brenizer, J.S.; Mench, M.M. [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States)

    2009-06-21

    Neutron computed tomography is an important imaging tool in the field of non-destructive testing and in fundamental research for many engineering applications. Contrary to X-rays, neutrons can be attenuated by some light materials, such as hydrogen, but can penetrate many heavy materials. Thus, neutron computed tomography is useful in obtaining important three-dimensional information about a sample's interior structure and material properties that other traditional methods cannot provide. The neutron computed tomography system at Pennsylvania State University's Radiation Science and Engineering Center is being utilized to develop a water quantification technique for investigation of water distribution in fuel cells under normal conditions. A hollow aluminum cylinder test sample filled with a known volume of water was constructed for purposes of testing the quantification technique. Transmission images of the test sample at different angles were easily acquired through the synthesis of a dedicated image acquisition computer driving a rotary table controller and an in-house developed synchronization software package. After data acquisition, Octopus (version 8.2) and VGStudio Max (version 1.2) were used to perform cross-sectional and three-dimensional reconstructions of the sample, respectively. The initial reconstructions and water quantification results are presented.

  7. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

The results obtained by simulation of three cases from the CANON depressurization experiment, using the TRAC-PF1 computer code, version 7.6, implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first thermal-hydraulic standard problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss-of-primary-coolant accident in pressurized water reactors is evaluated. (M.C.K.) [pt]

  8. Altered reward processing in pathological computer gamers--ERP-results from a semi-natural gaming-design.

    Science.gov (United States)

    Duven, Eva C P; Müller, Kai W; Beutel, Manfred E; Wölfling, Klaus

    2015-01-01

    Internet Gaming Disorder has been added as a research diagnosis in section III for the DSM-V. Previous findings from neuroscientific research indicate an enhanced motivational attention toward cues related to computer games, similar to findings in substance-related addictions. On the other hand in clinical observational studies tolerance effects are reported by patients with Internet Gaming disorder. In the present study we investigated whether an enhanced motivational attention or tolerance effects are present in patients with Internet Gaming Disorder. A clinical sample from the Outpatient Clinic for Behavioral Addictions in Mainz, Germany was recruited, fulfilling the diagnostic criteria for Internet Gaming Disorder. In a semi-natural EEG design participants played a computer game during the recording of event-related potentials to assess reward processing. The results indicated an attenuated P300 for patients with Internet Gaming Disorder in response to rewards in comparison to healthy controls, while the latency of N100 was prolonged and the amplitude of N100 was increased. Our findings support the hypothesis that tolerance effects are present in patients with Internet Gaming Disorder, when actively playing computer games. In addition, the initial orienting toward the gaming reward is suggested to consume more capacity for patients with Internet Gaming Disorder, which has been similarly reported by other studies with other methodological background in disorders of substance-related addictions.

  9. Computation of Quasiperiodic Normally Hyperbolic Invariant Tori: Rigorous Results

    Science.gov (United States)

    Canadell, Marta; Haro, Àlex

    2017-12-01

    The development of efficient methods for detecting quasiperiodic oscillations and computing the corresponding invariant tori is a subject of great importance in dynamical systems and their applications in science and engineering. In this paper, we prove the convergence of a new Newton-like method for computing quasiperiodic normally hyperbolic invariant tori carrying quasiperiodic motion in smooth families of real-analytic dynamical systems. The main result is stated as an a posteriori KAM-like theorem that allows controlling the inner dynamics on the torus with appropriate detuning parameters, in order to obtain a prescribed quasiperiodic motion. The Newton-like method leads to several fast and efficient computational algorithms, which are discussed and tested in a companion paper (Canadell and Haro in J Nonlinear Sci, 2017. doi: 10.1007/s00332-017-9388-z), in which new mechanisms of breakdown are presented.

  10. New computation results for the solar dynamo

    International Nuclear Information System (INIS)

    Csada, I.K.

    1983-01-01

The analytical solution to the solar dynamo equation leads to a relatively simple algorithm for computation in terms of kinematic models. The internal and external velocities are taken to be in the form of axisymmetric meridional circulation and differential rotation, respectively. Purely radial expanding motions in the corona are also taken into consideration. Numerical results are presented in terms of the velocity parameters for the period of field reversal, decay time, and magnitudes and phases of the first four multipoles. (author)

  11. Patterns of students' computer use and relations to their computer and information literacy

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe; Gerick, Julia

    2017-01-01

    Background: Previous studies have shown that there is a complex relationship between students’ computer and information literacy (CIL) and their use of information and communication technologies (ICT) for both recreational and school use. Methods: This study seeks to dig deeper into these complex...... relations by identifying different patterns of students’ school-related and recreational computer use in the 21 countries participating in the International Computer and Information Literacy Study (ICILS 2013). Results: Latent class analysis (LCA) of the student questionnaire and performance data from......, raising important questions about differences in contexts. Keywords: ICILS, Computer use, Latent class analysis (LCA), Computer and information literacy....

  12. A computer-based measure of resultant achievement motivation.

    Science.gov (United States)

    Blankenship, V

    1987-08-01

    Three experiments were conducted to develop a computer-based measure of individual differences in resultant achievement motivation (RAM) on the basis of level-of-aspiration, achievement motivation, and dynamics-of-action theories. In Experiment 1, the number of atypical shifts and greater responsiveness to incentives on 21 trials with choices among easy, intermediate, and difficult levels of an achievement-oriented game were positively correlated and were found to differentiate the 62 subjects (31 men, 31 women) on the amount of time they spent at a nonachievement task (watching a color design) 1 week later. In Experiment 2, test-retest reliability was established with the use of 67 subjects (15 men, 52 women). Point and no-point trials were offered in blocks, with point trials first for half the subjects and no-point trials first for the other half. Reliability was higher for the atypical-shift measure than for the incentive-responsiveness measure and was higher when points were offered first. In Experiment 3, computer anxiety was manipulated by creating a simulated computer breakdown in the experimental condition. Fifty-nine subjects (13 men, 46 women) were randomly assigned to the experimental condition or to one of two control conditions (an interruption condition and a no-interruption condition). Subjects with low RAM, as demonstrated by a low number of typical shifts, took longer to choose the achievement-oriented task, as predicted by the dynamics-of-action theory. The difference was evident in all conditions and most striking in the computer-breakdown condition. A change of focus from atypical to typical shifts is discussed.
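    The central score can be sketched as a count of atypical shifts over the trial sequence. Coding difficulty as 0 = easy, 1 = intermediate, 2 = difficult, level-of-aspiration theory treats a shift to a harder level after success and to an easier level after failure as typical, so the reverse directions are scored as atypical. The coding below is a standard reading of that theory, not Blankenship's exact scoring rule.

        def count_atypical(choices, outcomes):
            # choices: difficulty level per trial; outcomes: True = success.
            atypical = 0
            for prev, cur, won in zip(choices, choices[1:], outcomes):
                if won and cur < prev:        # easier after success
                    atypical += 1
                elif not won and cur > prev:  # harder after failure
                    atypical += 1
            return atypical

        print(count_atypical([1, 0, 1, 2], [True, False, False]))  # -> 3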

  13. Computing Educator Attitudes about Motivation

    OpenAIRE

    Settle, Amber; Sedlak, Brian

    2016-01-01

    While motivation is of great interest to computing educators, relatively little work has been done on understanding faculty attitudes toward student motivation. Two previous qualitative studies of instructor attitudes found results identical to those from other disciplines, but neither study considered whether instructors perceive student motivation to be more important in certain computing classes. In this work we present quantitative results about the perceived importance of student motivat...

  14. Some gender issues in educational computer use: results of an international comparative survey

    OpenAIRE

    Janssen Reinen, I.A.M.; Plomp, T.

    1993-01-01

    In the framework of the Computers in Education international study of the International Association for the Evaluation of Educational Achievement (IEA), data have been collected concerning the use of computers in 21 countries. This article examines some results regarding the involvement of women in the implementation and use of computers in the educational practice of elementary, lower secondary and upper secondary education in participating countries. The results show that in many countries ...

  15. [Results of the marketing research study "Acceptance of physician's office computer systems"].

    Science.gov (United States)

    Steinhausen, D; Brinkmann, F; Engelhard, A

    1998-01-01

We report on a market research study on the acceptance of computer systems in surgeries. 11,000 returned questionnaires from surgeons--users and nonusers--were analysed. We found that most of the surgeons used their computers in a limited way, i.e. as a device for accounting. Concerning the level of utilisation, there are differences between men and women, west and east, and young and old. In this study we also analysed the computer-use behaviour of gynaecological surgeons. As a result, two thirds of all nonusers do not intend to use a computer in the future.

  16. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to the time of but before admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P < 0.001 for the comparison of the frequency of clinically important abnormalities between patients with normal and with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  17. Computation and experiment results of the grounding model of Three Gorges Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Wen Xishan; Zhang Yuanfang; Yu Jianhui; Chen Cixuan [Wuhan University of Hydraulic and Electrical Engineering (China); Qin Liming; Xu Jun; Shu Lianfu [Yangtze River Water Resources Commission, Wuhan (China)

    1999-07-01

A model for the computation of the grounding parameters of the grids of the Three Gorges Power Plant (TGPP) on the Yangtze River is presented in this paper. Using this model, computation and analysis of the grounding grids were carried out. The results show that the reinforcing steel of the dam is the main body for current dissipation; it must be reliably welded to form a good grounding grid. The experimental results show that the method and program of the computations are correct. (UK)

  18. Initial results of CyberKnife treatment for recurrent previously irradiated head and neck cancer

    International Nuclear Information System (INIS)

    Himei, Kengo; Katsui, Kuniaki; Yoshida, Atsushi

    2003-01-01

The purpose of this study was to evaluate the efficacy of CyberKnife treatment for recurrent, previously irradiated head and neck cancer. Thirty-one patients with recurrent, previously irradiated head and neck cancer treated with the CyberKnife from July 1999 to March 2002 at Okayama Kyokuto Hospital were retrospectively studied. The accumulated dose was 28-80 Gy (median 60 Gy). The interval between CyberKnife treatment and previous radiotherapy was 0.4-429.5 months (median 16.3 months). Primary lesions were nasopharynx: 7, maxillary sinus: 6, tongue: 5, ethmoid sinus: 3, and others: 1. The pathology was squamous cell carcinoma: 25, adenoid cystic carcinoma: 4, and others: 2. Symptoms were pain: 8, and nasal bleeding: 2. The prescribed dose was 15.0-40.3 Gy (median 32.3 Gy) as the marginal dose. The response rate (complete response (CR) + partial response (PR)) and the local control rate (CR + PR + no change (NC)) were 74% and 94%, respectively. Regarding symptom improvement, pain disappeared in 4 cases, was relieved in 4, and was unchanged in 2; nasal bleeding disappeared in 2 cases. Adverse effects were observed as mucositis in 5 cases and neck swelling in one case. The prognosis of recurrent, previously irradiated head and neck cancer is estimated to be poor. Our early experience shows that CyberKnife is expected to be a feasible treatment for recurrent, previously irradiated head and neck cancer, reducing adverse effects and maintaining useful quality of life (QOL) for patients. (author)

  19. High-Throughput Computational Assessment of Previously Synthesized Semiconductors for Photovoltaic and Photoelectrochemical Devices

    DEFF Research Database (Denmark)

    Kuhar, Korina; Pandey, Mohnish; Thygesen, Kristian Sommer

    2018-01-01

    Using computational screening we identify materials with potential use as light absorbers in photovoltaic or photoelectrochemical devices. The screening focuses on compounds of up to three different chemical elements which are abundant and nontoxic. A prescreening is carried out based on informat...

  20. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the consumption of I/O transmission is cut down. We ran experiments on several matrices of different data sizes and different degrees of sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
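    For context, the computation being blocked and parallelized is the standard PageRank power iteration, sketched below with a sparse column-stochastic matrix. The result-driven minimum blocking itself (partitioning so each worker stores only the elements its slice of the result needs) is not reproduced here; the damping factor and tolerance are conventional choices, and the sketch assumes every node has at least one out-link.

        import numpy as np
        from scipy.sparse import csr_matrix

        def pagerank(edges, n, d=0.85, tol=1e-10):
            # edges: list of (source, target) pairs; n: number of nodes.
            rows, cols = zip(*edges)
            out_deg = np.bincount(rows, minlength=n).astype(float)
            vals = 1.0 / out_deg[list(rows)]            # column-stochastic entries
            M = csr_matrix((vals, (cols, rows)), shape=(n, n))
            r = np.full(n, 1.0 / n)
            while True:
                r_new = d * (M @ r) + (1.0 - d) / n     # power-iteration step
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        print(pagerank([(0, 1), (1, 2), (2, 0), (0, 2)], 3))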

  1. Research on Computer-Based Education for Reading Teachers: A 1989 Update. Results of the First National Assessment of Computer Competence.

    Science.gov (United States)

    Balajthy, Ernest

    Results of the 1985-86 National Assessment of Educational Progress (NAEP) survey of American students' knowledge of computers suggest that American schools have a long way to go before computers can be said to have made a significant impact. The survey covered the 3rd, 7th, and 11th grade levels and assessed competence in knowledge of computers,…

  2. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    Energy Technology Data Exchange (ETDEWEB)

Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A. [and others]

    1995-12-31

In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); these computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of the movement of control and protection system (CPS) controls in a core.

  3. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment.

    Science.gov (United States)

    Boevé, Anja J; Meijer, Rob R; Albers, Casper J; Beetsma, Yta; Bosker, Roel J

    2015-01-01

The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will yield results similar to those of paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.

  4. Introducing handheld computing into a residency program: preliminary results from qualitative and quantitative inquiry.

    OpenAIRE

    Manning, B.; Gadd, C. S.

    2001-01-01

    Although published reports describe specific handheld computer applications in medical training, we know very little yet about how, and how well, handheld computing fits into the spectrum of information resources available for patient care and physician training. This paper reports preliminary quantitative and qualitative results from an evaluation study designed to track changes in computer usage patterns and computer-related attitudes before and after introduction of handheld computing. Pre...

  5. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.; Yan, Lie

    2014-01-01

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  6. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.

    2014-08-29

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  7. A survey of computational physics: introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics.

  8. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment

    Science.gov (United States)

    Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.

    2015-01-01

    The introduction of computer-based testing for high-stakes examinations in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to those of paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and become familiar with this new mode of test administration. PMID:26641632

  9. Locating previously unknown patterns in data-mining results: a dual data- and knowledge-mining method

    Directory of Open Access Journals (Sweden)

    Knaus William A

    2006-03-01

    Background: Data mining can be utilized to automate analysis of substantial amounts of data produced in many organizations. However, data mining produces large numbers of rules and patterns, many of which are not useful. Existing methods for pruning uninteresting patterns have only begun to automate the knowledge acquisition step (which is required for subjective measures of interestingness), hence leaving a serious bottleneck. In this paper we propose a method for automatically acquiring knowledge to shorten the pattern list by locating the novel and interesting ones. Methods: The dual-mining method is based on automatically comparing the strength of patterns mined from a database with the strength of equivalent patterns mined from a relevant knowledgebase. When these two estimates of pattern strength do not match, a high "surprise score" is assigned to the pattern, identifying the pattern as potentially interesting. The surprise score captures the degree of novelty or interestingness of the mined pattern. In addition, we show how to compute p values for each surprise score, thus filtering out noise and attaching statistical significance. Results: We have implemented the dual-mining method using scripts written in Perl and R. We applied the method to a large patient database and a biomedical literature citation knowledgebase. The system estimated association scores for 50,000 patterns, composed of disease entities and lab results, by querying the database and the knowledgebase. It then computed the surprise scores by comparing the pairs of association scores. Finally, the system estimated statistical significance of the scores. Conclusion: The dual-mining method eliminates more than 90% of patterns with strong associations, thus identifying them as uninteresting. We found that the pruning of patterns using the surprise score matched the biomedical evidence in the 100 cases that were examined by hand. The method automates the acquisition of
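
    The core comparison lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of the idea (the paper used Perl and R scripts; the function names and the use of a two-proportion z-test here are assumptions for illustration, not the authors' exact statistic):

    ```python
    import math

    def association(cooccur, total):
        """Strength of a pattern: fraction of records/citations supporting it."""
        return cooccur / total

    def surprise_score(db_cooccur, db_total, kb_cooccur, kb_total):
        """Two-proportion z-statistic comparing database vs. knowledgebase strength.

        A large |z| means the two sources disagree about the pattern,
        flagging it as potentially novel or interesting.
        """
        p1 = association(db_cooccur, db_total)
        p2 = association(kb_cooccur, kb_total)
        pooled = (db_cooccur + kb_cooccur) / (db_total + kb_total)
        se = math.sqrt(pooled * (1 - pooled) * (1 / db_total + 1 / kb_total))
        return (p1 - p2) / se if se > 0 else 0.0

    def p_value(z):
        """Two-sided normal p-value for the surprise score."""
        return math.erfc(abs(z) / math.sqrt(2))

    # Example: a disease/lab-result pair strongly associated in patient data
    # but rarely co-mentioned in the literature -> high surprise.
    z = surprise_score(db_cooccur=400, db_total=10_000, kb_cooccur=50, kb_total=10_000)
    print(z, p_value(z))
    ```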

  10. Technical Note. The Concept of a Computer System for Interpretation of Tight Rocks Using X-Ray Computed Tomography Results

    Directory of Open Access Journals (Sweden)

    Habrat Magdalena

    2017-03-01

    The article presents the concept of a computer system for interpreting unconventional oil and gas deposits with the use of X-ray computed tomography results. The main goal is to design a complex yet practical tool, in the form of specialist computer software, for the qualitative and quantitative interpretation of images obtained from X-ray computed tomography, devoted to the prospecting and identification of unconventional hydrocarbon deposits. The article focuses on the use of X-ray computed tomography as a basis for the analysis of tight rocks, and especially on the functional principles of the system to be developed by the authors. These functional principles cover graphical visualization of rock structure, qualitative and quantitative interpretation of the model used for visualizing rock samples, and the interpretation and description of the parameters computed within the quantitative-interpretation module.

  11. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E.; Resch, Michael M. (Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011: High Performance Computing in Science and Engineering '11)

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  12. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report, experimental results measured in 1971 on the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station at Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures that take into consideration the influence of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.)

  13. Effect of computer game playing on baseline laparoscopic simulator skills.

    Science.gov (United States)

    Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd

    2013-08-01

    Studies examining the possible association between computer game playing and laparoscopic performance in general have yielded conflicting results, and a relationship between computer game playing and baseline performance on laparoscopic simulators has not been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years earlier. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Simulator performance was then analyzed for association with computer game experience. Setting: a local high school in Norway. Participants: forty-eight students from 2 high school classes volunteered to participate in the study. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and for prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.

  14. Separation of electron ion ring components (computational simulation and experimental results)

    International Nuclear Information System (INIS)

    Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.

    1978-01-01

    The achievable polarization of electron-ion rings in the acceleration regime and the separation of the ring components at the final stage of acceleration are studied. Results of computational simulation by the macroparticle method and of experiments on ring acceleration and separation are given, and the calculated results are compared with experiment.

  15. Results of a Research Evaluating Quality of Computer Science Education

    Science.gov (United States)

    Záhorec, Ján; Hašková, Alena; Munk, Michal

    2012-01-01

    The paper presents the results of an international research on a comparative assessment of the current status of computer science education at the secondary level (ISCED 3A) in Slovakia, the Czech Republic, and Belgium. Evaluation was carried out based on 14 specific factors gauging the students' point of view. The authors present qualitative…

  16. FPGAs in High Perfomance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices.

  17. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Science.gov (United States)

    Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, encrypted under different users' public keys, into data encrypted under the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result following the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexity of each user in our solution are independent of the function being computed. PMID:24982949
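
    The flavor of two-server outsourcing can be illustrated with a much simpler primitive. The sketch below is not the paper's lattice-based protocol; it is a toy additive-secret-sharing scheme (the modulus, names, and the choice of a sum as the outsourced function are illustrative assumptions) that shares the key property of being noninteractive between users:

    ```python
    import secrets

    P = 2**61 - 1  # public modulus for the toy scheme

    def share(x):
        """Split a private input into two additive shares, one per server."""
        r = secrets.randbelow(P)
        return r, (x - r) % P  # share for server A, share for server B

    # Each user splits its input once and uploads; users never talk to each other.
    inputs = [12, 7, 30]
    shares = [share(x) for x in inputs]

    # Each server aggregates only the shares it holds; neither learns any input.
    sum_a = sum(s[0] for s in shares) % P
    sum_b = sum(s[1] for s in shares) % P

    # Only a party holding both aggregates (an authorized user) recovers the result.
    result = (sum_a + sum_b) % P
    assert result == sum(inputs)
    print(result)  # 49
    ```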

  18. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    Science.gov (United States)

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, encrypted under different users' public keys, into data encrypted under the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result following the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexity of each user in our solution are independent of the function being computed.

  19. The Effects of the Previous Outcome on Probabilistic Choice in Rats

    Science.gov (United States)

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2014-01-01

    This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
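
    The winning model in the simulations was an exponentially decaying value function updated per trial. Below is a minimal sketch of that style of updating rule (a standard delta rule with a softmax choice rule; the learning rate, payoffs, and parameter values are illustrative assumptions, not the authors' fitted model):

    ```python
    import math
    import random

    def update(value, reward, alpha=0.2):
        """Exponentially decaying value estimate, updated once per trial."""
        return value + alpha * (reward - value)

    def choose(v_certain, v_uncertain, beta=1.0):
        """Softmax choice between the certain and uncertain options."""
        p_uncertain = 1 / (1 + math.exp(-beta * (v_uncertain - v_certain)))
        return "uncertain" if random.random() < p_uncertain else "certain"

    v_cert, v_unc = 0.0, 0.0
    for _ in range(1000):
        if choose(v_cert, v_unc) == "uncertain":
            reward = 9 if random.random() < 0.33 else 0  # probabilistic payoff
            v_unc = update(v_unc, reward)
        else:
            v_cert = update(v_cert, 3)  # certain payoff
    print(v_cert, v_unc)
    ```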

  20. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, ran at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction, the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed

  1. Transversity results and computations in symplectic field theory

    International Nuclear Information System (INIS)

    Fabert, Oliver

    2008-01-01

    Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R x V equipped with a cylindrical almost complex structure J, it is already well-known from Gromov-Witten theory that, due to the presence of multiply-covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this observe that in the general geometric setup for symplectic field theory, the contact manifolds can be replaced by mapping tori M_φ of symplectic manifolds (M, ω_M) with symplectomorphisms φ. While the cylindrical contact homology of M_φ is given by the Floer homologies of powers of φ, the other algebraic invariants of symplectic field theory for M_φ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian φ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory by Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of M_φ ≅ S^1 x M. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic field theory are established, our result implies that the

  2. Computational fluid mechanics

    Science.gov (United States)

    Hassan, H. A.

    1993-01-01

    Two papers are included in this progress report. In the first, the compressible Navier-Stokes equations have been used to compute leading edge receptivity of boundary layers over parabolic cylinders. Natural receptivity at the leading edge was simulated and Tollmien-Schlichting waves were observed to develop in response to an acoustic disturbance, applied through the farfield boundary conditions. To facilitate comparison with previous work, all computations were carried out at a free stream Mach number of 0.3. The spatial and temporal behavior of the flowfields are calculated through the use of finite volume algorithms and Runge-Kutta integration. The results are dominated by strong decay of the Tollmien-Schlichting wave due to the presence of the mean flow favorable pressure gradient. The effects of numerical dissipation, forcing frequency, and nose radius are studied. The Strouhal number is shown to have the greatest effect on the unsteady results. In the second paper, a transition model for low-speed flows, previously developed by Young et al., which incorporates first-mode (Tollmien-Schlichting) disturbance information from linear stability theory has been extended to high-speed flow by incorporating the effects of second mode disturbances. The transition model is incorporated into a Reynolds-averaged Navier-Stokes solver with a one-equation turbulence model. Results using a variable turbulent Prandtl number approach demonstrate that the current model accurately reproduces available experimental data for first and second-mode dominated transitional flows. The performance of the present model shows significant improvement over previous transition modeling attempts.

  3. High Speed Civil Transport (HSCT) Isolated Nacelle Transonic Boattail Drag Study and Results Using Computational Fluid Dynamics (CFD)

    Science.gov (United States)

    Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori

    2005-01-01

    Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.

  4. Thermodynamic properties of indan: Experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2016-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 445) K. • Vapor pressures were measured for the temperature range (338 to 495) K. • Densities at saturation pressure were measured from T = (323 to 523) K. • Computed and experimentally derived ideal-gas entropies are in excellent accord. • Thermodynamic consistency analysis revealed anomalous literature data.

    Abstract: Measurements leading to the calculation of thermodynamic properties in the ideal-gas state for indan (Chemical Abstracts registry number [496-11-7], 2,3-dihydro-1H-indene) are reported. Experimental methods were adiabatic heat-capacity calorimetry, differential scanning calorimetry, comparative ebulliometry, and vibrating-tube densitometry. Molar thermodynamic functions (enthalpies, entropies, and Gibbs energies) for the condensed and ideal-gas states were derived from the experimental studies at selected temperatures. Statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. Computed ideal-gas properties derived with the rigid-rotor harmonic-oscillator approximation are shown to be in excellent accord with ideal-gas entropies derived from thermophysical property measurements of this research, as well as with experimental heat capacities for the ideal-gas state reported in the literature. Literature spectroscopic studies and ab initio calculations report a range of values for the barrier to ring puckering. Results of the present work are consistent with a large barrier that allows use of the rigid-rotor harmonic-oscillator approximation for ideal-gas entropy and heat-capacity calculations, even with the stringent uncertainty requirements imposed by the calorimetric and physical property measurements reported here. All experimental results are compared with property values reported in the literature.
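
    The rigid-rotor harmonic-oscillator (RRHO) approximation used here combines standard statistical-mechanics terms. As a reminder (these are textbook formulas, not taken from the paper), the vibrational contribution to the ideal-gas entropy from frequencies ν_i is:

    ```latex
    % Vibrational contribution to the ideal-gas entropy in the RRHO
    % approximation, with characteristic temperatures \theta_i = h \nu_i / k_B:
    S_{\mathrm{vib}} = R \sum_i \left[ \frac{\theta_i/T}{e^{\theta_i/T} - 1}
                                       - \ln\!\left(1 - e^{-\theta_i/T}\right) \right]
    ```

    The translational (Sackur-Tetrode), rotational, and electronic terms are added to this sum to obtain the total ideal-gas entropy; a low-frequency, large-amplitude motion such as ring puckering is only treated adequately by this sum when its barrier is high, which is why the barrier question matters here.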

  5. Discrete ordinates cross-section generation in parallel plane geometry -- 2: Computational results

    International Nuclear Information System (INIS)

    Yavuz, M.

    1998-01-01

    In Ref. 1, the author presented inverse discrete ordinates (S_N) methods for cross-section generation with an arbitrary scattering anisotropy of order L (L ≤ N - 1) in parallel plane geometry. The solution techniques depend on the S_N eigensolutions. The eigensolutions are determined by the inverse simplified S_N method (ISS_N), which uses the surface Green's function matrices (T and R). Inverse problems are generally designed so that experimentally measured physical quantities can be used in the formulations. In the formulations, although T and R (TR matrices) are measurable quantities, the author does not have such data to check the adequacy and accuracy of the methods. However, it is possible to compute TR matrices by S_N methods. The author presents computational results and computationally observed properties

  6. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantages of the volumetric acquisition and post-processing techniques allowed by a 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed by previous high-resolution computed tomography during clinical-radiological follow-up for their lung disease, were studied by means of 64-row multi-slice computed tomography. Image quality was evaluated comparatively for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was found in the detection of ground-glass opacity with the high-resolution computed tomography technique and with volumetric computed tomography acquisition, with a moderate increase in intra-observer agreement (k value 0.46) for volumetric computed tomography compared with high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  7. Implementation of an electronic medical record system in previously computer-naïve primary care centres: a pilot study from Cyprus.

    Science.gov (United States)

    Samoutis, George; Soteriades, Elpidoforos S; Kounalakis, Dimitris K; Zachariadou, Theodora; Philalithis, Anastasios; Lionis, Christos

    2007-01-01

    The computer-based electronic medical record (EMR) is an essential new technology in health care, contributing to high-quality patient care and efficient patient management. The majority of southern European countries, however, have not yet implemented universal EMR systems, and many efforts are still ongoing. We describe the development of an EMR system and its pilot implementation and evaluation in two previously computer-naïve public primary care centres in Cyprus. One urban and one rural primary care centre, along with their personnel (physicians and nurses), were selected to participate. Both qualitative and quantitative evaluation tools were used during the implementation phase. Qualitative data analysis was based on the framework approach, whereas quantitative assessment was based on a nine-item questionnaire and EMR usage parameters. Two public primary care centres participated, and a total of ten health professionals served as EMR system evaluators. Physicians and nurses rated the EMR relatively highly, while patients were the most enthusiastic supporters of the new information system. Major implementation impediments were the physicians' perception that EMR usage negatively affected their workflow, physicians' legal concerns, lack of incentives, system breakdowns, software design problems, transition difficulties and lack of familiarity with electronic equipment. The importance of combining qualitative and quantitative evaluation tools is highlighted. More efforts are needed for the universal adoption and routine use of EMRs in the primary care system of Cyprus, as several barriers to adoption exist; however, none is insurmountable. Computerised systems could improve efficiency and quality of care in Cyprus, benefiting the entire population.

  8. Increasing the trustworthiness of research results: the role of computers in qualitative text analysis

    Science.gov (United States)

    Lynne M. Westphal

    2000-01-01

    By using computer packages designed for qualitative data analysis a researcher can increase the trustworthiness (i.e., validity and reliability) of conclusions drawn from qualitative research results. This paper examines trustworthiness issues and the role of computer software (QSR's NUD*IST) in the context of a current research project investigating the social...

  9. Standardized computer-based organized reporting of EEG

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C.

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the second, revised version of SCORE (Standardized Computer-based Organized Reporting of EEG), which is presented in this paper. The revised terminology was implemented in a software package (SCORE EEG), which was tested in clinical practice on 12,160 EEG recordings. Standardized terms implemented in SCORE are used to report the features of clinical relevance, extracted while assessing the EEGs. In the end, the diagnostic significance is scored, using a standardized list of terms. SCORE has specific modules for scoring seizures (including seizure semiology and ictal EEG patterns), neonatal recordings (including features specific for this age group), and for Critical Care EEG Terminology. SCORE...
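
    The context-sensitive term selection described here (initial choices constrain the next set of choices, and the selections both generate a report and populate a database) can be pictured with a tiny toy. Everything below (the term tree, field names, and report wording) is invented for illustration; SCORE EEG's real vocabulary and schema are far richer:

    ```python
    # Toy context-sensitive scoring: each chosen term narrows the next menu.
    TERMS = {
        "background": {
            "normal": [],
            "slowing": ["diffuse", "focal"],
        },
        "epileptiform": {
            "none": [],
            "spikes": ["frontal", "temporal", "generalized"],
        },
    }

    def next_choices(category, choice=None):
        """Return the menu presented next, given what was already selected."""
        menu = TERMS[category]
        return list(menu) if choice is None else menu[choice]

    def make_report(findings):
        """Auto-generate a narrative report and a flat record for a database."""
        lines = [f"{cat}: {term}" + (f" ({qual})" if qual else "")
                 for cat, term, qual in findings]
        record = {f"{cat}_{term}": qual or True for cat, term, qual in findings}
        return "\n".join(lines), record

    findings = [("background", "slowing", "diffuse"),
                ("epileptiform", "spikes", "temporal")]
    report, db_row = make_report(findings)
    print(report)
    print(db_row)
    ```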

  10. Standardized computer-based organized reporting of EEG

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C.

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the second, revised version of SCORE (Standardized Computer-based Organized Reporting of EEG), which is presented in this paper. The revised terminology was implemented in a software package (SCORE EEG), which was tested in clinical practice on 12,160 EEG recordings. Standardized terms implemented in SCORE are used to report the features of clinical relevance, extracted while assessing the EEGs. Selection of the terms is context sensitive: initial choices determine the subsequently presented sets of additional choices. This process automatically generates a report and feeds these features into a database...

  11. Transversity results and computations in symplectic field theory

    Energy Technology Data Exchange (ETDEWEB)

    Fabert, Oliver

    2008-02-21

    Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R x V equipped with a cylindrical almost complex structure J, it is already well-known from Gromov-Witten theory that, due to the presence of multiply-covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this observe that in the general geometric setup for symplectic field theory, the contact manifolds can be replaced by mapping tori M_φ of symplectic manifolds (M, ω_M) with symplectomorphisms φ. While the cylindrical contact homology of M_φ is given by the Floer homologies of powers of φ, the other algebraic invariants of symplectic field theory for M_φ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian φ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory by Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of M_φ ≅ S^1 x M. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic

  12. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, encrypted under different users' public keys, into data encrypted under the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result following the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexity of each user in our solution are independent of the function being computed.

  13. Surgical results of trabeculectomy and Ahmed valve implantation following a previous failed trabeculectomy in primary congenital glaucoma patients.

    Science.gov (United States)

    Lee, Naeun; Ma, Kyoung Tak; Bae, Hyoung Won; Hong, Samin; Seong, Gong Je; Hong, Young Jae; Kim, Chan Yun

    2015-04-01

    To compare the surgical results of trabeculectomy and Ahmed glaucoma valve implantation after a previous failed trabeculectomy. A retrospective comparative case series review was performed on 31 eye surgeries in 20 patients with primary congenital glaucoma who underwent trabeculectomy or Ahmed glaucoma valve implantation after a previous failed trabeculectomy with mitomycin C. The preoperative mean intraocular pressure was 25.5 mmHg in the trabeculectomy group and 26.9 mmHg in the Ahmed glaucoma valve implantation group (p = 0.73). The 48-month postoperative mean intraocular pressure was 19.6 mmHg in the trabeculectomy group and 20.2 mmHg in the Ahmed glaucoma valve implantation group (p = 0.95). The 12-month trabeculectomy success rate was 69%, compared with 64% for Ahmed glaucoma valve implantation, and the 48-month success rates were 42% and 36% for trabeculectomy and valve implantation, respectively. The success rates following the entire follow-up period were not significantly different between the two groups (p > 0.05 by log rank test). Postoperative complications occurred in 25% of the trabeculectomy-operated eyes and 9% of the Ahmed-implanted eyes (p = 0.38). There was no significant difference in surgical outcome between the trabeculectomy and Ahmed glaucoma valve implantation groups, neither of which had favorable results. However, the trabeculectomy group demonstrated a higher prevalence of adverse complications such as post-operative endophthalmitis.

  14. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: a re-assessment of computing resource usage estimates for the 2012 data-taking period, a request of the computing resources needed for 2013, and a first forecast of the needs for 2014, when the restart of data taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations of the estimates from previously presented results are stressed.

  15. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results

    International Nuclear Information System (INIS)

    Eyler, L.L.; Trent, D.S.; Budden, M.J.

    1983-09-01

    During the course of the TEMPEST computer code development, a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor applications. 47 refs., 94 figs., 6 tabs.

  16. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...

  17. Building Capacity Through Hands-on Computational Internships to Assure Reproducible Results and Implementation of Digital Documentation in the ICERT REU Program

    Science.gov (United States)

    Gomez, R.; Gentle, J.

    2015-12-01

    Modern data pipelines and computational processes require that meticulous methodologies be applied in order to ensure that the source data, algorithms, and results are properly curated, managed and retained while remaining discoverable, accessible, and reproducible. Given the complexity of understanding the scientific problem domain being researched, combined with the overhead of learning to use advanced computing technologies, it becomes paramount that the next generation of scientists and researchers learn to embrace best practices. The Integrative Computational Education and Research Traineeship (ICERT) is a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at the Texas Advanced Computing Center (TACC). During Summer 2015, two ICERT interns joined the 3DDY project. 3DDY converts geospatial datasets into file types that can take advantage of new formats, such as natural user interfaces, interactive visualization, and 3D printing. Mentored by TACC researchers for ten weeks, students with no previous background in computational science learned to use scripts to build the first prototype of the 3DDY application, and leveraged Wrangler, the newest high performance computing (HPC) resource at TACC. Test datasets for quadrangles in central Texas were used to assemble the 3DDY workflow and code. Test files were successfully converted into the stereolithography (STL) format, which is amenable for use with 3D printers. Test files and the scripts were documented and shared using the Figshare site, while metadata was documented for the 3DDY application using OntoSoft. These efforts validated a straightforward set of workflows to transform geospatial data and established the first prototype version of 3DDY. Adding the data and software management procedures helped students realize a broader set of tangible results (e.g. Figshare entries), better document their progress and the final state of their work for the research group and community
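
    The 3DDY conversion step, turning a gridded elevation dataset into a 3D-printable file, can be sketched compactly. The code below is a hypothetical illustration, not the interns' actual 3DDY code: it triangulates a small heightmap into an ASCII STL surface (facet normals are left as zero vectors, which most slicers tolerate but a production tool would compute):

    ```python
    def heightmap_to_stl(z, name="terrain"):
        """Triangulate a 2D grid of elevations into an ASCII STL string.

        Each grid cell is split into two triangles; coordinates are grid units.
        """
        rows, cols = len(z), len(z[0])
        facets = []
        for i in range(rows - 1):
            for j in range(cols - 1):
                a = (j, i, z[i][j])
                b = (j + 1, i, z[i][j + 1])
                c = (j, i + 1, z[i + 1][j])
                d = (j + 1, i + 1, z[i + 1][j + 1])
                facets += [(a, b, c), (b, d, c)]
        lines = [f"solid {name}"]
        for tri in facets:
            lines.append("  facet normal 0 0 0")
            lines.append("    outer loop")
            for x, y, h in tri:
                lines.append(f"      vertex {x} {y} {h}")
            lines.append("    endloop")
            lines.append("  endfacet")
        lines.append(f"endsolid {name}")
        return "\n".join(lines)

    # Tiny 3x3 elevation grid standing in for a quadrangle tile.
    dem = [[0.0, 1.0, 0.5],
           [0.5, 2.0, 1.0],
           [0.0, 1.0, 0.5]]
    with open("terrain.stl", "w") as f:
        f.write(heightmap_to_stl(dem))
    ```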

  18. First results from a combined analysis of CERN computing infrastructure metrics

    Science.gov (United States)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and the prediction of job duration, the latency sensitivity of different job types, and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
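
    One of the named analyses, predicting job duration from host- and job-level metrics, reduces to a standard regression exercise. Here is a minimal sketch under assumed inputs (the feature names and the synthetic data are invented; this abstract does not describe the AWG's actual pipeline or feature set):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical per-job features: host benchmark score, CPU load, IO wait.
    X = np.column_stack([
        rng.uniform(8, 16, n),   # host performance (benchmark-like score)
        rng.uniform(0, 1, n),    # average CPU load on the box
        rng.uniform(0, 0.3, n),  # fraction of time in IO wait
    ])
    # Synthetic ground truth: slower hosts and higher IO wait -> longer jobs.
    y = 5000 / X[:, 0] * (1 + 0.5 * X[:, 1] + 2.0 * X[:, 2]) + rng.normal(0, 20, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print("R^2 on held-out jobs:", round(model.score(X_te, y_te), 3))
    ```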

  19. Everolimus for Previously Treated Advanced Gastric Cancer: Results of the Randomized, Double-Blind, Phase III GRANITE-1 Study

    Science.gov (United States)

    Ohtsu, Atsushi; Ajani, Jaffer A.; Bai, Yu-Xian; Bang, Yung-Jue; Chung, Hyun-Cheol; Pan, Hong-Ming; Sahmoud, Tarek; Shen, Lin; Yeh, Kun-Huei; Chin, Keisho; Muro, Kei; Kim, Yeul Hong; Ferry, David; Tebbutt, Niall C.; Al-Batran, Salah-Eddin; Smith, Heind; Costantini, Chiara; Rizvi, Syed; Lebwohl, David; Van Cutsem, Eric

    2013-01-01

    Purpose: The oral mammalian target of rapamycin inhibitor everolimus demonstrated promising efficacy in a phase II study of pretreated advanced gastric cancer. This international, double-blind, phase III study compared everolimus efficacy and safety with that of best supportive care (BSC) in previously treated advanced gastric cancer. Patients and Methods: Patients with advanced gastric cancer that progressed after one or two lines of systemic chemotherapy were randomly assigned to everolimus 10 mg/d (assignment schedule: 2:1) or matching placebo, both given with BSC. Randomization was stratified by previous chemotherapy lines (one v two) and region (Asia v rest of the world [ROW]). Treatment continued until disease progression or intolerable toxicity. The primary end point was overall survival (OS). Secondary end points included progression-free survival (PFS), overall response rate, and safety. Results: Six hundred fifty-six patients (median age, 62.0 years; 73.6% male) were enrolled. Median OS was 5.4 months with everolimus and 4.3 months with placebo (hazard ratio, 0.90; 95% CI, 0.75 to 1.08; P = .124). Median PFS was 1.7 months and 1.4 months in the everolimus and placebo arms, respectively (hazard ratio, 0.66; 95% CI, 0.56 to 0.78). Common grade 3/4 adverse events included anemia, decreased appetite, and fatigue. The safety profile was similar in patients enrolled in Asia versus ROW. Conclusion: Compared with BSC, everolimus did not significantly improve overall survival for advanced gastric cancer that progressed after one or two lines of previous systemic chemotherapy. The safety profile observed for everolimus was consistent with that observed for everolimus in other cancers. PMID:24043745

  20. Computer processing of the Δλ/λ measured results

    International Nuclear Information System (INIS)

    Draguniene, V.J.; Makariuniene, E.K.

    1979-01-01

    For the processing of experimental data on the influence of the chemical environment on radioactive decay constants, five programs were written in Fortran, in the version for the DUBNA monitoring system on the BESM-6 computer. Each program corresponds to a definite stage of data processing and yields a definite answer. The first and second programs calculate the ratio of the pulse numbers measured with different sources and the mean value of the dispersions. The third program averages the ratios of the pulse numbers. The fourth and fifth determine the change of the radioactive decay constant. The programs created for processing the measurement results permit processing of the experimental data starting from the pulse numbers obtained directly in the experiments. They make it possible to treat a file of experimental results and to calculate the various errors at all stages of the calculations. The obtained results are printed in a form convenient for use.
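
    The pipeline described (count ratios, averaging, then extraction of a decay-constant change) can be condensed into a few lines. This is a hedged reconstruction, not the original Fortran: it assumes the standard trick that if two sources differ in decay constant by Δλ, the log of their count-rate ratio drifts linearly in time with slope -Δλ. All numbers below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical measurement series: times (days) and pulse counts from the
    # same nuclide in two different chemical environments, measured alternately.
    t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    counts_a = np.array([100000, 91000, 82800, 75300, 68500])
    counts_b = np.array([100000, 90950, 82710, 75180, 68340])

    # Steps 1-2: ratio of pulse numbers and its (Poisson) dispersion per point.
    ratio = counts_a / counts_b
    sigma_ratio = ratio * np.sqrt(1 / counts_a + 1 / counts_b)

    # Steps 3-4: ln(ratio) drifts linearly with slope -(lambda_a - lambda_b).
    slope, intercept = np.polyfit(t, np.log(ratio), 1, w=1 / sigma_ratio)
    delta_lambda = -slope   # lambda_a - lambda_b, per day
    lam = 0.0094            # assumed nominal decay constant, per day
    print("delta_lambda/lambda =", delta_lambda / lam)
    ```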

  1. Operating Wireless Sensor Nodes without Energy Storage: Experimental Results with Transient Computing

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2016-12-01

    Energy harvesting is increasingly used for powering wireless sensor network nodes. Recently, it has been suggested to combine it with the concept of transient computing, whereby the wireless sensor nodes operate without energy storage capabilities. This new combined approach brings benefits, for instance ultra-low-power nodes and reduced maintenance, but also raises new challenges, foremost dealing with nodes that may be left without power for various time periods. Although transient computing has been demonstrated on microcontrollers, reports on experiments with wireless sensor nodes are still scarce in the literature. In this paper, we describe our experiments with solar, thermal, and RF energy harvesting sources that are used to power sensor nodes (including wireless ones) without energy storage, but with transient computing capabilities. The results show that the selected solar and thermal energy sources can operate both the wired and wireless nodes without energy storage, whereas in our specific implementation, the developed RF energy source can only be used for the selected nodes without wireless connectivity.
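
    Transient computing's central trick is making progress survive arbitrary power loss by checkpointing state to non-volatile memory. A toy sketch of that pattern follows (real deployments do this in C on microcontrollers with FRAM or flash; the file-backed checkpoint and the summation task here are illustrative stand-ins):

    ```python
    import json, os

    CHECKPOINT = "state.json"  # stands in for non-volatile memory (e.g., FRAM)

    def load_state():
        """Resume from the last checkpoint, or start fresh on first boot."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"i": 0, "acc": 0}

    def save_state(state):
        """Atomically persist progress so a power cut never corrupts it."""
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, CHECKPOINT)  # atomic rename on POSIX

    state = load_state()
    while state["i"] < 1000:        # the sensing/processing task
        state["acc"] += state["i"]  # stand-in for reading and aggregating a sensor
        state["i"] += 1
        if state["i"] % 50 == 0:    # checkpoint every 50 steps
            save_state(state)
    # Power may vanish at any point above; on the next boot the node
    # resumes from the last checkpoint instead of restarting from zero.
    save_state(state)
    print(state["acc"])
    ```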

  2. Computational needs survey of NASA automation and robotics missions. Volume 1: Survey and results

    Science.gov (United States)

    Davis, Gloria J.

    1991-01-01

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. A preliminary set of advanced mission computational processing requirements of automation and robotics (A&R) systems is provided for use by NASA, industry, and academic communities. These results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implementation capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met under typical programmatic constraints. Volume one includes the survey and results. Volume two contains the appendixes.

  3. Natural Computing in Computational Finance Volume 4

    CERN Document Server

    O’Neill, Michael; Maringer, Dietmar

    2012-01-01

    This book follows on from Natural Computing in Computational Finance, Volumes I, II and III. As in the previous volumes of this series, the book consists of a series of chapters, each of which was selected following a rigorous, peer-reviewed selection process. The chapters illustrate the application of a range of cutting-edge natural computing and agent-based methodologies in computational finance and economics. The applications explored include option model calibration, financial trend reversal detection, enhanced indexation, algorithmic trading, corporate payout determination and agent-based modeling of liquidity costs, and trade strategy adaptation. While describing cutting-edge applications, the chapters are written so that they are accessible to a wide audience. Hence, they should be of interest to academics, students and practitioners in the fields of computational finance and economics.

  4. [Computer-assisted analysis of the results of training in internal medicine].

    Science.gov (United States)

    Vrbová, H; Spunda, M

    1991-06-01

    Analysis of the results of teaching clinical disciplines has, in the long run, an impact on the standard and value of medical care. It requires processing of quantitative and qualitative data, and the selection of the indicators to be followed up and of the procedures used for their processing is of fundamental importance. The investigation presented here is an example of how computer techniques can be used to process the results of an analysis of the effectiveness of teaching internal medicine. As an indicator of effectiveness, the authors selected the percentage of students who had an opportunity during the given period of their studies to observe a certain pathological condition; data were collected by means of a questionnaire survey. The approach distinguishes the students' experience (whether the student examined the patient himself or whether the patient was only demonstrated) and the place of observation (at the university teaching hospital or at a regional non-teaching hospital attachment). It also permits forming sub-groups of respondents, combining them as desired, and comparing their results. The described computer programme support comprises primary processing of the questionnaire survey output: the questionnaires are transformed and stored, by groups of respondents, in data files of suitable format (the SDFORM programme), and the results are processed and presented as an output listing or interactively on the display (the SDRESULT programme). Using these programmes, the authors processed the results of a survey made among students during and after completion of their studies, covering a series of 70 recommended pathological conditions. As an example, the authors compare the results of observations of 20 selected pathological conditions important for diagnosis and therapy in primary care at the final stage of the medical course in 1981 and 1985.

  5. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-01-01

    computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result

  6. [Excessive computer usage in adolescents--results of a psychometric evaluation].

    Science.gov (United States)

    Grüsser, Sabine M; Thalemann, Ralf; Albrecht, Ulrike; Thalemann, Carolin N

    2005-03-01

    Excessive computer and video game playing among children is being critically discussed from a pedagogic and public health point of view. To date, no reliable data on this phenomenon in Germany exist. In the present study, the excessive use of computer and video games is seen as a rewarding behavior which can, due to learning mechanisms, become a prominent and inadequate strategy for children to cope with negative emotions like frustration, uneasiness and fears. In the survey, 323 children ranging in age from 11 to 14 years were asked about their video game playing behavior. Criteria for excessive computer and video game playing were developed in accordance with the criteria for dependency and pathological gambling (DSM-IV, ICD-10). The data show that 9.3% (N = 30) of the children fulfill all criteria for excessive computer and video game playing. Furthermore, these children differ from their classmates with respect to watching television, communication patterns, the ability to concentrate in school lectures, and their preferred strategies for coping with negative emotions. In accordance with findings from studies on substance-related addiction, the data suggest that excessive computer and video game players use their excessive rewarding behavior specifically as an inadequate stress-coping strategy.

  7. Aortic pseudoaneurysm detected on external jugular venous distention following a Bentall procedure 10 years previously.

    Science.gov (United States)

    Fukunaga, Naoto; Shomura, Yu; Nasu, Michihiro; Okada, Yukikatsu

    2010-11-01

    An asymptomatic 49-year-old woman was admitted for surgery for an aortic pseudoaneurysm. She had Marfan syndrome and had undergone an emergent Bentall procedure 10 years previously. About six months previously, she had noticed palpably distended bilateral external jugular veins, which became distended only in a supine position and without any other symptoms. Enhanced computed tomography revealed an aortic pseudoaneurysm originating from the previous distal anastomosis site. During induction of general anesthesia in a supine position, bilateral external jugular venous distention was remarkable. Immediately after a successful operation, the distention completely resolved. The present case emphasizes the importance of physical examination in reaching a diagnosis of asymptomatic life-threatening disease in patients with a history of previous aortic surgery.

  8. Spatial Computing and Spatial Practices

    DEFF Research Database (Denmark)

    Brodersen, Anders; Büsher, Monika; Christensen, Michael

    2007-01-01

    The gathering momentum behind the research agendas of pervasive, ubiquitous and ambient computing, set in motion by Mark Weiser (1991), offers dramatic opportunities for information systems design. They raise the possibility of "putting computation where it belongs" by exploding computing power out… the "disappearing computer" we have, therefore, carried over from previous research an interdisciplinary perspective, and a focus on the sociality of action (Suchman 1987)…

  9. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    At the construction site of the Višňové–Dubná skala motorway tunnel, priority is given to driving an exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the planned use of a full-profile tunnel boring machine (TBM) to drive the motorway tunnel. In the part of the exploration gallery driven by the TBM method, comprehensive information about the parameters of the driving process is gathered by a computer monitoring system mounted on the machine. The system, based on the industrial computer PC 104, records four basic values of the driving process: the electric motor power of the Voest-Alpine ATB 35HA driving machine, the speed of the driving advance, the rotation speed of the disintegrating head, and the total head pressure; the thrust force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, and other quantities characterizing the rock mass properties and their changes are calculated. The effectiveness of the driving process is assessed through the specific energy and the working ability of the driving head. The article defines the methodics of computing the gathered monitoring data, prepared for the Voest-Alpine ATB 35HA driving machine at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created with an EXCEL program, and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process at the Višňové–Dubná skala motorway tunnel.
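
    To make the specific-energy measure concrete, here is a minimal sketch, assuming the standard definition of specific energy as driving power per unit volume of excavated rock; the cutting-head diameter and the example values are hypothetical, not taken from the monitoring system described above.

        import math

        HEAD_DIAMETER_M = 3.5  # hypothetical diameter of the TBM cutting head

        def specific_energy(power_kw: float, advance_m_per_h: float) -> float:
            """Specific energy in MJ/m^3 from motor power and advance rate."""
            face_area = math.pi * (HEAD_DIAMETER_M / 2.0) ** 2     # m^2
            volume_rate = face_area * advance_m_per_h / 3600.0     # m^3/s
            return power_kw * 1e3 / volume_rate / 1e6              # MJ/m^3

        # Example: 300 kW of head power at 1.2 m/h advance
        print(f"SE = {specific_energy(300.0, 1.2):.1f} MJ/m^3")

    Higher specific energy at the same advance rate generally indicates stronger or more abrasive rock, which is why the quantity serves as an effectiveness indicator.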

  10. [Infrastructure model and management of a cloud computing-based server platform].

    Directory of Open Access Journals (Sweden)

    Mulki Indana Zulfa

    2017-11-01

    Cloud computing is a new technology that is still growing very rapidly. This technology makes the Internet the main medium for managing data and applications remotely. Cloud computing allows users to run an application without having to think about infrastructure and platforms. Other technical aspects, such as memory, storage, and backup and restore, can be handled very easily. This research models the infrastructure and management of a computer platform in the computer network of the Faculty of Engineering, Jenderal Soedirman University. The first stage of the research is a literature study, identifying the implementation models used in previous research. The results are then combined with a new approach to the existing resources and implemented directly on the existing server network. The results show that the implementation of cloud computing technology is able to replace the existing network platform.

  11. Applied Computational Fluid Dynamics at NASA Ames Research Center

    Science.gov (United States)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  12. Frictional lichenified dermatosis from prolonged use of a computer mouse: Case report and review of the literature of computer-related dermatoses.

    Science.gov (United States)

    Ghasri, Pedram; Feldman, Steven R

    2010-12-15

    Despite the increasing reliance on computers and the associated health risks, computer-related dermatoses remain under-represented in the literature. This term collectively refers to four groups of cutaneous pathologies: 1) allergic contact dermatitis from exposure to certain chemicals in computer accessories, 2) various friction-induced hand lesions resulting from prolonged computer use, 3) erythema ab igne from placement of the laptop on the skin, and 4) "screen dermatitis" from excessive exposure to visual display terminals (VDTs). Within this review we also present a case of a friction-induced lichenified dermatosis in the dominant wrist of a 24-year-old female that was caused by excessive use of her computer mouse. More importantly, we review the literature of all previously reported cases of computer-related dermatoses, so as to promote recognition and appropriate management by both patients and physicians.

  13. An analysis of true- and false-positive results of vocal fold uptake in positron emission tomography-computed tomography imaging.

    Science.gov (United States)

    Seymour, N; Burkill, G; Harries, M

    2018-03-01

    Positron emission tomography-computed tomography with fluorine-18 fluorodeoxy-D-glucose has a major role in the investigation of head and neck cancers. Fluorine-18 fluorodeoxy-D-glucose is not a tumour-specific tracer and can also accumulate in benign pathology. Therefore, difficulties in interpreting positron emission tomography-computed tomography scans are common in the head and neck, and can produce false-positive results. This study aimed to investigate patients found to have abnormal vocal fold uptake on fluorine-18 fluorodeoxy-D-glucose positron emission tomography-computed tomography. Positron emission tomography-computed tomography scans performed over a 15-month period were identified whose reports contained evidence of unilateral vocal fold uptake or vocal fold pathology. Patients' notes and laryngoscopy results were analysed. Forty-six patients were identified as having abnormal vocal fold uptake on positron emission tomography-computed tomography. Twenty-three patients underwent both positron emission tomography-computed tomography and flexible laryngoscopy: 61 per cent of these patients had true-positive scans and 39 per cent had false-positive scan results. Most patients referred to ENT for abnormal findings on positron emission tomography-computed tomography scans had true-positive findings. Asymmetrical fluorine-18 fluorodeoxy-D-glucose uptake should raise suspicion of vocal fold pathology, accepting a false-positive rate of approximately 40 per cent.

  14. The Critical Exponent is Computable for Automatic Sequences

    Directory of Open Access Journals (Sweden)

    Jeffrey Shallit

    2011-08-01

    The critical exponent of an infinite word is defined to be the supremum of the exponent of each of its factors. For k-automatic sequences, we show that this critical exponent is always either a rational number or infinite, and its value is computable. This generalizes or recovers previous results of Krieger and others. Our technique is applicable to other situations; e.g., the computation of the optimal recurrence constant for a linearly recurrent k-automatic sequence.
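
    For intuition, the exponent of a finite factor is its length divided by its smallest period, and the critical exponent of a finite word can be found by brute force from that definition. The naive sketch below is ours, and only approximates from below the infinite case that the paper decides exactly.

        from fractions import Fraction

        def smallest_period(w: str) -> int:
            """Smallest p such that w[i] == w[i + p] for all valid i."""
            for p in range(1, len(w) + 1):
                if all(w[i] == w[i + p] for i in range(len(w) - p)):
                    return p
            return len(w)

        def critical_exponent(w: str) -> Fraction:
            """Maximum exponent |u| / period(u) over all factors u of w."""
            best = Fraction(0)
            for i in range(len(w)):
                for j in range(i + 1, len(w) + 1):
                    u = w[i:j]
                    best = max(best, Fraction(len(u), smallest_period(u)))
            return best

        # Prefix of the (2-automatic) Thue-Morse word; its critical exponent is 2
        tm = "0"
        for _ in range(6):
            tm += tm.translate(str.maketrans("01", "10"))
        print(critical_exponent(tm))  # -> 2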

  15. Computer evaluation of the results of batch fermentations

    Energy Technology Data Exchange (ETDEWEB)

    Nyeste, L; Sevella, B

    1980-01-01

    A useful aid to the mathematical modeling of fermentation systems, intended for the kinetic evaluation of batch fermentations, is described. The generalized logistic equation may be used to describe growth curves, substrate consumption, and product formation. A computer procedure was developed to fit the equation to experimental points, automatically determining the equation constants by an iterative non-linear least-squares algorithm. By coupling the procedure to master programs for various fermentations, a complete kinetic evaluation of a fermentation becomes possible. Because the generalized logistic equation is easy to analyze, different kinetic characteristics, e.g. rates, specific rates, and yields, can be calculated by computer, and the possibility of committing subjective errors is reduced to a minimum. The use of the method is demonstrated on several fermentation processes, and problems arising in the course of its application are discussed.
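
    As a hedged sketch of the fitting step, the following fits one common parameterization of the generalized logistic (Richards) curve to synthetic batch data by non-linear least squares; the abstract does not specify the exact form used, so both the parameterization and the data here are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def richards(t, A, k, t0, nu):
            """Generalized logistic (Richards) growth curve."""
            return A / (1.0 + np.exp(-k * (t - t0))) ** (1.0 / nu)

        # Synthetic biomass data (g/L) over fermentation time (h)
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 40.0, 20)
        y = richards(t, 12.0, 0.35, 18.0, 1.0) + rng.normal(0.0, 0.2, t.size)

        popt, _ = curve_fit(richards, t, y, p0=[10.0, 0.3, 15.0, 1.0], maxfev=10000)
        print("A={:.2f} g/L, k={:.3f} 1/h, t0={:.1f} h, nu={:.2f}".format(*popt))

        # Kinetic characteristics follow from the fitted curve, e.g. the
        # specific growth rate mu(t) = y'(t) / y(t), here taken numerically:
        y_fit = richards(t, *popt)
        mu = np.gradient(y_fit, t) / y_fit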

  16. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components, or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.
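
    For readers unfamiliar with the diagnostic, a Lyapunov exponent can be estimated numerically as the trajectory average of log|f'(x)|; a positive value signals chaos. The one-dimensional sketch below is a generic illustration of the quantity, not the paper's formal computation for asynchronous networks.

        import math

        def lyapunov_logistic(r: float, n: int = 100_000, x0: float = 0.4) -> float:
            """Estimate the Lyapunov exponent of the map x -> r*x*(1 - x)."""
            x, acc = x0, 0.0
            for _ in range(n):
                acc += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
                x = r * x * (1.0 - x)
            return acc / n

        print(lyapunov_logistic(3.2))  # negative: stable periodic orbit
        print(lyapunov_logistic(4.0))  # ~ ln 2 > 0: chaotic dynamics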

  17. Calculating buoy response for a wave energy converter—A comparison of two computational methods and experimental results

    Directory of Open Access Journals (Sweden)

    Linnea Sjökvist

    2017-05-01

    When designing a wave power plant, reliable and fast simulation tools are required. Computational fluid dynamics (CFD) software provides high accuracy but at a very high computational cost, and in operational, moderate sea states, linear potential flow theory may be sufficient to model the hydrodynamics. In this paper, a model is built in COMSOL Multiphysics to solve for the hydrodynamic parameters of a point-absorbing wave energy device. The results are compared with a linear model where the hydrodynamic parameters are computed using WAMIT, and with experimental results from the Lysekil research site. The agreement with experimental data is good for both numerical models.
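
    A rough sketch of the simplest linear time-domain model of a heaving point absorber: a single-degree-of-freedom oscillator with constant added mass and damping. This is a simplification (full linear potential flow models include a radiation memory term), and all coefficients below are illustrative rather than those of the Lysekil buoy.

        import numpy as np
        from scipy.integrate import solve_ivp

        m, m_a = 3.0e3, 1.5e3                    # buoy mass, added mass (kg)
        b = 2.0e3                                # radiation + PTO damping (N s/m)
        k_hyd = 1025 * 9.81 * np.pi * 1.5**2     # hydrostatic stiffness, 3 m buoy
        k_pto = 5.0e3                            # power take-off stiffness (N/m)
        F_e, w = 2.0e4, 2 * np.pi / 6.0          # excitation amplitude, 6 s period

        def rhs(t, s):
            """(m + m_a) z'' = F_e cos(w t) - b z' - (k_hyd + k_pto) z"""
            z, v = s
            a = (F_e * np.cos(w * t) - b * v - (k_hyd + k_pto) * z) / (m + m_a)
            return [v, a]

        sol = solve_ivp(rhs, (0.0, 120.0), [0.0, 0.0], max_step=0.05)
        print(f"late-time heave amplitude ~ {np.abs(sol.y[0][-600:]).max():.2f} m")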

  18. Flexibility of Bricard's linkages and other structures via resultants and computer algebra.

    Science.gov (United States)

    Lewis, Robert H; Coutsias, Evangelos A

    2016-07-01

    Flexibility of structures is extremely important for chemistry and robotics. Following our earlier work, we study flexibility using polynomial equations, resultants, and a symbolic algorithm of our creation that analyzes the resultant. We show that the software solves a classic arrangement of quadrilaterals in the plane due to Bricard. We fill in several gaps in Bricard's work and discover new flexible arrangements that he was apparently unaware of. This provides strong evidence for the maturity of the software, and is a wonderful example of mathematical discovery via computer assisted experiment.
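
    To show the elimination step in miniature: the resultant of two polynomials with respect to one variable is a polynomial in the remaining variables that vanishes exactly on the common solutions. The toy distance constraints below are ours and far simpler than Bricard's systems, but the mechanics are the same.

        from sympy import symbols, resultant, solve

        x, y = symbols("x y")

        # Two planar distance constraints, as arise in linkage analysis
        f = x**2 + y**2 - 25           # joint at distance 5 from the origin
        g = (x - 6)**2 + y**2 - 13     # and at distance sqrt(13) from (6, 0)

        # Eliminate y: the resultant is univariate in x
        R = resultant(f, g, y)
        print(R.factor())   # -> 144*(x - 4)**2
        print(solve(R, x))  # -> [4]; back-substitution gives y = 3 or y = -3

    Roughly speaking, a flexible arrangement corresponds to a resultant that vanishes identically (a continuum of solutions) rather than at isolated roots.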

  19. Cloud Computing (SaaS) Adoption as a Strategic Technology: Results of an Empirical Study

    Directory of Open Access Journals (Sweden)

    Pedro R. Palos-Sanchez

    2017-01-01

    The present study empirically analyzes the factors that determine the adoption of the cloud computing (SaaS) model in firms where this strategy is considered strategic for executing their activity. A research model has been developed to evaluate the factors that influence the intention of using cloud computing, combining the variables found in the technology acceptance model (TAM) with other external variables such as top management support, training, communication, organization size, and technological complexity. Data compiled from 150 companies in Andalusia (Spain) are used to test the formulated hypotheses. The results of this study reflect what critical factors should be considered and how they are interrelated. They also show the organizational demands that must be considered by those companies wishing to implement a real management model adapted to the digital economy, especially those related to cloud computing.

  20. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  1. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing

    2014-09-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).
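
    The bounds above are parameterized by r, the number of reflex vertices. As a small illustration of that quantity (not of the skeleton algorithm itself), the sketch below counts reflex vertices of a simple polygon, assuming counter-clockwise vertex order; the example polygon is ours.

        def reflex_vertex_count(poly):
            """Count reflex vertices (interior angle > pi) of a simple
            polygon given as CCW (x, y) pairs, via edge cross products."""
            n, r = len(poly), 0
            for i in range(n):
                ax, ay = poly[i - 1]
                bx, by = poly[i]
                cx, cy = poly[(i + 1) % n]
                cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
                if cross < 0:   # right turn at vertex i => reflex
                    r += 1
            return r

        # An L-shaped polygon (CCW) has exactly one reflex vertex
        l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
        print(reflex_vertex_count(l_shape))  # -> 1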

  2. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-05-06

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  3. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing; Mencel, Liam A.; Vigneron, Antoine E.

    2014-01-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room, mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less charted application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  5. Technique and results of the spinal computed tomography in the diagnosis of cervical disc disease

    International Nuclear Information System (INIS)

    Artmann, H.; Salbeck, R.; Grau, H.

    1985-01-01

    We describe a positioning technique in which traction on the patient's arms during cervical spinal computed tomography draws the shoulders downwards by about one to three cervical segments. With this method, image quality can be improved in 96% of cases at the cervical segment 6/7 and in 81% at the cervicothoracic segment 7/1, to a degree that allows reliable assessment of the soft tissues in the spinal canal. The diagnostic reliability of computed tomography for cervical disc herniation is thus improved, decreasing the need for myelography. The results of 396 cervical spinal computed tomography examinations are presented. (orig.)

  6. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal-gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent
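
    As a sketch of where the 0.975 scaling factor enters, the standard rigid-rotor harmonic-oscillator (RRHO) vibrational entropy can be computed from scaled wavenumbers as below; the frequencies are illustrative placeholders, not the computed 1-naphthol values.

        import numpy as np

        R = 8.31446           # gas constant, J/(mol K)
        HC_OVER_K = 1.438777  # h*c/k_B in cm*K (second radiation constant)

        def vibrational_entropy(wavenumbers_cm1, T=298.15, scale=0.975):
            """RRHO vibrational entropy from harmonic wavenumbers (cm^-1),
            with a uniform frequency scaling factor applied."""
            x = HC_OVER_K * scale * np.asarray(wavenumbers_cm1) / T
            return R * np.sum(x / np.expm1(x) - np.log(-np.expm1(-x)))

        freqs = [180.0, 250.0, 390.0, 480.0, 720.0]  # placeholder modes, cm^-1
        print(f"S_vib = {vibrational_entropy(freqs):.2f} J/(mol K)")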

  7. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    Science.gov (United States)

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation into issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  8. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Vol. 15, No. 12 (2003), pp. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  9. Standardized computer-based organized reporting of EEG SCORE - Second version

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the second, revised version of SCORE (Standardized Computer-based Organized Reporting of EEG), which is presented in this paper. The revised terminology was implemented in a software package (SCORE EEG), which was tested in clinical practice on 12,160 EEG recordings. Standardized terms implemented in SCORE… In the end, the diagnostic significance is scored, using a standardized list of terms. SCORE has specific modules for scoring seizures (including seizure semiology and ictal EEG patterns), neonatal recordings (including features specific for this age group), and for Critical Care EEG Terminology. SCORE…

  10. Dissociation in decision bias mechanism between probabilistic information and previous decision

    Directory of Open Access Journals (Sweden)

    Yoshiyuki eKaneko

    2015-05-01

    Target detection performance is known to be influenced by events in the previous trials. It has not been clear, however, whether this bias effect is due to the previous sensory stimulus, motor response, or decision. It also remains open whether or not the previous trial effect emerges via the same mechanism as the effect of knowledge about the target probability. In the present study, we asked normal human subjects to make a decision about the presence or absence of a visual target. We presented a pre-cue indicating the target probability before the stimulus, and also a decision-response mapping cue after the stimulus, so as to tease apart the effect of decision from that of motor response. We found that the target detection performance was significantly affected by the probability cue in the current trial and also by the decision in the previous trial. While the information about the target probability modulated the decision criteria, the previous decision modulated the sensitivity to target-relevant sensory signals (d-prime). Using functional magnetic resonance imaging, we also found that activation in the left intraparietal sulcus was decreased when the probability cue indicated a high probability of the target. By contrast, activation in the right inferior frontal gyrus was increased when the subjects made a target-present decision in the previous trial, but this change was observed specifically when the target was present in the current trial. Activation in these regions was associated with individual differences in the decision computation parameters. We argue that the previous decision biases the target detection performance by modulating the processing of target-selective information, and that this mechanism is distinct from the modulation of decision criteria due to expectation of a target.

  11. Dissociation in decision bias mechanism between probabilistic information and previous decision

    Science.gov (United States)

    Kaneko, Yoshiyuki; Sakai, Katsuyuki

    2015-01-01

    Target detection performance is known to be influenced by events in the previous trials. It has not been clear, however, whether this bias effect is due to the previous sensory stimulus, motor response, or decision. It also remains open whether or not the previous trial effect emerges via the same mechanism as the effect of knowledge about the target probability. In the present study, we asked normal human subjects to make a decision about the presence or absence of a visual target. We presented a pre-cue indicating the target probability before the stimulus, and also a decision-response mapping cue after the stimulus, so as to tease apart the effect of decision from that of motor response. We found that the target detection performance was significantly affected by the probability cue in the current trial and also by the decision in the previous trial. While the information about the target probability modulated the decision criteria, the previous decision modulated the sensitivity to target-relevant sensory signals (d-prime). Using functional magnetic resonance imaging (fMRI), we also found that activation in the left intraparietal sulcus (IPS) was decreased when the probability cue indicated a high probability of the target. By contrast, activation in the right inferior frontal gyrus (IFG) was increased when the subjects made a target-present decision in the previous trial, but this change was observed specifically when the target was present in the current trial. Activation in these regions was associated with individual differences in the decision computation parameters. We argue that the previous decision biases the target detection performance by modulating the processing of target-selective information, and that this mechanism is distinct from the modulation of decision criteria due to expectation of a target. PMID:25999844
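
    The two quantities contrasted in the abstract, sensitivity (d-prime) and decision criterion, come from standard signal detection theory; a minimal sketch follows, with a common log-linear correction (our addition) to avoid infinite z-scores at rates of 0 or 1.

        from scipy.stats import norm

        def sdt_measures(hits, misses, false_alarms, correct_rejections):
            """Sensitivity d' = z(HR) - z(FAR) and criterion
            c = -(z(HR) + z(FAR)) / 2 from raw trial counts."""
            hr = (hits + 0.5) / (hits + misses + 1.0)
            far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            z_h, z_f = norm.ppf(hr), norm.ppf(far)
            return z_h - z_f, -(z_h + z_f) / 2.0

        # A liberal observer: many hits, but also many false alarms
        d_prime, criterion = sdt_measures(80, 20, 40, 60)
        print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # c < 0: liberal bias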

  12. Results of the First National Assessment of Computer Competence (The Printout).

    Science.gov (United States)

    Balajthy, Ernest

    1988-01-01

    Discusses the findings of the National Assessment of Educational Progress 1985-86 survey of American students' computer competence, focusing on findings of interest to reading teachers who use computers. (MM)

  13. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data taking period, request of computing resource needs for the 2012 data taking period, and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations in the estimates from previously presented results are stressed.

  14. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  15. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network
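
    As a hedged illustration of the convolutional approach (the architecture and sizes here are ours, not the models evaluated in the study), a minimal binary nodule-patch classifier in PyTorch could look like:

        import torch
        import torch.nn as nn

        class NoduleCNN(nn.Module):
            """Tiny CNN for benign/malignant classification of fixed-size
            CT patches; purely illustrative."""
            def __init__(self, patch=32):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * (patch // 4) ** 2, 2)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = NoduleCNN()
        logits = model(torch.randn(8, 1, 32, 32))  # batch of 8 grayscale patches
        loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
        loss.backward()  # features are learned end-to-end, unlike hand-tuned CAD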

  16. Editorial for special section of grid computing journal on "Cloud Computing and Services Science"

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ivanov, Ivan I.

    This editorial briefly discusses characteristics, technology developments and challenges of cloud computing. It then introduces the papers included in the special issue on "Cloud Computing and Services Science" and positions the work reported in these papers with respect to the previously mentioned

  17. Computer games: a double-edged sword?

    Science.gov (United States)

    Sun, De-Lin; Ma, Ning; Bao, Min; Chen, Xang-Chuan; Zhang, Da-Ren

    2008-10-01

    Excessive computer game playing (ECGP) has already become a serious social problem. However, limited data from experimental lab studies are available about the negative consequences of ECGP on players' cognitive characteristics. In the present study, we compared three groups of participants (current ECGP participants, previous ECGP participants, and control participants) on a Multiple Object Tracking (MOT) task. The previous ECGP participants performed significantly better than the control participants, which suggested a facilitation effect of computer games on visuospatial abilities. Moreover, the current ECGP participants performed significantly worse than the previous ECGP participants. This more important finding indicates that ECGP may be related to cognitive deficits. Implications of this study are discussed.

  18. Engagement, Persistence, and Gender in Computer Science: Results of a Smartphone ESM Study.

    Science.gov (United States)

    Milesi, Carolina; Perez-Felkner, Lara; Brown, Kevin; Schneider, Barbara

    2017-01-01

    While the underrepresentation of women in the fast-growing STEM field of computer science (CS) has been much studied, no consensus exists on the key factors influencing this widening gender gap. Possible suspects include gender differences in aptitude, interest, and academic environment. Our study contributes to this literature by applying student engagement research to study the experiences of college students studying CS, to assess the degree to which differences in men and women's engagement may help account for gender inequity in the field. Specifically, we use the Experience Sampling Method (ESM) to evaluate in real-time the engagement of college students during varied activities and environments. Over the course of a full week in fall semester and a full week in spring semester, 165 students majoring in CS at two Research I universities were "beeped" several times a day via a smartphone app prompting them to fill out a short questionnaire including open-ended and scaled items. These responses were paired with administrative and over 2 years of transcript data provided by their institutions. We used mean comparisons and logistic regression analysis to compare enrollment and persistence patterns among CS men and women. Results suggest that despite the obstacles associated with women's underrepresentation in computer science, women are more likely to continue taking computer science courses when they felt challenged and skilled in their initial computer science classes. We discuss implications for further research.

  19. Engagement, Persistence, and Gender in Computer Science: Results of a Smartphone ESM Study

    Science.gov (United States)

    Milesi, Carolina; Perez-Felkner, Lara; Brown, Kevin; Schneider, Barbara

    2017-01-01

    While the underrepresentation of women in the fast-growing STEM field of computer science (CS) has been much studied, no consensus exists on the key factors influencing this widening gender gap. Possible suspects include gender differences in aptitude, interest, and academic environment. Our study contributes to this literature by applying student engagement research to study the experiences of college students studying CS, to assess the degree to which differences in men and women's engagement may help account for gender inequity in the field. Specifically, we use the Experience Sampling Method (ESM) to evaluate in real-time the engagement of college students during varied activities and environments. Over the course of a full week in fall semester and a full week in spring semester, 165 students majoring in CS at two Research I universities were “beeped” several times a day via a smartphone app prompting them to fill out a short questionnaire including open-ended and scaled items. These responses were paired with administrative and over 2 years of transcript data provided by their institutions. We used mean comparisons and logistic regression analysis to compare enrollment and persistence patterns among CS men and women. Results suggest that despite the obstacles associated with women's underrepresentation in computer science, women are more likely to continue taking computer science courses when they felt challenged and skilled in their initial computer science classes. We discuss implications for further research. PMID:28487664

  20. Engagement, Persistence, and Gender in Computer Science: Results of a Smartphone ESM Study

    Directory of Open Access Journals (Sweden)

    Carolina Milesi

    2017-04-01

    While the underrepresentation of women in the fast-growing STEM field of computer science (CS) has been much studied, no consensus exists on the key factors influencing this widening gender gap. Possible suspects include gender differences in aptitude, interest, and academic environment. Our study contributes to this literature by applying student engagement research to study the experiences of college students studying CS, to assess the degree to which differences in men and women's engagement may help account for gender inequity in the field. Specifically, we use the Experience Sampling Method (ESM) to evaluate in real-time the engagement of college students during varied activities and environments. Over the course of a full week in fall semester and a full week in spring semester, 165 students majoring in CS at two Research I universities were “beeped” several times a day via a smartphone app prompting them to fill out a short questionnaire including open-ended and scaled items. These responses were paired with administrative and over 2 years of transcript data provided by their institutions. We used mean comparisons and logistic regression analysis to compare enrollment and persistence patterns among CS men and women. Results suggest that despite the obstacles associated with women's underrepresentation in computer science, women are more likely to continue taking computer science courses when they felt challenged and skilled in their initial computer science classes. We discuss implications for further research.

  1. Computing in Hydraulic Engineering Education

    Science.gov (United States)

    Duan, J. G.

    2011-12-01

    Civil engineers, pioneers of our civilization, are rarely perceived as leaders and innovators in modern society because the profession has lagged in technology innovation. This crisis has resulted in a decline in the prestige of the civil engineering profession, reduced federal funding for deteriorating infrastructure, and problems with attracting the most talented high-school students. Infusing cutting-edge computer technology and stimulating creativity and innovation are therefore critical challenges for civil engineering education. To better prepare our graduates to innovate, this paper discusses the adoption of a problem-based collaborative learning technique and the integration of civil engineering computing into a traditional civil engineering curriculum. Three interconnected courses, Open Channel Flow, Computational Hydraulics, and Sedimentation Engineering, were developed with an emphasis on computational simulation. In Open Channel Flow, the focus is on the principles of free-surface flow and the application of computational models. This prepares students for the second course, Computational Hydraulics, which introduces the fundamental principles of computational hydraulics, including finite difference and finite element methods, and complements Open Channel Flow to give students an in-depth understanding of computational methods. The third course, Sedimentation Engineering, covers the fundamentals of sediment transport and river engineering, so students can apply the knowledge and programming skills gained in the previous courses to develop computational models for simulating sediment transport. These courses effectively equipped students with important skills and knowledge for completing thesis and dissertation research.

  2. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how the difficulties of the subject are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow the analytical formulation to be derived with no human intervention. It is particularly noteworthy that, compared to previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
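
    For comparison with such symbolic derivations, the geometric Jacobian of a revolute-joint arm can also be assembled numerically, column by column, with linear part z_i x (p_e - p_i). The planar two-link sketch below (link lengths ours) reproduces the textbook result and exposes the stretched-arm singularity.

        import numpy as np

        def planar_2r_jacobian(q1, q2, l1=1.0, l2=0.8):
            """Linear-velocity Jacobian of a planar 2R arm, each column
            built as z_i x (p_e - p_i) for a revolute joint."""
            z = np.array([0.0, 0.0, 1.0])                         # joint axes
            p0 = np.zeros(3)                                      # joint 1
            p1 = np.array([l1*np.cos(q1), l1*np.sin(q1), 0.0])    # joint 2
            pe = p1 + np.array([l2*np.cos(q1+q2), l2*np.sin(q1+q2), 0.0])
            J = np.column_stack([np.cross(z, pe - p0), np.cross(z, pe - p1)])
            return J[:2]   # planar case: keep the x and y rows

        print(planar_2r_jacobian(0.3, 0.7))
        # det(J) = 0 marks a singular configuration, e.g. the stretched arm:
        print(np.linalg.det(planar_2r_jacobian(0.3, 0.0)))  # ~ 0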

  3. New results on classical problems in computational geometry in the plane

    DEFF Research Database (Denmark)

    Abrahamsen, Mikkel

    In this thesis, we revisit three classical problems in computational geometry in the plane. An obstacle that often occurs as a subproblem in more complicated problems is to compute the common tangents of two disjoint, simple polygons. For instance, the common tangents turn up in problems related...... to visibility, collision avoidance, shortest paths, etc. We provide a remarkably simple algorithm to compute all (at most four) common tangents of two disjoint simple polygons. Given each polygon as a read-only array of its corners in cyclic order, the algorithm runs in linear time and constant workspace...... and is the first to achieve the two complexity bounds simultaneously. The set of common tangents provides basic information about the convex hulls of the polygons—whether they are nested, overlapping, or disjoint—and our algorithm thus also decides this relationship. One of the best-known problems in computational...

  4. Ergonomics standards and guidelines for computer workstation design and the impact on users' health - a review.

    Science.gov (United States)

    Woo, E H C; White, P; Lai, C W K

    2016-03-01

    This paper presents an overview of global ergonomics standards and guidelines for design of computer workstations, with particular focus on their inconsistency and associated health risk impact. Overall, considerable disagreements were found in the design specifications of computer workstations globally, particularly in relation to the results from previous ergonomics research and the outcomes from current ergonomics standards and guidelines. To cope with the rapid advancement in computer technology, this article provides justifications and suggestions for modifications in the current ergonomics standards and guidelines for the design of computer workstations. Practitioner Summary: A research gap exists in ergonomics standards and guidelines for computer workstations. We explore the validity and generalisability of ergonomics recommendations by comparing previous ergonomics research through to recommendations and outcomes from current ergonomics standards and guidelines.

  5. Standardized computer-based organized reporting of EEG SCORE - Second version

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the se...

  6. Degenerative dementia: nosological aspects and results of single photon emission computed tomography

    International Nuclear Information System (INIS)

    Dubois, B.; Habert, M.O.

    1999-01-01

    Ten years ago, the diagnostic discussion of a case of dementia in an elderly patient was limited to two pathologies: Alzheimer's disease and Pick's disease. In recent years, the framework of these primary degenerative dementias has been broken apart. The different diseases and the results obtained with single photon emission computed tomography are discussed, for example: fronto-temporal dementia, primary progressive aphasia, progressive apraxia, visuo-spatial dysfunction, dementia with Lewy bodies, and cortico-basal degeneration. (N.C.)

  7. Computing aggregate properties of preimages for 2D cellular automata.

    Science.gov (United States)

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm-incremental aggregation-that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
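
    The local building block of such computations is the map from a 3x3 neighborhood to the next state of its center cell; tabulating it by brute force is easy, and it is the composition of these local constraints over a whole grid that makes naive preimage enumeration intractable. A sketch for the Game of Life rule:

        from itertools import product

        def life_center_next(nb):
            """Next state of the center of a 3x3 neighborhood (row-major
            9-tuple of 0/1) under Game of Life rules."""
            center = nb[4]
            live = sum(nb) - center
            return 1 if live == 3 or (center == 1 and live == 2) else 0

        # Tabulate all 2^9 = 512 neighborhoods by the center state produced
        table = {0: 0, 1: 0}
        for nb in product((0, 1), repeat=9):
            table[life_center_next(nb)] += 1

        print(table)  # {0: 372, 1: 140}: 140 neighborhoods yield a live cell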

  8. Results of work of neurological clinic in first year of computer tomograph application

    Energy Technology Data Exchange (ETDEWEB)

    Volejnik, V; Nettl, S; Heger, L [Karlova Univ., Hradec Kralove (Czechoslovakia). Lekarska Fakulta

    1980-11-01

    The results of one year's use of a computer tomograph (CT) by a department of neurology are analyzed. Detailed comparisons of corresponding PEG and CT findings showed the accuracy of CT examinations in describing the width of the subarachnoid spaces and of the ventricular system. The advantages of CT are assessed from the medical, economic, and ethical points of view.

  9. Results of work of neurological clinic in first year of computer tomograph application

    International Nuclear Information System (INIS)

    Volejnik, V.; Nettl, S.; Heger, L.

    1980-01-01

    The results of one year's use of a computer tomograph (CT) by a department of neurology are analyzed. Detailed comparisons of corresponding PEG and CT findings showed the accuracy of CT examinations in describing the width of the subarachnoid spaces and of the ventricular system. The advantages of CT are assessed from the medical, economic, and ethical points of view. (author)

  10. Recent computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Taku [Department of Chemistry for Materials, and The Center of Ultimate Technology on nano-Electronics, Mie University (Japan); Center for Theoretical and Computational Chemistry, Department of Chemistry, University of Oslo (Norway)

    2015-12-31

    Thanks to previous developments in quantum theory and calculation methods, we can now investigate quantum phenomena in real materials and molecules, and can design functional materials by computation. As limits and problems still exist in theory, cooperation between theory and computation is becoming more important for clarifying unknown quantum mechanisms and discovering more efficient functional materials; it is likely to become the next-generation standard. Finally, our theoretical methodology for boundary solids is introduced.

  11. Recent computational chemistry

    International Nuclear Information System (INIS)

    Onishi, Taku

    2015-01-01

    Thanks to previous developments in quantum theory and calculation methods, we can now investigate quantum phenomena in real materials and molecules, and can design functional materials by computation. As limits and problems still exist in theory, cooperation between theory and computation is becoming more important for clarifying unknown quantum mechanisms and discovering more efficient functional materials; it is likely to become the next-generation standard. Finally, our theoretical methodology for boundary solids is introduced.

  12. Optimization and large scale computation of an entropy-based moment closure

    Science.gov (United States)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.

  13. Secondary recurrent miscarriage is associated with previous male birth.

    LENUS (Irish Health Repository)

    Ooi, Poh Veh

    2012-01-31

    Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.

  14. Secondary recurrent miscarriage is associated with previous male birth.

    LENUS (Irish Health Repository)

    Ooi, Poh Veh

    2011-01-01

    Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.

  15. Computational aeroelasticity using a pressure-based solver

    Science.gov (United States)

    Kamakoti, Ramji

    A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit with Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. Wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.
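
    As an illustration of the Newmark time-marching scheme mentioned above, the following is a minimal sketch of one implicit Newmark-beta step for a single-degree-of-freedom structural equation m·u'' + c·u' + k·u = f(t); the reduction to one degree of freedom and all parameter values are illustrative assumptions, not the paper's actual solver.

        def newmark_step(m, c, k, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
            """One implicit Newmark-beta step for m*u'' + c*u' + k*u = f(t)."""
            # Effective stiffness and effective load at t + dt
            # (beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration variant)
            k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
            f_eff = (f_next
                     + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                     + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                            + dt * (gamma / (2 * beta) - 1) * a))
            u_next = f_eff / k_eff
            # Recover acceleration and velocity from the Newmark displacement relations
            a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
            return u_next, v_next, a_next

        # One step of a lightly damped oscillator under a suddenly applied load
        u, v, a = newmark_step(m=1.0, c=0.1, k=10.0, f_next=1.0, u=0.0, v=0.0, a=0.0, dt=0.01)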

  16. Computing and data handling recent experiences at Fermilab and SLAC

    International Nuclear Information System (INIS)

    Cooper, P.S.

    1990-01-01

    Computing has become ever more central to the doing of high energy physics. There are now major second and third generation experiments for which the largest single cost is computing. At the same time the availability of "cheap" computing has made possible experiments which were previously considered infeasible. The result of this trend has been an explosion of computing and computing needs. I will review here the magnitude of the problem, as seen at Fermilab and SLAC, and the present methods for dealing with it. I will then undertake the dangerous assignment of projecting the needs and solutions forthcoming in the next few years at both laboratories. I will concentrate on the "offline" problem; the process of turning terabytes of data tapes into pages of physics journals. 5 refs., 4 figs., 4 tabs

  17. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec

  18. Cryptographically Secure Multiparty Computation and Distributed Auctions Using Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Anunay Kulshrestha

    2017-12-01

    We introduce a robust framework that allows for cryptographically secure multiparty computations, such as distributed private value auctions. The security is guaranteed by two-sided authentication of all network connections, homomorphically encrypted bids, and the publication of zero-knowledge proofs of every computation. This also allows a non-participant verifier to verify the result of any such computation using only the information broadcasted on the network by each individual bidder. Building on previous work on such systems, we design and implement an extensible framework that puts the described ideas to practice. Apart from the actual implementation of the framework, our biggest contribution is the level of protection we are able to guarantee from attacks described in previous work. In order to provide guidance to users of the library, we analyze the use of zero knowledge proofs in ensuring the correct behavior of each node in a computation. We also describe the usage of the library to perform a private-value distributed auction, as well as the other challenges in implementing the protocol, such as auction registration and certificate distribution. Finally, we provide performance statistics on our implementation of the auction.
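
    The framework's own code is not reproduced in this record; as a hedged sketch of the additive homomorphism that such auction protocols rely on, the following toy Paillier example (with deliberately tiny, insecure primes chosen purely for illustration) shows how a product of encrypted bids decrypts to their sum without revealing any individual bid.

        import random
        from math import gcd

        # Toy Paillier keypair -- primes this small are utterly insecure, illustration only
        p, q = 293, 433
        n = p * q
        n2 = n * n
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
        g = n + 1

        def L(x):
            return (x - 1) // n

        mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python >= 3.8)

        def encrypt(m):
            r = random.randrange(2, n)                 # fresh randomness per ciphertext
            while gcd(r, n) != 1:
                r = random.randrange(2, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c):
            return (L(pow(c, lam, n2)) * mu) % n

        # Additive homomorphism: multiplying ciphertexts adds the underlying bids
        bids = [120, 75, 310]
        product = 1
        for b in bids:
            product = (product * encrypt(b)) % n2
        assert decrypt(product) == sum(bids)           # 505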

  19. CADRIGS--computer aided design reliability interactive graphics system

    International Nuclear Information System (INIS)

    Kwik, R.J.; Polizzi, L.M.; Sticco, S.; Gerrard, P.B.; Yeater, M.L.; Hockenbury, R.W.; Phillips, M.A.

    1982-01-01

    An integrated reliability analysis program combining graphic representation of fault trees, automated database loading and referencing, and automated construction of reliability code input files was developed. The functional specifications for CADRIGS, the computer aided design reliability interactive graphics system, are presented. Previously developed fault tree segments used in auxiliary feedwater system safety analysis were constructed on CADRIGS and, when combined, yielded results identical to those resulting from manual input to the same reliability codes

  20. An Algebra-Based Introductory Computational Neuroscience Course with Lab.

    Science.gov (United States)

    Fink, Christian G

    2017-01-01

    A course in computational neuroscience has been developed at Ohio Wesleyan University which requires no previous experience with calculus or computer programming, and which exposes students to theoretical models of neural information processing and techniques for analyzing neural data. The exploration of theoretical models of neural processes is conducted in the classroom portion of the course, while data analysis techniques are covered in lab. Students learn to program in MATLAB and are offered the opportunity to conclude the course with a final project in which they explore a topic of their choice within computational neuroscience. Results from a questionnaire administered at the beginning and end of the course indicate significant gains in student facility with core concepts in computational neuroscience, as well as with analysis techniques applied to neural data.

  1. Runtime analysis of the (1+1) EA on computing unique input output sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Computing unique input output (UIO) sequences is a fundamental and hard problem in conformance testing of finite state machines (FSM). Previous experimental research has shown that evolutionary algorithms (EAs) can be applied successfully to find UIOs for some FSMs. However, before EAs can...... in the theoretical analysis, and the variability of the runtime. The numerical results fit well with the theoretical results, even for small problem instance sizes. Together, these results provide a first theoretical characterisation of the potential and limitations of the (1 + 1) EA on the problem of computing UIOs....
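
    For readers unfamiliar with the algorithm under analysis, a minimal (1+1) EA on bitstrings looks as follows; the UIO fitness function from the paper is replaced here by the OneMax stand-in, since the paper's actual fitness definition is not given in this record.

        import random

        def one_plus_one_ea(fitness, n, max_iters=100_000):
            """(1+1) EA: flip each bit independently with probability 1/n and
            accept the offspring if it is at least as fit as the parent."""
            x = [random.randint(0, 1) for _ in range(n)]
            fx = fitness(x)
            for _ in range(max_iters):
                y = [bit ^ (random.random() < 1 / n) for bit in x]
                fy = fitness(y)
                if fy >= fx:
                    x, fx = y, fy
            return x, fx

        # OneMax stand-in for a UIO fitness function: count of ones in the bitstring
        best, value = one_plus_one_ea(sum, 40)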

  2. On-the-Fly Computation of Bisimilarity Distances

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2017-01-01

    We propose a distance between continuous-time Markov chains (CTMCs) and study the problem of computing it by comparing three different algorithmic methodologies: iterative, linear program, and on-the-fly. In a work presented at FoSSaCS'12, Chen et al. characterized the bisimilarity distance of Desharnais et al. between discrete-time Markov chains as an optimal solution of a linear program that can be solved by using the ellipsoid method. Inspired by their result, we propose a novel linear program characterization to compute the distance in the continuous-time setting. Differently from previous proposals, ours has a number of constraints that is bounded by a polynomial in the size of the CTMC. This, in particular, proves that the distance we propose can be computed in polynomial time. Despite its theoretical importance, the proposed linear program characterization turns out to be inefficient...

  3. Cognitive impairment and computer tomography image in patients with arterial hypertension - preliminary results

    International Nuclear Information System (INIS)

    Yaneva-Sirakova, T.; Tarnovska-Kadreva, R.; Traykov, L.; Zlatareva, D.

    2012-01-01

    Arterial hypertension is the leading risk factor for cognitive impairment, but cognitive impairment develops only in some of the patients with poor control. On the other hand, not all of the patients with white matter changes have cognitive deficit. There may be a variety of reasons for this: the accuracy of methods for blood pressure measurement, the specific brain localization, or some other reason. Here are the preliminary results of a study of the potential correlation between self-measured, office, and ambulatory-monitored blood pressure, central aortic blood pressure, minimal cognitive impairment, and the specific brain image on contrast computed tomography. We expect to answer the question whether central aortic or self-measured blood pressure has the leading role in the development of cognitive impairment in the presence of a specific neuroimaging finding, as well as what the prerequisite is for the clinical manifestation of cognitive dysfunction in patients with computed tomographic pathology. (authors)

  4. A research program in empirical computer science

    Science.gov (United States)

    Knight, J. C.

    1991-01-01

    During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.

  5. A new computer-based counselling system for the promotion of physical activity in patients with chronic diseases--results from a pilot study.

    Science.gov (United States)

    Becker, Annette; Herzberg, Dominikus; Marsden, Nicola; Thomanek, Sabine; Jung, Hartmut; Leonhardt, Corinna

    2011-05-01

    To develop a computer-based counselling system (CBCS) for the improvement of attitudes towards physical activity in chronically ill patients and to pilot its efficacy and acceptance in primary care. The system is tailored to patients' disease and motivational stage. During a pilot study in five German general practices, patients answered questions before, directly and 6 weeks after using the CBCS. Outcome criteria were attitudes and self-efficacy. Qualitative interviews were performed to identify acceptance indicators. Seventy-nine patients participated (mean age: 64.5 years, 53% males; 38% without previous computer experience). Patients' affective and cognitive attitudes changed significantly, self-efficacy showed only minor changes. Patients mentioned no difficulties in interacting with the CBCS. However, perception of the system's usefulness was inconsistent. Computer-based counselling for physical activity related attitudes in patients with chronic diseases is feasible, but the circumstances of use with respect to the target group and its integration into the management process have to be clarified in future studies. This study adds to the understanding of computer-based counselling in primary health care. Acceptance indicators identified in this study will be validated as part of a questionnaire on technology acceptability in a subsequent study. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  7. Comparison of Experimental Surface and Flow Field Measurements to Computational Results of the Juncture Flow Model

    Science.gov (United States)

    Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.

    2016-01-01

    Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex that suppresses the side-of-body separation in the juncture flow region. Compared to experimental results, CFD predicts that a thinner boundary layer on the fuselage generates a weaker wing horseshoe vortex, resulting in a larger side-of-body separation.

  8. Partial safety factor calibration from stochastic finite element computation of welded joint with random geometries

    International Nuclear Information System (INIS)

    Schoefs, Franck; Chevreuil, Mathilde; Pasqualini, Olivier; Cazuguel, Mikaël

    2016-01-01

    Welded joints are used in various structures and infrastructures such as bridges, ships and offshore structures, and are subjected to cyclic stresses. Their fatigue behaviour is a key industrial issue and still offers original research subjects. One of the available methods relies on computing the stress concentration factor. Although some previous studies evaluated this factor for particular cases of welded structures, the shape of the weld joint is generally idealized through a deterministic parametric geometry. Previous experimental works have shown, however, that this shape plays a key role in the lifetime assessment. We propose in this paper a methodology for computing the stress concentration factor in the presence of random geometries of welded joints. To make the results usable by engineers, this method merges stochastic computation and semi-probabilistic analysis by computing partial safety factors with a dedicated method. - Highlights: • Numerical computation of the stress concentration factor with random weld geometry. • Real data are used for probabilistic modelling. • Identification of partial safety factors from SFEM computation in the case of random geometries.

  9. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    In this thesis we study a problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two... We show that, up to a polynomial time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...

  10. Bomb-Pulse Chlorine-36 At The Proposed Yucca Mountain Repository Horizon: An Investigation Of Previous Conflicting Results And Collection Of New Data

    International Nuclear Information System (INIS)

    J. Cizdziel

    2006-01-01

    Previous studies by scientists at Los Alamos National Laboratory (LANL) found elevated ratios of chlorine-36 to total chloride (³⁶Cl/Cl) in samples of rock collected from the Exploratory Studies Facility (ESF) and the Enhanced Characterization of the Repository Block (ECRB) at Yucca Mountain as the tunnels were excavated. The data were interpreted as an indication that fluids containing 'bomb-pulse' ³⁶Cl reached the repository horizon in the ∼50 years since the peak period of above-ground nuclear testing. Moreover, the data support the concept that so-called fast pathways for infiltration not only exist but are active, possibly through a combination of porous media, faults and/or other geologic features. Due to the significance of ³⁶Cl data to conceptual models of unsaturated zone flow and transport, the United States Geological Survey (USGS) was requested by the Department of Energy (DOE) to design and implement a study to validate the LANL findings. The USGS chose to drill new boreholes at select locations across zones where bomb-pulse ratios had previously been identified. The drill cores were analyzed at Lawrence Livermore National Laboratory (LLNL) for ³⁶Cl/Cl using both active and passive leaches, with the USGS/LLNL concluding that the active leach extracted too much rock-Cl and the passive leach did not show bomb-pulse ratios. Because consensus was not reached between the USGS/LLNL and LANL on several fundamental points, including the conceptual strategy for sampling, interpretation and use of tritium (³H) data, and the importance and interpretation of blanks, in addition to the presence or absence of bomb-pulse ³⁶Cl, an evaluation by an independent entity, the University of Nevada, Las Vegas (UNLV), using new samples was initiated. This report is the result of that study. The overall objectives of the UNLV study were to investigate the source or sources of the conflicting results from the previous validation study, and to obtain additional data to

  11. Computational Science: Ensuring America's Competitiveness

    National Research Council Canada - National Science Library

    Reed, Daniel A; Bajcsy, Ruzena; Fernandez, Manuel A; Griffiths, Jose-Marie; Mott, Randall D; Dongarra, J. J; Johnson, Chris R; Inouye, Alan S; Miner, William; Matzke, Martha K; Ponick, Terry L

    2005-01-01

    ... previously deemed intractable. Yet, despite the great opportunities and needs, universities and the Federal government have not effectively recognized the strategic significance of computational science in either...

  12. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    International Nuclear Information System (INIS)

    Gareis, I; Gentiletti, G; Acevedo, R; Rufiner, L

    2011-01-01

    The purpose of this work is to evaluate different feature extraction alternatives to detect the event-related evoked potential signal in brain-computer interfaces, trying to minimize the time employed and the classification error, in terms of sensitivity and specificity of the method, looking for alternatives to coherent averaging. In this context, the results obtained performing the feature extraction using the discrete dyadic wavelet transform with different mother wavelets are presented. For the classification a single-layer perceptron was used. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, specificity and sensitivity for the feature vectors obtained using some mother wavelets.
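
    A hedged sketch of the pipeline described above, using the PyWavelets package for the dyadic decomposition and a textbook single-layer perceptron; the mother wavelet, signal length, and random stand-in data are illustrative assumptions, not the study's recordings.

        import numpy as np
        import pywt

        def dwt_features(trial, wavelet="db4", level=4):
            """Feature vector: concatenated dyadic DWT coefficients of one trial."""
            coeffs = pywt.wavedec(trial, wavelet, level=level)
            return np.concatenate(coeffs)

        # Single-layer perceptron trained on wavelet features (toy random data)
        rng = np.random.default_rng(0)
        X = np.stack([dwt_features(rng.standard_normal(256)) for _ in range(100)])
        y = rng.integers(0, 2, size=100) * 2 - 1           # labels in {-1, +1}
        w, b, lr = np.zeros(X.shape[1]), 0.0, 0.01
        for _ in range(50):                                # perceptron learning rule
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:                 # misclassified: nudge the boundary
                    w += lr * yi * xi
                    b += lr * yi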

  13. Performance of various mathematical methods for computer-aided processing of radioimmunoassay results

    International Nuclear Information System (INIS)

    Vogt, W.; Sandel, P.; Langfelder, Ch.; Knedel, M.

    1978-01-01

    The performance of six algorithms was compared for computer-aided determination of radioimmunological end results. These were weighted and unweighted linear logit-log regression; quadratic logit-log regression; smoothing spline interpolation with a large and a small smoothing factor, respectively; and polygonal interpolation, as well as manual curve fitting, on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotrophin (HCS)). Great store was set by the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (Auth.)
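
    For illustration, an unweighted linear logit-log fit of a radioimmunoassay reference curve can be written in a few lines; the standards below are invented numbers for the sketch, not data from the study.

        import numpy as np

        def logit(p):
            return np.log(p / (1 - p))

        # Illustrative standards: concentration vs. bound fraction B/B0 (made-up values)
        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
        b_b0 = np.array([0.92, 0.80, 0.55, 0.30, 0.12])

        # Unweighted linear logit-log regression: logit(B/B0) = a + b * log(conc)
        b_coef, a_coef = np.polyfit(np.log(conc), logit(b_b0), 1)

        def conc_from_response(y):
            """Invert the fitted curve to read an unknown sample's concentration."""
            return np.exp((logit(y) - a_coef) / b_coef)

        print(conc_from_response(0.55))   # ~1.0 for this synthetic curve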

  14. CT-guided Irreversible Electroporation in an Acute Porcine Liver Model: Effect of Previous Transarterial Iodized Oil Tissue Marking on Technical Parameters, 3D Computed Tomographic Rendering of the Electroporation Zone, and Histopathology

    International Nuclear Information System (INIS)

    Sommer, C. M.; Fritz, S.; Vollherbst, D.; Zelzer, S.; Wachter, M. F.; Bellemann, N.; Gockner, T.; Mokry, T.; Schmitz, A.; Aulmann, S.; Stampfl, U.; Pereira, P.; Kauczor, H. U.; Werner, J.; Radeleff, B. A.

    2015-01-01

    Purpose: To evaluate the effect of previous transarterial iodized oil tissue marking (ITM) on technical parameters, three-dimensional (3D) computed tomographic (CT) rendering of the electroporation zone, and histopathology after CT-guided irreversible electroporation (IRE) in an acute porcine liver model as a potential strategy to improve IRE performance. Methods: After Ethics Committee approval was obtained, in five landrace pigs, two IREs of the right and left liver (RL and LL) were performed under CT guidance with identical electroporation parameters. Before IRE, transarterial marking of the LL was performed with iodized oil. Nonenhanced and contrast-enhanced CT examinations followed. One hour after IRE, animals were killed and livers collected. Mean resulting voltage and amperage during IRE were assessed. For 3D CT rendering of the electroporation zone, parameters for size and shape were analyzed. Quantitative data were compared by the Mann–Whitney test. Histopathological differences were assessed. Results: Mean resulting voltage and amperage were 2,545.3 ± 66.0 V and 26.1 ± 1.8 A for RL, and 2,537.3 ± 69.0 V and 27.7 ± 1.8 A for LL without significant differences. Short axis, volume, and sphericity index were 16.5 ± 4.4 mm, 8.6 ± 3.2 cm³, and 1.7 ± 0.3 for RL, and 18.2 ± 3.4 mm, 9.8 ± 3.8 cm³, and 1.7 ± 0.3 for LL without significant differences. For RL and LL, the electroporation zone consisted of severely widened hepatic sinusoids containing erythrocytes and showed homogeneous apoptosis. For LL, iodized oil could be detected in the center and at the rim of the electroporation zone. Conclusion: There is no adverse effect of previous ITM on technical parameters, 3D CT rendering of the electroporation zone, and histopathology after CT-guided IRE of the liver

  17. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored

  18. A dynamical-systems approach for computing ice-affected streamflow

    Science.gov (United States)

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  19. Research directions in computer engineering. Report of a workshop

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, H

    1982-09-01

    The results of a workshop held in November 1981 in Washington, DC, to outline research directions for computer engineering are reported. The purpose of the workshop was to provide guidance to government research funding agencies, as well as to universities and industry, as to the directions which computer engineering research should take for the next five to ten years. A select group of computer engineers was assembled, drawn from all over the United States and with expertise in virtually every aspect of today's computer technology. Industrial organisations and universities were represented in roughly equal numbers. The panel proceeded to provide a sharper definition of computer engineering than had been in popular use previously, to identify the social and national needs which provide the basis for encouraging research, to probe for obstacles to research and seek means of overcoming them, and to delineate high-priority areas in which computer engineering research should be fostered. These included experimental software engineering, architectures in support of programming style, computer graphics, pattern recognition, VLSI design tools, machine intelligence, programmable automation, architectures for speech and signal processing, computer architecture and robotics. 13 references.

  20. Two-dimensional computer simulation of hypervelocity impact cratering: some preliminary results for Meteor Crater, Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Bryan, J.B.; Burton, D.E.; Cunningham, M.E.; Lettis, L.A. Jr.

    1978-06-01

    A computational approach used for subsurface explosion cratering was extended to hypervelocity impact cratering. Meteor (Barringer) Crater, Arizona, was selected for the first computer simulation because it is one of the most thoroughly studied craters. It is also an excellent example of a simple, bowl-shaped crater and is one of the youngest terrestrial impact craters. Initial conditions for this calculation included a meteorite impact velocity of 15 km/s, meteorite mass of 1.67 × 10⁸ kg, with a corresponding kinetic energy of 1.88 × 10¹⁶ J (4.5 megatons). A two-dimensional Eulerian finite difference code called SOIL was used for this simulation of a cylindrical iron projectile impacting at normal incidence into a limestone target. For this initial calculation, a Tillotson equation-of-state description for iron and limestone was used with no shear strength. Results obtained for this preliminary calculation of the formation of Meteor Crater are in good agreement with field measurements. A color movie based on this calculation was produced using computer-generated graphics. 19 figures, 5 tables, 63 references.
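
    The quoted kinetic energy follows directly from the stated impactor mass and velocity:

        E_k = \tfrac{1}{2} m v^2
            = \tfrac{1}{2}\,(1.67 \times 10^{8}\ \mathrm{kg})\,(1.5 \times 10^{4}\ \mathrm{m/s})^{2}
            \approx 1.88 \times 10^{16}\ \mathrm{J}
            \approx 4.5\ \mathrm{Mt\ TNT}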

  2. TSOAK-M1: a computer code to determine tritium reaction/adsorption/release parameters from experimental results of air-detritiation tests

    International Nuclear Information System (INIS)

    Land, R.H.; Maroni, V.A.; Minkoff, M.

    1979-01-01

    A computer code has been developed which permits the determination of tritium reaction (T₂ to HTO)/adsorption/release and instrument correction parameters from enclosure (building) detritiation test data. The code is based on a simplified model which treats each parameter as a normalized time-independent constant throughout the data-unfolding steps. Because of the complicated four-dimensional mathematical surface generated by the resulting differential equation system, occasional local-minima effects are observed, but these effects can be overcome in most instances by selecting a series of trial guesses for the initial parameter values and observing the reproducibility of final parameter values for cases where the best overall fit to experimental data is achieved. The code was then used to analyze existing small-cubicle test data with good success, and the resulting normalized parameters were employed to evaluate hypothetical reactor-building detritiation scenarios. It was concluded from the latter evaluation that the complications associated with moisture formation, adsorption, and release, particularly in terms of extended cleanup times, may not be as great as was previously thought. It is recommended that the validity of the TSOAK-M1 model be tested using data from detritiation tests conducted on large experimental enclosures (5 to 10 cm³) and, if possible, actual facility buildings

  3. Positron computed tomography: current state, clinical results and future trends

    International Nuclear Information System (INIS)

    Schelbert, H.R.; Phelps, M.E.; Kuhl, D.E.

    1980-01-01

    An overview is presented of positron computed tomography: its advantages over single photon emission tomography, its use in metabolic studies of the heart and chemical investigation of the brain, and future trends

  4. Investigation of previously derived Hyades, Coma, and M67 reddenings

    International Nuclear Information System (INIS)

    Taylor, B.J.

    1980-01-01

    New Hyades polarimetry and field star photometry have been obtained to check the Hyades reddening, which was found to be nonzero in a previous paper. The new Hyades polarimetry implies essentially zero reddening; this is also true of polarimetry published by Behr (which was incorrectly interpreted in the previous paper). Four photometric techniques which are presumed to be insensitive to blanketing are used to compare the Hyades to nearby field stars; these four techniques also yield essentially zero reddening. When all of these results are combined with others which the author has previously published and a simultaneous solution for the Hyades, Coma, and M67 reddenings is made, the results are E(B−V) = 3 ± 2 (σ) mmag, −1 ± 3 (σ) mmag, and 46 ± 6 (σ) mmag, respectively. No support for a nonzero Hyades reddening is offered by the new results. When the newly obtained reddenings for the Hyades, Coma, and M67 are compared with results from techniques given by Crawford and by users of the David Dunlap Observatory photometric system, no differences between the new and other reddenings are found which are larger than about 2σ. The author had previously found that the M67 main-sequence stars have about the same blanketing as that of Coma and less blanketing than the Hyades; this conclusion is essentially unchanged by the revised reddenings

  5. Is Cup Positioning Challenged in Hips Previously Treated With Periacetabular Osteotomy?

    DEFF Research Database (Denmark)

    Hartig-Andreasen, Charlotte; Stilling, Maiken; Søballe, Kjeld

    2014-01-01

    After periacetabular osteotomy (PAO), some patients develop osteoarthritis and need a total hip arthroplasty (THA). We evaluated the outcome of THA following PAO and explored factors associated with inferior cup position and increased polyethylene wear. Follow-up was performed 4 to 10 years after THA in 34 patients (38 hips) with previous PAO. Computer analysis evaluated cup position and wear rates. No patient had dislocations or revision surgery. Median scores were: Harris hip 96, Oxford hip 38 and WOMAC 78. Mean cup anteversion and abduction angles were 22° (range 7°-43°) and 45° (range 28°-65°). Outliers of cup abduction were associated with persisting dysplasia (CE...

  8. Energy-resolved computed tomography: first experimental results

    International Nuclear Information System (INIS)

    Shikhaliev, Polad M

    2008-01-01

    First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. The existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, the x-ray photons with lower energies are scored less than those with higher energies. This underestimates the contribution of lower-energy photons that would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that would increase as the x-ray energy decreases. This could be performed by detecting each x-ray photon separately and measuring its energy. The energy-selective CT data could then be saved, and any weighting factor could be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom with 10.2 cm diameter made from tissue-equivalent material was used for CT imaging. The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy-selective CT data were acquired and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated using a single CT scan at a fixed x-ray tube voltage. For material decomposition the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by a factor of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT. The x-ray energy weighting was also applied
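
    A hedged sketch of projection-domain energy weighting as described above; the w(E) ∝ E⁻³ exponent is a common choice in the photon-counting literature (the paper's exact weights are not given in this record), and the bin energies and counts are illustrative.

        import numpy as np

        def energy_weighted_projection(counts, bin_energies_kev, exponent=3.0):
            """Combine photon-counting energy bins with weights that grow as energy falls.

            counts: (n_bins, n_detector_pixels) photon counts per energy bin.
            A conventional energy-integrating detector corresponds to w(E) ~ E;
            here lower-energy photons are scored more heavily via w(E) ~ E**-exponent.
            """
            w = np.asarray(bin_energies_kev, dtype=float) ** (-exponent)
            w /= w.sum()                              # normalize the weights
            return np.tensordot(w, counts, axes=1)    # weighted sum over the bin axis

        # Toy example: 5 energy bins, 4 detector pixels
        counts = np.random.default_rng(1).poisson(1000.0, size=(5, 4))
        proj = energy_weighted_projection(counts, [30, 50, 70, 90, 110])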

  9. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Cloud computing is a service model in which pooled computing resources are accessed on demand over the Internet and hosted in data centers. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of networked servers whose physical machines differ in computing capacity. Fluctuating demand for and availability of cloud services in the data center can be managed through abstraction with virtualization technology. A virtual machine (VM) is a representation of available computing resources that can be dynamically allocated and reallocated on demand. In this study we consider the consolidation of VMs as energy conservation in private cloud computing systems, targeting the optimization of the VM selection policy and VM migration in the consolidation procedure. In a cloud data center environment, VMs hosting a particular type of application service require different levels of computing resources. Imbalanced use of computing resources by VMs across physical servers can be reduced by using live VM migration to achieve workload balancing. A practical approach is used in developing an OpenStack-based cloud computing environment by integrating the cloud VM and VM placement selection procedures using OpenStack Neat VM consolidation. The CPU time value is used to obtain the average CPU utilization in MHz within a specific time period: a VM's average CPU utilization is the current cpu_time minus the cpu_time from the previous data retrieval, multiplied by the maximum frequency of the CPU, divided by the elapsed time between the current and previous samples in milliseconds.
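
    The utilization formula in the last sentence can be written directly; the unit conventions (cpu_time in nanoseconds, timestamps in milliseconds, as in libvirt-style counters) are assumptions for illustration, not specified in this record.

        def average_cpu_mhz(cpu_time_now_ns, cpu_time_prev_ns,
                            t_now_ms, t_prev_ms, max_freq_mhz):
            """Average CPU utilization of a VM in MHz between two samples:
            (delta cpu_time * max CPU frequency) / elapsed wall-clock time."""
            busy_ms = (cpu_time_now_ns - cpu_time_prev_ns) / 1e6   # ns -> ms of CPU time
            elapsed_ms = t_now_ms - t_prev_ms
            return busy_ms * max_freq_mhz / elapsed_ms

        # A VM that consumed 1.2 s of CPU time over a 4 s window on a 2600 MHz core
        print(average_cpu_mhz(5.2e9, 4.0e9, 14_000, 10_000, 2600))  # 780.0 MHz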

  10. COMPARISON OF THE RESULTS OF BLOOD GLUCOSE SELFMONITORING AND CONTINUOUS GLUCOSE MONITORING IN PREGNANT WOMEN WITH PREVIOUS DIABETES MELLITUS

    Directory of Open Access Journals (Sweden)

    A. V. Dreval'

    2015-01-01

    Background: Pregnancy is one of the indications for continuous glucose monitoring (CGM). The data on its efficiency in pregnant women are contradictory. Aim: To compare the results of blood glucose self-monitoring (SMBG) and CGM in pregnant women with previous diabetes mellitus. Materials and methods: We performed a cross-sectional comparative study of glycemia in 18 pregnant women with previous type 1 (87.8% of patients) and type 2 diabetes (22.2% of patients) with various degrees of glycemic control. Their age was 27.7 ± 4.9 years. At study entry, the patients were at 17.2 ± 6.1 weeks of gestation. CGM and SMBG were performed by all patients for the duration of 5.4 ± 1.5 days. Depending on their HbA1c levels, all patients were divided into two groups: group 1 – 12 women with HbA1c above the target (8.5 ± 1%), and group 2 – 6 women with HbA1c levels within the target (5.6 ± 0.3%). Results: According to SMBG results, women from group 2 had above-the-target glycemia levels before breakfast, at 1 hour after breakfast and at bedtime: 6.2 ± 1.6, 8.7 ± 2.1, and 5.7 ± 1.9 mmol/L, respectively. According to CGM, patients from group 1 had higher postprandial glycemia than those from group 2 (8.0 ± 2.1 and 6.9 ± 1.8 mmol/L, respectively, p = 0.03). The analysis of daytime glycemia revealed a significant difference between the groups only at 1 hour after dinner (7.1 ± 1.4 mmol/L in group 1 and 5.8 ± 0.9 mmol/L in group 2, p = 0.041), and the difference was close to significant before lunch (6.0 ± 2.2 mmol/L in group 1 and 4.8 ± 1.0 mmol/L in group 2, p = 0.053). Comparison of SMBG and CGM results demonstrated a significant difference only at one timepoint (at 1 hour after lunch) and only in group 1: median glycemia was 7.4 [6.9; 8.1] mmol/L by SMBG and 6 [5.4; 6.6] mmol/L by CGM measurement (p = 0.001). Lower median values by CGM measurement could be explained by averaging of three successive measurements carried out in the

  11. The quantum computer game: citizen science

    Science.gov (United States)

    Damgaard, Sidse; Mølmer, Klaus; Sherson, Jacob

    2013-05-01

    Progress in the field of quantum computation is hampered by daunting technical challenges. Here we present an alternative approach to solving these by enlisting the aid of computer players around the world. We have previously examined a quantum computation architecture involving ultracold atoms in optical lattices and strongly focused tweezers of light. In The Quantum Computer Game (see http://www.scienceathome.org/), we have encapsulated the time-dependent Schrödinger equation for the problem in a graphical user interface allowing for easy user input. Players can then search the parameter space with real-time graphical feedback in a game context with a global high score that rewards short gate times and robustness to experimental errors. The game, which is still in a demo version, has so far been tried by several hundred players. Extensions of the approach to other models such as Gross-Pitaevskii and Bose-Hubbard are currently under development. The game has also been incorporated into science education at high-school and university level as an alternative method for teaching quantum mechanics. Initial quantitative evaluation results are very positive. AU Ideas Center for Community Driven Research, CODER.

  12. Computer use and ulnar neuropathy: results from a case-referent study

    DEFF Research Database (Denmark)

    Andersen, JH; Frost, P.; Fuglsang-Frederiksen, A.

    2012-01-01

    We aimed to evaluate associations between vocational computer use and 1) ulnar neuropathy, and 2) ulnar neuropathy-like symptoms, as distinguished by electroneurography. We identified all patients aged 18-65 years examined at the Department of Neurophysiology on suspicion of ulnar neuropathy, 2001... was performed by conditional logistic regression. There was a negative association between daily hours of computer use and the two outcomes of interest. Participants who reported their elbow to be in contact with their working table for 2 hours or more during the workday had an elevated risk for ulnar

  13. Computer codes for RF cavity design

    International Nuclear Information System (INIS)

    Ko, K.

    1992-08-01

    In RF cavity design, numerical modeling is assuming an increasingly important role with the help of sophisticated computer codes and powerful yet affordable computers. A description of the cavity codes in use in the accelerator community has been given previously. The present paper will address the latest developments and discuss their applications to cavity tuning and matching problems

  14. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  15. Implant breast reconstruction after salvage mastectomy in previously irradiated patients.

    Science.gov (United States)

    Persichetti, Paolo; Cagli, Barbara; Simone, Pierfranco; Cogliandro, Annalisa; Fortunato, Lucio; Altomare, Vittorio; Trodella, Lucio

    2009-04-01

    The most common surgical approach in case of local tumor recurrence after quadrantectomy and radiotherapy is salvage mastectomy. Breast reconstruction is the subsequent phase of the treatment, and the plastic surgeon has to operate on previously irradiated and manipulated tissues. The medical literature highlights that breast reconstruction with tissue expanders is often not considered a pursuable option, with previous radiotherapy regarded as a contraindication. The purpose of this retrospective study is to evaluate the influence of previous radiotherapy on 2-stage breast reconstruction (tissue expander/implant). Only patients with analogous timing of radiation therapy and the same demolitive and reconstructive procedures were recruited. The results of this study show that, after salvage mastectomy in previously irradiated patients, implant reconstruction is still possible. Further comparative studies are, of course, advisable before drawing any conclusion on the possibility of performing implant reconstruction in previously irradiated patients.

  16. 8th International Conference on Bio-Inspired Computing : Theories and Applications

    CERN Document Server

    Pan, Linqiang; Fang, Xianwen

    2013-01-01

    International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA) is one of the flagship conferences on Bio-Computing, bringing together the world’s leading scientists from different areas of Natural Computing. Since 2006, the conferences have taken place at Wuhan (2006), Zhengzhou (2007), Adelaide (2008), Beijing (2009), Liverpool & Changsha (2010), Malaysia (2011) and India (2012). Following the successes of previous events, the 8th conference is organized and hosted by Anhui University of Science and Technology in China. This conference aims to provide a high-level international forum that researchers with different backgrounds and who are working in the related areas can use to present their latest results and exchange ideas. Additionally, the growing trend in Emergent Systems has resulted in the inclusion of two other closely related fields in the BIC-TA 2013 event, namely Complex Systems and Computational Neuroscience. These proceedings are intended for researchers in the fiel...

  17. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, developing radiation transport modeling codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  18. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication to laparoscopy. The aim of this study is to show that insertion of a Veress needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section, or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following diagnostic laparoscopy. In all patients, insertion of the Veress needle and trocar in the umbilical region was performed, namely the technique of closed laparoscopy. Adhesions in the region of the umbilicus were not found in a single patient, and no abdominal organs were injured.

  19. Code and papers: computing publication patterns in the LHC era

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by HEP experiments and core tools projects (including distributed computing R&D), and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in HEP literature. Distinctive features of scholarly publication production in the software-oriented and hardware-oriented experimental HEP communities are highlighted. Current patterns and trends are compared to the situation in previous generations' HEP experiments at LEP, Tevatron and B-factories. The results of this scientometric analysis document objec...

  20. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    An unexpected event referred to as kidnapping makes the localization estimate incorrect. In a previously unknown environment, an incorrect localization result leads to an incorrect mapping result in Simultaneous Localization and Mapping (SLAM). In this situation, the explored and unexplored areas are divided in a way that makes kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework is proposed to judge whether kidnapping has occurred and to identify the type of kidnapping within filter-based SLAM. The framework is called double kidnapping detection and recognition (DKDR); it performs two checks, before and after the “update” process, with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the “update” process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied to existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time, without previous data, is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.
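
    The pre-update check that DKDR-style frameworks perform can be illustrated with a standard innovation gate for an EKF: if the Mahalanobis distance of the measurement innovation exceeds a chi-square threshold, the measurement is inconsistent with the predicted state and a kidnapping event becomes a plausible explanation. The sketch below is illustrative only (not the authors' code); the function name and the fixed quantile are assumptions.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def kidnapping_suspected(innovation, S, quantile=0.999):
        """Gate an EKF measurement innovation.

        innovation : z - h(x_pred), the predicted-measurement residual
        S          : innovation covariance H P H^T + R
        Returns True when the squared Mahalanobis distance exceeds the
        chi-square threshold, i.e. the observation is so inconsistent
        with the prediction that kidnapping should be considered.
        """
        d2 = float(innovation @ np.linalg.solve(S, innovation))
        return d2 > chi2.ppf(quantile, df=len(innovation))

    # Example: a 2-D range-bearing innovation with its covariance
    print(kidnapping_suspected(np.array([3.0, 0.4]), np.diag([0.25, 0.01])))
    ```

    Running one such check on the predicted innovation (before the update) and another on the post-update residual mimics DKDR's two checks; the paper's metrics and adaptive thresholds are more elaborate than this fixed quantile.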

  1. Computational fluid dynamics in three dimensional angiography: Preliminary hemodynamic results of various proximal geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Youn; Park, Sung Tae; Bae, Won Kyoung; Goo, Dong Erk [Dept. of Radiology, Soonchunhyang University Hospital, Seoul (Korea, Republic of)

    2014-12-15

    We studied the influence of proximal geometry on the results of computational fluid dynamics (CFD). We made five models with different proximal geometries from the three-dimensional angiography of a 63-year-old woman with an intracranial aneurysm. CFD results were analyzed in terms of peak systolic velocity (PSV) at the inlet and outlet, as well as the flow velocity profile proximal to the internal carotid artery (ICA) aneurysm. The modified cavernous model with proximal tubing showed a faster PSV at the outlet than at the inlet; in the other models, the PSV at the outlet was slower than at the inlet. The flow velocity profiles immediately proximal to the ICA aneurysm showed similar patterns in all models, suggesting that proximal vessel geometry can affect CFD results.

  2. Two-dimensional computer simulation of hypervelocity impact cratering: some preliminary results for Meteor Crater, Arizona

    International Nuclear Information System (INIS)

    Bryan, J.B.; Burton, D.E.; Cunningham, M.E.; Lettis, L.A. Jr.

    1978-04-01

    A computational approach used for subsurface explosion cratering has been extended to hypervelocity impact cratering. Meteor (Barringer) Crater, Arizona, was selected for our first computer simulation because it was the most thoroughly studied. It is also an excellent example of a simple, bowl-shaped crater and is one of the youngest terrestrial impact craters. Shoemaker estimates that the impact occurred about 20,000 to 30,000 years ago [Roddy (1977)]. Initial conditions for this calculation included a meteorite impact velocity of 15 km/s, a meteorite mass of 1.57E+08 kg, and a corresponding kinetic energy of 1.88E+16 J (4.5 megatons). A two-dimensional Eulerian finite difference code called SOIL was used for this simulation of a cylindrical iron projectile impacting at normal incidence into a limestone target. For this initial calculation, a Tillotson equation-of-state description for iron and limestone was used with no shear strength. A color movie based on this calculation was produced using computer-generated graphics. Results obtained for this preliminary calculation of the formation of Meteor Crater, Arizona, are in good agreement with Meteor Crater measurements

  3. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  4. Comparison of tests of accommodation for computer users.

    Science.gov (United States)

    Kolker, David; Hutchinson, Robert; Nilsen, Erik

    2002-04-01

    With the increased use of computers in the workplace and at home, optometrists are finding more patients presenting with symptoms of Computer Vision Syndrome. Among these symptomatic individuals, research supports that accommodative disorders are the most common vision finding. A prepresbyopic group (N = 30) and a presbyopic group (N = 30) were selected from a private practice. Assignment to a group was determined by age, accommodative amplitude, and near visual acuity with the distance prescription. Each subject was given a thorough vision and ocular health examination, and was then administered several nearpoint tests of accommodation at a computer working distance. All the tests produced similar results in the presbyopic group. For the prepresbyopic group, the tests yielded very different results. To effectively treat symptomatic video display terminal (VDT) users, optometrists must assess the accommodative system along with the binocular and refractive status. For presbyopic patients, all the nearpoint tests studied will yield virtually the same result. However, the method of testing accommodation, as well as the test stimulus presented, will yield significantly different responses in prepresbyopic patients. Previous research indicates that a majority of patients prefer the higher plus prescription yielded by the Gaussian image test.

  5. Evaluation of an interdisciplinary re-isolation policy for patients with previous Clostridium difficile diarrhea.

    Science.gov (United States)

    Boone, N; Eagan, J A; Gillern, P; Armstrong, D; Sepkowitz, K A

    1998-12-01

    Diarrhea caused by Clostridium difficile is increasingly recognized as a nosocomial problem. The effectiveness and cost of a new program to decrease nosocomial spread, by identifying patients scheduled for readmission who were previously toxin-positive, were evaluated. The Memorial Sloan-Kettering Cancer Center is a 410-bed comprehensive cancer center in New York City. Many patients are readmitted during the course of their cancer therapy. In 1995, as a result of concern about the nosocomial spread of C difficile, we implemented a policy that all patients who had been positive for C difficile toxin in the previous 6 months, with no subsequent toxin-negative stool as an outpatient, would be placed into contact isolation on readmission pending evaluation of stool specimens. Patients previously positive for C difficile toxin were flagged in the infection control and admitting office databases via computer. Admitting personnel contacted infection control on all readmissions to determine whether a private room was required. Between July 1, 1995, and June 30, 1996, 47 patients previously positive for C difficile toxin were readmitted. Before their first scheduled readmission, the specimens of 15 (32%) of these patients were negative for C difficile toxin; they were cleared as outpatients and readmitted without isolation. Workup of the remaining 32 patients revealed that the specimens of 7 patients were positive for C difficile toxin, and 86 isolation days were used. An additional 25 patients used 107 isolation days and were either cleared after a negative specimen was obtained in-house or discharged without having an appropriate specimen sent. Four patients (9%) had recurrent C difficile after having toxin-negative stools. Because outpatient specimens were not collected, we estimate the cost incurred at $48,500 annually, including the incremental cost of hospital isolation and equipment. Our policy to control the spread of nosocomial C

  6. Computer codes for RF cavity design

    International Nuclear Information System (INIS)

    Ko, K.

    1992-01-01

    In RF cavity design, numerical modeling is assuming an increasingly important role with the help of sophisticated computer codes and powerful yet affordable computers. A description of the cavity codes in use in the accelerator community has been given previously. The present paper will address the latest developments and discuss their applications to cavity tuning and matching problems. (Author) 8 refs., 10 figs

  7. Successful initiation of and management through a distributed computer upgrade

    International Nuclear Information System (INIS)

    Barich, F.T.; Crawford, T.H.

    1995-01-01

    Processing capacity, the lack of data analysis tools, obsolescence, and spare parts issues are forcing utilities to upgrade or replace their plant computer systems with newer, larger systems. As a result, the utility faces an increasing number of new technologies, such as fiber optics and communication standards (FDDI, ATM, etc.), graphical user interfaces using X-Windows, and distributed architectures that eliminate the host-based computer. Technologies such as these, if properly applied, can greatly enhance the capabilities and functions of the existing system. Beyond this, the utility is also offered functionality previously not available through the plant computer, such as integrated plant monitoring and digital controls, voice, and imaging. With computing technology shifting rapidly away from traditional host systems, the utility confronts the question, "What are my needs (now and for the future), and what new system can meet those needs most effectively?" This paper describes the management process necessary to define the needs and then carry out a successful computer replacement project

  8. Parallel computing techniques for rotorcraft aerodynamics

    Science.gov (United States)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, modifications to the implicit operator, the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) scheme originally used in TURNS, are performed. Second, an inexact Newton method coupled with a Krylov subspace iterative method (a Newton-Krylov method) is applied. Both techniques had been tried previously for the Euler mode of the code; in this work, we extend the methods to the Navier-Stokes mode. Several new implicit operators were tried because of the convergence problems traditional operators exhibit on the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results are presented for these operators in both Euler and Navier-Stokes cases. For the efficient application of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used, so the parallel implicit operators from the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g., OVERFLOW).
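
    As a minimal illustration of the inexact Newton-Krylov idea (not the TURNS implementation), SciPy's newton_krylov solves a nonlinear residual by approximately solving each Newton linearization with GMRES. The toy residual below, a discrete Laplacian with a nonlinear source standing in for a flow residual, is an assumption for demonstration purposes.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        # Toy 1-D nonlinear residual: interior nodes satisfy a discrete
        # Laplacian balanced against a nonlinear source term; the two
        # boundary equations enforce homogeneous Dirichlet conditions.
        r = np.zeros_like(u)
        r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - 0.1 * np.exp(u[1:-1])
        r[0], r[-1] = u[0], u[-1]
        return r

    # Each Newton step is solved only approximately by a Krylov method
    # (GMRES), which is what makes the outer Newton iteration "inexact".
    u = newton_krylov(residual, np.zeros(50), method="gmres")
    print(np.abs(residual(u)).max())  # residual norm at the converged state
    ```

    In a production CFD code the Jacobian-vector products come from the discretized flow equations, and a parallel implicit operator serves as the preconditioner, as the abstract describes.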

  9. SD-CAS: Spin Dynamics by Computer Algebra System.

    Science.gov (United States)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear spin-1/2 systems is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems results from the fact that no matrix representation for spin operators is used in SD-CAS, which gives a fully symbolic character to the performed computations. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus in the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
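
    The matrix-free idea can be shown in miniature: products of single-spin Pauli operators can be evaluated purely by rewrite rules, with no matrix representation anywhere. The sketch below is a self-contained toy in Python (it is not SD-CAS or YACAS code, and the rule table covers only one spin-1/2).

    ```python
    # Symbolic product rules for single spin-1/2 Pauli operators:
    # sigma_a * sigma_b -> (coefficient, operator), with 'I' the identity.
    RULES = {
        ('x', 'x'): (1, 'I'), ('y', 'y'): (1, 'I'), ('z', 'z'): (1, 'I'),
        ('x', 'y'): (1j, 'z'), ('y', 'x'): (-1j, 'z'),
        ('y', 'z'): (1j, 'x'), ('z', 'y'): (-1j, 'x'),
        ('z', 'x'): (1j, 'y'), ('x', 'z'): (-1j, 'y'),
    }

    def multiply(a, b):
        """Multiply two Pauli operators symbolically (no matrices)."""
        if a == 'I':
            return (1, b)
        if b == 'I':
            return (1, a)
        return RULES[(a, b)]

    def commutator(a, b):
        """[a, b] = ab - ba, returned as (coefficient, operator)."""
        c1, op1 = multiply(a, b)
        c2, op2 = multiply(b, a)
        assert op1 == op2
        return (c1 - c2, op1)

    print(commutator('x', 'y'))  # (2j, 'z'), i.e. [sigma_x, sigma_y] = 2i sigma_z
    ```

    Extending such rules to ordered products over many spins, with coefficients collected per term, is essentially what lets symbolic storage grow linearly with the number of spins rather than exponentially, as explicit matrices would.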

  10. Generalized Bell-inequality experiments and computation

    Energy Technology Data Exchange (ETDEWEB)

    Hoban, Matty J. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD (United Kingdom); Wallman, Joel J. [School of Physics, The University of Sydney, Sydney, New South Wales 2006 (Australia); Browne, Dan E. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2011-12-15

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.

  11. Generalized Bell-inequality experiments and computation

    International Nuclear Information System (INIS)

    Hoban, Matty J.; Wallman, Joel J.; Browne, Dan E.

    2011-01-01

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.

  12. Computer-aided design and computer science technology

    Science.gov (United States)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  13. Computing programme SPEGTAR and user's guide

    International Nuclear Information System (INIS)

    Altiparmakov, D.; Bosevski, T.

    1974-01-01

    Computer code SPEGTAR is a one-dimensional multigroup code for calculating neutron transport in multi-zone cylindrical geometry. The neutron flux distribution is calculated by solving the integral transport equation with the first-collision-probability method previously developed for both one-group and multigroup cases. From the spatial multigroup distribution of the neutron flux, integral values of nuclear constants are determined by numerical integration for all material zones and all energy groups. These results are used to determine the parameters of the homogenized reactor cell and to condense the energy groups into a form suitable for overall reactor calculation

  14. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the previous CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day

  15. Non-unitary probabilistic quantum computing circuit and method

    Science.gov (United States)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
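
    One standard way to realize the repeat-until-success behavior described above is to embed a non-unitary contraction M into a unitary on an enlarged space (a unitary dilation), measure the ancilla, and retry on failure using the state left by the failed attempt. The NumPy sketch below illustrates that general idea; it is a hedged reconstruction, not the patented circuit, and the operator M and all names are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)

    def dilate(M):
        """Embed a contraction M (singular values <= 1) into a unitary
        acting on one ancilla qubit tensored with the system."""
        n = M.shape[0]
        I = np.eye(n)
        S1 = sqrtm(I - M @ M.conj().T)
        S2 = sqrtm(I - M.conj().T @ M)
        return np.block([[M, S1], [S2, -M.conj().T]])

    def apply_probabilistically(M, psi, max_tries=100):
        """Repeat-until-success application of M: ancilla outcome 0 leaves
        the system in M|psi> (normalized); on outcome 1 the computation is
        repeated on the state produced by the failed attempt."""
        U, n = dilate(M), len(psi)
        for _ in range(max_tries):
            out = U @ np.concatenate([psi, np.zeros(n)])
            success, failure = out[:n], out[n:]
            if rng.random() < np.linalg.norm(success) ** 2:
                return success / np.linalg.norm(success)
            psi = failure / np.linalg.norm(failure)
        raise RuntimeError("no success within max_tries")

    M = np.array([[0.6, 0.0], [0.3, 0.5]])   # an arbitrary contraction
    print(apply_probabilistically(M, np.array([1.0, 0.0])))
    ```

    The dilation is unitary because the off-diagonal blocks are matrix square roots chosen to absorb exactly the defects I - MM† and I - M†M, which is what makes the success probability of each attempt equal to the squared norm of M applied to the current state.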

  16. CHPS IN CLOUD COMPUTING ENVIRONMENT

    OpenAIRE

    K.L.Giridas; A.Shajin Nargunam

    2012-01-01

    Workflows have been utilized to characterize various forms of applications with high processing and storage-space demands. So, to make the cloud computing environment more eco-friendly, our research project aimed at reducing the e-waste accumulated by computers. In a hybrid cloud, the user has the flexibility offered by public cloud resources, which can be combined with the private resource pool as required. Our previous work described the process of combining the low-range and mid-range proce...

  17. Unconditionally verifiable blind quantum computation

    Science.gov (United States)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system, leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that any computation to be performed first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  18. HEDPIN: a computer program to estimate pinwise power density

    International Nuclear Information System (INIS)

    Cappiello, M.W.

    1976-05-01

    A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given

  19. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China. The conference was sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 was a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies was strongly encouraged. The forum aimed to bring together researchers, developers, and users from around the world, in both industry and academia, to share state-of-the-art results, to explore new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  20. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  1. Effective Computer-Aided Assessment of Mathematics; Principles, Practice and Results

    Science.gov (United States)

    Greenhow, Martin

    2015-01-01

    This article outlines some key issues for writing effective computer-aided assessment (CAA) questions in subjects with substantial mathematical or statistical content, especially the importance of control of random parameters and the encoding of wrong methods of solution (mal-rules) commonly used by students. The pros and cons of using CAA and…

  2. Computer simulation of backscattered alpha particles

    International Nuclear Information System (INIS)

    Sanchez, A. Martin; Bland, C.J.; Timon, A. Fernandez

    2000-01-01

    Alpha-particle spectrometry forms an important aspect of radionuclide metrology. Accurate measurements require corrections to be made for factors such as self-absorption within the source and backscattering from the backing material. The theory of the latter phenomenon has received only limited attention. Furthermore, experimental verification of these theoretical results requires adequate counting statistics for a variety of sources with different activities. These problems can be resolved by computer simulations of the various interactions which occur as alpha-particles move through different materials. The pioneering work of Ziegler and his coworkers over several years has provided the sophisticated software (SRIM) which has enabled us to obtain the results presented here. These results are compared with theoretical and experimental values obtained previously

  3. OT-Combiners Via Secure Computation

    DEFF Research Database (Denmark)

    Harnik, Danny; Ishai, Yuval; Kushilevitz, Eyal

    2008-01-01

    An OT-combiner implements a secure oblivious transfer (OT) protocol using oracle access to n OT-candidates of which at most t may be faulty. We introduce a new general approach for combining OTs by making a simple and modular use of protocols for secure computation. Specifically, we obtain an OT-combiner that tolerates a constant fraction of faulty candidates (t = Ω(n)), strengthens the security, and improves the efficiency of previous OT-combiners. Previous OT-combiners required either ω(n) or poly(k) calls to the n candidates, where k is a security parameter, and produced only a single secure OT. In particular, we obtain the first constant-rate OT-combiners, in which the number of secure OTs being produced is a constant fraction of the total number of calls to the OT-candidates, while still tolerating a constant fraction of faulty candidates. We demonstrate the usefulness of the latter result by presenting several applications that are of independent interest.

  4. Filtration theory using computer simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Corey, I. [Lawrence Livermore National Lab., CA (United States)

    1997-08-01

    We have used commercially available fluid dynamics codes based on Navier-Stokes theory and the Langevin particle equation of motion to compute the particle capture efficiency and pressure drop through selected two- and three-dimensional fiber arrays. The approach we used was to first compute the air velocity vector field throughout a defined region containing the fiber matrix. The particle capture in the fiber matrix is then computed by superimposing the Langevin particle equation of motion over the flow velocity field. Using the Langevin equation combines the particle Brownian motion, inertia and interception mechanisms in a single equation; in contrast, most previous investigations treat the different capture mechanisms separately. We have computed the particle capture efficiency and the pressure drop through one 2-D and two 3-D fiber matrix elements. 5 refs., 11 figs.
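
    A minimal sketch of the superposition step: integrate a Langevin equation for the particle (Stokes drag toward the local air velocity plus a Brownian forcing term) over a precomputed velocity field. The velocity field, capture test, and all constants below are placeholders, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def air_velocity(x):
        # Placeholder for the field a Navier-Stokes solver would supply;
        # here a uniform 0.1 m/s approach flow in the x-direction.
        return np.array([0.1, 0.0])

    def track_particle(x0, v0, tau=1e-4, D=1e-9, dt=1e-5, steps=2000):
        """Euler-Maruyama integration of the Langevin equation
        dv = (u(x) - v)/tau dt + sqrt(2 D)/tau dW, which combines drag
        (inertia, interception) and Brownian diffusion in one equation."""
        x, v = np.array(x0, float), np.array(v0, float)
        fiber, radius = np.array([1e-3, 0.0]), 5e-5   # fiber at 1 mm, 50 um
        for _ in range(steps):
            noise = np.sqrt(2.0 * D * dt) / tau * rng.standard_normal(2)
            v += (air_velocity(x) - v) / tau * dt + noise
            x += v * dt
            if np.linalg.norm(x - fiber) < radius:
                return x, True     # particle captured by the fiber
        return x, False

    print(track_particle(x0=[0.0, 1e-5], v0=[0.1, 0.0]))
    ```

    Capture efficiency then follows by releasing many particles across the inlet and counting the captured fraction, which is the quantity the abstract reports for the fiber arrays.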

  5. [Computed tomography with computer-assisted detection of pulmonary nodules in dogs and cats].

    Science.gov (United States)

    Niesterok, C; Piesnack, S; Köhler, C; Ludewig, E; Alef, M; Kiefer, I

    2015-01-01

    The aim of this study was to assess the potential benefit of computer-assisted detection (CAD) of pulmonary nodules in veterinary medicine. The CAD detection rate was therefore compared with the detection rates of two individual examiners in terms of sensitivity and false-positive findings. We included 51 dogs and 16 cats with pulmonary nodules previously diagnosed by computed tomography. First, the number of nodules ≥ 3 mm was recorded for each patient by two independent examiners. Subsequently, each examiner used the CAD software for automated nodule detection. With knowledge of the CAD results, a final consensus decision on the number of nodules was reached. The software used was a commercially available CAD program. The sensitivity of examiner 1 was 89.2%, while that of examiner 2 reached 87.4%. CAD had a sensitivity of 69.4%. With CAD, the sensitivity of examiner 1 increased to 94.7% and that of examiner 2 to 90.8%. The CAD system used in our study had a moderate sensitivity of 69.4%. Despite its considerable limitations, with a high level of false-positive and false-negative results, CAD increased the examiners' sensitivity; its supportive role in diagnostics therefore appears evident.

  6. Flow around an oscillating cylinder: computational issues

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Fengjian; Gallardo, José P; Pettersen, Bjørnar [Department of Marine Technology, Norwegian University of Science and Technology, NO-7491 Trondheim (Norway); Andersson, Helge I, E-mail: fengjian.jiang@ntnu.no [Department of Energy and Process Engineering, Norwegian University of Science and Technology, NO-7491 Trondheim (Norway)

    2017-10-15

    We consider different computational issues related to the three-dimensionalities of the flow around an oscillating circular cylinder. The full time-dependent Navier–Stokes equations are directly solved in a moving reference frame by introducing a forcing term. The choice of quantitative validation criteria is discussed and discrepancies of previously published results are addressed. The development of Honji vortices shows that short simulation times may lead to incorrect quasi-stable vortex patterns. The viscous decay of already established Honji vortices is also examined. (paper)

  7. Stochastic Collocation Applications in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Dragan Poljak

    2018-01-01

    The paper reviews the application of deterministic-stochastic models in some areas of computational electromagnetics. Namely, in certain problems there is an uncertainty in the input data set, as some properties of a system are partly or entirely unknown. Thus, a simple stochastic collocation (SC) method is used to determine relevant statistics about given responses. The SC approach also provides an assessment of the related confidence intervals in the set of calculated numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach providing a satisfactory convergence rate. This review paper provides computational examples from previous work by the authors illustrating the successful application of the SC technique in the areas of ground penetrating radar (GPR), human exposure to electromagnetic fields, and buried lines and grounding systems.
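
    In its simplest one-dimensional form, stochastic collocation evaluates the deterministic model at quadrature nodes of the input distribution and recovers the output mean and variance from weighted sums. The sketch below uses Gauss-Hermite nodes for a standard normal input; the response function is a stand-in, not an electromagnetic model from the paper.

    ```python
    import numpy as np

    def response(xi):
        # Placeholder scalar model output as a function of one uncertain
        # input xi (e.g., an induced current versus a material parameter).
        return np.exp(0.3 * xi) + 0.1 * xi ** 2

    # Gauss-Hermite rule: E[g(X)] = (1/sqrt(pi)) * sum_i w_i g(sqrt(2) x_i)
    # for X ~ N(0, 1); seven collocation points are ample for smooth g.
    nodes, weights = np.polynomial.hermite.hermgauss(7)
    vals = response(np.sqrt(2.0) * nodes)

    mean = np.sum(weights * vals) / np.sqrt(np.pi)
    var = np.sum(weights * vals ** 2) / np.sqrt(np.pi) - mean ** 2

    # a Gaussian-approximation confidence interval around the mean response
    print(mean, var, (mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)))
    ```

    Multi-dimensional uncertain inputs are handled the same way with tensorized or sparse node sets, which is where the method's cost advantage over crude Monte Carlo sampling shows up for smooth responses.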

  8. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis: Results From the VIRTU-Fast Study.

    Science.gov (United States)

    Morris, Paul D; Silva Soto, Daniel Alejandro; Feher, Jeroen F A; Rafiroiu, Dan; Lungu, Angela; Varma, Susheel; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2017-08-01

    Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel "pseudotransient" analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf >24 h for transient analysis) using a desktop PC, with <1% error relative to that of full-transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  9. Three-dimensional Computational Fluid Dynamics Investigation of a Spinning Helicopter Slung Load

    Science.gov (United States)

    Theorn, J. N.; Duque, E. P. N.; Cicolani, L.; Halsey, R.

    2005-01-01

    After performing steady-state Computational Fluid Dynamics (CFD) calculations using OVERFLOW to validate the CFD method against static wind-tunnel data for a box-shaped cargo container, the same setup was used to investigate unsteady flow with a moving body. Results were compared to previously collected flight test data in which the container is spinning.

  10. On the Integration of Computer Algebra Systems (CAS) by Canadian Mathematicians: Results of a National Survey

    Science.gov (United States)

    Buteau, Chantal; Jarvis, Daniel H.; Lavicza, Zsolt

    2014-01-01

    In this article, we outline the findings of a Canadian survey study (N = 302) that focused on the extent of computer algebra systems (CAS)-based technology use in postsecondary mathematics instruction. Results suggest that a considerable number of Canadian mathematicians use CAS in research and teaching. CAS use in research was found to be the…

  11. Results of computer assisted mini-incision subvastus approach for total knee arthroplasty.

    Science.gov (United States)

    Turajane, Thana; Larbpaiboonpong, Viroj; Kongtharvonskul, Jatupon; Maungsiri, Samart

    2009-12-01

    The mini-incision subvastus approach preserves the soft tissue of the knee. Its advantages include reduced blood loss, reduced pain, self-rehabilitation and faster recovery. However, whether improved visualization, component alignment, and blood preservation achieve better outcomes and prevent early failure of Total Knee Arthroplasty (TKA) has remained debatable. Computer navigation has been introduced to improve alignment and reduce blood loss. The purpose of this study was to evaluate the short-term outcomes of the combination of a computer-assisted mini-incision subvastus approach for Total Knee Arthroplasty (CMS-TKA). A prospective case series of the initial 80 patients who underwent CMS-TKA from January 2007 to October 2008 was carried out. The patients' conditions were classified into 2 groups: the simple OA knee (varus deformity less than 15 degrees, BMI less than 20%, no associated deformities) and the complex deformity (varus deformity more than 15 degrees, BMI more than 20%, associated flexion contracture). There were 59 patients in group 1 and 21 patients in group 2. Of the 80 knees, 38 were on the left and 42 on the right. The results of CMS-TKA [mean (range)] in group 1 : group 2 were, respectively: incision length [10.88 (8-13) : 11.92 (10-14)], operation time [118 (111.88-125.12) : 131 (119.29-143.71) minutes], lateral releases (0 in both groups), postoperative range of motion in flexion [94.5 (90-100) : 95.25 (90-105) degrees] and extension [1.75 (0-5) : 1.5 (0-5) degrees], blood loss in 24 hours [489.09 (414.7-563.48) : 520 (503.46-636.54) ml], blood transfusion [1 (0-1) unit in both groups], preoperative tibiofemoral angle [varus = 4 (varus 0-10) : varus = 17.14 (varus 15.7-18.5) degrees], postoperative tibiofemoral angle [valgus = 1.38 (valgus 0-4) : valgus = 2.85 (valgus 2.1-3.5) degrees], tibiofemoral angle outliers (85% both

  12. Effects of periodic boundary conditions on equilibrium properties of computer simulated fluids. II. Application to simple liquids

    International Nuclear Information System (INIS)

    Pratt, L.R.; Haan, S.W.

    1981-01-01

    The theory of the previous paper is used to predict anomalous size effects observed for computer-simulated liquid Ar. The boundary-condition-induced anisotropy of two-particle correlations predicted by the theory is found to be large, and in excellent agreement with the computer experimental data of Mandell for densities near the Ar triple-point density. The agreement is less good at higher densities

  13. First principle calculations of effective exchange integrals: Comparison between SR (BS) and MR computational results

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Kizashi [Institute for Nano Science Design Center, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan and TOYOTA Physical and Chemical Research Institute, Nagakute, Aichi, 480-1192 (Japan); Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka [Department of Chemistry, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan)

    2015-01-22

    First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural orbital (NO) references of ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed on these species. Comparison between UHF-NO (UNO) MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, so that the AP scheme for UBCCD easily removed the remaining spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning the first-principles calculation of J values in di- and poly-radical species. It was found that BS (AP) methods reproduce MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
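
    For reference, the approximate spin-projection estimate of J referred to here is commonly written in the Yamaguchi form, which removes the spin contamination of the broken-symmetry (BS, low-spin) solution using the high-spin (HS) solution; the sign convention below assumes a Heisenberg Hamiltonian of the form $H = -2J\,\hat{S}_1\cdot\hat{S}_2$:

    $$J = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}{\langle \hat{S}^2\rangle_{\mathrm{HS}} - \langle \hat{S}^2\rangle_{\mathrm{BS}}}$$

    Both energies and $\langle \hat{S}^2\rangle$ expectation values are taken from the same level of theory (UHF, UDFT, UCC, ...), which is what makes the scheme applicable across all of the single-reference methods compared in the paper.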

  14. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using the message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu/~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  15. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operations are useful for compositionally developing asynchronous and concurrent programs that frequently use bit-level operations. Examples include programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized for and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes providing serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A novel aspect is that a carrier moving between two terminals can concisely express some kinds of computation, such as synchronization and bidirectional communication. The model's properties make it well suited to composing bit-level computation, since the uniform computation elements are sufficient to develop components with practical functionality. Through future application of the model, our research may enable further work on a base model of fine-grained parallel computer architecture, since the model is suitable for expressing massive concurrency as a network of primitives.

  16. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, practical utility and right to affirmation in Russian education. Methods. The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways of forming it in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved with the development of computer hardware and software. The practice-oriented interpretation of computational thinking dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. In the author's view, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described, a process connected with the evolution of computer and information technologies and the growing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  17. An Introduction to Quantum Computing, Without the Physics

    OpenAIRE

    Nannicini, Giacomo

    2017-01-01

    This paper is a gentle but rigorous introduction to quantum computing intended for discrete mathematicians. Starting from a small set of assumptions on the behavior of quantum computing devices, we analyze their main characteristics, stressing the differences with classical computers, and finally describe two well-known algorithms (Simon's algorithm and Grover's algorithm) using the formalism developed in previous sections. This paper does not touch on the physics of the devices, and therefor...

  18. Finding New Math Identities by Computer

    Science.gov (United States)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Recently a number of interesting new mathematical identities have been discovered by means of numerical searches on high performance computers, using some newly discovered algorithms. These include the following: $\pi = \sum_{k=0}^{\infty} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right)$; $\frac{17\pi^4}{360} = \sum_{k=1}^{\infty}\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}\right)^2 k^{-2}$; and $\zeta(3,1,3,1,\ldots,3,1) = \frac{2\pi^{4m}}{(4m+2)!}$, where $m$ is the number of (3,1) pairs and where $\zeta(n_1,n_2,\ldots,n_r) = \sum_{k_1 > k_2 > \cdots > k_r \geq 1} 1/(k_1^{n_1} k_2^{n_2} \cdots k_r^{n_r})$. The first identity is remarkable in that it permits one to compute the n-th binary or hexadecimal digit of pi directly, without computing any of the previous digits, and without using multiple precision arithmetic. Recently the ten billionth hexadecimal digit of pi was computed using this formula. The third identity has connections to quantum field theory. (The first and second of these have been formally established; the third is affirmed by numerical evidence only.) The background and results of this work will be described, including an overview of the algorithms and computer techniques used in these studies.
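
    The digit-extraction property of the first identity (the BBP formula) follows from splitting each series at k = n, so the head can be computed with fast modular exponentiation and only fractional parts are kept. The sketch below is the standard BBP digit-extraction algorithm in Python, not code from this work; double-precision arithmetic limits it to moderate positions.

    ```python
    def pi_hex_digits(d, ndigits=8):
        """Hexadecimal digits of pi starting at fractional position d
        (d = 0 gives the first digit after the point), via the BBP
        formula, without computing any earlier digits."""
        def frac_series(j):
            # fractional part of sum_k 16^(d-k) / (8k + j)
            s = 0.0
            for k in range(d + 1):              # head: exact modular arithmetic
                s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
            k, term = d + 1, 1.0 / 16
            while term > 1e-17:                 # tail: converges geometrically
                s = (s + term / (8 * k + j)) % 1.0
                k, term = k + 1, term / 16.0
            return s

        x = (4 * frac_series(1) - 2 * frac_series(4)
             - frac_series(5) - frac_series(6)) % 1.0
        digits = ""
        for _ in range(ndigits):
            x *= 16
            digits += "0123456789ABCDEF"[int(x)]
            x -= int(x)
        return digits

    print(pi_hex_digits(0))   # 243F6A88 -- pi = 3.243F6A88... in hexadecimal
    ```

    The same head/tail split works in any base b for series of the form sum over k of 1/(b^k (mk + j)), which is why analogous formulas yield isolated binary digits of other constants.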

  19. Computational Investigation of Amine–Oxygen Exciplex Formation

    Science.gov (United States)

    Haupert, Levi M.; Simpson, Garth J.; Slipchenko, Lyudmila V.

    2012-01-01

    It has been suggested that fluorescence from amine-containing dendrimer compounds could be the result of a charge transfer between amine groups and molecular oxygen [Chu, C.-C.; Imae, T. Macromol. Rapid Commun. 2009, 30, 89.]. In this paper we employ equation-of-motion coupled cluster computational methods to study the electronic structure of an ammonia–oxygen model complex to examine this possibility. The results reveal several bound electronic states with charge transfer character with emission energies generally consistent with previous observations. However, further work involving confinement, solvent, and amine structure effects will be necessary for more rigorous examination of the charge transfer fluorescence hypothesis. PMID:21812447

  20. The rheology of concentrated dispersions: structure changes and shear thickening in experiments and computer simulations

    NARCIS (Netherlands)

    Boersma, W.H.; Laven, J.; Stein, H.N.; Moldenaers, P.; Keunings, R.

    1992-01-01

    The flow-induced changes in the microstructure and rheology of very concentrated, shear-thickening dispersions are studied. Results obtained for polystyrene sphere dispersions are compared with previous data and computer simulations to give better insight into the processes occurring in the dispersions.

  1. A New Soft Computing Method for K-Harmonic Means Clustering.

    Science.gov (United States)

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a clustering method that groups data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. The computational results appear to support the superiority of the proposed iSSO-KHM over previously developed algorithms across all experiments in the literature.
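
    A hedged sketch of the objective the passage describes: for each point, take the harmonic average of its distances to all K centroids (times K), and sum over points. The names and the distance power p are conventional choices, not details from the paper.

    ```python
    import numpy as np

    def khm_objective(X, C, p=2):
        """K-harmonic means objective: sum over points of K divided by the
        sum of inverse p-th-power distances to the K centroids. Unlike the
        K-means SSE, every centroid influences every point, which reduces
        sensitivity to the initial centroid placement."""
        K = len(C)
        # pairwise point-to-centroid distances, shape (n_points, K);
        # a small floor avoids division by zero at exact coincidences
        d = np.maximum(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), 1e-12)
        return float(np.sum(K / np.sum(d ** -p, axis=1)))

    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
    C = np.array([[0.05, 0.0], [5.05, 5.0]])   # centroids on the two clusters
    print(khm_objective(X, C))                  # small objective: a good fit
    ```

    Metaheuristics such as the paper's iSSO with VNS then search over centroid positions C to minimize exactly this kind of objective.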

  2. Computational techniques in gamma-ray skyshine analysis

    International Nuclear Information System (INIS)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model is presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes the response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute the response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs
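
    The final leg of the single-scatter model (attenuate along the straight path from the scatter point to the detector and multiply by a buildup factor) reduces to a point-kernel evaluation. The sketch below uses a Taylor-form buildup factor with placeholder coefficients; neither the coefficients nor the function names come from SKY, SILOGP, or WALLGP.

    ```python
    import numpy as np

    def buildup_taylor(mu_r, A=24.0, a1=-0.09, a2=0.04):
        """Taylor-form buildup factor
        B(mu*r) = A*exp(-a1*mu*r) + (1 - A)*exp(-a2*mu*r); the coefficients
        here are illustrative placeholders, not fitted data."""
        return A * np.exp(-a1 * mu_r) + (1.0 - A) * np.exp(-a2 * mu_r)

    def detector_response(S, scatter_point, detector, mu):
        """Point-kernel leg from scatter point to detector: 1/(4 pi r^2)
        geometric spreading, exponential attenuation in air, and buildup
        to account for photons rescattered back toward the detector."""
        r = np.linalg.norm(np.asarray(detector, float) - np.asarray(scatter_point, float))
        mu_r = mu * r
        return S * buildup_taylor(mu_r) * np.exp(-mu_r) / (4.0 * np.pi * r ** 2)

    # e.g. 1e6 photons/s leaving a scatter point 50 m up, detector 200 m away,
    # with an air attenuation coefficient of roughly 0.01 per meter
    print(detector_response(S=1e6, scatter_point=[0, 0, 50], detector=[200, 0, 1], mu=0.01))
    ```

    Integrating such contributions over all first-scatter points, weighted by the source emission and the scattering cross section, is the step the Gauss quadrature in the modified code accelerates.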

  3. Approximator: Predicting Interruptibility in Software Development with Commodity Computers

    DEFF Research Database (Denmark)

    Tell, Paolo; Jalaliniya, Shahram; Andersen, Kristian S. M.

    2015-01-01

    Assessing the presence and availability of a remote colleague is key to coordination in global software development, but is not easily done using existing computer-mediated channels. Previous research has shown that automated estimation of interruptibility is feasible and can achieve a precision… These early but promising results represent a starting point for designing tools with support for interruptibility, capable of improving distributed awareness and cooperation, to be used in global software development.

  4. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
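
    A toy sketch of the challenge-response idea (the matrix size, alphabet, and function names are illustrative, not from the patent):

        import random
        import string

        def make_matrix(n=4):
            """Random n-by-n matrix of distinct alphanumeric characters."""
            chars = random.sample(string.ascii_uppercase + string.digits, n * n)
            return [chars[i * n:(i + 1) * n] for i in range(n)]

        def challenge(matrix):
            """Pick two cells sharing neither row nor column; the valid reply
            is the pair of cells completing the rectangle they define."""
            n = len(matrix)
            (r1, r2), (c1, c2) = random.sample(range(n), 2), random.sample(range(n), 2)
            sent = (matrix[r1][c1], matrix[r2][c2])
            expected = {matrix[r1][c2], matrix[r2][c1]}
            return sent, expected

        m = make_matrix()
        sent, expected = challenge(m)
        print("challenge:", sent, "-> valid response:", expected)

    Discarding each character subset after a single use, as the patent specifies, is what defeats replay by an eavesdropper.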

  5. DUBNA-GRAN SASSO: Satellite computer link

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    In April a 64 kbit/s computer communication link was set up between the Joint Institute for Nuclear Research (JINR), Dubna (Russia) and Gran Sasso (Italy) Laboratories via nearby ground satellite stations using the INTELSAT V satellite. Previously the international community of Dubna's experimentalists and theorists (high energy physics, condensed matter physics, low energy nuclear and neutron physics, accelerator and applied nuclear physics) had no effective computer links with scientific centres worldwide

  6. Morphological Computation: Synergy of Body and Brain

    Directory of Open Access Journals (Sweden)

    Keyan Ghazi-Zahedi

    2017-08-01

    There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden of the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that, compared to previous measures, it is more reliable in cases in which there is no synergistic information. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems.

  7. Quantum Computing's Classical Problem, Classical Computing's Quantum Problem

    OpenAIRE

    Van Meter, Rodney

    2013-01-01

    Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classical computers…

  8. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  9. Left ventricular asynergy score as an indicator of previous myocardial infarction

    International Nuclear Information System (INIS)

    Backman, C.; Jacobsson, K.A.; Linderholm, H.; Osterman, G.

    1986-01-01

    Sixty-eight patients with coronary heart disease (CHD), i.e. a history of angina of effort and/or previous 'possible infarction', were examined inter alia with ECG and cinecardioangiography. A system of scoring was designed which allowed a semiquantitative estimate of the left ventricular asynergy from cinecardioangiography - the left ventricular motion score (LVMS). The LVMS was associated with the presence of a previous myocardial infarction (MI), as indicated by the history and ECG findings. The ECG changes specific for a previous MI were associated with high LVMS values, and unspecific or absent ECG changes with low LVMS values. Decision thresholds for ECG changes and asynergy in diagnosing a previous MI were evaluated by means of a ROC analysis. The accuracy of ECG in detecting a previous MI was slightly higher when asynergy indicated a 'true MI' than when the autopsy result did so in a comparable group. Therefore the accuracy of asynergy (LVMS ≥ 1) in detecting a previous MI or myocardial fibrosis in patients with CHD should be at least comparable with that of autopsy (scar > 1 cm). (orig.)

  10. Reaction Diffusion Voronoi Diagrams: From Sensors Data to Computing

    Directory of Open Access Journals (Sweden)

    Alejandro Vázquez-Otero

    2015-05-01

    In this paper, a new method to solve computational problems using reaction diffusion (RD) systems is presented. The novelty relies on the use of a model configuration that tailors its spatiotemporal dynamics to develop Voronoi diagrams (VD) as a part of the system’s natural evolution. The proposed framework is deployed in a solution of related robotic problems, where the generalized VD are used to identify topological places in a grid map of the environment that is created from sensor measurements. The ability of the RD-based computation to integrate external information, like a grid map representing the environment in the model computational grid, permits a direct integration of sensor data into the model dynamics. The experimental results indicate that this method exhibits significantly less sensitivity to noisy data than the standard algorithms for determining VD in a grid. In addition, previous drawbacks of the computational algorithms based on RD models, like the generation of volatile solutions by means of excitable waves, are now overcome by final stable states.
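
    A discrete analogue of the wavefront mechanism is easy to sketch: fronts grown simultaneously from all seeds partition a grid into Voronoi regions (the 4-neighbour metric and the seed placement below are assumptions, not the paper's RD dynamics):

        from collections import deque

        def grid_voronoi(width, height, seeds):
            """Label each cell with the index of the seed whose wavefront
            reaches it first (multi-source breadth-first search)."""
            labels = {seed: i for i, seed in enumerate(seeds)}
            queue = deque(seeds)
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in labels:
                        labels[(nx, ny)] = labels[(x, y)]  # this front arrived first
                        queue.append((nx, ny))
            return labels

        print(grid_voronoi(8, 8, [(1, 1), (6, 5)]))

    Cells where two differently labelled fronts meet approximate the generalized Voronoi boundary, which is what the robotic mapping application uses to identify topological places.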

  11. Approaching Gender Parity: Women in Computer Science at Afghanistan's Kabul University

    Science.gov (United States)

    Plane, Jandelyn

    2010-01-01

    This study explores the representation of women in computer science at the tertiary level through data collected about undergraduate computer science education at Kabul University in Afghanistan. Previous studies have theorized reasons for underrepresentation of women in computer science, and while many of these reasons are indeed present in…

  12. Techniques for animation of CFD results. [computational fluid dynamics

    Science.gov (United States)

    Horowitz, Jay; Hanson, Jeffery C.

    1992-01-01

    Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.

  13. Comparison of Swedish and Norwegian Use of Cone-Beam Computed Tomography: a Questionnaire Study

    Directory of Open Access Journals (Sweden)

    Jerker Edén Strindberg

    2015-12-01

    Objectives: Cone-beam computed tomography in dentistry can, in some countries, be used by dentists other than specialists in radiology. The frequency of buying cone-beam computed tomography to examine patients is rapidly growing, thus knowledge of how to use it is very important. The aim was to compare the outcome of an investigation on the use of cone-beam computed tomography in Sweden with a previous Norwegian study, specifically regarding technical aspects. Material and Methods: The questionnaire contained 45 questions, including 35 questions comparable to those sent to Norwegian clinics one year previously. Results were based on an inter-comparison of the outcomes of the two questionnaire studies. Results: The response rate was 71% in Sweden. There, most cone-beam computed tomography (CBCT) examinations were performed by dental nurses, while in Norway they were performed by specialists. More than two-thirds of the CBCT units had a scout image function, regularly used in both Sweden (79%) and Norway (75%). In Sweden 4% and in Norway 41% of the respondents did not wait for the report from the radiographic specialist before initiating treatment. Conclusions: The bilateral comparison showed an overall similarity between the two countries. The survey gave explicit and important knowledge of the need for education and training of the whole team, since the radiation dose to the patient can vary considerably for the same kind of radiographic examination. It is essential to establish quality assurance protocols with defined responsibilities in the team in order to maintain high diagnostic accuracy for all examinations when using cone-beam computed tomography for patient examinations.

  14. Impact of computer use on children's vision.

    Science.gov (United States)

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  15. Impossibility results for distributed computing

    CERN Document Server

    Attiya, Hagit

    2014-01-01

    To understand the power of distributed systems, it is necessary to understand their inherent limitations: what problems cannot be solved in particular systems, or without sufficient resources (such as time or space). This book presents key techniques for proving such impossibility results and applies them to a variety of different problems in a variety of different system models. Insights gained from these results are highlighted, aspects of a problem that make it difficult are isolated, and features of an architecture that make it inadequate for solving certain problems efficiently are identified.

  16. Solving a Hamiltonian Path Problem with a bacterial computer

    Directory of Open Access Journals (Sweden)

    Treece Jessica

    2009-07-01

    Background: The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results: We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion: We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph.
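
    For scale, the search space the bacteria explore can be checked exhaustively on a conventional computer (a brute-force sketch; the three-node edge set below is an illustrative assumption, not the graph from the paper):

        from itertools import permutations

        def hamiltonian_paths(nodes, edges):
            """All orderings of the nodes in which every consecutive
            pair is a directed edge."""
            return [p for p in permutations(nodes)
                    if all((a, b) in edges for a, b in zip(p, p[1:]))]

        nodes = [1, 2, 3]
        edges = {(1, 2), (2, 3), (1, 3)}
        print(hamiltonian_paths(nodes, edges))  # [(1, 2, 3)]

    The factorial growth of permutations(nodes) is exactly the NP-complete blow-up that motivates massively parallel approaches such as the bacterial computer.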

  17. Solving a Hamiltonian Path Problem with a bacterial computer

    Science.gov (United States)

    Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T

    2009-01-01

    Background: The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results: We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion: We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. This proof of concept…

  18. Computers in nuclear medicine - current trends and future directions

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    Previously, a decision to purchase computing equipment for nuclear medicine usually required evaluation of only the 'local' needs. With the advent of PACS and state-of-the-art computer techniques for image acquisition and manipulation, purchase and subsequent application are becoming much more complex. Some of the current trends and future possibilities which may influence the choice and operation of computers within and outside the nuclear medicine environment are discussed. (author)

  19. SAR: A fast computer for Camac data acquisition

    International Nuclear Information System (INIS)

    Bricaud, B.; Faivre, J.C.; Pain, J.

    1979-01-01

    This paper describes a special data acquisition and processing facility developed for nuclear physics experiments at intermediate energy, installed at SATURNE (France) and at CERN (Geneva, Switzerland). Previously, we used a PDP 11/45 computer which was connected to the experiments through a Camac branch highway. In a typical experiment (340 words per event), the computer limited the data acquisition rate to 4 μs per 16-bit transfer and the on-line data reduction to only 20 events per second. The initial goal of this project was to improve both of these figures. Previously known acquisition processors were limited by the memory capacity these systems could support. Most of the time the data reduction was done on the host minicomputer. Larger memories can be designed with new fast RAM (Intel 2147) and the data processing can now take place on the front-end processor.

  20. Interfacing external quantum devices to a universal quantum computer.

    Directory of Open Access Journals (Sweden)

    Antonio A Lagana

    We present a scheme for using external quantum devices with the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer.

  1. Highlights from the previous volumes

    Science.gov (United States)

    Vergini Eduardo, G.; Pan, Y.; al., Vardi R. et; al., Akkermans Eric et; et al.

    2014-01-01

    Semiclassical propagation up to the Heisenberg time; Superconductivity and magnetic order in the half-Heusler compound ErPdBi; An experimental evidence-based computational paradigm for new logic-gates in neuronal activity; Universality in the symmetric exclusion process and diffusive systems.

  2. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  3. Computed tomography study of otitis media

    International Nuclear Information System (INIS)

    Bahia, Paulo Roberto Valle; Marchiori, Edson

    1997-01-01

    The findings of computed tomography (CT) in 89 patients clinically suspected of having otitis media were studied in this work. The results were compared with the clinical diagnosis, otoscopy, surgical findings and previous data. In our analysis, we studied seven patients with acute otitis media and 83 patients with chronic otitis media. The patients with acute otitis media underwent CT examinations to evaluate possible spread to the central nervous system. The diagnosis of cholesteatoma, its extension and complications were the main indications for the chronic otitis media study. The main findings of cholesteatomatous otitis were occupation of the epitympanum, bony wall destruction and ossicular chain erosion. CT demonstrated high sensitivity in diagnosing cholesteatoma. (author)

  4. Towards a real time computation of the dose in a phantom segmented into homogeneous meshes

    International Nuclear Information System (INIS)

    Blanpain, B.

    2009-10-01

    Automatic radiation therapy treatment planning necessitates a very fast computation of the dose delivered to the patient. We propose to compute the dose by segmenting the patient's phantom into homogeneous meshes and by associating, with each mesh, projections to dose distributions pre-computed in homogeneous phantoms, along with weights managing heterogeneities. The dose computation is divided into two steps. The first step operates on the meshes: projections and weights are set according to physical and geometrical criteria. The second step operates on the voxels: the dose is computed by evaluating the functions previously associated with their mesh. This method is very fast, in particular when there are few points of interest (several hundred). In that case, results are obtained in less than one second. With such performance, automatic treatment planning becomes practically feasible. (author)

  5. Speed test results and hardware/software study of computational speed problem, appendix D

    Science.gov (United States)

    1984-01-01

    The HP9845C is a desktop computer which was tested and evaluated for processing speed. A study was made to determine the availability and approximate cost of computers and/or hardware accessories necessary to meet the 20 ms sample period speed requirements. Additional requirements were that the control algorithm could be programmed in a high-level language and that the machine have sufficient storage to store the data from a complete experiment.

  6. Associated computational plasticity schemes for nonassociated frictional materials

    DEFF Research Database (Denmark)

    Krabbenhoft, K.; Karim, M. R.; Lyamin, A. V.

    2012-01-01

    A new methodology for computational plasticity of nonassociated frictional materials is presented. The new approach is inspired by the micromechanical origins of friction and results in a set of governing equations similar to those of standard associated plasticity. As such, procedures previously developed for associated plasticity are applicable with minor modification. This is illustrated by adaptation of the standard implicit scheme. Moreover, the governing equations can be cast in terms of a variational principle, which after discretization is solved by means of a newly developed second-order cone programming scheme.

  7. Quality of human-computer interaction - results of a national usability survey of hospital-IT in Germany

    Directory of Open Access Journals (Sweden)

    Bundschuh Bettina B

    2011-11-01

    Background: Due to the increasing functionality of medical information systems, it is hard to imagine day-to-day work in hospitals without IT support. Therefore, the design of dialogues between humans and information systems is one of the most important issues to be addressed in health care. This survey presents an analysis of the current quality level of human-computer interaction of healthcare IT in German hospitals, focused on the users' point of view. Methods: To evaluate the usability of clinical IT according to the design principles of EN ISO 9241-10, the IsoMetrics Inventory, an assessment tool, was used. The focus of this paper is on suitability for the task, training effort and conformity with user expectations, differentiated by information system. Effectiveness was evaluated with a focus on the interoperability and functionality of different IT systems. Results: 4521 persons from 371 hospitals visited the start page of the study, and 1003 persons from 158 hospitals completed the questionnaire. The results show relevant variations between different information systems. Conclusions: Specialised information systems with defined functionality received better assessments than clinical information systems in general. This could be attributed to the improved customisation of these specialised systems for specific working environments. The results can be used as reference data for the evaluation and benchmarking of human-computer engineering in the clinical health IT context in future studies.

  8. Secure Data Access Control for Fog Computing Based on Multi-Authority Attribute-Based Signcryption with Computation Outsourcing and Attribute Revocation.

    Science.gov (United States)

    Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia

    2018-05-17

    Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional "encrypt-then-sign" or "sign-then-encrypt" strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and has been gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of end users for signcryption and designcryption grows linearly with the complexity of the signing and encryption access policies. Moreover, only a single authority responsible for attribute management and key generation exists in previously proposed ABSC schemes, whereas in reality different authorities usually monitor different attributes of a user. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.

  9. Passive acoustic monitoring using a towed hydrophone array results in identification of a previously unknown beaked whale habitat.

    Science.gov (United States)

    Yack, Tina M; Barlow, Jay; Calambokidis, John; Southall, Brandon; Coates, Shannon

    2013-09-01

    Beaked whales are diverse and species-rich taxa. They spend the vast majority of their time submerged, regularly diving to depths of hundreds to thousands of meters, typically occur in small groups, and behave inconspicuously at the surface. These factors make them extremely difficult to detect using standard visual survey methods. However, recent advancements in acoustic detection capabilities have made passive acoustic monitoring (PAM) a viable alternative. Beaked whales can be discriminated from other odontocetes by the unique characteristics of their echolocation clicks. In 2009 and 2010, PAM methods using towed hydrophone arrays were tested. These methods proved highly effective for real-time detection of beaked whales in the Southern California Bight (SCB) and were subsequently implemented in 2011 to successfully detect and track beaked whales during the ongoing Southern California Behavioral Response Study. The three-year field effort has resulted in (1) the successful classification and tracking of Cuvier's (Ziphius cavirostris), Baird's (Berardius bairdii), and unidentified Mesoplodon beaked whale species and (2) the identification of areas of previously unknown beaked whale habitat use. Identification of habitat use areas will contribute to a better understanding of the complex relationship between beaked whale distribution, occurrence, and preferred habitat characteristics on a relatively small spatial scale. These findings will also provide information that can be used to promote more effective management and conservation of beaked whales in the SCB, a heavily used Naval operation and training region.

  10. Computer science. Heads-up limit hold'em poker is solved.

    Science.gov (United States)

    Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari

    2015-01-09

    Poker is a family of games that exhibit imperfect information, where players do not have full knowledge of past events. Whereas many perfect-information games have been solved (e.g., Connect Four and checkers), no nontrivial imperfect-information game played competitively by humans has previously been solved. Here, we announce that heads-up limit Texas hold'em is now essentially weakly solved. Furthermore, this computation formally proves the common wisdom that the dealer in the game holds a substantial advantage. This result was enabled by a new algorithm, CFR(+), which is capable of solving extensive-form games orders of magnitude larger than previously possible. Copyright © 2015, American Association for the Advancement of Science.
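
    At the heart of CFR-style solvers is regret matching, sketched below (a toy illustration of the principle, not the CFR(+) implementation from the paper):

        import numpy as np

        def regret_matching(cum_regret):
            """Play each action with probability proportional to its
            accumulated positive regret; uniform if none is positive."""
            positive = np.maximum(cum_regret, 0.0)
            total = positive.sum()
            if total > 0.0:
                return positive / total
            return np.full(len(cum_regret), 1.0 / len(cum_regret))

        print(regret_matching(np.array([3.0, -1.0, 1.0])))  # [0.75 0.   0.25]

    The key twist in CFR(+) is to clamp the accumulated regrets at zero after every iteration, which converges far faster in practice than vanilla counterfactual regret minimization.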

  11. Computed tomography of the chest in blunt thoracic trauma: results of a prospective study

    International Nuclear Information System (INIS)

    Blostein, P.; Hodgman, C.

    1998-01-01

    Blunt thoracic injuries detected by computed tomography of the chest infrequently require immediate therapy. If immediate therapy is needed, findings will be visible on plain roentgenograms or on clinical exam. Routine computed tomography of the chest in blunt trauma is not recommended but may be helpful in selected cases. (N.C.)

  12. MELMRK 2.0: A description of computer models and results of code testing

    International Nuclear Information System (INIS)

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-01-01

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly

  13. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases.

  14. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, and evolve (or reject) hypotheses and models of how environmental systems function, and move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even if they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al 2016 [1], we argue that a cultural change is required in the computational hydrological community, in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as…

  15. Enhancing Trusted Cloud Computing Platform for Infrastructure as a Service

    Directory of Open Access Journals (Sweden)

    KIM, H.

    2017-02-01

    The characteristics of cloud computing, including on-demand self-service, resource pooling, and rapid elasticity, have made it grow in popularity. However, security concerns still obstruct widespread adoption of cloud computing in the industry. Especially, security risks related to virtual machines make cloud users worry about exposure of their private data in IaaS environments. In this paper, we propose an enhanced trusted cloud computing platform to provide confidentiality and integrity of the user's data and computation. The presented platform provides secure and efficient virtual machine management protocols, not only to protect against eavesdropping and tampering during transfer but also to guarantee that the virtual machine is hosted only on trusted cloud nodes, against inside attackers. The protocols utilize both symmetric key operations and public key operations, together with an efficient node authentication model; hence both the computational cost for cryptographic operations and the communication steps are significantly reduced. As a result, simulation shows that the performance of the proposed platform is approximately doubled compared to previous platforms. The proposed platform eliminates the cloud users' concerns described above by providing confidentiality and integrity of their private data with better performance, and thus it contributes to wider industry adoption of cloud computing.

  16. Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses

    Directory of Open Access Journals (Sweden)

    James Wolfer

    2015-02-01

    Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take the course. Recently there has been a movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that work and expands it with representative examples of assignments, demonstrations, and results, as well as describing how the tools and examples deployed for these classes have a residual effect on classes such as Computer Literacy.

  17. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  18. Influence from cavity decay on geometric quantum computation in the large-detuning cavity QED model

    International Nuclear Information System (INIS)

    Chen Changyong; Zhang Xiaolong; Deng Zhijiao; Gao Kelin; Feng Mang

    2006-01-01

    We introduce a general displacement operator to investigate the unconventional geometric quantum computation with dissipation under the model of many identical three-level atoms in a cavity, driven by a classical field. Our concrete calculation is made for the case of two atoms, based on a previous scheme [S.-B. Zheng, Phys. Rev. A 70, 052320 (2004)] for the large-detuning interaction of the atoms with the cavity mode. The analytical results we present will be helpful for experimental realization of geometric quantum computation in real cavities

  19. Study of basic computer competence among public health nurses in Taiwan.

    Science.gov (United States)

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83, total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence, which together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.

  20. Learning Universal Computations with Spikes

    Science.gov (United States)

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  1. Towards pattern generation and chaotic series prediction with photonic reservoir computers

    Science.gov (United States)

    Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge

    2016-03-01

    Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will allow complex prediction tasks to be tackled in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results for our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for the experiments we will carry out in the coming months.
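
    In software, the reservoir-with-output-feedback recipe described above reduces to a few lines (the sizes, weight scalings, learning rate, and sine target below are assumptions for illustration, not the experimental parameters):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100                                        # reservoir size
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
        w_fb = rng.uniform(-1.0, 1.0, N)               # output-feedback weights
        w_out = np.zeros(N)                            # linear readout (trained)

        x, y, lr = np.zeros(N), 0.0, 1e-3
        for t in range(10000):
            target = np.sin(0.1 * t)          # pattern-generation target
            x = np.tanh(W @ x + w_fb * y)     # reservoir update with feedback
            y = w_out @ x                     # linear readout
            w_out += lr * (target - y) * x    # online gradient-descent step

    In the reported setup, an online update of this kind runs on the FPGA while the opto-electronic reservoir provides the state x.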

  2. Limits of computational white-light holography

    International Nuclear Information System (INIS)

    Mader, Sebastian; Kozacki, Tomasz; Tompkin, Wayne

    2013-01-01

    Recently, computational holograms have been used in applications where previously conventional holograms were applied. Compared to conventional holography, computational holography is based on imaging of virtual objects instead of real objects, which makes it somewhat more flexible. Here, computational holograms are calculated based on the superposition of point sources, which are placed at the mesh vertices of arbitrary 3D models. The computed holograms have full parallax and exhibit a viewing problem that we have called ghosting, which is linked to the viewing of computational holograms based on 3D models close to the image plane. Experimental white-light reconstruction of these holograms showed significant blurring, which is explained here based on simulations of the lateral as well as the axial resolution of a point image with respect to the source spectrum and image distance. In accordance with these simulations, an upper limit on the distance to the image plane is determined which ensures high-quality imaging.

  3. Total hip arthroplasty after a previous pelvic osteotomy: A systematic review and meta-analysis.

    Science.gov (United States)

    Shigemura, T; Yamamoto, Y; Murata, Y; Sato, T; Tsuchiya, R; Wada, Y

    2018-06-01

    There are several reports regarding total hip arthroplasty (THA) after a previous pelvic osteotomy (PO). However, to our knowledge, until now there has been no formal systematic review and meta-analysis published to summarize the clinical results of THA after a previous PO. Therefore, we conducted a systematic review and meta-analysis of the results of THA after a previous PO. We focus on the following questions: does a previous PO affect the results of subsequent THA, such as clinical outcomes, operative time, operative blood loss, and radiological parameters? Using PubMed, Web of Science, and the Cochrane Library, we searched for relevant original papers. The pooling of data was performed using RevMan software (version 5.3, Cochrane Collaboration, Oxford, UK). A p-value <0.05 was considered statistically significant. When I² >50%, significant heterogeneity was assumed and a random-effects model was applied for the meta-analysis. A fixed-effects model was applied in the absence of significant heterogeneity. Eleven studies were included in this meta-analysis. The pooled results indicated that there was no significant difference in postoperative Merle d'Aubigné-Postel score (I² = 0%, SMD = -0.15, 95% CI: -0.36 to 0.06, p = 0.17), postoperative Harris hip score (I² = 60%, SMD = -0.23, 95% CI: -0.50 to 0.05, p = 0.10), operative time (I² = 86%, SMD = 0.37, 95% CI: -0.09 to 0.82, p = 0.11), operative blood loss (I² = 82%, SMD = 0.23, 95% CI: -0.17 to 0.63, p = 0.25), or cup abduction angle (I² = 43%, SMD = -0.08, 95% CI: -0.25 to 0.09, p = 0.38) between THA with and without a previous PO. However, the cup anteversion angle of THA with a previous PO was significantly smaller than that without a previous PO (I² = 77%, SMD = -0.63, 95% CI: -1.13 to -0.13, p = 0.01). A systematic review and meta-analysis of the results of THA after a previous PO was performed. A previous PO did not affect the results of subsequent THA, except for cup anteversion. Because of the low quality of the evidence currently available, high-quality randomized controlled trials are required.
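
    The fixed-effects pooling used when heterogeneity is low is simple inverse-variance weighting, as the sketch below shows (the numbers are illustrative, not the study data):

        import numpy as np

        smd = np.array([-0.20, -0.10, -0.15])  # per-study standardized mean differences
        var = np.array([0.04, 0.05, 0.03])     # per-study variances of the SMDs
        w = 1.0 / var                          # inverse-variance weights
        pooled = np.sum(w * smd) / np.sum(w)   # fixed-effects pooled SMD
        se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled SMD
        print(pooled, (pooled - 1.96 * se, pooled + 1.96 * se))  # estimate, 95% CI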

  4. Computational aspects of feedback in neural circuits.

    Directory of Open Access Journals (Sweden)

    Wolfgang Maass

    2007-01-01

    It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also…

  5. Previous utilization of service does not improve timely booking in ...

    African Journals Online (AJOL)

    Previous utilization of service does not improve timely booking in antenatal care: cross-sectional study. Results: Past experience of antenatal care service utilization did not emerge as a predictor for …

  6. Limits on efficient computation in the physical world

    Science.gov (United States)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure/Shor separator"…

  7. Results of application of automatic computation of static corrections on data from the South Banat Terrain

    Science.gov (United States)

    Milojević, Slavka; Stojanovic, Vojislav

    2017-04-01

    Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is a constant goal. The correct application of the latest software solutions improves the processing results and justifies their development. The correct computation and application of static corrections is one of the most important tasks in pre-processing, and this phase is of great importance for the subsequent processing steps. Static corrections are applied to seismic data in order to compensate for the effects of irregular topography, for differences between the elevations of source and receiver points relative to the reduction datum, for the low-velocity near-surface layer (weathering correction), and for any other factors that influence the spatial and temporal position of seismic traces. The refraction statics method is the most common method for the computation of static corrections. It is successful both in resolving long-period statics problems and in determining differences in statics caused by abrupt lateral changes of velocity in the near-surface layer. XtremeGeo FlatironsTM is a program whose main purpose is the computation of static corrections by a refraction statics method, and it allows the application of the following procedures: picking of first arrivals, checking of geometry, multiple methods for the analysis and modelling of statics, analysis of refractor anisotropy, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in flat terrain with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and the large differences in altitude significantly affect the calculation of static corrections. The XtremeGeo FlatironsTM software has powerful visualization and statistical analysis tools, which contribute to a significantly more accurate assessment of the near-surface geometry.
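
    The elementary building block of any statics computation is the elevation correction, a one-line formula (the datum, elevation, and replacement velocity below are illustrative, not values from the survey):

        def elevation_static(elev_m, datum_m, v_replacement_mps):
            """Time shift (s) that moves a trace recorded at the station
            elevation to the reduction datum, assuming a uniform
            replacement velocity for the near surface."""
            return (datum_m - elev_m) / v_replacement_mps

        # Station on a 195 m dune, 50 m datum, 1600 m/s replacement velocity:
        print(elevation_static(195.0, 50.0, 1600.0))  # about -0.091 s

    Refraction statics methods such as those in Flatirons additionally estimate the weathering-layer thickness and velocity from the picked first arrivals, replacing the uniform-velocity assumption above.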

  8. Fire Risk Scoping Study: Investigation of nuclear power plant fire risk, including previously unaddressed issues

    International Nuclear Information System (INIS)

    Lambright, J.A.; Nowlen, S.P.; Nicolette, V.F.; Bohn, M.P.

    1989-01-01

    An investigation of nuclear power plant fire risk issues raised as a result of the USNRC sponsored Fire Protection Research Program at Sandia National Laboratories has been performed. The specific objectives of this study were (1) to review and requantify fire risk scenarios from four fire probabilistic risk assessments (PRAs) in light of updated databases made available as a result of the USNRC sponsored Fire Protection Research Program and updated computer fire modeling capabilities, (2) to identify potentially significant fire risk issues that have not previously been addressed in a fire risk context and to quantify the potential impact of those identified fire risk issues where possible, and (3) to review current fire regulations and plant implementation practices for relevance to the identified unaddressed fire risk issues. In performing the fire risk scenario requantifications, several important insights were gained. It was found that utilization of a more extensive operational experience base resulted in both fire occurrence frequencies and fire duration times (i.e., time required for fire suppression) increasing significantly over those assumed in the original works. Additionally, some thermal damage threshold limits assumed in the original works were identified as being nonconservative based on more recent experimental data. Finally, application of the COMPBRN III fire growth model resulted in calculation of considerably longer fire damage times than those calculated in the original works using COMPBRN I. 14 refs., 2 figs., 16 tabs

  9. Preparing computers for affective communication: a psychophysiological concept and preliminary results.

    Science.gov (United States)

    Whang, Min Cheol; Lim, Joa Sang; Boucsein, Wolfram

    Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to find out what physiological parameters might be useful to differentiate among 4 emotional states, based on 2 dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The 4 emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG), heart rate variability, and electrodermal measures were used to differentiate the 4 emotions. EEG activity separates pleasantness from unpleasantness only in the aroused but not in the relaxed domain, where electrodermal parameters are the differentiating ones. All three classes of parameters contribute to a separation between arousal and relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach may be incorporated into a systemic model of a computer responsive to affective communication from the user.

  10. FDG-PET and CT patterns of bone metastases and their relationship to previously administered anti-cancer therapy

    International Nuclear Information System (INIS)

    Israel, Ora; Bar-Shalom, Rachel; Keidar, Zohar; Goldberg, Anat; Nachtigal, Alicia; Militianu, Daniela; Fogelman, Ignac

    2006-01-01

    To assess 18 F-fluorodeoxyglucose (FDG) uptake in bone metastases in patients with and without previous treatment, and compare positive positron emission tomography (PET) with osteolytic or osteoblastic changes on computed tomography (CT). One hundred and thirty-one FDG-PET/CT studies were reviewed for bone metastases. A total of 294 lesions were found in 76 patients, 81 in untreated patients and 213 in previously treated patients. PET was assessed for abnormal FDG uptake localised by PET/CT to the skeleton. CT was evaluated for bone metastases and for blastic or lytic pattern. The relationship between the presence and pattern of bone metastases on PET and CT, and prior treatment was statistically analysed using the chi-square test. PET identified 174 (59%) metastases, while CT detected 280 (95%). FDG-avid metastases included 74/81 (91%) untreated and 100/213 (47%) treated lesions (p<0.001). On CT there were 76/81 (94%) untreated and 204/213 (96%) treated metastases (p NS). In untreated patients, 85% of lesions were seen on both PET and CT (26 blastic, 43 lytic). In treated patients, 53% of lesions were seen only on CT (95 blastic, 18 lytic). Of the osteoblastic metastases, 65/174 (37%) were PET positive and 98/120 (82%), PET negative (p<0.001). The results of the present study indicate that when imaging bone metastases, prior treatment can alter the relationship between PET and CT findings. Most untreated bone metastases are PET positive and lytic on CT, while in previously treated patients most lesions are PET negative and blastic on CT. PET and CT therefore appear to be complementary in the assessment of bone metastases. (orig.)

  11. Gender Digital Divide and Challenges in Undergraduate Computer Science Programs

    Science.gov (United States)

    Stoilescu, Dorian; McDougall, Douglas

    2011-01-01

    Previous research revealed a reduced number of female students registered in computer science studies. In addition, the female students feel isolated, have reduced confidence, and underperform. This article explores differences between female and male students in undergraduate computer science programs in a mid-size university in Ontario. Based on…

  12. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    Science.gov (United States)

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  13. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for ensemble quantum computers. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction, multiple-data' parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and Cleve-DiVincenzo algorithms can be reutilized.

  14. The impact of optimize solar radiation received on the levels and energy disposal of levels on architectural design result by using computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Davood; Farajzadeh Khosroshahi, Samaneh; Sadegh Falahat, Mohammad [Zanjan University (Iran, Islamic Republic of)], email: d_rezaei@znu.ac.ir, email: ronas_66@yahoo.com, email: Safalahat@yahoo.com

    2011-07-01

    In order to minimize the energy consumption of a building it is important to achieve optimum solar energy. The aim of this paper is to introduce the use of computer modeling in the early stages of design to optimize solar radiation received and energy disposal in an architectural design. Computer modeling was performed on 2 different projects located in Los Angeles, USA, using ECOTECT software. Changes were made to the designs following analysis of the modeling results and a subsequent analysis was carried out on the optimized designs. Results showed that the computer simulation allows the designer to set the analysis criteria and improve the energy performance of a building before it is constructed; moreover, it can be used for a wide range of optimization levels. This study pointed out that computer simulation should be performed in the design stage to optimize a building's energy performance.

  15. First results with twisted mass fermions towards the computation of parton distribution functions on the lattice

    International Nuclear Information System (INIS)

    Alexandrou, Constantia; Cyprus Institute, Nicosia; Deutsches Elektronen-Synchrotron; Cichy, Krzysztof; Poznan Univ.; Drach, Vincent; Garcia-Ramos, Elena; Humboldt-Universitaet, Berlin; Hadjiyiannakou, Kyriakos; Jansen, Karl; Steffens, Fernanda; Wiese, Christian

    2014-11-01

    We report on our exploratory study for the evaluation of the parton distribution functions from lattice QCD, based on a new method proposed in Ref. arXiv:1305.1539. Using the example of the nucleon, we compare two different methods to compute the matrix elements needed, and investigate the application of gauge link smearing. We also present first results from a large production ensemble and discuss the future challenges related to this method.

  16. Analysis of previous perceptual and motor experience in breaststroke kick learning

    Directory of Open Access Journals (Sweden)

    Ried Bettina

    2015-12-01

    One of the variables that influence motor learning is the learner’s previous experience, which may provide perceptual and motor elements to be transferred to a novel motor skill. For swimming skills, several motor experiences may prove effective. Purpose. The aim was to analyse the influence of previous experience in playing in water, swimming lessons, and music or dance lessons on learning the breaststroke kick. Methods. The study involved 39 Physical Education students possessing basic swimming skills, but not the breaststroke, who performed 400 acquisition trials followed by 50 retention and 50 transfer trials, during which the stroke index as well as rhythmic and spatial configuration indices were mapped, and who answered a yes/no questionnaire regarding previous experience. Data were analysed by ANOVA (p = 0.05) and the effect size (Cohen’s d ≥ 0.8 indicating a large effect). Results. The whole sample improved their stroke index and spatial configuration index, but not their rhythmic configuration index. Although differences between groups were not significant, two types of experience showed large practical effects on learning: only childhood experience of playing in water showed major practically relevant positive effects, while no experience in any of the three fields hampered the learning process. Conclusions. The results point towards a diverse impact of previous experience with rhythmic activities, swimming lessons, and especially playing in water during childhood, on learning the breaststroke kick.

  17. Computer-assisted comparison of analysis and test results in transportation experiments

    International Nuclear Information System (INIS)

    Knight, R.D.; Ammerman, D.J.; Koski, J.A.

    1998-01-01

    As a part of its ongoing research efforts, Sandia National Laboratories' Transportation Surety Center investigates the integrity of various containment methods for hazardous materials transport, subject to anomalous structural and thermal events such as free-fall impacts, collisions, and fires in both open and confined areas. Since it is not possible to conduct field experiments for every set of possible conditions under which an actual transportation accident might occur, accurate modeling methods must be developed which will yield reliable simulations of the effects of accident events under various scenarios. This requires computer software which is capable of assimilating and processing data from experiments performed as benchmarks, as well as data obtained from numerical models that simulate the experiment. Software tools which can present all of these results in a meaningful and useful way to the analyst are a critical aspect of this process. The purpose of this work is to provide software resources on a long term basis, and to ensure that the data visualization capabilities of the Center keep pace with advancing technology. This will provide leverage for its modeling and analysis abilities in a rapidly evolving hardware/software environment

  18. Symbolic computation and solitons of the nonlinear Schroedinger equation in inhomogeneous optical fiber media

    International Nuclear Information System (INIS)

    Li Biao; Chen Yong

    2007-01-01

    In this paper, the inhomogeneous nonlinear Schroedinger equation with loss/gain and frequency chirping is investigated. With the help of symbolic computation, three families of exact analytical solutions are presented by employing the extended projective Riccati equation method. From our results, many previously known results for the nonlinear Schroedinger equation obtained by some authors can be recovered by means of suitable selections of the arbitrary functions and arbitrary constants. Of optical and physical interest, soliton propagation and soliton interaction are discussed and simulated by computer, including snake-soliton propagation and snake-soliton interactions, boomerang-like soliton propagation and boomerang-like soliton interactions, and dispersion-managed (DM) bright (dark) soliton propagation and DM soliton interactions.

  19. Fault-tolerant quantum computing in the Pauli or Clifford frame with slow error diagnostics

    Directory of Open Access Journals (Sweden)

    Christopher Chamberland

    2018-01-01

    We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, either caused by measurement latencies or slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions, for instance it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest as it can increase error thresholds and could find applications in other areas of quantum computation.

  20. The determination of surface of powders by BET method using nitrogen and krypton with computer calculation of the results

    International Nuclear Information System (INIS)

    Dembinski, W.; Zlotowski, T.

    1973-01-01

    A computer program written in the FORTRAN language for calculating the final results of specific surface analysis based on the BET theory is described. Two gases, nitrogen and krypton, were used. A technical description of the measuring apparatus is presented, as well as the theoretical basis of the calculations, together with a statistical analysis of the results for uranium compound powders. (author)
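
    The calculation such a program automates can be sketched as follows; this is a hedged illustration with synthetic isotherm data and standard constants, not the original FORTRAN code.

```python
import numpy as np

# Fit the linearized BET equation
#   p/(v*(p0 - p)) = 1/(vm*c) + ((c - 1)/(vm*c)) * (p/p0)
# and convert the monolayer volume vm to a specific surface area.
# The isotherm points below are synthetic; sigma is the N2 molecular area.

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])  # p/p0
v_ads = np.array([2.3, 2.9, 3.3, 3.7, 4.1, 4.5])        # cm3 (STP) per gram

y = p_rel / (v_ads * (1.0 - p_rel))                      # BET transform
slope, intercept = np.polyfit(p_rel, y, 1)
vm = 1.0 / (slope + intercept)                           # monolayer volume
c = 1.0 + slope / intercept                              # BET constant

sigma = 16.2e-20                                         # m^2 per N2 molecule
n_a, v_molar = 6.022e23, 22414.0                         # 1/mol, cm3 (STP)/mol
surface_m2_per_g = vm * n_a * sigma / v_molar
print(f"vm = {vm:.2f} cm3/g, c = {c:.1f}, S = {surface_m2_per_g:.1f} m2/g")
```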

  1. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth, with...

  2. Computer ray tracing speeds.

    Science.gov (United States)

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to super computers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast or faster than mainframe computers in compute-bound situations.

  3. Embracing the Cloud: Six Ways to Look at the Shift to Cloud Computing

    Science.gov (United States)

    Ullman, David F.; Haggerty, Blake

    2010-01-01

    Cloud computing is the latest paradigm shift for the delivery of IT services. Where previous paradigms (centralized, decentralized, distributed) were based on fairly straightforward approaches to technology and its management, cloud computing is radical in comparison. The literature on cloud computing, however, suffers from many divergent…

  4. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as the Principal Component Analysis (PCA), suffer from their computationally demanding nature, making it advisable to implement them on high-performance computer architectures for applications under strict latency constraints. This work presents the implementation of the PCA algorithm onto two different high-performance devices, namely, an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms, and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
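
    For orientation, a minimal CPU baseline of the computation being accelerated is sketched below (NumPy with synthetic data; this is not the authors' GPU or manycore code, and the cube dimensions are made up).

```python
import numpy as np

# PCA dimensionality reduction of a hyperspectral cube: flatten to
# (pixels x bands), centre it, and project onto the leading principal
# axes obtained from an SVD of the centred data.

rng = np.random.default_rng(0)
cube = rng.standard_normal((100, 120, 64))        # rows x cols x spectral bands

x = cube.reshape(-1, cube.shape[-1])              # (pixels, bands)
x = x - x.mean(axis=0)                            # remove the mean spectrum
_, s, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = principal axes

k = 8                                             # retained components
scores = x @ vt[:k].T                             # reduced representation
reduced_cube = scores.reshape(cube.shape[0], cube.shape[1], k)

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(reduced_cube.shape, f"variance retained: {explained:.1%}")
```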

  5. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise the words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task. It is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language that have been available for a longer time. These resources were exploited to answer word similarity tests, which have also recently become available for Portuguese. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity; in general, however, better results are obtained when knowledge from different sources is combined.
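
    The distributional side of such tests reduces to cosine similarity between word vectors; a toy sketch follows (the three-dimensional vectors are invented stand-ins for a real Portuguese embedding model).

```python
import numpy as np

# Word relatedness as the cosine between embedding vectors.
# The tiny vectors below are fabricated for illustration only.

emb = {
    "carro":     np.array([0.9, 0.1, 0.30]),  # "car"
    "automóvel": np.array([0.8, 0.2, 0.35]),  # "automobile"
    "banana":    np.array([0.1, 0.9, 0.20]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["carro"], emb["automóvel"]))  # high: similar words
print(cosine(emb["carro"], emb["banana"]))     # low: unrelated words
```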

  6. Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy.

    Science.gov (United States)

    Morimoto, Satoshi; Remijn, Gerard B; Nakajima, Yoshitaka

    2016-01-01

    Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
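
    A minimal stand-in for such stochastic modeling is a bigram model over chord symbols; the sketch below (our illustration with an invented corpus, not the authors' model) scores the expectancy of a final chord given its predecessor.

```python
from collections import defaultdict

# Estimate bigram transition probabilities between chord symbols from a
# small corpus, then read off how expected a chord is given the previous one.

corpus = [["C", "F", "G", "C"], ["C", "G", "C"], ["C", "F", "C"],
          ["A", "D", "G", "C"], ["C", "F", "G", "A"]]

counts = defaultdict(lambda: defaultdict(int))
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

def expectancy(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(expectancy("G", "C"))  # high: G -> C is frequent in the corpus
print(expectancy("G", "F"))  # zero: never observed
```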

  7. Previous climatic alterations are caused by the sun

    International Nuclear Information System (INIS)

    Groenaas, Sigbjoern

    2003-01-01

    The article surveys the scientific results of previous research into the sun's contribution to climatic alterations. The author concludes that there is evidence of eight cold periods after the last ice age and that these alterations were largely due to climate effects from the sun. However, such effects account for only a fraction of the registered global warming. It is assumed that human activities are contributing the rest of the greenhouse effect.

  8. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  9. [Fatal amnioinfusion with previous choriocarcinoma in a parturient woman].

    Science.gov (United States)

    Hrgović, Z; Bukovic, D; Mrcela, M; Hrgović, I; Siebzehnrübl, E; Karelovic, D

    2004-04-01

    The case is described of a 36-year-old tertipara who had developed choriocarcinoma in a previous pregnancy. During her first term labour the patient suffered cardiac arrest, so resuscitation and cesarean section were performed. A male newborn was delivered in good condition, but despite intensive therapy and resuscitation the parturient died with the picture of disseminated intravascular coagulopathy (DIC). At autopsy and on histology there was no sign of malignant disease, so it was not possible to connect the previous choriocarcinoma with the amniotic fluid embolism. Possibly the site of the choriocarcinoma was a "locus minoris resistentiae" that later resulted in a failure of placentation, but this was hard to prove. At autopsy we found pulmonary embolism with microthrombosis of the terminal circulation and punctiform bleeding in the mucosa, consistent with DIC.

  10. 3D computation of the shape of etched tracks in CR-39 for oblique particle incidence and comparison with experimental results

    International Nuclear Information System (INIS)

    Doerschel, B.; Hermsdorf, D.; Reichelt, U.; Starke, S.; Wang, Y.

    2003-01-01

    Computation of the shape of etch pits requires knowledge of the varying track etch rate along the particle trajectory. Experiments with alpha particles and 7 Li ions entering CR-39 detectors at different angles showed that this function is not affected by the inclination of the particle trajectory with respect to the normal to the detector surface. Track formation for oblique particle incidence can, therefore, be simulated using the track etch rates determined for perpendicular incidence. 3D computation of the track shape was performed by applying a model recently described in the literature. A special program was written to compute the x,y,z coordinates of points on the etch pit walls. In addition, the etch pit profiles in sagittal sections as well as the contours of the etch pit openings on the detector surface were determined experimentally. Computed and experimental results were in good agreement, confirming the applicability of the 3D computational model in combination with the functions for the depth-dependent track etch rates determined experimentally.

  11. Concatenated codes for fault tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
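
    The hierarchical reduction can be illustrated numerically: if one level of encoding maps a gate error e to roughly c*e^2, then k levels of concatenation suppress it doubly exponentially once e is below the threshold 1/c. The constant below is illustrative, not taken from the paper.

```python
# Level-by-level error suppression promised by code concatenation:
# each level replaces the error rate e by approximately c * e**2,
# so k levels give (c*e)**(2**k) / c when e < 1/c.

def logical_error(e, c=1.0e4, levels=0):
    for _ in range(levels):
        e = c * e * e
    return e

for k in range(4):
    print(k, logical_error(1.0e-5, levels=k))
# 0 1e-05, 1 1e-06, 2 1e-08, 3 1e-12  (doubly exponential suppression)
```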

  12. Evaluation of results for computed tomography in head region

    International Nuclear Information System (INIS)

    Himeji, Toshiharu

    1983-01-01

    In the 2 years and 5 months from April 1980 to May 1982, computed tomography (CT) examinations of the head region were performed with a TCT-60A scanner (Toshiba); the results are evaluated as follows. 1) CT was performed in 1228 patients, with a total of 1513 scans. The scans comprised plain CT (86.1%), contrast-enhanced (CE) CT (7.3%) and both methods (6.6%); patients underwent from one scan (85.3%), two scans (9.6%) or three scans (3.3%) up to seven scans. The cases were 720 males (58.6%) and 508 females (41.4%); most patients were between 40 and over 70 years old, but young patients (under 10 y.o.) accounted for 15.3%, reflecting the advantage of CT as an easy and safe procedure free from bodily injury. 2) Most patients were referred from the department of internal medicine, followed by the pediatric clinic, surgery and the orthopedic department, though CT cases came from all clinical departments of our hospital (CT was very useful for neurological examination). 3) The CT diagnoses comprised cerebral infarction in 128 cases (10.4%), cerebral hemorrhage in 19 (1.5%) and brain tumor in 24 (2.3%), with small numbers of other craniocerebral diseases. 4) Patients visiting internal medicine often complained of cerebrovascular symptoms, while in the pediatric clinic the chief complaints were often suspected mental retardation and neurological signs; in the surgery department, suspected metastatic brain tumors from other malignant cancers; and in orthopedic surgery, often skull injury or traffic accidents. (J.P.N.)

  13. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system having a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support of the operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, a structure processor (SP), which improved performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in the MISD (Multiple Instruction, Single Data) system, and showed the structure and features of the structure processor and the general principles of solving discrete optimization problems on graphs. This paper examines two search algorithms for the minimum spanning tree, namely Kruskal's and Prim's algorithms. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of the MISD system's performance in coprocessor mode with mainframes.
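
    As a reference point for the algorithms under study, a conventional CPU implementation of Kruskal's algorithm with union-find is sketched below (a baseline illustration, not the SP implementation).

```python
# Kruskal's minimum spanning tree algorithm with a union-find structure.

def kruskal(n, edges):
    """n vertices labelled 0..n-1; edges is a list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):                       # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # scan edges in nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # u and v in different components
            parent[ru] = rv            # union the two components
            mst.append((u, v, w))
            total += w
    return total, mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # (6, [(0, 1, 1), (1, 3, 2), (1, 2, 3)])
```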

  14. Development and application of a new deterministic method for calculating computer model result uncertainties

    International Nuclear Information System (INIS)

    Maerker, R.E.; Worley, B.A.

    1989-01-01

    Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present procedure. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
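
    The first-derivative idea underlying such deterministic approaches can be sketched in a few lines: approximate the sensitivities by finite differences and combine them with the parameter uncertainties. The response function below is a toy stand-in for a code such as BRINETEMP, not the actual model.

```python
import numpy as np

# First-order deterministic uncertainty propagation:
#   sigma_y^2 = sum_i (dy/dx_i)^2 * sigma_i^2
# with the partial derivatives estimated by central finite differences.

def response(x):                     # toy model: y = x0**2 * exp(-x1) + x2
    return x[0] ** 2 * np.exp(-x[1]) + x[2]

x0 = np.array([2.0, 0.5, 1.0])       # nominal parameter values
sig = np.array([0.1, 0.05, 0.2])     # parameter standard deviations

grad = np.empty_like(x0)
for i in range(len(x0)):             # central finite differences
    h = 1e-6 * max(1.0, abs(x0[i]))
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    grad[i] = (response(xp) - response(xm)) / (2 * h)

sigma_y = float(np.sqrt(np.sum((grad * sig) ** 2)))
print(f"y = {response(x0):.4f} +/- {sigma_y:.4f}")
```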

  15. Preliminary results of very fast computation of Moment Magnitude and focal mechanism in the context of tsunami warning

    Science.gov (United States)

    Schindelé, François; Roch, Julien; Rivera, Luis

    2015-04-01

    Various methodologies have recently been developed to compute the moment magnitude and the focal mechanism, thanks to real-time access to numerous broad-band seismic data. Several methods were implemented at the CENALT, in particular the W-Phase method developed by H. Kanamori and L. Rivera. For earthquakes of magnitudes in the range 6.5-9.0, this method provides accurate results in less than 40 minutes. In the context of tsunami warning in the Mediterranean, a small basin impacted in less than one hour and with small sources, some of which have high tsunami potential (Boumerdes 2003), a comprehensive tsunami warning system for that region should include very fast computation of the seismic parameters. The values of Mw, the focal depth and the type of fault (reverse, normal, strike-slip) are the most relevant parameters for tsunami warning. Preliminary results will be presented using data from the North-Eastern Atlantic and Mediterranean region for the recent period 2010-2014. This work is funded by the project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe), FP7-ENV2013 6.4-3, Grant 603839.

  16. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
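
    The core arithmetic is easy to reproduce: misery = inflation + unemployment, smoothed with an 11-year moving average and correlated with the literary series. The sketch below uses synthetic data, not the paper's corpus.

```python
import numpy as np

# Economic misery index, its 11-year moving average, and the correlation
# with a (here fabricated) literary misery series.

rng = np.random.default_rng(1)
years = np.arange(1930, 2001)
inflation = rng.normal(3.0, 2.0, years.size)
unemployment = rng.normal(6.0, 1.5, years.size)
misery = inflation + unemployment

window = 11                                  # best-fitting span in the paper
kernel = np.ones(window) / window
misery_ma = np.convolve(misery, kernel, mode="valid")   # 11-year average

literary = misery_ma + rng.normal(0.0, 0.5, misery_ma.size)  # fake proxy
r = np.corrcoef(misery_ma, literary)[0, 1]
print(f"correlation over {misery_ma.size} windows: r = {r:.2f}")
```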

  17. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th

  18. 49 CFR 173.23 - Previously authorized packaging.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Previously authorized packaging. 173.23 Section... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.23 Previously authorized packaging. (a) When the regulations specify a packaging with a specification marking...

  19. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depends on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis) in which blood through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  20. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
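
    The flavour of surface-based moment computation can be conveyed by the classic signed-tetrahedron formulas for the zeroth and first moments; the sketch below is a simpler relative of the paper's algorithm, shown for illustration only.

```python
import numpy as np

# Exact volume (zeroth moment) and centroid (normalized first moments) of a
# closed, consistently outward-oriented triangle mesh, via signed tetrahedra
# spanned by the origin and each surface triangle.

def moments_0_1(vertices, triangles):
    v = np.asarray(vertices, dtype=float)
    m0 = 0.0                       # volume
    m1 = np.zeros(3)               # first moments (integrals of x, y, z)
    for i, j, k in triangles:
        a, b, c = v[i], v[j], v[k]
        d = np.linalg.det(np.stack([a, b, c]))   # 6 x signed tet volume
        m0 += d / 6.0
        m1 += d * (a + b + c) / 24.0
    return m0, m1 / m0             # volume and centroid

# Unit cube with outward-oriented triangles (12 faces).
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tris = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
        (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
print(moments_0_1(verts, tris))    # volume 1.0, centroid (0.5, 0.5, 0.5)
```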

  1. Concomitant Use of Transcranial Direct Current Stimulation and Computer-Assisted Training for the Rehabilitation of Attention in Traumatic Brain Injured Patients: Behavioral and Neuroimaging Results.

    Science.gov (United States)

    Sacco, Katiuscia; Galetto, Valentina; Dimitri, Danilo; Geda, Elisabetta; Perotti, Francesca; Zettin, Marina; Geminiani, Giuliano C

    2016-01-01

    Divided attention (DA), the ability to distribute cognitive resources among two or more simultaneous tasks, may be severely compromised after traumatic brain injury (TBI), resulting in problems with numerous activities involved with daily living. So far, no research has investigated whether the use of non-invasive brain stimulation associated with neuropsychological rehabilitation might contribute to the recovery of such cognitive function. The main purpose of this study was to assess the effectiveness of 10 transcranial direct current stimulation (tDCS) sessions combined with computer-assisted training; it also intended to explore the neural modifications induced by the treatment. Thirty-two patients with severe TBI participated in the study: 16 were part of the experimental group, and 16 part of the control group. The treatment included 20' of tDCS, administered twice a day for 5 days. The electrodes were placed on the dorso-lateral prefrontal cortex. Their location varied across patients and it depended on each participant's specific area of damage. The control group received sham tDCS. After each tDCS session, the patient received computer-assisted cognitive training on DA for 40'. The results showed that the experimental group significantly improved in DA performance between pre- and post-treatment, showing faster reaction times (RTs), and fewer omissions. No improvement was detected between the baseline assessment (i.e., 1 month before treatment) and the pre-training assessment, or within the control group. Functional magnetic resonance imaging (fMRI) data, obtained on the experimental group during a DA task, showed post-treatment lower cerebral activations in the right superior temporal gyrus (BA 42), right and left middle frontal gyrus (BA 6), right postcentral gyrus (BA 3) and left inferior frontal gyrus (BA 9). We interpreted such neural changes as normalization of previously abnormal hyperactivations.

  2. Concomitant use of transcranial Direct Current Stimulation and computer-assisted training for the rehabilitation of attention in traumatic brain injured patients: behavioral and neuroimaging results

    Directory of Open Access Journals (Sweden)

    Katiuscia eSacco

    2016-03-01

    Divided attention, the ability to distribute cognitive resources among two or more simultaneous tasks, may be severely compromised after traumatic brain injury (TBI), resulting in problems with numerous activities involved with daily living. So far, no research has investigated whether the use of non-invasive brain stimulation associated with neuropsychological rehabilitation might contribute to the recovery of such cognitive function. The main purpose of this study was to assess the effectiveness of 10 tDCS sessions combined with computer-assisted training; it also intended to explore the neural modifications induced by the treatment. Thirty-two patients with severe TBI participated in the study: sixteen were part of the experimental group, and sixteen part of the control group. The treatment included 20’ of tDCS, administered twice a day for 5 days. The electrodes were placed on the dorso-lateral prefrontal cortex. Their location varied across patients and it depended on each participant’s specific area of damage. The control group received sham tDCS. After each tDCS session, the patient received computer-assisted cognitive training on divided attention for 40’. The results showed that the experimental group significantly improved in divided attention performance between pre- and post-treatment, showing faster reaction times and fewer omissions. No improvement was detected between the baseline assessment (i.e., one month before treatment) and the pre-training assessment, or within the control group. Functional magnetic resonance imaging data, obtained on the experimental group during a divided attention task, showed post-treatment lower cerebral activations in the right superior temporal gyrus (BA 42), right and left middle frontal gyrus (BA 6), right postcentral gyrus (BA 3) and left inferior frontal gyrus (BA 9). We interpreted such neural changes as normalization of previously abnormal hyperactivations.

  3. The use of computers in education worldwide : results from a comparative survey in 18 countries

    NARCIS (Netherlands)

    Pelgrum, W.J.; Plomp, T.

    1991-01-01

    In 1989, the International Association for the Evaluation of Educational Achievement (IEA) Computers in Education study collected data on computer use in elementary, and lower- and upper-secondary education in 22 countries. Although all data sets from the participating countries had not been

  4. Estimate of fuel burnup spatial a multipurpose reactor in computer simulation

    International Nuclear Information System (INIS)

    Santos, Nadia Rodrigues dos; Lima, Zelmo Rodrigues de; Moreira, Maria de Lourdes

    2015-01-01

    In previous research, which aimed to estimate, through computer simulation, the spatial fuel burnup for the material test research reactor benchmark of the International Atomic Energy Agency (MTR/IAEA), it was found that the use of a code in the FORTRAN language, based on neutron diffusion theory, together with WIMSD-5B, which performs the cell calculation, proved valid for estimating the spatial burnup of other nuclear research reactors. That said, this paper aims to present the results of a computer simulation to estimate the spatial fuel burnup of a typical multipurpose reactor with plate-type dispersion fuel. The results were considered satisfactory, being in line with those presented in the literature. For future work, simulations with other core configurations are suggested. Comparisons of the WIMSD-5B results with programs often employed in burnup calculations are also suggested, as is testing different methods of interpolating the values obtained by the FORTRAN code. Another proposal is to estimate the fuel burnup taking into account the thermal-hydraulic parameters and the appearance of xenon. (author)

  5. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation), the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
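
    For reference, the quantity being accelerated is the standard 2-D DCT-II; a textbook separable matrix formulation (ours, for orientation only, not the ISR method) is sketched below.

```python
import numpy as np

# Orthonormal 2-D DCT-II of an image block via two 1-D transforms:
# coefficients = C @ image @ R.T, where C and R are DCT basis matrices.

def dct_matrix(n):
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)          # orthonormal scaling of the DC row
    return c

def dct2(image):
    c = dct_matrix(image.shape[0])      # transform along rows
    r = dct_matrix(image.shape[1])      # transform along columns
    return c @ image @ r.T

img = np.outer(np.hanning(8), np.hanning(8))     # any 8x8 gray-scale block
coeffs = dct2(img)
print(np.allclose(dct_matrix(8) @ dct_matrix(8).T, np.eye(8)))  # True
print(coeffs[0, 0])                               # DC term
```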

  6. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e. g. metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  7. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process....... In the first step, the vorticity at the new time level is computed using the velocity at the previous time level. In the second step, the velocity at the new time level is computed using the new vorticity. We discuss here the second part which is by far the most time‐consuming. The numerical problem...

  8. An examination of intrinsic errors in electronic structure methods using the Environmental Molecular Sciences Laboratory computational results database and the Gaussian-2 set

    International Nuclear Information System (INIS)

    Feller, D.; Peterson, K.A.

    1998-01-01

    The Gaussian-2 (G2) collection of atoms and molecules has been studied with Hartree endash Fock and correlated levels of theory, ranging from second-order perturbation theory to coupled cluster theory with noniterative inclusion of triple excitations. By exploiting the systematic convergence properties of the correlation consistent family of basis sets, complete basis set limits were estimated for a large number of the G2 energetic properties. Deviations with respect to experimentally derived energy differences corresponding to rigid molecules were obtained for 15 basis set/method combinations, as well as the estimated complete basis set limit. The latter values are necessary for establishing the intrinsic error for each method. In order to perform this analysis, the information generated in the present study was combined with the results of many previous benchmark studies in an electronic database, where it is available for use by other software tools. Such tools can assist users of electronic structure codes in making appropriate basis set and method choices that will increase the likelihood of achieving their accuracy goals without wasteful expenditures of computer resources. copyright 1998 American Institute of Physics

  9. Computational Enhancements for Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Mukhadiyev, Nurzhan

    2017-05-01

    Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high-order compressible reacting flow solver using detailed chemical kinetics mechanisms, capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPU and GPU. The second part of this work concerned numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. Previous DNS of premixed turbulent flames injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which made it necessary to inject stronger fluctuations than actually needed. A solution to this issue is to maintain the turbulence strength on the way to the flame using turbulence forcing. Therefore, linear turbulence forcing was implemented into KARFS to enhance turbulence intensity. The linear turbulence forcing developed previously by other groups was corrected with a net added momentum removal mechanism to prevent mean

  10. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  11. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents’ execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (the NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  12. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied... In contrast to previous works that relied on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works

  13. A new communication scheme for the neutron diffusion nodal method in a distributed computing environment

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.

    1994-01-01

    A modified scheme is developed for solving the two-dimensional nodal diffusion equations on distributed memory computers. The scheme is aimed at minimizing the volume of communication among processors while maximizing the tasks in parallel. Results show a significant improvement in parallel efficiency on the Intel iPSC/860 hypercube compared to previous algorithms
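
    The communication pattern at issue can be sketched as a one-layer halo exchange between neighbouring processors; the hedged mpi4py illustration below shows the general idea, not the paper's modified scheme.

```python
from mpi4py import MPI
import numpy as np

# Each processor owns a slab of the mesh and exchanges one layer of
# boundary values with its neighbours per iteration, so only interface
# data, not the whole mesh, crosses the network.
# Run under MPI, e.g.:  mpiexec -n 4 python halo.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10, float(rank))      # this rank's slab of nodal unknowns
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

ghost_l = np.empty(1)
ghost_r = np.empty(1)
comm.Sendrecv(local[:1], dest=left, recvbuf=ghost_r, source=right)
comm.Sendrecv(local[-1:], dest=right, recvbuf=ghost_l, source=left)

print(rank, ghost_l[0] if left != MPI.PROC_NULL else None,
      ghost_r[0] if right != MPI.PROC_NULL else None)
```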

  14. Overview of JET post-mortem results following the 2007-9 operational period, and comparisons with previous campaigns

    International Nuclear Information System (INIS)

    Coad, J P; Gruenhagen, S; Widdowson, A; Hole, D E; Hakola, A; Koivuranta, S; Likonen, J; Rubel, M

    2011-01-01

    In 2010, all the plasma-facing components were removed from JET so that the carbon-based surfaces could be replaced with beryllium (Be) or tungsten as part of the ITER-like wall (ILW) project. This gives unprecedented opportunities for post-mortem analyses of these plasma-facing surfaces; this paper reviews the data obtained so far and relates the information to studies of tiles removed during previous JET shutdowns. The general pattern of erosion/deposition at the JET divertor has been maintained, with deposition of impurities in the scrape-off layer (SOL) at the inner divertor and preferential removal of carbon and transport into the corner. However, the remaining films in the SOL contain very high Be/C ratios at the surface. The first measurements of erosion using a tile profiler have been completed, with up to 200 microns erosion being recorded at points on the inner wall guard limiters.

  15. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop

  16. Computational Models for Calcium-Mediated Astrocyte Functions.

    Science.gov (United States)

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus

  17. Outcomes With Edoxaban Versus Warfarin in Patients With Previous Cerebrovascular Events

    DEFF Research Database (Denmark)

    Rost, Natalia S; Giugliano, Robert P; Ruff, Christian T

    2016-01-01

    BACKGROUND AND PURPOSE: Patients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA. METHODS: ENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio 2.0-3.0) or one of two edoxaban regimens. Because only the higher-dose edoxaban regimen (HDER) is approved, we focused on the comparison of HDER versus warfarin. RESULTS: Of 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) scores >3 and 36% were ≥75 years. Compared with the 15 132 patients without previous IS/TIA…

  18. Study of functional-performance deficits in athletes with previous ankle sprains

    Directory of Open Access Journals (Sweden)

    hamid Babaee

    2008-04-01

    Full Text Available Abstract Background: Despite the importance of functional-performance deficits in athletes with a history of ankle sprain, few studies have been carried out in this area. The aim of this research was to study the relationship between previous ankle sprains and functional-performance deficits in athletes. Materials and methods: The subjects were 40 professional athletes selected through random sampling among volunteer participants from soccer, basketball, volleyball and handball teams of Lorestan province. The subjects were divided into 2 groups: an injured group (athletes with previous ankle sprains) and a healthy group (athletes without previous ankle sprains). In this descriptive study we used functional-performance tests (the figure-8 hop test and the side hop test) to determine ankle deficits and limitations: the figure-8 hop test involved hopping around a figure-8 course 5 meters in length, and the side hop test involved 10 side-hop repetitions over a 30-centimeter course. Times were recorded with a stopwatch. Results: After data gathering and assessment of the distributions, Pearson correlation was used to assess relationships and the independent t-test to assess differences between variables. The results showed a significant relationship between previous ankle sprains and functional-performance deficits in the athletes. Conclusion: Athletes with previous ankle sprains showed greater functional-performance deficits than healthy athletes on the mentioned tests. The functional-performance tests (figure-8 hop test and side hop test) are sensitive and suitable for detecting functional-performance deficits in athletes, and can therefore be used for prevention, assessment and rehabilitation of ankle sprains without spending too much money and time.

  19. Calculation of normalised organ and effective doses to adult reference computational phantoms from contemporary computed tomography scanners

    International Nuclear Information System (INIS)

    Jansen, Jan T.M.; Shrimpton, Paul C.

    2010-01-01

    The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)
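    
    The effective dose reported in studies like this one is a tissue-weighted sum over organ doses, E = Σ_T w_T H_T. As a minimal illustration (not the MCNPX calculation itself), the sketch below combines organ doses using the ICRP 103 tissue weighting factors; the organ dose values are hypothetical placeholders.
    
    ```python
    # Minimal sketch: combining normalised organ doses into an effective dose
    # using ICRP 103 tissue weighting factors. The organ dose inputs below are
    # hypothetical placeholders, not results from the study above.
    ICRP103_WEIGHTS = {
        "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
        "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
        "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
        "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
    }

    def effective_dose(organ_doses: dict) -> float:
        """Effective dose E = sum_T w_T * H_T over the tissues provided."""
        return sum(ICRP103_WEIGHTS[t] * h for t, h in organ_doses.items())

    # Hypothetical normalised organ doses (mGy per 100 mAs) for a chest protocol:
    print(effective_dose({"lung": 12.0, "breast": 10.5, "stomach": 4.2}))
    ```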

  20. Characterization of scalar mixing in dense gaseous jets using X-ray computed tomography

    Science.gov (United States)

    Dunnmon, Jared; Sobhani, Sadaf; Kim, Tae Wook; Kovscek, Anthony; Ihme, Matthias

    2015-10-01

    An experimental technique based on X-ray computed tomography (XCT) is used to characterize scalar mixing of a krypton jet with air at turbulent conditions. The high radiodensity of the krypton gas enables non-intrusive volumetric measurements of gas density and mixture composition based on spatial variations in X-ray attenuation. Comparisons of these measurements to both computational results from large-eddy simulations and data from previous experiments are presented, and the viability of this diagnostic technique is assessed. Important aspects of X-ray attenuation theory, XCT practice, and relevant error analysis are considered in data processing, and their impacts on the future development of this technique are discussed.
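    
    As a rough illustration of how such attenuation measurements translate into composition, the sketch below inverts a linear mixing rule for the local attenuation coefficient; the coefficients for air and krypton are hypothetical placeholders, not calibrated values from the study.
    
    ```python
    import numpy as np

    # Minimal sketch of recovering a local krypton mole fraction from X-ray
    # attenuation, assuming the mixture's attenuation coefficient varies
    # linearly between pure air and pure krypton (illustrative constants).
    MU_AIR = 0.02  # hypothetical linear attenuation coefficient of air [1/cm]
    MU_KR = 0.55   # hypothetical linear attenuation coefficient of krypton [1/cm]

    def krypton_fraction(mu_voxel: np.ndarray) -> np.ndarray:
        """Invert the mixing rule mu = x*MU_KR + (1-x)*MU_AIR for x."""
        x = (mu_voxel - MU_AIR) / (MU_KR - MU_AIR)
        return np.clip(x, 0.0, 1.0)  # mole fraction must stay in [0, 1]

    mu_field = np.array([0.02, 0.20, 0.55])  # reconstructed attenuation values
    print(krypton_fraction(mu_field))        # -> [0.0, ~0.34, 1.0]
    ```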

  1. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  2. The Effect of Animation in Multimedia Computer-Based Learning and Learning Style to the Learning Results

    Directory of Open Access Journals (Sweden)

    Muhammad RUSLI

    2017-10-01

    Full Text Available The effectiveness of learning depends on four main elements: content, desired learning outcomes, instructional method and the delivery media. The integration of these four elements can be manifested in a learning module, which is called multimedia learning or learning with multimedia. In learning with computer-based multimedia, two main things must be considered for the learning process to run effectively: how the content is presented, and the learner's preferred way of receiving and processing the information into meaningful knowledge. The first concerns how the content is visualized and how people learn; the second concerns the learning style of the learner. This research investigates the effect of visualization type (static vs. animated) in multimedia computer-based learning, and of learning style (visual vs. verbal), on students' capability in applying the concepts, procedures and principles of Java programming. Visualization type acts as the independent variable, and the students' learning style acts as a moderator variable. The instructional strategies followed Merrill's Component Display Theory, and the multimedia presentation format followed Mayer and Moreno's Seven Principles of Multimedia Learning. Learning with the multimedia computer-based material took place in the classroom. The subjects were students of STMIK-STIKOM Bali in the odd semester of 2016-2017 who took the Java programming course. The experimental design used a 2 x 2 multivariate analysis of variance (MANOVA), with a sample of 138 students in 4 classes. Based on the results of the analysis, it can be concluded that animation in interactive multimedia learning had a positive effect on students' learning outcomes, particularly in applying the concepts, procedures and principles of Java programming.

  3. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
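    
    A minimal sketch of the correlation analysis described above, with randomly generated placeholder series standing in for the study's data:
    
    ```python
    import numpy as np

    # Minimal sketch: build an economic misery index (inflation + unemployment),
    # smooth it with an 11-year trailing moving average, and correlate it with
    # a literary misery series. All series here are random placeholders.
    rng = np.random.default_rng(0)
    years = np.arange(1930, 2001)
    inflation = rng.normal(3.0, 2.0, years.size)
    unemployment = rng.normal(6.0, 2.0, years.size)
    literary_misery = rng.normal(0.0, 1.0, years.size)

    misery = inflation + unemployment  # economic misery index per year

    def trailing_mean(x: np.ndarray, window: int = 11) -> np.ndarray:
        """Moving average over the previous `window` years (inclusive)."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="valid")

    smoothed = trailing_mean(misery)  # aligned to years[10:]
    r = np.corrcoef(smoothed, literary_misery[10:])[0, 1]
    print(f"Pearson r between smoothed misery and literary misery: {r:.3f}")
    ```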

  4. Improvements on the seismic catalog previous to the 2011 El Hierro eruption.

    Science.gov (United States)

    Domínguez Cerdeña, Itahiza; del Fresno, Carmen

    2017-04-01

    Precursors to the submarine eruption of El Hierro (Canary Islands) in 2011 included 10,000 low-magnitude earthquakes and 5 cm of crustal deformation in the 81 days before the eruption onset on 10 October. Seismicity revealed a 20 km horizontal migration from the north to the south of the island, at depths ranging from 10 to 17 km, with deeper events occurring further south. The earthquakes in the seismic catalog were picked manually by the IGN almost in real time, but there has been no subsequent revision to check for non-located events yet, and the completeness magnitude of the catalog varies strongly over the swarm owing to the variable number of events per day. In this work we used several techniques to improve the quality of the seismic catalog. First we applied automatic detection algorithms, including the STA/LTA method. We then used a semiautomatic procedure to correlate the new P and S detections with known phases from the original catalog. The newly detected earthquakes were located with the Hypoellipse algorithm. The resulting catalog includes 15,000 new events, mainly concentrated in the last weeks of the swarm, and ensures a completeness magnitude of 1.2 throughout the series. As the seismicity from the original catalog had already been relocated using the hypoDD algorithm, we improved the locations of the new events using a master-cluster relocation: earthquakes are relocated towards a cluster of well-located events instead of a single master event, and in our case this cluster corresponds to the relocated earthquakes of the original catalog. Finally, we obtained a new equation for local magnitude estimation that includes corrections for each seismic station in order to avoid local effects. The resulting magnitudes fit better with the moment magnitudes obtained for the strong earthquakes of this series in previous studies.
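    
    The STA/LTA method mentioned above triggers when the ratio of a short-term average to a long-term average of the signal energy exceeds a threshold. A minimal sketch, with illustrative window lengths and threshold rather than the values used by the authors:
    
    ```python
    import numpy as np

    # Minimal STA/LTA event detector of the kind used to find new events.
    def sta_lta_trigger(trace, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
        """Return sample indices where the STA/LTA energy ratio exceeds threshold."""
        sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
        energy = trace.astype(float) ** 2
        csum = np.concatenate(([0.0], np.cumsum(energy)))
        sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n  # short-term average
        lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n  # long-term average
        # Align windows: the ratio is defined once a full LTA of history exists.
        ratio = sta[lta_n - sta_n:] / np.maximum(lta, 1e-12)
        return np.flatnonzero(ratio > threshold) + lta_n

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    trace = np.random.default_rng(1).normal(0, 1, t.size)
    trace[3000:3200] += 8 * np.sin(2 * np.pi * 5 * t[3000:3200])  # synthetic event
    print(sta_lta_trigger(trace, fs)[:5])
    ```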

  5. A computed room temperature line list for phosphine

    Science.gov (United States)

    Sousa-Silva, Clara; Yurchenko, Sergei N.; Tennyson, Jonathan

    2013-06-01

    An accurate and comprehensive room-temperature rotation-vibration transition line list for phosphine (31PH3) is computed using a newly refined potential energy surface and a previously constructed ab initio electric dipole moment surface. Energy levels, Einstein A coefficients and transition intensities are computed using these surfaces and a variational approach to the nuclear motion problem as implemented in the program TROVE. A ro-vibrational spectrum is computed, covering the wavenumber range 0-8000 cm-1. The resulting line list, which is appropriate for temperatures up to 300 K, consists of a total of 137 million transitions between 5.6 million energy levels. Several of the band centres are shifted to better match experimental transition frequencies. The line list is compared to the most recent HITRAN database and other laboratory sources. Transition wavelengths and intensities are generally found to be in good agreement with the existing experimental data, with particularly close agreement for the rotational spectrum. The comparison between the computed and existing experimental data is analysed, and suggestions for future improvements and assignments to the HITRAN database are made.
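    
    The step from Einstein A coefficients to absorption intensities follows a standard spectroscopic conversion; the sketch below shows the usual HITRAN-style formula, with placeholder values rather than actual PH3 line data:
    
    ```python
    import math

    # Minimal sketch of converting an Einstein A coefficient into a line
    # intensity at temperature T (constants in cgs-based spectroscopic units).
    C2 = 1.43877687        # second radiation constant hc/k [cm K]
    C_CGS = 2.99792458e10  # speed of light [cm/s]

    def line_intensity(A, g_up, nu, E_low, Q, T=296.0):
        """Intensity S [cm^-1/(molecule cm^-2)] of a line at wavenumber nu [cm^-1],
        lower-state energy E_low [cm^-1], upper-state degeneracy g_up,
        Einstein coefficient A [1/s], and partition function Q(T)."""
        boltz = math.exp(-C2 * E_low / T)    # lower-state population factor
        stim = 1.0 - math.exp(-C2 * nu / T)  # stimulated-emission correction
        return A * g_up * boltz * stim / (8.0 * math.pi * C_CGS * nu**2 * Q)

    # Hypothetical phosphine-like line: these numbers are placeholders only.
    print(line_intensity(A=0.5, g_up=16, nu=1000.0, E_low=50.0, Q=3200.0))
    ```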

  6. Role of computed tomography in the integral diagnostic process of paranasal cavities tumors

    International Nuclear Information System (INIS)

    Lazarova, I.

    1990-01-01

    Results are reported of computed tomographic examinations of 129 patients, from 3 to 74 years of age, suspected on clinical grounds of having, or with histologically verified, tumors of the paranasal cavities. Axial and/or coronal scanning (depending on the case) was performed on a Tomoscan-310 computed tomograph according to previously selected programs. Computed tomography was evaluated with regard to its ability to diagnose tumors of the paranasal sinuses and its role in furnishing additional information in these diseases. The clear-cut differentiation on the computed tomograms of both the bone structures and the soft tissues - muscles, vessels, connective tissue and fatty tissue spaces - is emphasized. The clinical significance of this special X-ray examination in the preoperative period, by demonstrating the different directions in which the tumors spread, and its value for adequate planning of the radiotherapy field and post-therapeutic follow-up of the pathologic process, are pointed out. 5 figs., 5 refs

  7. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are powerful influences in the lives of most children. Previous evidence has supported the notion that media exposure can impair a variety of behavioral characteristics: excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the effects of a single session of excessive media consumption on children's sleep patterns and the implications linked to them. The aim of this study was to investigate the effects of a single session of excessive television or computer game consumption on the sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significantly reduced amounts of slow-wave sleep and in significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of a negative influence of media consumption on children's sleep, learning, and memory.

  8. Computation Results from a Parametric Study to Determine Bounding Critical Systems of Homogeneously Water-Moderated Mixed Plutonium--Uranium Oxides

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Y.

    2001-01-11

    This report provides computational results of an extensive study to examine the following: (1) infinite-media neutron-multiplication factors; (2) material bucklings; (3) bounding infinite-media critical concentrations; (4) bounding finite critical dimensions of water-reflected and homogeneously water-moderated one-dimensional systems (i.e., spheres, cylinders of infinite length, and slabs that are infinite in two dimensions) comprised of various proportions and densities of plutonium oxides and uranium oxides, each having various isotopic compositions; and (5) sensitivity coefficients of k-eff with respect to the critical dimensions (delta k-eff per delta dimension), determined for each of the three geometries studied. The study was undertaken to support the development of a standard sponsored by the International Standards Organization (ISO) under Technical Committee 85, Nuclear Energy (TC 85) - Subcommittee 5, Nuclear Fuel Technology (SC 5) - Working Group 8, Standardization of Calculations, Procedures and Practices Related to Criticality Safety (WG 8). The designation and title of the ISO TC 85/SC 5/WG 8 standard working draft is WD 14941, "Nuclear energy - Fissile materials - Nuclear criticality control and safety of plutonium-uranium oxide fuel mixtures outside of reactors." Various ISO member participants performed similar computational studies using their indigenous computational codes to provide comparative results for analysis in the development of the standard.

  9. The relationship between emotional intelligence, previous caring experience and mindfulness in student nurses and midwives: a cross sectional analysis.

    Science.gov (United States)

    Snowden, Austyn; Stenhouse, Rosie; Young, Jenny; Carver, Hannah; Carver, Fiona; Brown, Norrie

    2015-01-01

    Emotional Intelligence (EI), previous caring experience and mindfulness training may have a positive impact on nurse education, and more evidence is needed to support the use of these variables in nurse recruitment and retention. The aim of this cross-sectional element of a longitudinal study was to explore the relationship between EI, gender, age, programme of study, previous caring experience and mindfulness training. Participants were 938 first-year nursing, midwifery and computing students at two Scottish Higher Education Institutes (HEIs) who entered their programmes in September 2013. Participants completed a measure of 'trait' EI, the Trait Emotional Intelligence Questionnaire Short Form (TEIQue-SF), and of 'ability' EI, the Schutte et al. (1998) Emotional Intelligence Scale (SEIS). Demographics, previous caring experience and previous training in mindfulness were recorded. Relationships between variables were tested using non-parametric tests. Emotional intelligence increased with age on both measures of EI [TEIQue-SF H(5) = 15.157, p = 0.001; SEIS H(5) = 11.388, p = 0.044]. Females (n = 786) scored higher than males (n = 149) on both measures [TEIQue-SF U = 44,931, z = -4.509, p < 0.001; …]. Mindfulness training was associated with higher 'ability' emotional intelligence. Implications for recruitment, retention and further research are explored. Copyright © 2014. Published by Elsevier Ltd.
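    
    A minimal sketch of the kind of non-parametric comparisons reported above (Kruskal-Wallis across groups, Mann-Whitney U between genders), run on simulated placeholder scores rather than the study's data:
    
    ```python
    import numpy as np
    from scipy import stats

    # Simulated EI-like scores; means and sample sizes are placeholders.
    rng = np.random.default_rng(2)
    ei_group_a = rng.normal(150, 15, 40)  # e.g. youngest age band
    ei_group_b = rng.normal(155, 15, 40)
    ei_group_c = rng.normal(160, 15, 40)  # e.g. oldest age band

    h, p_age = stats.kruskal(ei_group_a, ei_group_b, ei_group_c)
    print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_age:.4f}")

    ei_female = rng.normal(156, 14, 80)
    ei_male = rng.normal(150, 14, 30)
    u, p_sex = stats.mannwhitneyu(ei_female, ei_male, alternative="two-sided")
    print(f"Mann-Whitney: U = {u:.0f}, p = {p_sex:.4f}")
    ```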

  10. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for the development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241 women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non-insulin-dependent diabetes mellitus). In the subgroup of women with previous gestational diabetes mellitus in whom plasma insulin had been measured during an oral glucose tolerance test in late pregnancy, a low insulin response at diagnosis was found to be an independent predictive factor for diabetes development. CONCLUSIONS: Women with previous dietary-treated gestational diabetes mellitus…

  11. REACHING THE COMPUTING HELP DESK

    CERN Multimedia

    Miguel MARQUINA; Roger WOOLNOUGH; IT/User Support

    1999-01-01

    The way to contact the Computing Help Desk (also known as 'UCO' and hosted by IT Division as an entry point for general computing issues) has been streamlined in order to facilitate access to it. A new telephone line and email address have been set up: phone number 78888, email Helpdesk@cern.ch, hopefully easier to remember. Both entries have been operational since last December. The previous number and email address remain valid and have been turned into aliases of the above; however, we encourage you to use the new ones at your convenience from now on. For additional information please see the article published in CERN Computing Newsletter 233: http://consult.cern.ch/cnl/233/art_uco.html. Do not hesitate to contact us (by email to User.Relations@cern.ch) for additional information or feedback regarding this matter. Nicole Cremel, Miguel Marquina, Roger Woolnough, IT/User Support

  12. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  13. Computers for imagemaking

    CERN Document Server

    Clark, D

    1981-01-01

    Computers for Image-Making tells the computer non-expert all he needs to know about Computer Animation. In the hands of expert computer engineers, computer picture-drawing systems have, since the earliest days of computing, produced interesting and useful images. As a result of major technological developments since then, it no longer requires the expert's skill to draw pictures; anyone can do it, provided they know how to use the appropriate machinery. This collection of specially commissioned articles reflects the diversity of user applications in this expanding field

  14. Quantum computer science

    CERN Document Server

    Lanzagorta, Marco

    2009-01-01

    In this text we present a technical overview of the emerging field of quantum computation along with new research results by the authors. What distinguishes our presentation from that of others is our focus on the relationship between quantum computation and computer science. Specifically, our emphasis is on the computational model of quantum computing rather than on the engineering issues associated with its physical implementation. We adopt this approach for the same reason that a book on computer programming doesn't cover the theory and physical realization of semiconductors. Another distin

  15. Antimicrobial usage in German acute care hospitals: results of the third national point prevalence survey and comparison with previous national point prevalence surveys.

    Science.gov (United States)

    Aghdassi, Seven Johannes Sam; Gastmeier, Petra; Piening, Brar Christian; Behnke, Michael; Peña Diaz, Luis Alberto; Gropmann, Alexander; Rosenbusch, Marie-Luise; Kramer, Tobias Siegfried; Hansen, Sonja

    2018-04-01

    Previous point prevalence surveys (PPSs) revealed the potential for improving antimicrobial usage (AU) in German acute care hospitals. Data from the 2016 German national PPS on healthcare-associated infections and AU were used to evaluate efforts in antimicrobial stewardship (AMS). A national PPS in Germany was organized by the German National Reference Centre for Surveillance of Nosocomial Infections in 2016 as part of the European PPS initiated by the ECDC. The data were collected in May and June 2016. Results were compared with data from the PPS 2011. A total of 218 hospitals with 64 412 observed patients participated in the PPS 2016. The prevalence of patients with AU was 25.9% (95% CI 25.6%-26.3%). No significant increase or decrease in AU prevalence was revealed in the group of all participating hospitals. Prolonged surgical prophylaxis was found to be common (56.1% of all surgical prophylaxes on the prevalence day), but significantly less prevalent than in 2011 (P < 0.01). The most frequently administered antimicrobial groups were penicillins plus β-lactamase inhibitors (BLIs) (23.2%), second-generation cephalosporins (12.9%) and fluoroquinolones (11.3%). Significantly more penicillins plus BLIs and fewer second-generation cephalosporins and fluoroquinolones were used in 2016. Overall, an increase in the consumption of broad-spectrum antimicrobials was noted. For 68.7% of all administered antimicrobials, the indication was documented in the patient notes. The current data reaffirm the points of improvement that previous data identified and reveal that recent efforts in AMS in German hospitals require further intensification.
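    
    As a small arithmetic check of how such point-prevalence estimates are reported, the sketch below recomputes a proportion with a normal-approximation 95% confidence interval; the survey itself may well have used a different interval method.
    
    ```python
    import math

    # Minimal sketch: prevalence with a Wald 95% CI, of the kind reported as
    # "25.9% (95% CI 25.6%-26.3%)" for 64 412 surveyed patients.
    def prevalence_ci(cases: int, n: int, z: float = 1.96):
        p = cases / n
        se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
        return p, p - z * se, p + z * se

    cases = round(0.259 * 64412)  # back-calculated count of patients on antimicrobials
    p, lo, hi = prevalence_ci(cases, 64412)
    print(f"prevalence = {100*p:.1f}% (95% CI {100*lo:.1f}%-{100*hi:.1f}%)")
    ```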

  16. Low cost spacecraft computers: Oxymoron or future trend?

    Science.gov (United States)

    Manning, Robert M.

    1993-01-01

    Over the last few decades, application of current terrestrial computer technology in embedded spacecraft control systems has been expensive and wrought with many technical challenges. These challenges have centered on overcoming the extreme environmental constraints (protons, neutrons, gamma radiation, cosmic rays, temperature, vibration, etc.) that often preclude direct use of commercial off-the-shelf computer technology. Reliability, fault tolerance and power have also greatly constrained the selection of spacecraft control system computers. More recently, new constraints are being felt, cost and mass in particular, that have again narrowed the degrees of freedom spacecraft designers once enjoyed. This paper discusses these challenges, how they were previously overcome, how future trends in commercial computer technology will simplify (or hinder) selection of computer technology for spacecraft control applications, and what spacecraft electronic system designers can do now to circumvent them.

  17. Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing

    Science.gov (United States)

    Klopfer, Eric; Yoon, Susan; Perry, Judy

    2005-01-01

    This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yields results similar to previous research evaluations of…

  18. Reflectivity of 1D photonic crystals: A comparison of computational schemes with experimental results

    Science.gov (United States)

    Pérez-Huerta, J. S.; Ariza-Flores, D.; Castro-García, R.; Mochán, W. L.; Ortiz, G. P.; Agarwal, V.

    2018-04-01

    We report the reflectivity of one-dimensional finite and semi-infinite photonic crystals, computed through coupling to Bloch modes (BM) and through a transfer matrix method (TMM), and compare both to the experimental spectral line shapes of porous silicon (PS) multilayer structures. Both methods reproduce a forbidden photonic bandgap (PBG), but the TMM exhibits slowly converging oscillations as the number of layers increases towards infinity, whereas the BM result converges smoothly. The experimental reflectivity spectra are in good agreement with the TMM results for multilayer structures with a small number of periods. For structures with a large number of periods, however, the measured spectral line shapes agree better with the smooth behavior predicted by the BM.
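    
    A minimal transfer-matrix sketch for the normal-incidence reflectivity of a 1D multilayer, using the standard characteristic-matrix formulation; the layer indices and thicknesses below are illustrative, not the porous-silicon structures measured in the study:
    
    ```python
    import numpy as np

    def reflectivity(wavelengths, n_in, n_out, layers):
        """layers: list of (refractive_index, thickness) pairs, thickness in nm."""
        R = []
        for lam in wavelengths:
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
                M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            # Characteristic-matrix reflection coefficient (non-magnetic media).
            num = n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]
            den = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
            R.append(abs(num / den) ** 2)
        return np.array(R)

    # Ten periods of a quarter-wave stack designed around 700 nm, on a
    # silicon-like substrate (all values illustrative):
    stack = [(1.5, 700 / (4 * 1.5)), (2.5, 700 / (4 * 2.5))] * 10
    lams = np.linspace(500, 900, 5)
    print(reflectivity(lams, n_in=1.0, n_out=3.5, layers=stack))
    ```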

  19. Massenmedium Computer: Ein Handbuch für Theorie und Praxis des Deutschunterrichts.

    OpenAIRE

    Kepser, Matthis

    2000-01-01

    Part I gives a critical overview of previous research projects and teaching ideas concerning the computer in German lessons (as of 1999): electronic word processing, databases, programming, classical drill-and-practice software and hypertext learning environments, telecomputing, the computer as an object and motif in literature, and computers as an aid for the teacher. Part II expands these approaches with a perspective on the computer as a mass medium, discussing differe...

  20. Undergraduate students’ challenges with computational modelling in physics

    Directory of Open Access Journals (Sweden)

    Simen A. Sørby

    2012-12-01

    Full Text Available In recent years, computational perspectives have become essential parts of several of the University of Oslo's natural science studies. In this paper we discuss some main findings from a qualitative study of the computational perspectives' impact on students' work in their first course in physics - mechanics - and on their learning and meaning-making of its contents. Discussions of the students' learning of physics are based on sociocultural theory, which originates with Vygotsky and Bakhtin, and on subsequent physics education research. The results imply that the greatest challenge for students working on computational assignments is to combine knowledge from previously known but separate contexts: integrating knowledge of informatics, numerical and analytical mathematics, and conceptual understanding of physics appears as a clear challenge for the students. We also observe a lack of awareness concerning the limitations of physical modelling. The students need help with identifying the appropriate knowledge system or "tool set" for the different tasks at hand; they need help to create a plan for their modelling and to become aware of its limits. In light of this, we propose that an instructive and dialogic text as the basis for the exercises, in which the emphasis is on specification, clarification and elaboration, would be of potentially great aid for students who are new to computational modelling.
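    
    As an example of the kind of first-course assignment in question, the sketch below integrates a falling ball with quadratic air resistance using the Euler-Cromer method; the parameters are illustrative, and the model deliberately ignores effects (wind, spin, buoyancy) of the sort students are asked to keep in mind as limitations:
    
    ```python
    # Euler-Cromer integration of a falling ball with quadratic air drag.
    g = 9.81    # gravitational acceleration [m/s^2]
    k = 0.05    # drag coefficient per unit mass [1/m] (illustrative)
    dt = 0.001  # time step [s]

    t, v, y = 0.0, 0.0, 100.0  # start at rest, 100 m above ground
    while y > 0.0:
        a = -g + k * v * v  # drag opposes the (downward) motion
        v += a * dt         # update velocity first (Euler-Cromer)
        y += v * dt         # then position, using the new velocity
        t += dt

    print(f"impact after {t:.2f} s at speed {abs(v):.1f} m/s "
          f"(terminal speed would be {(g/k)**0.5:.1f} m/s)")
    ```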

  1. Computation of the hyperfine structure in the (αμ⁻e⁻)⁰ atom

    International Nuclear Information System (INIS)

    Amusia, M.Ya.; Kuchiev, M.Ju.; Yakhontov, V.L.

    1983-01-01

    A computation of the ground-state hyperfine splitting of neutral muonic helium (αμ⁻e⁻)⁰ has been carried out. Taking into account two terms in the expansion of this quantity in a power series in the small parameter β, of the order of m_e/m_μ, i.e. of the order of 1/200, results in the energy splitting Δν = 4462.9 MHz, in good agreement with previously obtained experimental and theoretical values. (author)

  2. Consideration of turbulent deposition in aerosol behaviour modelling with the CONTAIN code and comparison of the computations to sodium release experiments

    International Nuclear Information System (INIS)

    Jonas, R.

    1988-09-01

    CONTAIN is a computer code for analyzing the physical, chemical and radiological processes inside the reactor containment during a severe reactor accident sequence. Modelling of aerosol behaviour is included. We have improved the code by implementing a subroutine for turbulent deposition of aerosols. In contrast to previous calculations in which this effect was neglected, the computed results are in good agreement with sodium release experiments. If a typical friction velocity of 1 m/s is chosen, the computed aerosol mass median diameters and aerosol mass concentrations agree with the experimental results within factors of 1.5 and 2, respectively. We have also found good agreement between the CONTAIN calculations and results from other aerosol codes. (orig.)

  3. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
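    
    The second-order maximum noise entropy model described above is a logistic function whose argument is quadratic in the stimulus. A minimal sketch with arbitrary placeholder parameters (not fitted retinal or thalamic values):
    
    ```python
    import numpy as np

    # P(spike | s) = logistic(a + b.s + s^T C s): bias, linear filter, and a
    # symmetric quadratic kernel, matching the second-order model structure.
    rng = np.random.default_rng(3)
    dim = 2                   # two relevant stimulus dimensions
    a = -1.0                  # bias (sets baseline firing probability)
    b = rng.normal(size=dim)  # linear filter (first-order constraint)
    C = rng.normal(size=(dim, dim))
    C = 0.5 * (C + C.T)       # symmetrize the second-order kernel

    def p_spike(s: np.ndarray) -> float:
        """Probability of a spike given stimulus s."""
        z = a + b @ s + s @ C @ s
        return 1.0 / (1.0 + np.exp(-z))

    for s in rng.normal(size=(3, dim)):  # evaluate a few random stimuli
        print(s, "->", p_spike(s))
    ```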

  4. HEART TRANSPLANTATION IN PATIENTS WITH PREVIOUS OPEN HEART SURGERY

    Directory of Open Access Journals (Sweden)

    R. Sh. Saitgareev

    2016-01-01

    Full Text Available Heart transplantation (HTx) to date remains the most effective and radical method of treatment for patients with end-stage heart failure. The deficit of donor hearts forces increasing resort to different long-term mechanical circulatory support systems, including as a "bridge" to follow-up HTx. According to the ISHLT Registry, the number of recipients who had previously undergone cardiopulmonary bypass surgery increased from 40% in the period from 2004 to 2008 to 49.6% in the period from 2009 to 2015. HTx performed in such reoperative patients, on the one hand, involves considerable technical difficulties and high risks; on the other hand, there is often no alternative medical intervention to HTx, and unless dictated by absolute contraindications, denial of the surgery is equivalent to 100% mortality. This review summarizes the results of a number of published studies aimed at understanding the immediate and late results of HTx in patients who previously underwent open heart surgery. The effect of resternotomy during HTx, the specific features associated with its implementation in recipients previously operated on the open heart, and its influence on immediate and long-term survival are considered in this review. Results of studies analyzing the risk factors for perioperative complications in reoperative recipients are also presented, and HTx risks after implantation of long-term mechanical circulatory support systems are examined separately. The literature does not allow one to define clearly the impact of earlier open heart surgery on the course of the perioperative period and on the prognosis of survival in recipients undergoing HTx. On the other hand, provided the HTx and the perioperative period follow a regular course, the risks in this clinical situation are justified, and the long-term prognosis of recipients with previously conducted open heart surgery is comparable to that of patients who underwent primary HTx. Studies

  5. Supporting students' learning in the domain of computer science

    Science.gov (United States)

    Gasparinatou, Alexandra; Grigoriadou, Maria

    2011-03-01

    Previous studies have shown that students with low knowledge understand and learn better from more cohesive texts, whereas high-knowledge students have been shown to learn better from texts of lower cohesion. This study examines whether high-knowledge readers in computer science benefit from a text of low cohesion. Undergraduate students (n = 65) read one of four versions of a text concerning Local Network Topologies, orthogonally varying local and global cohesion. Participants' comprehension was examined through a free-recall measure; text-based, bridging-inference, elaborative-inference and problem-solving questions; and a sorting task. The results indicated that high-knowledge readers benefited from the low-cohesion text: the interaction of text cohesion and knowledge was reliable for the sorting activity and for the elaborative-inference and problem-solving questions. Although high-knowledge readers also performed better on text-based and bridging-inference questions with the low-cohesion text, there the interaction of text cohesion and knowledge was not reliable. The results suggest a more complex view of when and for whom textual cohesion affects comprehension, and consequently learning, in computer science.

  6. Hispanic women overcoming deterrents to computer science: A phenomenological study

    Science.gov (United States)

    Herling, Lourdes

    The products of computer science are important to all aspects of society and are tools in the solution of the world's problems. It is, therefore, troubling that the United States faces a shortage of qualified graduates in computer science. The proportion of women and minorities in computer science is significantly lower than their share of the U.S. population, and overall enrollment in computer science programs has continued to decline, with the enrollment of women declining at a higher rate than that of men. This study addressed three aspects of underrepresentation about which there has been little previous research: computing disciplines specifically, rather than embedded within the STEM disciplines; what attracts women and minorities to computer science; and race/ethnicity and gender considered in conjunction rather than in isolation. Since women of underrepresented ethnicities are more severely underrepresented than women in general, it is important to consider whether race and ethnicity play a role in addition to gender, as previous research has suggested. This study therefore examined what attracted Hispanic women to computer science specifically, and whether being subjected to multiple marginalizations - female and Hispanic - played a role in the experiences of Hispanic women currently in computer science. The study found five emergent themes within the experiences of Hispanic women in computer science. Encouragement and role models strongly influenced not only the participants' choice to major in the field, but their persistence as well. Most of the participants experienced a negative atmosphere and feelings of not fitting in while in college and industry. The interdisciplinary nature of computer science was the most common aspect that attracted the participants to computer science. The aptitudes participants commonly believed are needed for success in computer science are the Twenty

  7. Current-voltage curves for molecular junctions computed using all-electron basis sets

    International Nuclear Information System (INIS)

    Bauschlicher, Charles W.; Lawson, John W.

    2006-01-01

    We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness

  8. From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.

    Science.gov (United States)

    Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry

    2015-07-10

    Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource-efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. This scheme uses only 3-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer need be less challenging than previously thought.

  9. Numerical simulation of the shot peening process under previous loading conditions

    International Nuclear Information System (INIS)

    Romero-Ángeles, B; Urriolagoitia-Sosa, G; Torres-San Miguel, C R; Molina-Ballinas, A; Benítez-García, H A; Vargas-Bustos, J A; Urriolagoitia-Calderón, G

    2015-01-01

    This research presents a numerical simulation of the shot peening process and determines the residual stress field induced in a component with a previous loading history. The importance of this analysis is based on the fact that mechanical elements subjected to shot peening have also undergone the manufacturing processes that convert raw material into finished product. Material is not provided in a virgin state; it has a previous loading history caused by the manner in which it was fabricated. This condition could alter some beneficial aspects of the residual stress induced by shot peening and could accelerate crack nucleation and propagation. Studies were performed on beams subjected to strain hardening in tension (5ε_y) before shot peening was applied. These results were then compared with a numerical assessment of the residual stress field induced by shot peening in a component (beam) without any previous loading history. This paper clearly shows the detrimental or beneficial effect that previous loading history can have on the mechanical component and how it can be controlled to improve the mechanical behavior of the material.

  10. Does previous use affect litter box appeal in multi-cat households?

    Science.gov (United States)

    Ellis, J J; McGowan, R T S; Martin, F

    2017-08-01

    It is commonly assumed that cats actively avoid eliminated materials (especially in multi-cat homes), suggesting regular litter box cleaning as the best defense against out-of-box elimination. The relationship between previous use and litter box appeal to familiar subsequent users is currently unknown. The purpose of this study was to investigate the relationship between previous litter box use and the identity of the previous user, type of elimination, odor, and presence of physical/visual obstructions in a multi-cat household scenario. Cats preferred a clean litter box to a dirty one, but the identity of the previous user had no impact on preferences. While the presence of odor from urine and/or feces did not impact litter box preferences, the presence of odorless faux-urine and/or feces did - with the presence of faux-feces being preferred over faux-urine. Results suggest neither malodor nor chemical communication play a role in litter box preferences, and instead emphasize the importance of regular removal of physical/visual obstructions as the key factor in promoting proper litter box use. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Multidetector computed tomography of urolithiasis. Technique and results; Multidetektor-Computertomografie der Urolithiasis. Technik und Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Karul, M.; Regier, M. [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Zentrum fuer Radiologie und Endoskopie; Heuer, R. [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Zentrum fuer Operative Medizin

    2013-02-15

    Acute urolithiasis is diagnosed with unenhanced multidetector computed tomography (MDCT). This examination assesses the functional and anatomical likelihood that a ureteral calculus will pass; its localization and dimensions are important parameters for further therapy. Alternatively, chronic urolithiasis can be ruled out by magnetic resonance urography (MRU). MRU is the first choice in pregnant women and children in particular, for reasons of radiation protection. Enhanced MDCT must be emphasized as an alternative to intravenous urography (IVU) for the diagnosis of complex drainage of urine and suspected disorders of the involved kidney. This review illustrates the principles of the different examinations and their clinical relevance. (orig.)

  12. Cavity-assisted quantum computing in a silicon nanostructure

    International Nuclear Information System (INIS)

    Tang Bao; Qin Hao; Zhang Rong; Xue Peng; Liu Jin-Ming

    2014-01-01

    We present a scheme of quantum computing with charge qubits, each corresponding to one excess electron shared between a dangling-bond pair of surface silicon atoms, coupled to a microwave stripline resonator on a chip. By choosing certain evolution times, we propose the realization of a set of universal single- and two-qubit logic gates. Due to its intrinsic stability and scalability, the silicon dangling-bond charge qubit can be regarded as one of the most promising candidates for quantum computation. Compared to previous schemes for quantum computing with bulk silicon systems, our scheme offers such advantages as a long coherence time and direct control and readout. (general)

  13. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    Full Text Available This study investigates prediction of the pullout capacity of small ground anchors using nonlinear computing techniques. Input-output prediction models based on the nonlinear Hammerstein-Wiener (NHW) structure and on a delayed-input adaptive neurofuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least-squares support vector machine techniques for the same case study. In situ data and statistical performance measures are used to evaluate the models. Results show that the developed models enhance the precision of predicting the pullout capacity compared with previous studies, and the DANFIS model is shown to perform better than the other models used to determine the pullout capacity of ground anchors.
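    
    A Hammerstein-Wiener model sandwiches a linear dynamic block between two static nonlinearities. The sketch below simulates that structure with arbitrary illustrative choices (polynomial input nonlinearity, first-order linear block, tanh output nonlinearity); it is not the fitted model of the study:
    
    ```python
    import numpy as np

    def f(u):  # static input nonlinearity (illustrative polynomial)
        return u + 0.3 * u**2

    def g(x):  # static output nonlinearity (illustrative)
        return np.tanh(x)

    def simulate_nhw(u, a1=0.7, b0=0.3):
        """First-order linear block x[k] = a1*x[k-1] + b0*f(u[k]); output g(x)."""
        x, y = 0.0, []
        for uk in u:
            x = a1 * x + b0 * f(uk)
            y.append(g(x))
        return np.array(y)

    u = np.sin(np.linspace(0, 4 * np.pi, 50))  # hypothetical load input
    print(simulate_nhw(u)[:5])
    ```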

  14. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    OpenAIRE

    P. O. Umenne; M. O. Odhiambo

    2012-01-01

    Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, the Agents send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents' ex...

  15. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    seyyed mohammad zargar

    2018-03-01

    Full Text Available Cloud computing is a new method of providing computing resources and increasing computing power in organizations. Despite its many benefits, it has not been universally adopted because of obstacles, including security issues, that remain a concern for IT managers. In this paper, a general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables affecting technology acceptance and, especially, acceptance of cloud computing technology. Then, using the DEMATEL technique, the influence exerted and received by each variable was determined. The researchers also designed a model to show the dynamics at work in cloud computing technology using a system dynamics approach. The validity of the model was confirmed through standard evaluation methods for dynamic models, using VENSIM software. Finally, a variety of scenarios were designed based on different conditions of the proposed model, and the implementation of these scenarios was simulated within it. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
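    
    The DEMATEL step mentioned above reduces, computationally, to normalizing the expert direct-influence matrix and forming the total-relation matrix T = N(I - N)^(-1), whose row and column sums rank the factors. A minimal sketch with illustrative scores, not the study's data:
    
    ```python
    import numpy as np

    # Hypothetical direct-influence scores among three factors
    # (e.g. security, government support, user training):
    A = np.array([[0, 3, 2],
                  [1, 0, 3],
                  [2, 1, 0]], dtype=float)

    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    N = A / s                             # normalized direct-influence matrix
    T = N @ np.linalg.inv(np.eye(3) - N)  # total-relation matrix

    D, R = T.sum(axis=1), T.sum(axis=0)
    print("prominence D+R:", D + R)  # how important each factor is overall
    print("relation   D-R:", D - R)  # net cause (+) or net effect (-)
    ```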

  16. Orientation-modulated attention effect on visual evoked potential: Application for PIN system using brain-computer interface.

    Science.gov (United States)

    Wilaiprasitporn, Theerawit; Yagi, Tohru

    2015-01-01

    This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.

  17. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    Full Text Available Soil erosion estimation is an important part of a land consolidation process. The universal soil loss equation (USLE) was presented by Wischmeier and Smith. The USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. The L and S factors are usually combined into one LS factor, the topographic factor. The individual factors are determined from several sources, such as a DTM (Digital Terrain Model), the BPEJ soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all of the above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable loss of precision; too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the source data's precision, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying has been done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. For such cases, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify higher-erosion spots. The second step then uses a smaller cell size but performs the computation only on the area identified in the previous step. This decomposition allows a
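
    As an illustration (editorial, not part of the original study), the grid-based approach reduces to an element-wise product of the factor layers, after which high-erosion cells can be flagged for the finer second pass. A minimal NumPy sketch with invented layer values:

```python
import numpy as np

def usle_grid(R, K, LS, C, P):
    """Grid-based USLE: per-cell annual soil loss G = R * K * LS * C * P.
    Each argument is a 2-D array (one grid layer per factor)."""
    return R * K * LS * C * P

# Invented factor layers on a 100 x 100 grid (e.g., 50 m cells).
shape = (100, 100)
R  = np.full(shape, 45.0)                      # rainfall factor
K  = np.random.uniform(0.2, 0.5, shape)        # soil erodibility
LS = np.random.uniform(0.1, 3.0, shape)        # topographic factor
C  = np.full(shape, 0.25)                      # cropping management
P  = np.ones(shape)                            # erosion control management

G = usle_grid(R, K, LS, C, P)
# First-step screening: cells flagged here would be recomputed on a finer grid.
hotspots = G > np.percentile(G, 90)
print(G.mean(), int(hotspots.sum()))
```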

  18. Improving Undergraduates' Critique via Computer Mediated Communication

    Science.gov (United States)

    Mohamad, Maslawati; Musa, Faridah; Amin, Maryam Mohamed; Mufti, Norlaila; Latiff, Rozmel Abdul; Sallihuddin, Nani Rahayu

    2014-01-01

    Our current university students, labeled as "Generation Y" or Millennials, are different from previous generations due to wide exposure to media. Being technologically savvy, they are accustomed to Internet for information and social media for socializing. In line with this current trend, teaching through computer mediated communication…

  19. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  20. Affect and the computer game player: the effect of gender, personality, and game reinforcement structure on affective responses to computer game-play.

    Science.gov (United States)

    Chumbley, Justin; Griffiths, Mark

    2006-06-01

    Previous research on computer games has tended to concentrate on their more negative effects (e.g., addiction, increased aggression). This study departs from the traditional clinical and social learning explanations for these behavioral phenomena and examines the effect of personality, in-game reinforcement characteristics, gender, and skill on the emotional state of the game-player. Results demonstrated that in-game reinforcement characteristics and skill significantly affect a number of affective measures (most notably excitement and frustration). The implications of the impact of game-play on affect are discussed with reference to the concepts of "addiction" and "aggression."

  1. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing, and its growth brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
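
    To make the comparison concrete (an editorial illustration, not code from the paper), the MapReduce style amounts to a parallel map over data chunks followed by a merge of partial results; a minimal, runnable sketch using only the Python standard library:

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(chunk: str) -> Counter:
    """Map phase: each worker counts words in one chunk of the corpus."""
    return Counter(chunk.split())

def merge_counts(a: Counter, b: Counter) -> Counter:
    """Reduce phase: merge partial counts from the workers."""
    a.update(b)
    return a

if __name__ == "__main__":
    chunks = ["cloud computing", "parallel computing", "grid computing"]
    with Pool() as pool:
        partials = pool.map(map_count, chunks)          # parallel map
    total = reduce(merge_counts, partials, Counter())   # sequential reduce
    print(total)   # Counter({'computing': 3, 'cloud': 1, ...})
```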

  2. Impact of previously disadvantaged land-users on sustainable ...

    African Journals Online (AJOL)

    Impact of previously disadvantaged land-users on sustainable agricultural ... about previously disadvantaged land users involved in communal farming systems ... of input, capital, marketing, information and land use planning, with effect on ...

  3. The prevalence of previous self-harm amongst self-poisoning patients in Sri Lanka

    DEFF Research Database (Denmark)

    Mohamed, Fahim; Perera, Aravinda; Wijayaweera, Kusal

    2011-01-01

    BACKGROUND: One of the most important components of suicide prevention strategies is to target people who repeat self-harm, as they are a high-risk group. However, there is some evidence that the incidence of repeat self-harm is lower in Asia than in the West. The objective of this study was to investigate the prevalence of previous self-harm among a consecutive series of self-harm patients presenting to hospitals in rural Sri Lanka. METHOD: Six hundred and ninety-eight self-poisoning patients presenting to medical wards at two hospitals in Sri Lanka were interviewed about their previous episodes of self-harm. RESULTS: Sixty-one (8.7%, 95% CI 6.7-11%) patients reported at least one previous episode of self-harm [37 (10.7%) male, 24 (6.8%) female]; only 19 (2.7%, 95% CI 1.6-4.2%) patients had made more than one previous attempt. CONCLUSION: The low prevalence of previous self-harm is consistent...

  4. Brain-computer interface analysis of a dynamic visuo-motor task.

    Science.gov (United States)

    Logar, Vito; Belič, Aleš

    2011-01-01

    The area of brain-computer interfaces (BCIs) represents one of the more interesting fields in neurophysiological research, since it investigates the development of machines that perform different transformations of the brain's "thoughts" into certain pre-defined actions. Experimental studies have reported some successful implementations of BCIs; however, much of the field still remains unexplored. According to some recent reports, the phase coding of informational content is an important mechanism in the brain's function and cognition, and has the potential to explain various mechanisms of the brain's data transfer, but it has yet to be scrutinized in the context of brain-computer interfaces. Therefore, if the mechanism of phase coding is plausible, one should be able to extract the phase-coded content carried by brain signals using appropriate signal-processing methods. In our previous studies we have shown that a phase-demodulation-based signal-processing approach makes it possible to decode some relevant information on the current motor action in the brain from electroencephalographic (EEG) data. In this paper the authors present a continuation of their previous work on brain-information-decoding analysis of visuo-motor (VM) tasks. The present study shows that EEG data measured during more complex, dynamic visuo-motor (dVM) tasks carry enough information about the currently performed motor action for it to be successfully extracted using appropriate signal-processing and identification methods. The aim of this paper is therefore to present a mathematical model which, using the EEG measurements as its inputs, predicts the course of the wrist movements applied by each subject during the task in simulated or real time (BCI analysis). However, several modifications to the existing methodology are needed to achieve optimal decoding results and a real-time, data-processing ability. The information extracted from the EEG could
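
    As an editorial illustration of the phase-demodulation idea (not the authors' exact pipeline), the instantaneous phase of a band-limited signal can be extracted with the Hilbert transform; the sampling rate, pass band and synthetic "EEG" below are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic channel: a 10 Hz carrier whose phase is slowly modulated, plus noise.
eeg = np.sin(2 * np.pi * 10 * t + 0.5 * np.sin(2 * np.pi * 0.5 * t)) \
      + 0.3 * np.random.randn(t.size)

# Band-pass around the carrier rhythm before demodulation.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
narrow = filtfilt(b, a, eeg)

# Analytic signal -> instantaneous phase; removing the carrier ramp leaves
# the slow phase modulation that would carry the decoded content.
phase = np.unwrap(np.angle(hilbert(narrow)))
demodulated = phase - 2 * np.pi * 10 * t
print(demodulated[:5])
```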

  5. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  6. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  7. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
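
    As a toy illustration of the logical-ring idea (an editorial sketch, not the patented method), the reduce-then-broadcast pattern can be simulated in plain Python; real implementations pipeline segments of the data around the ring concurrently, which this sketch ignores:

```python
import numpy as np

def ring_allreduce(contributions):
    """Toy allreduce over a logical ring of processing cores.

    The running partial sum travels once around the ring (reduce phase);
    the finished global sum then travels around once more (broadcast
    phase), so every core ends up holding the same allreduce result.
    """
    n = len(contributions)
    running = contributions[0].copy()
    for i in range(1, n):                       # reduce: pass and accumulate
        running = running + contributions[i]
    return [running.copy() for _ in range(n)]   # broadcast: circulate result

cores = [np.array([i, 2 * i]) for i in range(4)]   # per-core contribution data
print(ring_allreduce(cores)[0])                    # every core now holds [6, 12]
```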

  8. GEANT4 simulations for Proton computed tomography applications

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim T. de; Evseev, Ivan; Schelin, Hugo R.; Shtejer Diaz, Katherin; Lopes, Ricardo T.

    2011-01-01

    Proton radiation therapy is a highly precise form of cancer treatment. In existing proton treatment centers, dose calculations are performed based on X-ray computed tomography (CT). Alternatively, one could image the tumor directly with proton CT (pCT). Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Thus, the fidelity of proton computed tomography (pCT) simulations as a tool for proton therapy planning depends in the general case on the accuracy of results obtained for proton interaction with thick absorbers. GEANT4 simulations of proton energy spectra after passing through thick absorbers do not agree well with existing experimental data, as shown previously. The spectra simulated for the Bethe-Bloch domain showed an unexpected sensitivity to the choice of low-energy electromagnetic models during code execution. These observations were made with GEANT4 version 8.2 during our simulations for pCT. This work describes in more detail the simulations of proton passage through gold absorbers of varied thickness. The simulations were done by modifying only the geometry in the Hadrontherapy example, for all available choices of the electromagnetic physics models. As the most probable reasons for these effects are some specific feature of the code or some implicit parameters in the GEANT4 manual, we continued our study with version 9.2 of the code. Some improvements in comparison with our previous results were obtained. The simulations were performed considering further applications for pCT development. The authors want to thank CNPq, CAPES and 'Fundacao Araucaria' for financial support of this work. (Author)

  9. Deep Learning for Computer Vision: A Brief Review

    Science.gov (United States)

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  10. Deep Learning for Computer Vision: A Brief Review

    Directory of Open Access Journals (Sweden)

    Athanasios Voulodimos

    2018-01-01

    Full Text Available Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.

  11. Deep Learning for Computer Vision: A Brief Review.

    Science.gov (United States)

    Voulodimos, Athanasios; Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
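
    Records 9-11 above index the same review in three databases. As a concrete illustration of the convolution-pooling-classifier structure such reviews describe (an editorial sketch; PyTorch is an assumed choice, not named by the paper), a minimal CNN forward pass:

```python
import torch
import torch.nn as nn

# Minimal CNN: stacked convolution + nonlinearity + pooling layers for
# feature extraction, feeding a fully connected classification head.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),        # scores for 10 classes
)

x = torch.randn(1, 1, 28, 28)         # one grayscale, MNIST-sized image
print(model(x).shape)                 # torch.Size([1, 10])
```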

  12. The cost-effectiveness of the RSI QuickScan intervention programme for computer workers: Results of an economic evaluation alongside a randomised controlled trial.

    Science.gov (United States)

    Speklé, Erwin M; Heinrich, Judith; Hoozemans, Marco J M; Blatter, Birgitte M; van der Beek, Allard J; van Dieën, Jaap H; van Tulder, Maurits W

    2010-11-11

    The costs of arm, shoulder and neck symptoms are high. In order to decrease these costs, employers implement interventions aimed at reducing these symptoms. One frequently used intervention is the RSI QuickScan intervention programme. It establishes a risk profile of the target population and subsequently advises interventions following a decision tree based on that risk profile. The purpose of this study was to perform an economic evaluation, from both the societal and the company perspective, of the RSI QuickScan intervention programme for computer workers. In this study, effectiveness was defined at three levels: exposure to risk factors, prevalence of arm, shoulder and neck symptoms, and days of sick leave. The economic evaluation was conducted alongside a randomised controlled trial (RCT). Participating computer workers from 7 companies (N = 638) were assigned to either the intervention group (N = 320) or the usual care group (N = 318) by means of cluster randomisation (N = 50). The intervention consisted of a tailor-made programme, based on a previously established risk profile. At baseline and at 6- and 12-month follow-up, the participants completed the RSI QuickScan questionnaire. Analyses to estimate the effect of the intervention were done according to the intention-to-treat principle. To compare costs between groups, confidence intervals for cost differences were computed by bias-corrected and accelerated bootstrapping. The mean intervention costs, paid by the employer, were 59 euro per participant in the intervention group and 28 euro in the usual care group. Mean total health care and non-health care costs per participant were 108 euro in both groups. As to cost-effectiveness, improvements in received information on healthy computer use, as well as in work posture and movement, were observed at higher costs. With regard to the other risk factors, symptoms and sick leave, only small and non-significant effects were found. In this study, the RSI Quick

  13. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  14. On the computation of molecular surface correlations for protein docking using fourier techniques.

    Science.gov (United States)

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
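
    To make the circular-versus-linear distinction concrete (an editorial illustration consistent with the abstract, not code from the paper), zero-padding both signals to at least the sum of their lengths minus one before taking the DFT pushes the cyclic wrap-around onto zeros, so the inverse transform equals the linear correlation:

```python
import numpy as np

def circular_correlation(f, g):
    """Inverse DFT of conj(F) * G: cyclic correlation with period len(f)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)))

def linear_correlation(f, g):
    """Zero-pad to n >= len(f) + len(g) - 1 so wrap-around lands on zeros."""
    n = len(f) + len(g) - 1
    F = np.fft.fft(f, n)
    G = np.fft.fft(g, n)
    return np.real(np.fft.ifft(np.conj(F) * G))

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, 1.0, 0.0])
print(circular_correlation(f, g))   # lags contaminated by wrap-around
print(linear_correlation(f, g))     # negative lags appear, unwrapped, at the end
```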

  15. Determining root correspondence between previously and newly detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  16. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: practical examples and case studies that give the 'ins and outs' of developing real-world vision systems, showing engineers the realities of implementing the principles in practice; new chapters containing case studies on surveillance and driver assistance systems, giving practical methods for these cutting-edge applications in computer vision; and necessary mathematics and essential theory made approachable by careful explanations and well-il...

  17. Clinical significance of stress-induced ST segment changes in patients with previous myocardial infarction

    International Nuclear Information System (INIS)

    Futagami, Yasuo; Hamada, Masayuki; Makino, Katsutoshi; Ichikawa, Takehiko; Konishi, Tokuji

    1984-01-01

    To clarify the clinical significance of stress-induced ST-segment (ST) changes after infarction, 93 patients with previous myocardial infarction (MI) underwent stress ²⁰¹Tl myocardial single photon emission computed tomography (SPECT), and the ST changes were compared with the SPECT, coronary arteriographic and left ventriculographic findings. 30 out of 93 cases (32%) had ST depression, 20 (21.5%) had ST elevation, 9 (10%) had both ST depression and elevation, and the remaining 34 (36.5%) had no significant ST changes. In single-vessel disease, ST depression was noted in 29% (12/42), while in multivessel disease it was noted in 53% (27/51). 35 out of 39 cases (90%) with ST depression had a transient perfusion defect, but no apparent relation was noted between the location of the ST depression on ECG and the region of the transient perfusion defect on SPECT. All of the 28 cases with ST elevation were anterior MI cases, and 26 of these showed severe LV wall motion abnormality on contrast left ventriculography and a broad anterior permanent defect on SPECT; only 15 cases (54%) showed slight redistribution. Thus, we conclude that in patients with previous MI, stress-induced ST depression seems to reflect myocardial ischemia, while ST elevation is possibly related to abnormal LV wall motion. (author)

  18. A computationally efficient fuzzy control scheme

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2013-12-01

    Full Text Available This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second-order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs) and fuzzy systems. The controller for each degree of freedom (DOF) consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, wherein not only the parameters but also the structure of the fuzzy system is optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed loop stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system, and the computation is time-saving because of the simple structure of the fuzzy systems; and (2) the controller is robust against various parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.
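
    As an editorial illustration of a four-rule fuzzy PD layer of the kind described (the membership shapes, rule consequents and constants below are invented, not the paper's GA-tuned system):

```python
import numpy as np

def mu_neg(x, w=1.0):
    """Membership of 'negative': shoulder ramp on [-w, 0] (assumed shape)."""
    return float(np.clip(-x / w, 0.0, 1.0))

def mu_pos(x, w=1.0):
    """Membership of 'positive': shoulder ramp on [0, w] (assumed shape)."""
    return float(np.clip(x / w, 0.0, 1.0))

def fuzzy_pd(e, de, u_max=10.0):
    """Four-rule fuzzy PD for one DOF, product t-norm, weighted-average output.

    Rules: (e N, de N) -> -u_max     (e N, de P) -> -u_max/2
           (e P, de N) -> +u_max/2   (e P, de P) -> +u_max
    """
    w = np.array([mu_neg(e) * mu_neg(de),
                  mu_neg(e) * mu_pos(de),
                  mu_pos(e) * mu_neg(de),
                  mu_pos(e) * mu_pos(de)])
    u = np.array([-u_max, -u_max / 2.0, u_max / 2.0, u_max])
    return float(np.dot(w, u) / max(w.sum(), 1e-9))

print(fuzzy_pd(e=0.4, de=-0.1))   # positive error, negative rate -> +u_max/2
```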

  19. Computer Self-Efficacy: A Practical Indicator of Student Computer Competency in Introductory IS Courses

    Directory of Open Access Journals (Sweden)

    Rex Karsten

    1998-01-01

    Full Text Available Students often receive their first college-level computer training in introductory information systems courses. Students and faculty frequently expect this training to develop a level of student computer competence that will support computer use in future courses. In this study, we applied measures of computer self-efficacy to students in a typical introductory IS course. The measures provided useful evidence that student perceptions of their ability to use computers effectively in the future significantly improved as a result of their training experience. The computer self-efficacy measures also provided enhanced insight into course-related factors of practical concern to IS educators. Study results also suggest computer self-efficacy measures may be a practical and informative means of assessing computer-training outcomes in the introductory IS course context

  20. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  1. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine mediated cell recruitment. These model derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.
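
    spartan's published analyses lean on the Vargha-Delaney A-Test to compare distributions of simulation results (a property of the toolkit, not stated in the abstract above); a compact Python rendering of that statistic, with invented time-point data:

```python
from scipy.stats import rankdata

def vargha_delaney_a(sample1, sample2):
    """Vargha-Delaney A-Test: P(X > Y) + 0.5 * P(X == Y).

    A = 0.5 means no difference between the two result distributions;
    by the usual thresholds, |A - 0.5| >= 0.21 indicates a large effect.
    """
    m, n = len(sample1), len(sample2)
    ranks = rankdata(list(sample1) + list(sample2))
    r1 = ranks[:m].sum()                  # rank sum of the first sample
    return (r1 / m - (m + 1) / 2.0) / n

# Invented model outputs at two simulated time-points.
t12h = [0.42, 0.45, 0.39, 0.48, 0.41]
t24h = [0.55, 0.61, 0.58, 0.52, 0.60]
print(vargha_delaney_a(t12h, t24h))   # far from 0.5 -> influence has changed
```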

  2. Identifying Computer-Generated Portraits: The Importance of Training and Incentives.

    Science.gov (United States)

    Mader, Brandon; Banks, Martin S; Farid, Hany

    2017-09-01

    The past two decades have seen remarkable advances in photo-realistic rendering of everything from inanimate objects to landscapes, animals, and humans. We previously showed that despite these tremendous advances, human observers remain fairly good at distinguishing computer-generated from photographic images. Building on these results, we describe a series of follow-up experiments that reveal how to improve observer performance. Of general interest to anyone performing psychophysical studies on Mechanical Turk or similar platforms, we find that observer performance can be significantly improved with the proper incentives.

  3. Contributions to computational stereology and parallel programming

    DEFF Research Database (Denmark)

    Rasmusson, Allan

    rotator, even without the need for isotropic sections. To meet the need for computational power to perform image restoration of virtual tissue sections, parallel programming on GPUs has also been part of the project. This has led to a significant change in paradigm for a previously developed surgical...

  4. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  5. Results of computer-tomographic examination in different forms and course of schizophrenia

    International Nuclear Information System (INIS)

    Stojchev, R.

    1991-01-01

    Data are reported from a clinical and computed tomography study of 103 schizophrenic patients. Those with the simple form of the disease had the most pronounced evidence of dilated third and lateral ventricles (41.8% of the cases for the third ventricle and 72.4% for the lateral ventricles). All patients with the circular, simple and catatonic forms had signs of pathology of the cortical sulci. Regarding the ventricular system, evidence of pathology prevailed in cases with an impetus-progredient or constantly progredient course, whereas with respect to cortical pathology the results were almost identical in all three types of psychosis - 95.2% of cases with a constantly progredient and 95.6% with an impetus-progredient course. Attention is called to the 'surprising' data on organic brain injury in patients with the paranoid and circular forms of the disease, as well as in the clinically most benign impetus course. It is assumed that morphologic changes in the brain of schizophrenic patients are a natural phenomenon, but so far they have not been the subject of comprehensive studies, maybe because of prejudice or the lack of appropriate methods for examining the brain during life. 6 figs., 15 refs

  6. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  7. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written in C. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were implemented, such as displaying the image on the monitor, calculating CT values, and drawing profile curves. The results showed that a popular personal computer had the ability to process CT images. It seemed that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)

  8. The pathogenicity of genetic variants previously associated with left ventricular non-compaction

    DEFF Research Database (Denmark)

    Abbasi, Yeganeh; Jabbari, Javad; Jabbari, Reza

    2016-01-01

    BACKGROUND: Left ventricular non-compaction (LVNC) is a rare cardiomyopathy. Many genetic variants have been associated with LVNC. However, the number of the previous LVNC-associated variants that are common in the background population remains unknown. The aim of this study was to provide an updated list of previously reported LVNC-associated variants with biologic description and investigate the prevalence of LVNC variants in the healthy general population to find false-positive LVNC-associated variants. METHODS AND RESULTS: The Human Gene Mutation Database and PubMed were systematically searched to identify all previously reported LVNC-associated variants. Thereafter, the Exome Sequencing Project (ESP) and the Exome Aggregation Consortium (ExAC), which both represent the background population, were searched for all variants. Four in silico prediction tools were assessed to determine...

  9. Erlotinib-induced rash spares previously irradiated skin

    International Nuclear Information System (INIS)

    Lips, Irene M.; Vonk, Ernest J.A.; Koster, Mariska E.Y.; Houwing, Ronald H.

    2011-01-01

    Erlotinib is an epidermal growth factor receptor inhibitor prescribed to patients with locally advanced or metastasized non-small cell lung carcinoma after failure of at least one earlier chemotherapy treatment. Approximately 75% of the patients treated with erlotinib develop acneiform skin rashes. A patient treated with erlotinib 3 months after finishing concomitant treatment with chemotherapy and radiotherapy for non-small cell lung cancer is presented. Unexpectedly, the part of the skin that had been included in his previously radiotherapy field was completely spared from the erlotinib-induced acneiform skin rash. The exact mechanism of erlotinib-induced rash sparing in previously irradiated skin is unclear. The underlying mechanism of this phenomenon needs to be explored further, because the number of patients being treated with a combination of both therapeutic modalities is increasing. The therapeutic effect of erlotinib in the area of the previously irradiated lesion should be assessed. (orig.)

  10. Quantum walk computation

    International Nuclear Information System (INIS)

    Kendon, Viv

    2014-01-01

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer

  11. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single- and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve the many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  12. Computer-Based Cognitive Training for Mild Cognitive Impairment: Results from a Pilot Randomized, Controlled Trial

    OpenAIRE

    Barnes, Deborah E.; Yaffe, Kristine; Belfor, Nataliya; Jagust, William J.; DeCarli, Charles; Reed, Bruce R.; Kramer, Joel H.

    2009-01-01

    We performed a pilot randomized, controlled trial of intensive, computer-based cognitive training in 47 subjects with mild cognitive impairment (MCI). The intervention group performed exercises specifically designed to improve auditory processing speed and accuracy for 100 minutes/day, 5 days/week for 6 weeks; the control group performed more passive computer activities (reading, listening, visuospatial game) for similar amounts of time. Subjects had a mean age of 74 years and 60% were men; 7...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  14. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  15. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  16. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  17. Informed public choices for low-carbon electricity portfolios using a computer decision tool.

    Science.gov (United States)

    Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

    2014-04-01

    Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ .

  18. Computer Class Role Playing Games, an innovative teaching methodology based on STEM and ICT: first experimental results

    Science.gov (United States)

    Maraffi, S.

    2016-12-01

    Context/Purpose: We tested a new teaching and learning technology: a Computer Class Role Playing Game (RPG) for delivering educational activity in classrooms through an interactive game. This approach is new; there are some prior experiences with educational games, but mainly individual rather than class-based. Gaming all together in a class, with a single goal for the whole class, enhances peer collaboration, cooperative problem solving and friendship. Methods: To perform the research we trialled the games in several classes of different grades, collecting questionnaires from teachers and pupils. Results: The experimental results were outstanding: the RPG, our interactive activity, exceeded by 50% the overall satisfaction compared to traditional lessons or PowerPoint-supported teaching. Interpretation: The appreciation of the RPG was in agreement with the class-level outcome identified by the teachers after the experimentation. Our work received excellent feedback from teachers regarding the efficacy of this new teaching methodology and the results achieved. Using a methodology closer to the students' point of view improves the innovation and creative capacities of learners, and it supports the new role of the teacher as the learners' "coach". Conclusion: This paper presents the first experimental results on the application of this new technology, based on a computer game which projects onto a wall in the class an adventure lived by the students. The plots of the adventures are designed for deeper learning of Science, Technology, Engineering, Mathematics (STEM) and Social Sciences & Humanities (SSH). The pupils participate by interacting with the game using their own tablets or smartphones. The game is based on a mixed-reality learning environment, giving the students the feeling of being IN the adventure.

  19. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... When the image slices are reassembled by computer software, the result is a very detailed multidimensional view ...

  20. Radioiodine treatment of recurrent hyperthyroidism in patients previously treated for Graves' disease by subtotal thyroidectomy

    DEFF Research Database (Denmark)

    Vestergaard, H; Laurberg, P

    1992-01-01

    Radioiodine therapy is often employed for treatment of patients with relapse of hyperthyroidism due to Graves' disease after previous thyroid surgery. Little is known about the outcome of this treatment compared to patients with no previous surgery. A total of 20 patients who had received surgical treatment for Graves' hyperthyroidism 1-46 years previously and had relapse of the hyperthyroidism, and 25 patients with hyperthyroidism due to Graves' disease and no previous thyroid surgery, were treated with radioiodine following the same protocol. Early after treatment the previously operated patients showed a higher sensitivity to radioiodine, with more cases of early hypothyroidism, than non-operated patients. However, after 50 months of follow-up the outcome was identical. The results indicate that frequent assessment is necessary after radioiodine treatment of previously operated patients, since...

  1. Evaluation of function, perfusion and metabolism by positron emission tomography combined with multislice computed tomography (PET/CT) in a patient with a previous diagnosis of myocardial necrosis

    International Nuclear Information System (INIS)

    Campisi, Roxana; Aramayo, Natalia; Osorio, Amilcar

    2010-01-01

    A 64-year-old male patient with a previous diagnosis of myocardial necrosis, as assessed by myocardial perfusion gated single photon emission computed tomography (gSPECT), presented with 3-vessel disease, left ventricular dysfunction and symptomatic epigastric pain. The patient was referred for myocardial viability assessment by positron emission tomography (PET) to inform the clinical management decision. (authors)

  2. Chest X ray effective doses estimation in computed radiography

    International Nuclear Information System (INIS)

    Abdalla, Esra Abdalrhman Dfaalla

    2013-06-01

    Conventional chest radiography is technically difficult because of the wide range of tissue attenuations in the chest and the limitations of screen-film systems. Computed radiography (CR) offers a different approach utilizing a photostimulable phosphor, which overcomes some image quality limitations of chest imaging. The objective of this study was to estimate the effective dose in computed radiography at three hospitals in Khartoum. The study was conducted in the radiography departments of three centres: Advanced Diagnostic Center, Nilain Diagnostic Center and Modern Diagnostic Center. Entrance surface dose (ESD) measurements were conducted for quality control of the X-ray machines and a survey of the operators' techniques. The ESDs were measured with an UNFORS dosimeter, and mathematical equations were used to estimate patient doses during chest X rays. A total of 120 patients were examined in the three centres, among them 62 males and 58 females. The overall mean and range of patient doses was 0.073±0.037 (0.014-0.16) mGy per procedure, while the effective dose was 3.4±1.7 (0.6-7.0) mSv per procedure. This study compared radiation doses to patients in chest radiographic examinations using computed radiography, measured in three centres in Khartoum, Sudan. The measured effective doses showed that the dose in chest radiography was lower with computed radiography compared to previous studies. (Author)

  3. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of grid resources in molecular modeling has been the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology, coupled with advances in application research and redesign, will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load-balance and schedule existing workloads.

  4. MCNP HPGe detector benchmark with previously validated Cyltran model.

    Science.gov (United States)

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated with the same methodology used in the previous experimental measurements and simulation of a 280 cm(3) HPGe detector. Below 1000 keV the MCNP data agreed with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  5. Applying standardized uptake values in gallium-67-citrate single-photon emission computed tomography/computed tomography studies and their correlation with blood test results in representative organs.

    Science.gov (United States)

    Toriihara, Akira; Daisaki, Hiromitsu; Yamaguchi, Akihiro; Yoshida, Katsuya; Isogai, Jun; Tateishi, Ukihide

    2018-05-21

    Recently, semiquantitative analysis using the standardized uptake value (SUV) has been introduced in bone single-photon emission computed tomography/computed tomography (SPECT/CT). Our purposes were to apply an SUV-based semiquantitative analytic method to gallium-67 (67Ga) citrate SPECT/CT and to evaluate the correlation between the SUV of physiological uptake and blood test results in representative organs. The accuracy of the semiquantitative method was validated using a National Electrical Manufacturers Association body phantom study (sphere:background radioactivity ratio = 4:1). Thereafter, 59 patients (34 male and 25 female; mean age, 66.9 years) who had undergone 67Ga-citrate SPECT/CT were retrospectively enrolled in the study. A mean SUV of physiological uptake was calculated for the following organs: the lungs, right atrium, liver, kidneys, spleen, gluteal muscles, and bone marrow. The correlation between physiological uptake and blood test results was evaluated using Pearson's correlation coefficient. The phantom study revealed only 1% error between theoretical and actual SUVs in the background, suggesting sufficient accuracy of the scatter and attenuation corrections. However, a partial volume effect could not be overlooked, particularly in small spheres with a diameter of less than 28 mm. The highest mean SUV was observed in the liver (range: 0.44-4.64), followed by bone marrow (range: 0.33-3.60), spleen (range: 0.52-2.12), and kidneys (range: 0.42-1.45). There was no significant correlation between hepatic uptake and liver function, renal uptake and renal function, or bone marrow uptake and blood cell count (P>0.05). Physiological uptake in 67Ga-citrate SPECT/CT can be expressed as SUVs, which are not significantly correlated with the corresponding blood test results.
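
    The body-weight SUV definition underlying such semiquantitative analyses is standard; a minimal sketch with hypothetical numbers (not taken from the study):

        # Body-weight SUV: tissue concentration divided by injected activity per gram.
        def suv_bw(conc_bq_per_ml: float, injected_bq: float, weight_g: float) -> float:
            """Assumes tissue density ~1 g/mL, so Bq/mL ~ Bq/g."""
            return conc_bq_per_ml / (injected_bq / weight_g)

        liver_conc = 5_000.0      # Bq/mL in a liver VOI (hypothetical)
        injected = 111_000_000.0  # 111 MBq injected activity (hypothetical)
        weight = 70_000.0         # 70 kg patient, in grams

        print(f"liver SUVmean ~ {suv_bw(liver_conc, injected, weight):.2f}")  # ~3.15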

  6. ENIAC in action making and remaking the modern computer

    CERN Document Server

    Haigh, Thomas; Rope, Crispin

    2016-01-01

    Conceived in 1943, completed in 1945, and decommissioned in 1955, ENIAC (the Electronic Numerical Integrator and Computer) was the first general-purpose programmable electronic computer. But ENIAC was more than just a milestone on the road to the modern computer. During its decade of operational life, ENIAC calculated sines and cosines and tested for statistical outliers, plotted the trajectories of bombs and shells, and ran the first numerical weather simulations. "ENIAC in Action" tells the whole story for the first time, from ENIAC's design, construction, testing, and use to its afterlife as part of computing folklore. It highlights the complex relationship of ENIAC and its designers to the revolutionary approaches to computer architecture and coding first documented by John von Neumann in 1945. Within this broad sweep, the authors emphasize the crucial but previously neglected years of 1947 to 1948, when ENIAC was reconfigured to run what the authors claim was the first modern computer program to be exe...

  7. Developing and validating an instrument for measuring mobile computing self-efficacy.

    Science.gov (United States)

    Wang, Yi-Shun; Wang, Hsiu-Yuan

    2008-08-01

    IT-related self-efficacy has been found to have a critical influence on system use. However, traditional measures of computer self-efficacy and Internet-related self-efficacy are perceived to be inapplicable in the context of mobile computing and commerce because they are targeted primarily at either desktop computer or wire-based technology contexts. Based on previous research, this study develops and validates a multidimensional instrument for measuring mobile computing self-efficacy (MCSE). This empirically validated instrument will be useful to researchers in developing and testing the theories of mobile user behavior, and to practitioners in assessing the mobile computing self-efficacy of users and promoting the use of mobile commerce systems.

  8. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that, for any physical system external to that computer, takes the specification of that external system's state as input and then correctly predicts its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task

  9. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results of the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. An excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the results of the model-estimated backscattered luminance differ from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and experimental measurements is smaller than the experimental uncertainty
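
    A minimal sketch of the single-scattering ingredients such a model builds on, assuming the standard Koschmieder relation between meteorological visibility and extinction; the overlap function and fog phase function are reduced to placeholder constants here:

        import math

        def extinction_from_visibility(vis_m: float, contrast: float = 0.05) -> float:
            """Koschmieder relation: sigma = -ln(contrast threshold) / visibility."""
            return -math.log(contrast) / vis_m

        def backscatter_weight(r_m: float, sigma: float, overlap: float = 1.0,
                               phase_back: float = 0.05) -> float:
            """Relative single-scattering return from range r (placeholder geometry)."""
            return overlap * phase_back * sigma * math.exp(-2.0 * sigma * r_m) / r_m**2

        sigma = extinction_from_visibility(100.0)  # dense fog: 100 m visibility
        for r in (5.0, 10.0, 20.0):
            print(f"r = {r:5.1f} m  weight ~ {backscatter_weight(r, sigma):.3e}")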

  10. Comparative analysis of the results obtained by computer code ASTEC V2 and RELAP 5.3.2 for small leak ID 80 for VVER 1000

    International Nuclear Information System (INIS)

    Atanasova, B.; Grudev, P.

    2011-01-01

    The purpose of this report is to present the results obtained by simulation and subsequent analysis of an emergency mode for a small leak with ID 80 for WWER 1000/B320 - Kozloduy NPP Units 5 and 6. Calculations were performed with the ASTEC v2 computer code for severe accident analysis, developed by the French and German organizations IRSN and GRS. The integral RELAP5 computer code is used as a reference for comparison of results. The analyses are focused on the processes occurring during the in-vessel phase of the emergency mode with significant core damage. The main thermohydraulic parameters, the start of reactor core degradation and the subsequent fuel relocation up to reactor vessel failure are evaluated in the analysis. The RELAP5 computer code is used as a reference code to compare the results obtained up to early core degradation, which occurs after core uncovery when fuel temperatures exceed 1200 °C

  11. Using Computer Simulations in Chemistry Problem Solving

    Science.gov (United States)

    Avramiotis, Spyridon; Tsaparlis, Georgios

    2013-01-01

    This study is concerned with the effects of computer simulations of two novel chemistry problems on the problem-solving ability of students. A control-experimental group research design, equalized by paired groups (n_Exp = n_Ctrl = 78), was used. The students had no previous experience of chemical practical work. Student…

  12. 3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results

    Science.gov (United States)

    Gemmeke, Hartmut; Hopp, Torsten; Zapf, Michael; Kaiser, Clemens; Ruiter, Nicole V.

    2017-11-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. A 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT, realizing for the first time the full benefits of a 3D system. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability, and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 T MRI volume. The simultaneously obtained transmission tomography results are important for the obtained resolution. The KIT 3D USCT was then tested in a pilot study on ten patients. The primary goals of the pilot study were to test the USCT device, the data acquisition protocols, the image reconstruction methods and the image fusion techniques in a clinical environment. The study was conducted successfully; data acquisition could be carried out for all patients, with an average imaging time of six minutes per breast. The reconstructions provide promising images. Overlaid volumes of the modalities show qualitative and quantitative information at a glance. This paper gives a summary of the involved techniques, methods, and first results.

  13. Noise tolerant spatiotemporal chaos computing.

    Science.gov (United States)

    Kia, Behnam; Kia, Sarvenaz; Lindner, John F; Sinha, Sudeshna; Ditto, William L

    2014-12-01

    We introduce and design a noise-tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single-map chaos computing system. In this CML-based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and the noise contributions attenuate one another, resulting in a system with less noise content and a more robust chaos computing architecture.
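
    A minimal sketch of such a lattice, assuming diffusively coupled logistic maps with additive Gaussian noise (parameter values are illustrative, not the paper's):

        import random

        def logistic(x: float, a: float = 4.0) -> float:
            return a * x * (1.0 - x)

        def cml_step(lattice, eps=0.3, noise=0.01):
            """One synchronous update: local map plus diffusive neighbor coupling."""
            n = len(lattice)
            fx = [logistic(x) for x in lattice]
            new = []
            for i in range(n):
                x = (1 - eps) * fx[i] + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
                x += random.gauss(0.0, noise)       # local noise at each node
                new.append(min(max(x, 0.0), 1.0))   # keep the state in [0, 1]
            return new

        lattice = [random.random() for _ in range(16)]
        for _ in range(50):
            lattice = cml_step(lattice)
        print(["%.3f" % x for x in lattice])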

  14. LightKone Project: Lightweight Computation for Networks at the Edge

    OpenAIRE

    Van Roy, Peter; TEKK Tour Digital Wallonia

    2017-01-01

    LightKone combines two recent advances in distributed computing to enable general-purpose computing on edge networks: * Synchronization-free programming: Large-scale applications can run efficiently on edge networks by using convergent data structures (based on Lasp and Antidote from previous project SyncFree) → tolerates dynamicity and loose coupling of edge networks * Hybrid gossip: Communication can be made highly resilient on edge networks by combining gossip with classical distributed al...
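
    Lasp and Antidote build on convergent (conflict-free) replicated data types; a minimal sketch of the idea with a grow-only counter, not the projects' actual APIs:

        # Grow-only counter CRDT: per-node counts merge by element-wise max,
        # so replicas converge regardless of message order or duplication.
        def merge(a: dict, b: dict) -> dict:
            return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

        replica1 = {"nodeA": 3, "nodeB": 1}   # nodeA incremented 3 times, etc.
        replica2 = {"nodeB": 2, "nodeC": 5}

        merged = merge(replica1, merge(replica2, replica1))  # order does not matter
        print(merged, "-> total:", sum(merged.values()))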

  15. Automatic electromagnetic valve for previous vacuum

    International Nuclear Information System (INIS)

    Granados, C. E.; Martin, F.

    1959-01-01

    A valve is described which maintains the vacuum of an installation when the electric current fails. It also lets air into the backing vacuum pump to prevent the oil from ascending into the vacuum tubes. (Author)

  16. Practical Applications of Evolutionary Computation to Financial Engineering Robust Techniques for Forecasting, Trading and Hedging

    CERN Document Server

    Iba, Hitoshi

    2012-01-01

    “Practical Applications of Evolutionary Computation to Financial Engineering” presents state-of-the-art techniques in financial engineering using recent results in machine learning and evolutionary computation. The book bridges the gap between academics in computer science and traders, and explains the basic ideas of the proposed systems and the financial problems in ways that can be understood by readers without previous knowledge of either field. To cement the ideas discussed in the book, software packages are offered that implement the systems described within. The book is structured so that each chapter can be read independently of the others. Chapters 1 and 2 describe evolutionary computation. The third chapter is an introduction to financial engineering problems for readers who are unfamiliar with this area. The following chapters each deal, in turn, with a different problem in the financial engineering field, describing each problem in detail and focusing on solutions based on evolutio...

  17. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  18. Report of the evaluation by the Ad Hoc Review Committee on Computational Science and Engineering. Result evaluation in fiscal year 2000

    International Nuclear Information System (INIS)

    2001-06-01

    The Research Evaluation Committee, which consisted of 14 members from outside the Japan Atomic Energy Research Institute (JAERI), set up an Ad Hoc Review Committee on Computational Science and Engineering in accordance with the 'Fundamental Guideline for the Evaluation of Research and Development (R and D) at JAERI' and its subsidiary regulations, in order to evaluate the R and D accomplishments achieved over the five years from Fiscal Year 1995 to Fiscal Year 1999 at the Center for Promotion of Computational Science and Engineering of JAERI. The Ad Hoc Review Committee consisted of seven specialists from outside JAERI and conducted its activities from December 2000 to March 2001. The evaluation was performed on the basis of the materials submitted in advance and of the oral presentations made at the Ad Hoc Review Committee meeting held on December 27, 2000, in line with the items, viewpoints, and criteria for the evaluation specified by the Research Evaluation Committee. The result of the evaluation by the Ad Hoc Review Committee was submitted to the Research Evaluation Committee and was judged to be appropriate at its meeting held on March 16, 2001. This report describes the result of the evaluation by the Ad Hoc Review Committee on Computational Science and Engineering. (author)

  19. Comparison of hand and semiautomatic tracing methods for creating maxillofacial artificial organs using sequences of computed tomography (CT) and cone beam computed tomography (CBCT) images.

    Science.gov (United States)

    Szabo, Bence T; Aksoy, Seçil; Repassy, Gabor; Csomo, Krisztian; Dobo-Nagy, Csaba; Orhan, Kaan

    2017-06-09

    The aim of this study was to compare the paranasal sinus volumes obtained by manual and semiautomatic imaging software programs using both CT and CBCT imaging. 121 computed tomography (CT) and 119 cone beam computed tomography (CBCT) examinations were selected from the databases of the authors' institutes. The Digital Imaging and Communications in Medicine (DICOM) images were imported into 3-dimensional imaging software, in which hand-mode and semiautomatic tracing methods were used to measure the volumes of both maxillary sinuses and the sphenoid sinus. The volumetric means thus determined were compared to previously published averages. The isometric CBCT-based volume determinations were closer to the real volume conditions, whereas the non-isometric CT-based measurements yielded consistently lower volumes. Comparing the two measurement modes, the values obtained in hand mode were closer to the literature data. Furthermore, the CBCT-based measurements corresponded to the known averages. Our results suggest that CBCT images provide reliable volumetric information that can be depended on for artificial organ construction, and which may aid the guidance of the operator prior to or during the intervention.
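
    Whichever tracing mode produces the segmentation, the volume itself follows from voxel counting and the DICOM voxel spacing; a minimal sketch, assuming a binary mask is already available:

        import numpy as np

        def segmented_volume_cm3(mask: np.ndarray, spacing_mm: tuple) -> float:
            """Volume = number of segmented voxels * volume of one voxel."""
            voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
            return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> cm^3 (mL)

        mask = np.zeros((100, 100, 100), dtype=bool)
        mask[20:60, 30:70, 25:65] = True  # toy "sinus" region
        print(f"{segmented_volume_cm3(mask, (0.3, 0.3, 0.3)):.2f} cm^3")  # 1.73 cm^3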

  20. Results of the deepest all-sky survey for continuous gravitational waves on LIGO S6 data running on the Einstein@Home volunteer distributed computing project

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; et al. (the LIGO Scientific Collaboration and the Virgo Collaboration)

    2016-01-01

    We report results of a deep all-sky search for periodic gravitational waves from isolated neutron stars in data from the S6 LIGO science run. The search was possible thanks to the computing power provided by the volunteers of the Einstein@Home distributed computing project. We find no significant

  1. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    The usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for the hybrid computer installed in JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed as figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)
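
    A modern stand-in for the kind of correlation and spectrum analysis MLCOSP performs, sketched with NumPy on two toy channels (the hybrid analog front end is out of scope here):

        import numpy as np

        fs = 100.0                         # sampling rate, Hz
        t = np.arange(0, 20, 1 / fs)
        x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
        y = np.roll(x, 25) + 0.3 * np.random.randn(t.size)  # shifted noisy copy

        # Cross-correlation: the peak lag estimates the delay between channels.
        xc = np.correlate(x - x.mean(), y - y.mean(), mode="full")
        lag_s = (xc.argmax() - (t.size - 1)) / fs
        print(f"estimated lag: {lag_s:+.2f} s")  # ~ -0.25 s: y lags x

        # Cross-spectrum: correlated power as a function of frequency.
        cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        print(f"peak coherent frequency: {freqs[np.abs(cross).argmax()]:.2f} Hz")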

  2. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  3. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to a linear form; based on this finding we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
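
    A minimal sketch of a Jaccard-weighted profile scorer for drug-ADR associations, illustrative only (the paper's exact general weighted profile formula is not given in the abstract):

        # Score an ADR for a query drug as the Jaccard-similarity-weighted vote of
        # known drugs that cause it. All data below are toy examples.
        def jaccard(a: set, b: set) -> float:
            return len(a & b) / len(a | b) if (a | b) else 0.0

        known_adrs = {
            "drugA": {"nausea", "rash", "headache"},
            "drugB": {"nausea", "dizziness"},
            "drugC": {"rash", "insomnia"},
        }

        def score(candidate_adrs: set, adr: str) -> float:
            sims = {d: jaccard(candidate_adrs, s) for d, s in known_adrs.items()}
            total = sum(sims.values())
            votes = sum(s for d, s in sims.items() if adr in known_adrs[d])
            return votes / total if total else 0.0

        candidate = {"nausea", "headache"}  # known profile of the query drug
        for adr in ("rash", "dizziness", "insomnia"):
            print(f"score({adr}) = {score(candidate, adr):.3f}")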

  4. Development of multimedia computer-based training for VXI integrated fuel monitors

    International Nuclear Information System (INIS)

    Keeffe, R.; Ellacott, T.; Truong, Q.S.

    1999-01-01

    The Canadian Safeguards Support Program has developed the VXI Integrated Fuel Monitor (VFIM), which is based on the international VXI instrument bus standard. This equipment is a generic radiation monitor that can be used in an integrated mode, in which several detection systems are connected to a common system where information is collected, displayed, and analyzed via a virtual control panel with the aid of a computer, trackball and monitor. The equipment can also be used in an autonomous mode as a portable radiation monitor with a very low power consumption. The equipment has been described at previous international symposia. Integration of several monitoring systems (bundle counter, core discharge monitor, and yes/no monitor) has been carried out at Wolsong 2. Performance results from one of the monitoring systems installed at CANDU nuclear stations are discussed in a companion paper at this symposium. This paper describes the development of an effective multimedia computer-based training package for the primary users of the equipment, namely IAEA inspectors and technicians. (author)

  5. Short-distance expansion for the electromagnetic half-space Green's tensor: general results and an application to radiative lifetime computations

    International Nuclear Information System (INIS)

    Panasyuk, George Y; Schotland, John C; Markel, Vadim A

    2009-01-01

    We obtain a short-distance expansion for the half-space, frequency domain electromagnetic Green's tensor. The small parameter of the theory is ωε₁L/c, where ω is the frequency, ε₁ is the permittivity of the upper half-space, in which both the source and the point of observation are located, and which is assumed to be transparent, c is the speed of light in vacuum and L is a characteristic length, defined as the distance from the point of observation to the reflected (with respect to the planar interface) position of the source. In the case when the lower half-space (the substrate) is characterized by a complex permittivity ε₂, we compute the expansion to third order. For the case when the substrate is a transparent dielectric, we compute the imaginary part of the Green's tensor to seventh order. The analytical calculations are verified numerically. The practical utility of the obtained expansion is demonstrated by computing the radiative lifetime of two electromagnetically interacting molecules in the vicinity of a transparent dielectric substrate. The computation is performed in the strong interaction regime when the quasi-particle pole approximation is inapplicable. In this regime, the integral representation for the half-space Green's tensor is difficult to use while its electrostatic limiting expression is grossly inadequate. However, the analytical expansion derived in this paper can be used directly and efficiently. The results of this study are also relevant to nano-optics and near-field imaging, especially when tomographic image reconstruction is involved

  6. Optical computing.

    Science.gov (United States)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  7. Novel opportunities for computational biology and sociology in drug discovery

    Science.gov (United States)

    Yao, Lixia; Evans, James A.; Rzhetsky, Andrey

    2013-01-01

    Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528

  8. Novel opportunities for computational biology and sociology in drug discovery

    Science.gov (United States)

    Yao, Lixia

    2009-01-01

    Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and by tracing the steps involved in pharmaceutical development, we explore a range of novel, high value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801

  9. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of the computer code SIMPSEX for high-plutonium FBR flowsheets was reported in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation against the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (Pu(aq) > 30 g/L) resulted in better results for the 75% Pu flowsheet benchmark. Below 30 g/L Pu(aq) concentration, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with the published predictions from the conventional SEPHIS, PUMA, PUNE and PUBG codes, and were found to be comparable to, or better than, the results from the above-listed codes. In addition, recently reported UREX demo results, along with AMUSE simulations, are compared with SIMPSEX predictions. Results of benchmarking SIMPSEX with these 14 benchmark flowsheets are discussed in this report. (author)

  10. Physics vs. computer science

    International Nuclear Information System (INIS)

    Pike, R.

    1982-01-01

    With computers becoming more frequently used in theoretical and experimental physics, physicists can no longer afford to be ignorant of the basic techniques and results of computer science. Computing principles belong in a physicist's tool box, along with experimental methods and applied mathematics, and the easiest way to educate physicists in computing is to provide, as part of the undergraduate curriculum, a computing course designed specifically for physicists. As well, the working physicist should interact with computer scientists, giving them challenging problems in return for their expertise. (orig.)

  11. Efficient quantum computing with weak measurements

    International Nuclear Information System (INIS)

    Lund, A P

    2011-01-01

    Projective measurements with high quantum efficiency are often assumed to be required for efficient circuit-based quantum computing. We argue that this is not the case and show that this fact was actually known previously, but was not deeply explored. We examine the issue by giving an example of how to perform the quantum order-finding algorithm efficiently using non-local weak measurements, given that the measurements used are of bounded weakness and that some fixed but arbitrary probability of success less than unity is accepted. We also show that it is possible to perform the same computation with only local weak measurements, but this must necessarily introduce an exponential overhead.

  12. The Future of Brain-Computer Interfacing (keynote paper)

    NARCIS (Netherlands)

    Nijholt, Antinus

    In this paper we survey some early applications of and research on brain-computer interfacing. We emphasize and re-evaluate the role that views on artistic and playful applications have played. In previous years various road maps for BCI research have appeared. The interest in medical applications has guided BCI

  13. Does Computer-aided Detection Assist in the Early Detection of Breast Cancer?

    International Nuclear Information System (INIS)

    Hukkinen, K.; Pamilo, M.

    2005-01-01

    Purpose: To evaluate whether breast cancers detected at screening are visible in previous mammograms, and to assess the performance of a computer-aided detection (CAD) system in detecting lesions in preoperative and previous mammograms. Material and Methods: Initial screening identified 67 women with 69 surgically verified breast cancers (Group A). An experienced screening radiologist retrospectively analyzed the previous mammograms for visible lesions (Group B), noting in particular their size and morphology. Preoperative and previous mammograms were analyzed with CAD; a relatively inexperienced resident also analyzed the previous mammograms. The performances of CAD and the resident were then compared. Results: Of the 69 lesions identified, 36 were visible in previous mammograms. Of these 36 'missed' lesions, 14 were under 10 mm in diameter and 29 were mass lesions. The sensitivity of CAD was 81% in Group A and 64% in Group B. Small mass lesions were harder for CAD to detect. The specificity of CAD was 3% in Group A and 9% in Group B. Together, CAD and the resident found more 'missed' lesions than either did separately. Conclusion: Of the 69 breast cancers, 36 were visible in previous mammograms. CAD's sensitivity in detecting cancer lesions ranged from 64% to 81%, while its specificity ranged from 9% to as low as 3%. CAD may be helpful if the radiologist is less subspecialized in mammography
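
    For reference, the quoted figures follow from the usual definitions; the sketch below uses hypothetical counts chosen to reproduce the Group A percentages:

        def sensitivity(tp: int, fn: int) -> float:
            return tp / (tp + fn)

        def specificity(tn: int, fp: int) -> float:
            return tn / (tn + fp)

        # Hypothetical counts for a CAD system marking lesions on mammograms:
        print(f"sensitivity = {sensitivity(tp=56, fn=13):.0%}")  # 56 of 69 lesions found
        print(f"specificity = {specificity(tn=2, fp=65):.0%}")   # false marks on most images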

  14. Exploration of cloud computing late start LDRD #149630 : Raincoat. v. 2.1.

    Energy Technology Data Exchange (ETDEWEB)

    Echeverria, Victor T.; Metral, Michael David; Leger, Michelle A.; Gabert, Kasimir Georg; Edgett, Patrick Garrett; Thai, Tan Q.

    2010-09-01

    This report contains documentation from an interoperability study conducted under the Late Start LDRD 149630, Exploration of Cloud Computing. A small late-start LDRD from last year resulted in a study (Raincoat) on using Virtual Private Networks (VPNs) to enhance security in a hybrid cloud environment. Raincoat initially explored the use of OpenVPN on IPv4 and demonstrates that it is possible to secure the communication channel between two small 'test' clouds (a few nodes each) at New Mexico Tech and Sandia. We extended the Raincoat study to add IPSec support via Vyatta routers, to interface with a public cloud (Amazon Elastic Compute Cloud (EC2)), and to be significantly more scalable than the previous iteration. The study contributed to our understanding of interoperability in a hybrid cloud.

  15. Optical encryption with selective computational ghost imaging

    International Nuclear Information System (INIS)

    Zafari, Mohammad; Kheradmand, Reza; Ahmadi-Kandjani, Sohrab

    2014-01-01

    Selective computational ghost imaging (SCGI) is a technique which enables the reconstruction of an N-pixel image from N measurements or fewer. In this paper we propose an optical encryption method based on SCGI and experimentally demonstrate that this method provides much higher security against eavesdropping and unauthorized access than previously reported methods. (paper)
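
    A minimal sketch of the correlation step at the core of computational ghost imaging; the selective/encryption aspects of SCGI are not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 16, 2000                        # n*n-pixel image, m patterns
        obj = np.zeros((n, n))
        obj[4:12, 6:10] = 1.0                  # toy transmissive object

        patterns = rng.random((m, n, n))       # known illumination patterns
        bucket = (patterns * obj).sum(axis=(1, 2))  # single-pixel "bucket" signals

        # Correlation reconstruction: <(B - <B>) * I(x, y)>
        recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
        print(f"mean recon inside object:  {recon[obj == 1].mean():.4f}")
        print(f"mean recon outside object: {recon[obj == 0].mean():.4f}")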

  16. Ifosfamide in previously untreated disseminated neuroblastoma. Results of Study 3A of the European Neuroblastoma Study Group.

    Science.gov (United States)

    Kellie, S J; De Kraker, J; Lilleyman, J S; Bowman, A; Pritchard, J

    1988-05-01

    A prospective study of the effectiveness of ifosfamide as a single agent in the management of previously untreated patients with Evans stage IV neuroblastoma was undertaken. Eighteen children aged more than 1 year were treated with ifosfamide (IFX) 3 g/m2 daily for 2 days immediately after diagnosis and 3 weeks later. Treatment was continued with combination chemotherapy using vincristine, cyclophosphamide, cisplatinum and etoposide (OPEC) or a variant. Mesna (2-mercaptoethane sulphonate) was given to all patients during IFX treatment to prevent urotoxicity. Eight of the 18 patients (44%) responded to IFX. Nine had greater than 66% reduction in baseline tumor volume. Of 15 evaluable patients with raised pre-treatment urinary catecholamine excretion, six (40%) achieved greater than 50% reduction in pretreatment levels. Two of 10 patients evaluable for bone marrow response had complete clearance. Toxicity was mild in all patients. Upon completing 'first line' therapy, only four patients (22%) achieved a good partial remission (GPR) or complete response (CR). Median survival was 11 months. There was a lower rate of attaining GPR and shortened median survival in patients receiving phase II IFX before OPEC or variant, compared to patients with similar pre-treatment characteristics treated with OPEC from diagnosis in an earlier study.

  17. Intruder dose pathway analysis for the onsite disposal of radioactive wastes: The ONSITE/MAXI1 computer program

    International Nuclear Information System (INIS)

    Kennedy, W.E. Jr.; Peloquin, R.A.; Napier, B.A.; Neuder, S.M.

    1987-02-01

    This document summarizes initial efforts to develop human-intrusion scenarios and a modified version of the MAXI computer program for potential use by the NRC in reviewing applications for onsite radioactive waste disposal. Supplement 1 of NUREG/CR-3620 (1986) summarized modifications and improvements to the ONSITE/MAXI1 software package. The present document summarizes a further modified version of the ONSITE/MAXI1 computer program. This version operates on a personal computer and permits the user to optionally select radiation dose conversion factors published by the International Commission on Radiological Protection (ICRP) in its Publication No. 30 (ICRP 1979-1982) in place of those published by the ICRP in its Publication No. 2 (ICRP 1959), as implemented in previous versions of the ONSITE/MAXI1 computer program. The pathway-to-human models used in the computer program are unchanged from those described previously. Computer listings of the ONSITE/MAXI1 computer program and supporting databases are included in the appendices of this document
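
    A minimal sketch of the pathway dose arithmetic such codes implement, with placeholder dose conversion factors (not the values of ICRP Publication 2 or 30):

        # Committed dose = sum over radionuclides of (annual intake * DCF).
        intakes_bq = {"Cs-137": 1.0e4, "Sr-90": 2.0e3}        # hypothetical intakes
        dcf_sv_per_bq = {"Cs-137": 1.3e-8, "Sr-90": 2.8e-8}   # placeholder DCFs

        dose_sv = sum(intakes_bq[n] * dcf_sv_per_bq[n] for n in intakes_bq)
        print(f"committed effective dose ~ {dose_sv * 1e3:.3f} mSv/yr")  # ~0.186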

  18. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte and Tipton [1], hereafter denoted [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show that the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass-averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences from the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass-averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
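
    The K-L model itself evolves a turbulence kinetic energy K and a length scale L; its coefficients are commonly calibrated so that RT mix reproduces the self-similar growth law h_b = alpha_b * A * g * t^2. A toy evaluation of that target law, with an illustrative growth coefficient:

        def rt_bubble_height(alpha_b: float, atwood: float, g: float, t: float) -> float:
            """Self-similar Rayleigh-Taylor bubble-front height."""
            return alpha_b * atwood * g * t * t

        A, g = 0.5, 9.8                  # Atwood number, acceleration (m/s^2)
        for t in (0.5, 1.0, 2.0):
            print(f"t = {t:.1f} s: h_b ~ {rt_bubble_height(0.06, A, g, t):.3f} m")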

  19. nab-Paclitaxel in Combination with Carboplatin for a Previously Treated Thymic Carcinoma

    Directory of Open Access Journals (Sweden)

    Go Makimoto

    2014-01-01

    We present the case of a 40-year-old man with previously treated thymic carcinoma, complaining of gradually worsening back pain. Computed tomography scans of the chest showed multiple pleural disseminated nodules with a pleural effusion in the right thorax. The patient was treated with carboplatin on day 1 plus nab-paclitaxel on days 1 and 8, in cycles repeated every 4 weeks. Objective tumor shrinkage was observed after 4 cycles of this regimen. In addition, the elevated serum cytokeratin 19 fragment level decreased, and the patient's back pain was relieved without any analgesics. Although he experienced grade 4 neutropenia and required granulocyte colony-stimulating factor (G-CSF) injections, the severity of thrombocytopenia and of nonhematological toxicities such as reversible neuropathy did not exceed grade 1 during the treatment. To our knowledge, this is the first report to demonstrate the efficacy of combination chemotherapy consisting of carboplatin and nab-paclitaxel against thymic carcinoma. This case report suggests that nab-paclitaxel in combination with carboplatin can be a favorable chemotherapy regimen for advanced thymic carcinoma.

  20. Previous bacterial infection affects textural quality parameters of heat-treated fillets from rainbow trout (Oncorhynchus mykiss)

    DEFF Research Database (Denmark)

    Ingerslev, Hans-Christian; Hyldig, Grethe; Przybylska, Dominika Alicja

    2012-01-01

    Sensory quality of fish meat is influenced by many parameters prior to slaughter. In the present study, it was examined whether previous infections or damage to the muscle tissue influence product quality parameters in fish. Fillets from rainbow trout (Oncorhynchus mykiss) reared in seawater... This article was the first to describe a correlation between previous infections in fish and changes in sensory-quality parameters. PRACTICAL APPLICATIONS: This work contributes knowledge about sensory-quality parameters of fish meat after recovery from infections and physical tissue damage. Because... the results demonstrate an influence on the texture from previous disease, the practical potential of the results is valuable for the aquaculture industry. In order to minimize the effects of previous diseases on the sensory quality regarding the texture, these fish should be processed as cold...

  1. Optimum off-line trace synchronization of computer clusters

    International Nuclear Information System (INIS)

    Jabbarifar, Masoume; Dagenais, Michel; Roy, Robert; Sendi, Alireza Shameli

    2012-01-01

    A tracing and monitoring framework produces detailed execution trace files for a system. Each trace file contains events with associated timestamps based on the local clock of their respective system, which are not perfectly synchronized. To monitor all behavior in multi-core distributed systems, a global time reference is required, thus the need for traces synchronization techniques. The synchronization is time consuming when there is a cluster of many computers. In this paper we propose an optimized technique to reduce the total synchronization time. Compared with related techniques that have been used on kernel level traces, this method improves the performance while maintaining a high accuracy. It uses the packet rate and the hop count as two major criteria to focus the computation on more accurate network links during synchronization. These criteria, tested in real-word experiments, were identified as most important features of a network. Furthermore, we present numerical and analytical evaluation results, and compare these with previous methods demonstrating the accuracy and the performance of the method.
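
    A minimal sketch of the two ingredients the abstract highlights, assuming a simple linear clock model and illustrative link records: a least-squares conversion between two hosts' clocks estimated from matched packet timestamps, and the heuristic that ranks network links by packet rate and hop count so the synchronization effort is focused on the most accurate links.

      import numpy as np

      def fit_clock_conversion(t_send, t_recv):
          """Least-squares fit of t_recv ~ drift * t_send + offset over matched packets."""
          drift, offset = np.polyfit(np.asarray(t_send), np.asarray(t_recv), deg=1)
          return drift, offset

      def rank_links(links):
          """Prefer links with many packets per second and few network hops."""
          return sorted(links, key=lambda l: (-l["packet_rate"], l["hop_count"]))

      links = [{"name": "a-b", "packet_rate": 1200.0, "hop_count": 1},
               {"name": "a-c", "packet_rate": 300.0, "hop_count": 3}]
      print([l["name"] for l in rank_links(links)])   # ['a-b', 'a-c']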

  2. A Fixpoint-Based Calculus for Graph-Shaped Computational Fields

    DEFF Research Database (Denmark)

    Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo

    2015-01-01

    topology is represented by a graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs represent interaction capabilities between nodes. We propose a calculus where computation is strictly synchronous and corresponds to sequential computations of fixpoints in the graph-shaped field. Under some conditions, those fixpoints can be computed by synchronised iterations, where in each iteration the attributes of a node are updated based on the attributes of the neighbours in the previous iteration. Basic constructs are reminiscent of the semiring μ-calculus, a semiring-valued generalisation of the modal μ-calculus, which provides a flexible mechanism to specify the neighbourhood range (according to path formulae) and the way attributes should be combined (through semiring operators). Additional control-flow constructs allow one to conveniently structure the fixpoint computations. We...
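
    A minimal sketch of a synchronised fixpoint iteration over a graph-shaped field, using the (min, +) semiring so that the least fixpoint is the distance-to-source field; each round recomputes every node's attribute from the neighbours' previous-round attributes. The graph encoding is an illustrative assumption, not the calculus' actual syntax.

      # Synchronised fixpoint iteration on a graph-shaped field over the
      # (min, +) semiring: the fixpoint of d[v] = 0 if v == source else
      # min over arcs (u, v) of d[u] + w(u, v).
      INF = float("inf")

      def min_plus_fixpoint(arcs, source):
          """arcs: {(u, v): weight}; returns the least fixpoint as a dict."""
          nodes = {u for u, _ in arcs} | {v for _, v in arcs}
          d = {v: (0.0 if v == source else INF) for v in nodes}
          while True:
              d_next = {
                  v: min([0.0] if v == source else
                         [d[u] + w for (u, x), w in arcs.items() if x == v] or [INF])
                  for v in nodes
              }
              if d_next == d:          # fixpoint reached
                  return d
              d = d_next

      print(min_plus_fixpoint({("s", "a"): 1.0, ("a", "b"): 2.0, ("s", "b"): 5.0}, "s"))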

  3. Correction of facial and mandibular asymmetry using a computer aided design/computer aided manufacturing prefabricated titanium implant.

    Science.gov (United States)

    Watson, Jason; Hatamleh, Muhanad; Alwahadni, Ahed; Srinivasan, Dilip

    2014-05-01

    Patients with significant craniofacial asymmetry may have functional problems associated with their occlusion and aesthetic concerns related to the imbalance in soft and hard tissue profiles. This report details a case of facial asymmetry secondary to left mandible angle deficiency due to undergoing previous radiotherapy. We describe the correction of the bony deformity using computer aided design/computer aided manufacturing custom-made titanium onlay using novel direct metal laser sintering. The direct metal laser sintering onlay proved a very accurate operative fit and showed a good aesthetic correction of the bony defect with no reported complications postoperatively. It is a useful low-morbidity technique, and there is no resorption or associated donor-site complications.

  4. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235 U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
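
    The speed-up reported above comes from delegating the matrix-matrix product to a vendor-optimized BLAS (cuBLAS on the GPU, MKL on the multicore CPUs). As a rough illustration, NumPy's @ operator dispatches to whatever BLAS library it was built against; the dimensions below are scaled down from the 16,000×20,000 case so the sketch runs quickly.

      import time
      import numpy as np

      m, k, n = 1600, 2000, 1600            # scaled-down matrix dimensions
      a = np.random.rand(m, k)
      b = np.random.rand(k, n)

      t0 = time.perf_counter()
      c = a @ b                             # dispatched to the BLAS dgemm routine
      print(f"GEMM ({m}x{k})({k}x{n}) took {time.perf_counter() - t0:.3f} s")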

  5. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)]

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The {sup 235}U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  6. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  7. Pattern recognition, neural networks, genetic algorithms and high performance computing in nuclear reactor diagnostics. Results and perspectives

    International Nuclear Information System (INIS)

    Dzwinel, W.; Pepyolyshev, N.

    1996-01-01

    The main goal of this paper is to present our experience in developing the diagnostic system for the IBR-2 (Dubna, Russia) nuclear reactor. The authors show the principal results of the system modifications made so that it works more reliably and much faster. The former requires the adoption of new data-processing techniques; the latter, implementation of the newest computational facilities. The results of applying clustering techniques and a method of visualizing the multi-dimensional information directly on the operator display are presented. The experiences with neural nets, used for prediction of the reactor operation, are discussed. Genetic algorithms were also tested, to reduce the quantity of data and to extract the most informative components of the analyzed spectra. (authors)

  8. Micro-Ramp Flow Control for Oblique Shock Interactions: Comparisons of Computational and Experimental Data

    Science.gov (United States)

    Hirt, Stephanie M.; Reich, David B.; O'Connor, Michael B.

    2012-01-01

    Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15- by 15-cm supersonic wind tunnel at the NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0 varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the microramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.
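
    The boundary-layer parameters named above follow from the velocity profile by standard integrals: delta* = ∫(1 - u/U)dy, theta = ∫(u/U)(1 - u/U)dy, and H = delta*/theta. The sketch below evaluates them for a 1/7th-power-law profile, an illustrative stand-in for wind-tunnel or CFD data.

      import numpy as np

      def boundary_layer_params(y, u, u_edge):
          """Displacement thickness, momentum thickness, incompressible shape factor."""
          ratio = u / u_edge
          delta_star = np.trapz(1.0 - ratio, y)          # displacement thickness
          theta = np.trapz(ratio * (1.0 - ratio), y)     # momentum thickness
          return delta_star, theta, delta_star / theta   # H = delta*/theta

      y = np.linspace(0.0, 1.0, 200)                     # wall-normal coordinate
      u = y ** (1.0 / 7.0)                               # 1/7th-power-law profile
      d_star, theta, shape = boundary_layer_params(y, u, u_edge=1.0)
      print(f"delta* = {d_star:.4f}, theta = {theta:.4f}, H = {shape:.3f}")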

  9. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  10. Meningitis tuberculosa: Clinical findings and results of cranial computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Trautmann, M.; Loddenkemper, R.; Hoffmann, H.G.

    1982-10-01

    Guided by nine of our own observations between 1977 and 1981, new diagnostic facilities in tuberculous meningitis are discussed. For differentiation from viral meningitis, measurement of CSF lactic acid concentration in addition to that of CSF glucose has proved to be of value in recent years. In accordance with the literature, two cases of this series which were examined for CSF lactic acid concentration showed markedly elevated levels of 8.4 and 10.4 mmol/l, respectively. In contrast, in viral meningitis values of less than 3.5 mmol/l are usually found. Additionally, the presence of hypochloremia and hyponatremia, which could be demonstrated in 6 of our 9 patients, may raise the suspicion of tuberculous etiology. In the series presented, cranial computed tomography was of greatest diagnostic value, enabling the diagnosis of hydrocephalus internus in 5, and basal arachnoiditis in 2 cases.

  11. Functional computed tomography imaging of tumor-induced angiogenesis. Preliminary results of new tracer kinetic modeling using a computer discretization approach

    International Nuclear Information System (INIS)

    Kaneoya, Katsuhiko; Ueda, Takuya; Suito, Hiroshi

    2008-01-01

    The aim of this study was to establish functional computed tomography (CT) imaging as a method for assessing tumor-induced angiogenesis. Functional CT imaging was mathematically analyzed for 14 renal cell carcinomas by means of two-compartment modeling using a computer-discretization approach. The model incorporated diffusible kinetics of contrast medium including leakage from the capillary to the extravascular compartment and back-flux to the capillary compartment. The correlations between functional CT parameters [relative blood volume (rbv), permeability 1 (Pm1), and permeability 2 (Pm2)] and histopathological markers of angiogenesis [microvessel density (MVD) and vascular endothelial growth factor (VEGF)] were statistically analyzed. The modeling was successfully performed, showing similarity between the mathematically simulated curve and the measured time-density curve. There were significant linear correlations between MVD grade and Pm1 (r=0.841, P=0.001) and between VEGF grade and Pm2 (r=0.804, P=0.005) by Pearson's correlation coefficient. This method may be a useful tool for the assessment of tumor-induced angiogenesis. (author)
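
    A minimal sketch of the kind of discretized two-compartment model the abstract describes, with leakage from the capillary to the extravascular compartment and back-flux to the capillary. The forward-Euler scheme, the parameter names (k1, k2, vb), and the Gaussian arterial input function are illustrative assumptions, not the authors' actual discretization.

      import numpy as np

      def two_compartment_curve(t, aif, k1, k2, vb):
          """Simulated tissue time-density curve: vascular pool + leakage term."""
          ce = np.zeros_like(t)                # extravascular concentration
          dt = t[1] - t[0]
          for i in range(1, len(t)):
              # dCe/dt = k1 * Ca(t) - k2 * Ce(t)  (leakage minus back-flux)
              ce[i] = ce[i - 1] + dt * (k1 * aif[i - 1] - k2 * ce[i - 1])
          return vb * aif + ce                 # blood-volume-weighted signal

      t = np.linspace(0.0, 60.0, 601)          # seconds
      aif = np.exp(-((t - 15.0) / 5.0) ** 2)   # toy Gaussian arterial input
      curve = two_compartment_curve(t, aif, k1=0.02, k2=0.01, vb=0.05)
      print(f"peak tissue enhancement: {curve.max():.4f}")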

  12. Development of Computer Science Disciplines - A Social Network Analysis Approach

    OpenAIRE

    Pham, Manh Cuong; Klamma, Ralf; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science places significant weight on conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of the important journals, it lacks computer science conference and ...

  13. National Rates of Uterine Rupture are not Associated with Rates of Previous Caesarean Delivery

    DEFF Research Database (Denmark)

    Colmorn, Lotte B.; Langhoff-Roos, Jens; Jakobsson, Maija

    2017-01-01

    BACKGROUND: Previous caesarean delivery and intended mode of delivery after caesarean are well-known individual risk factors for uterine rupture. We examined if different national rates of uterine rupture are associated with differences in national rates of previous caesarean delivery and intended... % of all Nordic deliveries. Information on the comparison population was retrieved from the national medical birth registers. Incidence rate ratios by previous caesarean delivery and intended mode of delivery after caesarean were modelled using Poisson regression. RESULTS: The incidence of uterine rupture was 7.8/10 000 in Finland and 4.6/10 000 in Denmark. Rates of caesarean (21.3%) and previous caesarean deliveries (11.5%) were highest in Denmark, while the rate of intended vaginal delivery after caesarean was highest in Finland (72%). National rates of uterine rupture were not associated...

  14. Phase transitions enable computational universality in neuristor-based cellular automata

    International Nuclear Information System (INIS)

    Pickett, Matthew D; Stanley Williams, R

    2013-01-01

    We recently demonstrated that Mott memristors, two-terminal devices that exhibit threshold switching via an insulator to conductor phase transition, can serve as the active components necessary to build a neuristor, a biomimetic threshold spiking device. Here we extend those results to demonstrate, in simulation, neuristor-based circuits capable of performing general Boolean logic operations. We additionally show that these components can be used to construct a one-dimensional cellular automaton, rule 137, previously proven to be universal. This proof-of-principle shows that localized phase transitions can perform spiking computation, which is of particular interest for neuromorphic hardware. (paper)

  15. GUI program to compute probabilistic seismic hazard analysis

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Chi, H. C.; Cho, J. C.; Park, J. H.; Kim, K. G.; Im, I. S.

    2006-12-01

    The development of a program to compute probabilistic seismic hazard, based on a Graphical User Interface (GUI), has been completed. The main program consists of three parts: the data input processes, the probabilistic seismic hazard analysis, and the result output processes. The probabilistic seismic hazard analysis needs various input data representing attenuation formulae, the seismic zoning map, and the earthquake event catalog. In previous text-based programs, the input procedure took much time for data preparation, and the data could not be checked directly on screen to guard against erroneous input. The new program simplifies the input process and enables the data to be checked graphically, minimizing human error as far as possible.
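
    The core computation behind such a program is the hazard integral: the annual rate of exceeding a ground-motion level is the sum, over the events implied by the catalog and zoning, of each event's rate times the probability that its (lognormally distributed) ground motion exceeds that level. The sketch below uses a toy attenuation law and a few scenario events; all coefficients are illustrative assumptions.

      import math

      def annual_exceedance_rate(pga_threshold_g, sources):
          """Sum over sources of rate * P(PGA > threshold | magnitude, distance)."""
          total = 0.0
          for rate, magnitude, distance_km in sources:
              # Toy attenuation: ln PGA = -3.5 + 1.0*M - 1.5*ln(R + 10), sigma = 0.6
              ln_median = -3.5 + 1.0 * magnitude - 1.5 * math.log(distance_km + 10.0)
              z = (math.log(pga_threshold_g) - ln_median) / 0.6
              p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # 1 - Phi(z)
              total += rate * p_exceed
          return total

      # (annual rate, magnitude, distance to site in km) for a few scenario events
      sources = [(0.10, 5.5, 30.0), (0.02, 6.5, 50.0), (0.005, 7.2, 80.0)]
      for pga in (0.05, 0.1, 0.2):
          print(f"P(PGA > {pga:g} g) per year ~ {annual_exceedance_rate(pga, sources):.4f}")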

  16. X-Ray Computed Tomography of Tranquility Base Moon Rock

    Science.gov (United States)

    Jones, Justin S.; Garvin, Jim; Viens, Mike; Kent, Ryan; Munoz, Bruno

    2016-01-01

    X-ray Computed Tomography (CT) was used for the first time on the Apollo 11 Lunar Sample number 10057.30, which had been previously maintained by the White House, then transferred back to NASA under the care of Goddard Space Flight Center. Results from this analysis show detailed images of the internal structure of the moon rock, including vesicles (pores), crystal needles, and crystal bundles. These crystals, possibly the common mineral ilmenite, are found in abundance and with random orientation. Future work, in particular a greater understanding of these crystals and their formation, may lead to a more in-depth understanding of the lunar surface evolution and mineral content.

  17. Integration of computer-aided diagnosis/detection (CAD) results in a PACS environment using CAD-PACS toolkit and DICOM SR

    International Nuclear Information System (INIS)

    Le, Anh H.T.; Liu, Brent; Huang, H.K.

    2009-01-01

    Picture Archiving and Communication System (PACS) is a mature technology in health care delivery for daily clinical imaging service and data management. Computer-aided detection and diagnosis (CAD) utilizes computer methods to obtain quantitative measurements from medical images and clinical information to assist clinicians to assess a patient's clinical state more objectively. CAD needs image input and related information from PACS to improve its accuracy, and PACS benefits from having CAD results online and available at the PACS workstation as a second reader to assist physicians in the decision-making process. Currently, these two technologies remain two separate, independent systems with only minimal system integration. This paper describes a universal method to integrate CAD results with PACS in its daily clinical environment. The method is based on Health Level 7 (HL7) and Digital Imaging and Communications in Medicine (DICOM) standards, and Integrating the Healthcare Enterprise (IHE) workflow profiles. In addition, the integration method is Health Insurance Portability and Accountability Act (HIPAA) compliant. The paper presents (1) the clinical value and advantages of integrating CAD results in a PACS environment, (2) DICOM Structured Reporting formats and some important IHE workflow profiles utilized in the system integration, (3) the methodology using the CAD-PACS integration toolkit, and (4) clinical examples with step-by-step workflows of this integration. (orig.)

  18. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

    Full Text Available With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  19. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  20. Cloud Computing:Strategies for Cloud Computing Adoption

    OpenAIRE

    Shimba, Faith

    2010-01-01

    The advent of cloud computing in recent years has sparked an interest from different organisations, institutions and users to take advantage of web applications. This is a result of the new economic model for the Information Technology (IT) department that cloud computing promises. The model promises a shift from an organisation required to invest heavily for limited IT resources that are internally managed, to a model where the organisation can buy or rent resources that are managed by a clo...

  1. Computation of the power spectrum in chaotic ¼λφ⁴ inflation

    International Nuclear Information System (INIS)

    Rojas, Clara; Villalba, Víctor M.

    2012-01-01

    The phase-integral approximation devised by Fröman and Fröman is used for computing cosmological perturbations in the quartic chaotic inflationary model. The phase-integral formulas for the scalar power spectrum are explicitly obtained up to the fifth order of the phase-integral approximation. As in previous reports (Rojas 2007b, 2007c and 2009), we point out that the accuracy of the phase-integral approximation compares favorably with the numerical results and those obtained using the slow-roll and uniform approximation methods.

  2. User-customized brain computer interfaces using Bayesian optimization.

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K; Bashashati, Ali

    2016-04-01

    The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject's brain characteristics. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
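
    A minimal Bayesian-optimization loop in the spirit described above, assuming a toy 2-D search space (a frequency-band edge and a time-window length) and a stand-in objective in place of cross-validated classifier error; a real pipeline would evaluate a classifier on EEG features for each proposed hyper-parameter set.

      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(0)

      def objective(x):                 # stand-in for cross-validated error
          band_low, win_len = x
          return (band_low - 8.0) ** 2 / 16.0 + (win_len - 2.0) ** 2

      bounds = np.array([[4.0, 30.0], [0.5, 4.0]])               # Hz, seconds
      X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial design
      y = np.array([objective(x) for x in X])

      for _ in range(20):
          gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
          cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
          mu, sigma = gp.predict(cand, return_std=True)
          imp = y.min() - mu                                     # improvement
          z = imp / (sigma + 1e-9)
          ei = imp * norm.cdf(z) + sigma * norm.pdf(z)           # expected improvement
          x_next = cand[np.argmax(ei)]
          X = np.vstack([X, x_next])
          y = np.append(y, objective(x_next))

      print("best hyper-parameters:", X[np.argmin(y)], "objective:", y.min())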

  3. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results from the parametric calculations performed by the programmes PIN-W and RODQ2D, rather than on a detailed theoretical description of the codes. Several new optional correlations for the UO2 thermal conductivity with the degradation effect caused by burnup were implemented into both codes. Examples of the performed calculations document differences between previous and new versions of both programmes. Some recommendations for further development of the codes are given in conclusion. (author). 6 refs, 9 figs.

  4. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is on the analysis of the results from the parametric calculations performed by the programmes PIN-W and RODQ2D, rather than on a detailed theoretical description of the codes. Several new optional correlations for the UO2 thermal conductivity with the degradation effect caused by burnup were implemented into both codes. Examples of the performed calculations document differences between previous and new versions of both programmes. Some recommendations for further development of the codes are given in conclusion. (author). 6 refs, 9 figs

  5. Renormalization of vacuum expectation values in spontaneously broken gauge theories: two-loop results

    International Nuclear Information System (INIS)

    Sperling, Marcus; Stöckinger, Dominik; Voigt, Alexander

    2014-01-01

    We complete the two-loop calculation of β-functions for vacuum expectation values (VEVs) in gauge theories by the missing O(g⁴) terms. The full two-loop results are presented for generic and supersymmetric theories up to two-loop level in arbitrary R_ξ gauge. The results are obtained by means of a scalar background field, identical to our previous analysis. As a by-product, the two-loop scalar anomalous dimension for generic supersymmetric theories is presented. As an application we compute the β-functions for VEVs and tan β in the MSSM, NMSSM, and E₆SSM.

  6. A computational fluid dynamics simulation framework for ventricular catheter design optimization.

    Science.gov (United States)

    Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A

    2017-11-10

    OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using

  7. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  8. The Relation between Accounting Result and Tax Result in the Case of the Profit Tax

    Directory of Open Access Journals (Sweden)

    Băcanu Mihaela-Nicoleta

    2017-01-01

    Full Text Available Accounting and taxation are two connected domains in Romania. The proof that these are connected is the computation of the profit tax, for which the tax result is computed based on the accounting result. The scope of the paper is to present the relation between the accounting result and the tax result. There is a direct relation but also an indirect relation between the two results, taking into consideration the way of computing the tax result, but also the professional judgment applied when revenues and expenses are recorded in the accounting register. The paper also analyzes which of the two results influences the other.
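
    In sketch form, the direct relation reads: tax result = accounting result + non-deductible expenses - non-taxable revenues, with the profit tax applied to a positive tax result. The 16% rate and the figures below are assumptions for illustration; the abstract itself fixes neither.

      # Direct relation between the accounting (book) result and the tax result.
      # The 16% Romanian profit tax rate and all figures are illustrative assumptions.
      def tax_result(accounting_result, non_deductible_expenses, non_taxable_revenues):
          """Tax result = accounting result + non-deductible expenses - non-taxable revenues."""
          return accounting_result + non_deductible_expenses - non_taxable_revenues

      accounting = 100_000.0
      fiscal = tax_result(accounting, non_deductible_expenses=12_000.0,
                          non_taxable_revenues=5_000.0)
      profit_tax = 0.16 * max(fiscal, 0.0)     # assumed 16% rate on positive tax result
      print(f"tax result: {fiscal:,.0f}; profit tax: {profit_tax:,.0f}")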

  9. VBAC Scoring: Successful vaginal delivery in previous one caesarean section in induced labour

    International Nuclear Information System (INIS)

    Raja, J.F.; Bangash, K.T.; Mahmud, G.

    2013-01-01

    Objective: To develop a scoring system for the prediction of successful vaginal birth after caesarean section, following induction of labour with intra-vaginal E2 gel (Glandin). Methods: The cross-sectional study was conducted from January 2010 to August 2011 at the Pakistan Institute of Medical Sciences in Islamabad. Trial of labour after one previous caesarean section, undergoing induction with intra-vaginal E2 gel, was attempted in 100 women. They were scored according to six variables: maternal age; gestation; indication of the previous caesarean; history of vaginal birth either before or after the previous caesarean; Bishop score; and body mass index. Multivariate and univariate logistic regression analysis was used to develop the scoring system. Results: Of the total, 67 (67%) women delivered vaginally, while 33 (33%) ended in repeat caesarean delivery. Among the subjects, 55 (55%) women had no history of vaginal delivery either before or after the previous caesarean section; 15 (15%) had a history of vaginal births both before and after the previous caesarean; while 30 (30%) had a vaginal delivery only after the previous caesarean section. Rates of successful vaginal birth after caesarean increased from 38% in women having a score of 0-3 to 58% in patients scoring 4-6. Among those having a score of 7-9 and 10-12, the success rates were 71% and 86% respectively. Conclusion: Increasing scores correlated with the increasing probability of vaginal birth after caesarean undergoing induction of labour. The admission VBAC scoring system is useful in counselling women with a previous caesarean for the option of induction of labour or repeat caesarean delivery. (author)
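
    A sketch of how such an admission score maps to the reported success bands. Only the six variables, the 0-12 range, and the per-band success rates come from the abstract; the per-variable point split used in the example is an invented assumption.

      # Map a 0-12 admission score to the VBAC success rates reported above.
      def vbac_success_band(score):
          for upper, rate in ((3, "38%"), (6, "58%"), (9, "71%"), (12, "86%")):
              if score <= upper:
                  return rate
          raise ValueError("score must be between 0 and 12")

      # Example: each of the six variables contributes 0-2 points (assumed split,
      # not specified in the abstract).
      points = {"maternal_age": 1, "gestation": 2, "previous_cs_indication": 1,
                "prior_vaginal_birth": 2, "bishop_score": 1, "bmi": 1}
      score = sum(points.values())
      print(f"score {score} -> predicted VBAC success ~ {vbac_success_band(score)}")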

  10. Computational complexity of the landscape II-Cosmological considerations

    Science.gov (United States)

    Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire

    2018-05-01

    We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.

  11. Impulsivity moderates the relationship between previous quit failure and cue-induced craving.

    Science.gov (United States)

    Erblich, Joel; Michalowski, Alexandra

    2015-12-01

    Poor inhibitory control has been shown to be an important predictor of relapse to a number of drugs, including nicotine. Indeed, smokers who exhibit higher levels of impulsivity are thought to have impaired regulation of urges to smoke, and previous research has suggested that impulsivity may moderate cue-induced cigarette cravings. To that end, we conducted a study to evaluate the interplay between failed smoking cessation, cue-induced craving, and impulsivity. Current smokers (n=151) rated their cigarette cravings before and after laboratory exposure to smoking cues, and completed questionnaires assessing impulsivity and previous failed quit attempts. Findings indicated that shorter duration of previous failed quit attempts was related to higher cue-induced cigarette craving, especially among smokers with higher levels of impulsivity. Results underscore the importance of considering trait impulsivity as a factor in better understanding the management of cue-induced cravings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
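
    A minimal sketch of the channel-energy decision rule described above, for one analysis window from the three electrodes. The thresholds and the fallback used to separate the two eyebrow movements are illustrative; the published algorithm additionally uses spectral features and cross-channel correlation, omitted here for brevity.

      import numpy as np

      CHANNELS = ("right_frontalis", "left_temporalis", "right_temporalis")

      def classify_window(emg):
          """emg: (3, n_samples) array for one window -> cursor command."""
          energy = np.sum(np.square(emg), axis=1)
          dominant = int(np.argmax(energy))
          # Simultaneous left & right jaw clench: both temporalis channels active
          # with comparable energy -> left-click.
          if (dominant != 0 and energy[1] > 0.8 * energy[2]
                  and energy[2] > 0.8 * energy[1]):
              return "left-click"
          if dominant == 1:
              return "left"            # left jaw clench
          if dominant == 2:
              return "right"           # right jaw clench
          # Frontalis dominant: eyebrows up vs down would be separated spectrally;
          # total energy is used here as a crude stand-in.
          return "up" if energy[0] > 1.5 * (energy[1] + energy[2]) else "down"

      window = np.vstack([np.random.randn(200) * g for g in (0.2, 1.0, 0.95)])
      print(classify_window(window))   # expected: "left-click"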

  13. `95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes, and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The finishing of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author).

  14. '95 computer system operation project

    International Nuclear Information System (INIS)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes, and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The finishing of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  15. Meningitis tuberculosa: Clinical findings and results of cranial computed tomography

    International Nuclear Information System (INIS)

    Trautmann, M.; Loddenkemper, R.; Hoffmann, H.G.; Krankenhaus Zehlendorf, Berlin; Allgemeines Krankenhaus Altona

    1982-01-01

    Guided by nine of our own observations between 1977 and 1981, new diagnostic facilities in tuberculous meningitis are discussed. For differentiation from viral meningitis, measurement of CSF lactic acid concentration in addition to that of CSF glucose has proved to be of value in recent years. In accordance with the literature, two cases of this series which were examined for CSF lactic acid concentration showed markedly elevated levels of 8.4 and 10.4 mmol/l, respectively. In contrast, in viral meningitis values of less than 3.5 mmol/l are usually found. Additionally, the presence of hypochloremia and hyponatremia, which could be demonstrated in 6 of our 9 patients, may raise the suspicion of tuberculous etiology. In the series presented, cranial computed tomography was of greatest diagnostic value, enabling the diagnosis of hydrocephalus internus in 5, and basal arachnoiditis in 2 cases. (orig.)

  16. Conceptual design of pipe whip restraints using interactive computer analysis

    International Nuclear Information System (INIS)

    Rigamonti, G.; Dainora, J.

    1975-01-01

    Protection against pipe break effects necessitates a complex interaction between failure mode analysis, piping layout, and structural design. Many iterations are required to finalize structural designs and equipment arrangements. The magnitude of the pipe break loads transmitted by the pipe whip restraints to structural embedments precludes the application of conservative design margins. A simplified analytical formulation of the nonlinear dynamic problems associated with pipe whip has been developed and applied using interactive computer analysis techniques. In the dynamic analysis, the restraint and the associated portion of the piping system are modeled using the finite element lumped mass approach to properly reflect the dynamic characteristics of the piping/restraint system. The analysis is performed as a series of piecewise linear increments. Each of these linear increments is terminated by either the formation of plastic conditions or the closing/opening of gaps. The stiffness matrix is modified to reflect the changed stiffness characteristics of the system, and the analysis is re-started using the previous boundary conditions. The formation of yield hinges is related to the plastic moment of the section, and unloading paths are automatically considered. The conceptual design of the piping/restraint system is performed using interactive computer analysis. The application of the simplified analytical approach with interactive computer analysis results in an order of magnitude reduction in engineering time and computer cost. (Auth.)

  17. How to prevent type 2 diabetes in women with previous gestational diabetes?

    DEFF Research Database (Denmark)

    Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise

    2017-01-01

    OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence of effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review... of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated at -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior... to no intervention in the prevention of T2DM among women with previous GDM...

  18. Outcome of trial of scar in patients with previous caesarean section

    International Nuclear Information System (INIS)

    Khan, B.; Bashir, R.; Khan, W.

    2016-01-01

    Medical evidence indicates that 60-80% of women can achieve vaginal delivery after a previous lower segment caesarean section. Proper selection of patients for trial of scar and vigilant monitoring during labour will achieve successful maternal and perinatal outcomes. The objective of our study is to establish that vaginal delivery after one caesarean section has a high success rate in patients with one previous caesarean section for a non-recurrent cause. Methods: The study was conducted in the Gynae-B Unit of Ayub Teaching Hospital, Abbottabad. All labouring patients, during the study period of five years, with one previous caesarean section and between 37 and 41 weeks of gestation for a non-recurrent cause were included in the study. Data were recorded on a special proforma designed for the purpose. Patients who had a previous classical caesarean section, more than one caesarean section, a previous caesarean section with severe wound infection, transverse lie, or placenta previa in the present pregnancy were excluded. Foetal macrosomia (wt>4 kg) and severe IUGR with compromised blood flow on Doppler in the present pregnancy were also not considered suitable for the study. Patients who had any absolute contraindication for vaginal delivery were also excluded. Results: There were 12505 deliveries during the study period. Total vaginal deliveries were 8790 and total caesarean sections were 3715, giving a caesarean section rate of 29.7%. During this period, 764 patients were given a trial of scar and 535 delivered successfully vaginally (70%). Women who presented with spontaneous onset of labour were more likely to deliver vaginally (74.8%) as compared to the induction group (27.1%). Conclusion: Trial of vaginal birth after caesarean (VBAC) in selected cases has great importance in the present era of the rising rate of primary caesarean section. (author)

  19. Reoperative sentinel lymph node biopsy after previous mastectomy.

    Science.gov (United States)

    Karam, Amer; Stempel, Michelle; Cody, Hiram S; Port, Elisa R

    2008-10-01

    Sentinel lymph node (SLN) biopsy is the standard of care for axillary staging in breast cancer, but many clinical scenarios questioning the validity of SLN biopsy remain. Here we describe our experience with reoperative-SLN (re-SLN) biopsy after previous mastectomy. Review of the SLN database from September 1996 to December 2007 yielded 20 procedures done in the setting of previous mastectomy. SLN biopsy was performed using radioisotope with or without blue dye injection superior to the mastectomy incision, in the skin flap in all patients. In 17 of 20 patients (85%), re-SLN biopsy was performed for local or regional recurrence after mastectomy. Re-SLN biopsy was successful in 13 of 20 patients (65%) after previous mastectomy. Of the 13 patients, 2 had positive re-SLN, and completion axillary dissection was performed, with 1 having additional positive nodes. In the 11 patients with negative re-SLN, 2 patients underwent completion axillary dissection demonstrating additional negative nodes. One patient with a negative re-SLN experienced chest wall recurrence combined with axillary recurrence 11 months after re-SLN biopsy. All others remained free of local or axillary recurrence. Re-SLN biopsy was unsuccessful in 7 of 20 patients (35%). In three of seven patients, axillary dissection was performed, yielding positive nodes in two of the three. The remaining four of seven patients all had previous modified radical mastectomy, so underwent no additional axillary surgery. In this small series, re-SLN was successful after previous mastectomy, and this procedure may play some role when axillary staging is warranted after mastectomy.

  20. State-of-the-art and dissemination of computational tools for drug-design purposes: a survey among Italian academics and industrial institutions.

    Science.gov (United States)

    Artese, Anna; Alcaro, Stefano; Moraca, Federica; Reina, Rocco; Ventura, Marzia; Costantino, Gabriele; Beccari, Andrea R; Ortuso, Francesco

    2013-05-01

    During the first edition of the Computationally Driven Drug Discovery meeting, held in November 2011 at Dompé Pharma (L'Aquila, Italy), a questionnaire regarding the diffusion and the use of computational tools for drug-design purposes in both academia and industry was distributed among all participants. This is a follow-up of a previously reported investigation carried out among a few companies in 2007. The new questionnaire implemented five sections dedicated to: research group identification and classification; 18 different computational techniques; software information; hardware data; and economical business considerations. In this article, together with a detailed history of the different computational methods, a statistical analysis of the survey results that enabled the identification of the prevalent computational techniques adopted in drug-design projects is reported and a profile of the computational medicinal chemist currently working in academia and pharmaceutical companies in Italy is highlighted.

  1. Influence of previous administration of trans-phenylcyclopropylamine on radioprotective and hypothermic effects of serotonin

    International Nuclear Information System (INIS)

    Misustova, J.; Hosek, B.; Novak, L.; Kautska, J.

    1978-01-01

    The influence of previous administration of trans-phenylcyclopropylamine (t-PCPA) on the radioprotective and hypothermic effects of serotonin was studied in male mice of the H strain, which were given t-PCPA at a dose of 4 mg/kg intraperitoneally 2 or 7 hours before the application of serotonin (40 mg/kg, i.p.). The time course of protection was studied for exposures of 800 and 900 R. The results showed that previous administration of t-PCPA does not alter the short-term protective effect of serotonin, but that it significantly prolongs the time course of protection. The administration of t-PCPA also affects the onset speed and the duration of the serotonin-induced hypothermic reaction. The observed correlation between the prolongation of the radioprotective and hypothermic effects of serotonin induced by previous application of t-PCPA supports the existence of a mutual relationship between changes in energy metabolism and the radioresistance of the organism. (author)

  2. Effectiveness of disinfection with alcohol 70% (w/v) of contaminated surfaces not previously cleaned

    Directory of Open Access Journals (Sweden)

    Maurício Uchikawa Graziano

    2013-04-01

    Full Text Available OBJECTIVE: To evaluate the disinfectant effectiveness of alcohol 70% (w/v), using friction, without previous cleaning, on work surfaces, as a concurrent disinfecting procedure in Health Services. METHOD: An experimental, randomized and single-blinded laboratory study was undertaken. The samples were enamelled surfaces, intentionally contaminated with Serratia marcescens microorganisms ATCC 14756 at 10⁶ CFU/mL with 10% of human saliva added, and were submitted to the procedure of disinfection WITHOUT previous cleaning. The results were compared to disinfection preceded by cleaning. RESULTS: There was a reduction of six logarithms of the initial microbial population, equal in the groups WITH and WITHOUT previous cleaning (p=0.440), and a residual microbial load ≤ 10² CFU. CONCLUSION: The research demonstrated the acceptability of the practice evaluated, bringing an important response to the area of health, in particular to Nursing, which most undertakes procedures of concurrent cleaning/disinfecting of these work surfaces.

  4. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  5. Automated Testing Infrastructure and Result Comparison for Geodynamics Codes

    Science.gov (United States)

    Heien, E. M.; Kellogg, L. H.

    2013-12-01

    The geodynamics community uses a wide variety of codes on a wide variety of both software and hardware platforms to simulate geophysical phenomena. These codes are generally variants of finite difference or finite element calculations involving Stokes flow or wave propagation. A significant problem is that codes of even low complexity will return different results depending on the platform due to slight differences in hardware, software, compiler, and libraries. Furthermore, changes to the codes during development may affect solutions in unexpected ways such that previously validated results are altered. The Computational Infrastructure for Geodynamics (CIG) is funded by the NSF to enhance the capabilities of the geodynamics community through software development. CIG has recently done extensive work in setting up an automated testing and result validation system based on the BaTLab system developed at the University of Wisconsin, Madison. This system uses 16 variants of Linux and Mac platforms on both 32 and 64-bit processors to test several CIG codes, and has also recently been extended to support testing on the XSEDE TACC (Texas Advanced Computing Center) Stampede cluster. In this work we overview the system design and demonstrate how automated testing and validation occurs and results are reported. We also examine several results from the system from different codes and discuss how changes in compilers and libraries affect the results. Finally we detail some result comparison tools for different types of output (scalar fields, velocity fields, seismogram data), and discuss within what margins different results can be considered equivalent.
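
    At the heart of such result comparison is a tolerance test: two runs are declared equivalent when their fields agree within combined relative and absolute margins. A minimal sketch for scalar fields, with illustrative tolerances:

      import numpy as np

      def fields_equivalent(reference, candidate, rtol=1e-6, atol=1e-12):
          """True if two scalar fields agree within combined tolerances."""
          reference = np.asarray(reference)
          candidate = np.asarray(candidate)
          if reference.shape != candidate.shape:
              return False
          return bool(np.allclose(reference, candidate, rtol=rtol, atol=atol))

      ref = np.linspace(0.0, 1.0, 1000)
      build_a = ref + 1e-9 * np.sin(ref)       # toy platform-A result
      build_b = ref + 5e-5 * np.sin(ref)       # toy platform-B result
      print(fields_equivalent(ref, build_a))   # True: within tolerance
      print(fields_equivalent(ref, build_b))   # False: exceeds tolerance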

  6. The Preliminary Study for Numerical Computation of 37 Rod Bundle in CANDU Reactor

    International Nuclear Information System (INIS)

    Jeon, Yu Mi; Bae, Jun Ho; Park, Joo Hwan

    2010-01-01

    A typical CANDU 6 fuel bundle consists of 37 fuel rods supported by two endplates and separated by spacer pads at various locations. In addition, bearing pads are brazed to each outer fuel rod with the aim of reducing the contact area between the fuel bundle and the pressure tube. Although the recent progress of CFD methods has provided opportunities for computing the thermal-hydraulic phenomena inside a fuel channel, it is still impossible to represent the detailed shape of a rod bundle in the numerical computation, owing to the large number of mesh cells and the memory capacity required. Hence, previous studies conducted numerical computations for smooth channels without considering the spacers and bearing pads. However, it is well known that these components are an important factor in predicting the pressure drop and heat transfer rate in a channel. In this study, a new computational method is proposed to handle complex geometries such as a fuel rod bundle. Before applying the method to the 37-rod bundle problem, its validity and accuracy are tested on a simple geometry. Based on the present results, the calculation for the fully shaped 37-rod bundle is scheduled for future work.

  7. 77 FR 70176 - Previous Participation Certification

    Science.gov (United States)

    2012-11-23

    ... participants' previous participation in government programs and ensure that the past record is acceptable prior to granting approval to participate... information is designed to be 100 percent automated and digital submission of all data and certifications is...

  8. Monitoring self-adaptive applications within edge computing frameworks: A state-of-the-art review

    NARCIS (Netherlands)

    Taherizadeh, S.; Jones, A.C.; Taylor, I.; Zhao, Z.; Stankovski, V.

    Recently, a promising trend has evolved from previous centralized computation to decentralized edge computing in the proximity of end-users to provide cloud applications. To ensure the Quality of Service (QoS) of such applications and Quality of Experience (QoE) for the end-users, it is necessary to

  9. Computers in Academic Architecture Libraries.

    Science.gov (United States)

    Willis, Alfred; And Others

    1992-01-01

    Computers are widely used in architectural research and teaching in U.S. schools of architecture. A survey of libraries serving these schools sought information on the emphasis placed on computers by the architectural curriculum, accessibility of computers to library staff, and accessibility of computers to library patrons. Survey results and…

  10. Initial results from a prototype whole-body photon-counting computed tomography system.

    Science.gov (United States)

    Yu, Z; Leng, S; Jorgensen, S M; Li, Z; Gutjahr, R; Chen, B; Duan, X; Halaweish, A F; Yu, L; Ritman, E L; McCollough, C H

    X-ray computed tomography (CT) with energy-discriminating capabilities presents exciting opportunities for increased dose efficiency and improved material decomposition analyses. However, due to constraints imposed by the inability of photon-counting detectors (PCD) to respond accurately at high photon flux, to date there has been no clinical application of PCD-CT. Recently, our lab installed a research prototype system consisting of two x-ray sources and two corresponding detectors, one using an energy-integrating detector (EID) and the other using a PCD. In this work, we report the first third-party evaluation of this prototype CT system using both phantoms and a cadaver head. The phantom studies demonstrated several promising characteristics of the PCD sub-system, including improved longitudinal spatial resolution and reduced beam-hardening artifacts, relative to the EID sub-system. More importantly, we found that the PCD sub-system offers excellent pulse pileup control at x-ray flux up to 550 mA at 140 kV, which corresponds to approximately 2.5×10¹¹ photons per cm² per second. In an anthropomorphic phantom and a cadaver head, the PCD sub-system provided image quality comparable to the EID sub-system at the same dose level. Our results demonstrate the potential of the prototype system to produce clinically acceptable images in vivo.

  11. Validation of the Online version of the Previous Day Food Questionnaire for schoolchildren

    Directory of Open Access Journals (Sweden)

    Raquel ENGEL

    Full Text Available ABSTRACT Objective: To evaluate the validity of the web-based version of the Previous Day Food Questionnaire Online for schoolchildren from the 2nd to 5th grades of elementary school. Methods: Participants were 312 schoolchildren aged 7 to 12 years at a public school in the city of Florianópolis, Santa Catarina, Brazil. Validity was assessed by sensitivity and specificity, as well as by agreement rates (match, omission, and intrusion rates) of food items reported by children on the Previous Day Food Questionnaire Online, using direct observation of foods/beverages eaten during school meals (mid-morning snack or afternoon snack) on the previous day as the reference. Multivariate multinomial logistic regression analysis was used to evaluate the influence of participants' characteristics on omission and intrusion rates. Results: The results showed adequate sensitivity (67.7%) and specificity (95.2%). There were low omission and intrusion rates of 22.8% and 29.5%, respectively, when all food items were analyzed. Pizza/hamburger showed the highest omission rate, whereas milk and milk products showed the highest intrusion rate. Participants who attended school in the afternoon shift had a higher probability of intrusion than their peers who attended school in the morning. Conclusion: The Previous Day Food Questionnaire Online showed satisfactory validity for the assessment of food intake at the group level in schoolchildren from the 2nd to 5th grades of public school.

  12. Computer-aided cleanup

    International Nuclear Information System (INIS)

    Williams, J.; Jones, B.

    1994-01-01

    In late 1992, the remedial investigation of operable unit 2 at the Department of Energy (DOE) Superfund site in Fernald, Ohio was in trouble. Despite years of effort--including an EPA-approved field-investigation work plan, 123 soil borings, 51 ground-water-monitoring wells, analysis of more than 650 soil and ground-water samples, and preparation of a draft remedial-investigation (RI) report--it was not possible to conclude whether contaminated material in the unit was related to ground-water contamination previously detected beneath and beyond the site boundary. Compounding the problem, the schedule for the RI, feasibility study, and record of decision for operable unit 2 was governed by a DOE-EPA consent agreement stipulating penalties of up to $10,000 per week for not meeting scheduled milestones--and time was running out. An advanced three-dimensional computer model confirmed that radioactive wastes dumped at the Fernald, Ohio Superfund site had contaminated ground water, after years of previous testing had been inconclusive. The system is now being used to aid feasibility and design work on the more-than-$1-billion remediation project.

  13. Special issue of Higher-Order and Symbolic Computation

    DEFF Research Database (Denmark)

    Danvy, Olivier; Sabry, Amr

    This issue of HOSC is dedicated to the general topic of continuations. It grew out of the third ACM SIGPLAN Workshop on Continuations (CW'01), which took place in London, UK on January 16, 2001 [3]. The notion of continuation is ubiquitous in many different areas of computer science, including...... and streamline Filinski's earlier work in the previous special issue of HOSC (then LISP and Symbolic Computation) that grew out of the first ACM SIGPLAN Workshop on Continuations [1, 2]. Hasegawa and Kakutani's article is the journal version of an article presented at FOSSACS 2001 and that received the EATCS...

  14. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and non-polynomial-time reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
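
    Gilbert's vertex-based recursion is not reproduced in the record; as a hedged stand-in, the Python sketch below uses the standard edge deletion-contraction recurrence for all-terminal reliability, which exhibits the same recursive style whose naive cost is exponential and which the invariant-keyed partitioning described above is meant to tame:

      def connected(vertices, edges):
          """Union-find connectivity check over the given vertex set."""
          parent = {v: v for v in vertices}
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          for u, v in edges:
              parent[find(u)] = find(v)
          return len({find(v) for v in vertices}) == 1

      def reliability(vertices, edges, p):
          """All-terminal reliability: each edge works independently with
          probability p. Exponential time without memoization on invariants."""
          if not connected(vertices, edges):
              return 0.0
          if not edges:
              return 1.0                                # a single vertex remains
          (u, v), rest = edges[0], edges[1:]
          deleted = reliability(vertices, rest, p)      # edge fails: weight 1 - p
          merged = []                                   # edge works: contract v into u
          for a, b in rest:
              a2, b2 = (u if a == v else a), (u if b == v else b)
              if a2 != b2:                              # drop self-loops from merging
                  merged.append((a2, b2))
          contracted = reliability(vertices - {v}, tuple(merged), p)
          return p * contracted + (1 - p) * deleted

      # Triangle on three vertices: R = 3p^2 - 2p^3, i.e. 0.972 at p = 0.9.
      print(reliability(frozenset({0, 1, 2}), ((0, 1), (1, 2), (0, 2)), 0.9))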

  15. Restructuring the CS 1 classroom: Examining the effect of open laboratory-based classes vs. closed laboratory-based classes on Computer Science 1 students' achievement and attitudes toward computers and computer courses

    Science.gov (United States)

    Henderson, Jean Foster

    The purpose of this study was to assess the effect of classroom restructuring involving computer laboratories on student achievement and student attitudes toward computers and computer courses. The effects of the targeted student attributes of gender, previous programming experience, math background, and learning style were also examined. The open lab-based class structure consisted of a traditional lecture class with a separate, unscheduled lab component in which lab assignments were completed outside of class; the closed lab-based class structure integrated a lab component within the lecture class so that half the class was reserved for lecture and half the class was reserved for students to complete lab assignments by working cooperatively with each other and under the supervision and guidance of the instructor. The sample consisted of 71 students enrolled in four intact classes of Computer Science I during the fall and spring semesters of the 2006--2007 school year at two southern universities: two classes were held in the fall (one at each university) and two classes were held in the spring (one at each university). A counterbalanced repeated measures design was used in which all students experienced both class structures for half of each semester. The order of control and treatment was rotated among the four classes. All students received the same amount of class and instructor time. A multivariate analysis of variance (MANOVA) via a multiple regression strategy was used to test the study's hypotheses. Although the overall MANOVA model was statistically significant, independent follow-up univariate analyses relative to each dependent measure found that the only significant research factor was math background: Students whose mathematics background was at the level of Calculus I or higher had significantly higher student achievement than students whose mathematics background was less than Calculus I. The results suggest that classroom structures that

  16. 78 FR 36089 - Airworthiness Directives; Hawker Beechcraft Corporation (Type Certificate Previously Held by...

    Science.gov (United States)

    2013-06-17

    ... Corporation (Type Certificate Previously Held by Raytheon Aircraft Company) Model BAe.125 Series 800A... structural damage or lead to divergent flutter, and result in loss of integrity of the wing, loss of control of the airplane...

  17. Expansion of the TFTR neutral beam computer system

    International Nuclear Information System (INIS)

    McEnerney, J.; Chu, J.; Davis, S.; Fitzwater, J.; Fleming, G.; Funk, P.; Hirsch, J.; Lagin, L.; Locasak, V.; Randerson, L.; Schechtman, N.; Silber, K.; Skelly, G.; Stark, W.

    1992-01-01

    Previous TFTR Neutral Beam computing support was based primarily on an Encore Concept 32/8750 computer within the TFTR Central Instrumentation, Control and Data Acquisition System (CICADA). The resources of this machine were 90% utilized during a 2.5-minute duty cycle. Both interactive and automatic processes were supported, with interactive response suffering at lower priority. Further, there were additional computing requirements and no cost-effective path for expansion within the Encore framework. Two elements provided a solution to these problems: improved price/performance for computing and a high-speed bus link to the SELBUS. The purchase of a Sun SPARCstation and a VME/SELBUS bus link allowed offloading the automatic processing to the workstation. This paper describes the details of the system, including the performance of the bus link and Sun SPARCstation, raw data acquisition and data server functions, application software conversion issues, and experiences with the UNIX operating system in the mixed-platform environment.

  18. Executing a gather operation on a parallel computer

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2012-03-20

    Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer of the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, the results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
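
    As a hedged sketch of the scheme just described (in Python with mpi4py, which is assumed to be available; the payload values are made-up stand-ins), each position of the root's result buffer is filled by one reduction round: the node whose rank matches the position contributes its data, every other node contributes zero, and the bitwise-OR reduction leaves exactly the matching contribution at the root:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      contribution = (rank + 1) * 1111    # stand-in for each node's real data

      result_buffer = []
      for position in range(size):
          # Only the node whose rank matches the current position contributes
          # its data; every other node contributes zero.
          value = contribution if rank == position else 0
          # Bitwise OR of zeros with a single real value yields that value.
          gathered = comm.reduce(value, op=MPI.BOR, root=0)
          if rank == 0:
              result_buffer.append(gathered)

      if rank == 0:
          print(result_buffer)            # one entry per ranked node, in rank order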

  19. Culture Negative Listeria monocytogenes Meningitis Resulting in Hydrocephalus and Severe Neurological Sequelae in a Previously Healthy Immunocompetent Man with Penicillin Allergy

    DEFF Research Database (Denmark)

    Gaini, Shahin; Karlsen, Gunn Hege; Nandy, Anirban

    2015-01-01

    A previously healthy 74-year-old Caucasian man with penicillin allergy was admitted with evolving headache, confusion, fever, and neck stiffness. Treatment for bacterial meningitis with dexamethasone and monotherapy ceftriaxone was started. The cerebrospinal fluid showed negative microscopy...... the catheter. The patient had severe neurological sequelae. This case report emphasises the importance of covering empirically for Listeria monocytogenes in all patients with penicillin allergy with suspected bacterial meningitis. The case also shows that it is possible to have significant infection...

  20. Feature binding and attention in working memory: a resolution of previous contradictory findings.

    Science.gov (United States)

    Allen, Richard J; Hitch, Graham J; Mate, Judit; Baddeley, Alan D

    2012-01-01

    We aimed to resolve an apparent contradiction between previous experiments from different laboratories, using dual-task methodology to compare effects of a concurrent executive load on immediate recognition memory for colours or shapes of items or their colour-shape combinations. Results of two experiments confirmed previous evidence that an irrelevant attentional load interferes equally with memory for features and memory for feature bindings. Detailed analyses suggested that previous contradictory evidence arose from limitations in the way recognition memory was measured. The present findings are inconsistent with an earlier suggestion that feature binding takes place within a multimodal episodic buffer (Baddeley, 2000) and support a subsequent account in which binding takes place automatically prior to information entering the episodic buffer (Baddeley, Allen, & Hitch, 2011). Methodologically, the results suggest that different measures of recognition memory performance (A', d', corrected recognition) give a converging picture of main effects, but are less consistent in detecting interactions. We suggest that this limitation on the reliability of measuring recognition should be taken into account in future research, so as to avoid problems of replication that turn out to be more apparent than real.