WorldWideScience

Sample records for long-term grid computations

  1. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  2. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  3. Assessing Smart Grids contribution to the energy transition with long-term scenarios

    International Nuclear Information System (INIS)

    Bouckaert, Stephanie

    2013-01-01

    In the context of discussions on the energy transition, the general consensus is that part of the solution could come from Smart Grids to deal with both climate and energy issues. Prospective energy system models may be used to estimate the long-term development of the energy system in order to meet future energy demands while taking into account environmental and technical constraints. Historically, these models have been demand driven; they should now evolve to consider future developments of the electricity system. In this study, we have implemented some functionalities related to the concept of Smart Grids in a long-term planning model (demand-side integration, storage, renewable energy). This approach makes it possible to evaluate their benefits separately or collectively, taking into account possible interactions between these functionalities. We have also implemented an indicator reflecting the level of reliability of the electricity system in our model. This additional parameter makes it possible to constrain future electricity systems to a level of reliability identical to the existing one. Our analysis is demonstrated on the case of Reunion Island, which aims to produce electricity using 100% renewable sources by 2030, and for which Smart Grids functionalities are also potential solutions for reaching this objective. (author)

  4. Grid computing

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  5. A Worldwide Production Grid Service Built on EGEE and OSG Infrastructures Lessons Learnt and Long-term Requirements

    International Nuclear Information System (INIS)

    Shiers, J.; Dimou, M.; Mendez Lorenzo, P.

    2007-01-01

    Using the Grid Infrastructures provided by EGEE, OSG and others, a worldwide production service has been built that provides the computing and storage needs for the 4 main physics collaborations at CERN's Large Hadron Collider (LHC). The large number of users, their geographical distribution and the very high service availability requirements make this experience of Grid usage worth studying for the sake of a solid and scalable future operation. This service must cater for the needs of thousands of physicists in hundreds of institutes in tens of countries. A 24x7 service with availability of up to 99% is required, with major service responsibilities at each of some ten "Tier1" and of the order of one hundred "Tier2" sites. Such a service - which has been operating for some 2 years and will be required for at least an additional decade - has required significant manpower and resource investments from all concerned and is considered a major achievement in the field of Grid computing. We describe the main lessons learned in offering a production service across heterogeneous Grids as well as the requirements for long-term operation and sustainability. (Author)

  6. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  7. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing in the world and in Latin America it has not been different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and its use in the private sector. (author)

  8. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  9. Does computer use pose a hazard for future long-term sickness absence?

    DEFF Research Database (Denmark)

    Andersen, Johan Hviid; Mikkelsen, Sigurd

    2010-01-01

    The hazard ratio for sickness absence with a weekly increase of one hour in computer use was 0.99 (95% CI: 0.99 to 1.00). Low satisfaction with workplace arrangements and female gender both doubled the risk of sickness absence. We have earlier found that computer use did not predict persistent pain in the neck ... and upper limb, and it seems that computer use does not predict future long-term sickness absence of all causes either.

  10. A novel neural prosthesis providing long-term electrocorticography recording and cortical stimulation for epilepsy and brain-computer interface.

    Science.gov (United States)

    Romanelli, Pantaleo; Piangerelli, Marco; Ratel, David; Gaude, Christophe; Costecalde, Thomas; Puttilli, Cosimo; Picciafuoco, Mauro; Benabid, Alim; Torres, Napoleon

    2018-05-11

    OBJECTIVE Wireless technology is a novel tool for the transmission of cortical signals. Wireless electrocorticography (ECoG) aims to improve the safety and diagnostic gain of procedures requiring invasive localization of seizure foci and also to provide long-term recording of brain activity for brain-computer interfaces (BCIs). However, no wireless devices aimed at these clinical applications are currently available. The authors present the application of a fully implantable and externally rechargeable neural prosthesis providing wireless ECoG recording and direct cortical stimulation (DCS). Prolonged wireless ECoG monitoring was tested in nonhuman primates by using a custom-made device (the ECoG implantable wireless 16-electrode [ECOGIW-16E] device) containing a 16-contact subdural grid. This is a preliminary step toward large-scale, long-term wireless ECoG recording in humans. METHODS The authors implanted the ECOGIW-16E device over the left sensorimotor cortex of a nonhuman primate (Macaca fascicularis), recording ECoG signals over a time span of 6 months. Daily electrode impedances were measured, aiming to maintain the impedance values below a threshold of 100 kΩ. Brain mapping was obtained through wireless cortical stimulation at fixed intervals (1, 3, and 6 months). After 6 months, the device was removed. The authors analyzed cortical tissues by using conventional histological and immunohistological investigation to assess whether there was evidence of damage after the long-term implantation of the grid. RESULTS The implant was well tolerated; no neurological or behavioral consequences were reported in the monkey, which resumed its normal activities within a few hours of the procedure. The signal quality of wireless ECoG remained excellent over the 6-month observation period. Impedance values remained well below the threshold value; the average impedance per contact remained approximately 40 kΩ. Wireless cortical stimulation induced movements of the upper

  11. Long term performance analysis of a grid connected photovoltaic system in Northern Ireland

    International Nuclear Information System (INIS)

    Mondol, Jayanta Deb; Yohanis, Yigzaw; Smyth, Mervyn; Norton, Brian

    2006-01-01

    The performance of a 13 kWp roof-mounted, grid connected photovoltaic system in Northern Ireland over a period of three years has been analysed on hourly, daily and monthly bases. The derived parameters included reference yield, array yield, final yield, array capture losses, system losses, PV and inverter efficiencies and performance ratio. The effects of insolation and inverter operation on the system performance were investigated. The monthly average daily PV, system and inverter efficiencies varied from 4.5% to 9.2%, 3.6% to 7.8% and 50% to 87%, respectively. The annual average PV, system and inverter efficiencies were 7.6%, 6.4% and 75%, respectively. The monthly average daily DC and AC performance ratios ranged from 0.35 to 0.74 and 0.29 to 0.66, respectively. The annual average monthly AC performance ratios for the three years were 0.60, 0.61 and 0.62, respectively. The performance of this system is compared with that of other representative systems internationally
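
    The yield and performance-ratio parameters listed above follow standard definitions; as an illustration only, here is a minimal sketch (assuming IEC 61724-style definitions and purely hypothetical input numbers, not the study's data) of how they are derived from insolation and energy output:

```python
# Sketch: standard PV yield and performance-ratio calculations (IEC 61724-style
# definitions assumed; the numbers below are illustrative, not from the study).

def pv_performance(insolation_kwh_m2, e_dc_kwh, e_ac_kwh, p_rated_kwp):
    """Return reference/array/final yields (kWh/kWp) and DC/AC performance ratios."""
    y_ref = insolation_kwh_m2 / 1.0      # reference yield: insolation over the 1 kW/m2 STC irradiance
    y_arr = e_dc_kwh / p_rated_kwp       # array yield: DC energy per kWp installed
    y_fin = e_ac_kwh / p_rated_kwp       # final yield: AC energy per kWp installed
    return {
        "reference_yield": y_ref,
        "array_yield": y_arr,
        "final_yield": y_fin,
        "pr_dc": y_arr / y_ref,          # DC performance ratio
        "pr_ac": y_fin / y_ref,          # AC performance ratio
        "capture_losses": y_ref - y_arr, # array capture losses
        "system_losses": y_arr - y_fin,  # system (inverter) losses
    }

# Illustrative daily values for a 13 kWp array (hypothetical):
print(pv_performance(insolation_kwh_m2=3.0, e_dc_kwh=28.0, e_ac_kwh=24.0, p_rated_kwp=13.0))
```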

  12. [Efficiency of computer-based documentation in long-term care--preliminary project].

    Science.gov (United States)

    Lüngen, Markus; Gerber, Andreas; Rupprecht, Christoph; Lauterbach, Karl W

    2008-06-01

    In Germany the documentation of processes in long-term care is mainly paper-based. Planning, realization and evaluation are not supported in an optimal way. In a preliminary study we evaluated the consequences of introducing a computer-based documentation system using handheld devices. We interviewed 16 persons before and after introducing the computer-based documentation and assessed the costs of the documentation process and administration. The results show that a reduction in costs is likely. The job satisfaction of the personnel increased, and more time could be spent caring for the residents. We suggest further research to reach conclusive results.

  13. Comparison of measured and predicted long term performance of a grid connected photovoltaic system

    International Nuclear Information System (INIS)

    Mondol, Jayanta Deb; Yohanis, Yigzaw G.; Norton, Brian

    2007-01-01

    Predicted performance of a grid connected photovoltaic (PV) system using TRNSYS was compared with measured data. A site-specific global-diffuse correlation model was developed and used to calculate the beam and diffuse components of global horizontal insolation. A PV module temperature equation and a correlation relating input and output power of an inverter were developed using measured data and used in TRNSYS to simulate PV array and inverter outputs. Different combinations of the tilted surface radiation model, global-diffuse correlation model and PV module temperature equation were used in the simulations. Statistical error analysis was performed to compare the results for each combination. The simulation accuracy was improved by using the new global-diffuse correlation and module temperature equation in the TRNSYS simulation. For an isotropic sky tilted surface radiation model, the average monthly differences between measured and predicted PV output before and after modification of the TRNSYS component were 10.2% and 3.3%, respectively, and, for an anisotropic sky model, 15.4% and 10.7%, respectively. For inverter output, the corresponding errors were 10.4% and 3.3% and 15.8% and 8.6%, respectively. Measured PV efficiency, overall system efficiency, inverter efficiency and performance ratio of the system were compared with the predicted results. The predicted PV performance parameters agreed more closely with the measured parameters in summer than in winter. The difference between predicted performances using isotropic and anisotropic sky tilted surface models is between 1% and 2%
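
    Central to such comparisons is how horizontal insolation is transposed onto the tilted array. As a hedged illustration of the isotropic-sky model the abstract refers to (a Liu-Jordan-type transposition; the symbol names and example values are assumptions, not the paper's):

```python
import math

def tilted_irradiance_isotropic(g_h, d_h, cos_theta_i, cos_theta_z, tilt_deg, albedo=0.2):
    """Isotropic-sky transposition of horizontal irradiance onto a tilted plane.

    g_h: global horizontal irradiance, d_h: diffuse horizontal irradiance (W/m2);
    cos_theta_i / cos_theta_z: cosines of the incidence and zenith angles;
    tilt_deg: surface tilt from horizontal. All inputs here are illustrative.
    """
    b_h = g_h - d_h                                       # beam component on the horizontal
    r_b = max(cos_theta_i, 0.0) / max(cos_theta_z, 1e-3)  # beam projection factor
    beta = math.radians(tilt_deg)
    sky = d_h * (1.0 + math.cos(beta)) / 2.0              # isotropic sky-diffuse term
    ground = g_h * albedo * (1.0 - math.cos(beta)) / 2.0  # ground-reflected term
    return b_h * r_b + sky + ground

# Example: 600 W/m2 global, 200 W/m2 diffuse, 45-degree tilt (illustrative values)
print(tilted_irradiance_isotropic(600.0, 200.0, cos_theta_i=0.9, cos_theta_z=0.6, tilt_deg=45.0))
```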

  14. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  15. Recent trends in grid computing

    International Nuclear Information System (INIS)

    Miura, Kenichi

    2004-01-01

    Grid computing is a technology which allows uniform and transparent access to geographically dispersed computational resources, such as computers, databases, experimental and observational equipment etc. via high-speed, high-bandwidth networking. The commonly used analogy is that of the electrical power grid, whereby household electricity is made available from outlets on the wall, and little thought needs to be given to where the electricity is generated and how it is transmitted. The usage of the grid also includes distributed parallel computing, high-throughput computing, data-intensive computing (data grid) and collaborative computing. This paper reviews the historical background, software structure, current status and on-going grid projects, including applications of grid technology to nuclear fusion research. (author)

  16. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data

  17. A computational environment for long-term multi-feature and multi-algorithm seizure prediction.

    Science.gov (United States)

    Teixeira, C A; Direito, B; Costa, R P; Valderrama, M; Feldwisch-Drentrup, H; Nikolopoulos, S; Le Van Quyen, M; Schelter, B; Dourado, A

    2010-01-01

    The daily life of epilepsy patients is constrained by the possibility of occurrence of seizures. Until now, seizures cannot be predicted with sufficient sensitivity and specificity. Most seizure prediction studies have focused on a small number of patients, frequently assuming unrealistic hypotheses. This paper adopts the view that for an appropriate development of reliable predictors one should consider long-term recordings and several features and algorithms integrated in one software tool. A computational environment, based on Matlab (®), is presented, aiming to be an innovative tool for seizure prediction. It results from the need for a powerful and flexible tool for long-term EEG/ECG analysis by multiple features and algorithms. After being extracted, features can be subjected to several reduction and selection methods, and then used for prediction. The predictions can be conducted based on optimized thresholds or by applying computational intelligence methods. One important aspect is the integrated evaluation of the seizure prediction characteristic of the developed predictors.
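
    As a rough, hedged illustration of the threshold-based prediction step described above (not the authors' Matlab environment; the function names and synthetic data are assumptions), an alarm is raised when a feature crosses a threshold and the predictor is scored by sensitivity and false-prediction rate:

```python
import numpy as np

def threshold_alarms(feature, threshold):
    """Raise an alarm whenever the feature crosses the threshold from below."""
    above = feature >= threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1   # indices of upward crossings

def evaluate(feature, threshold, seizure_onsets, horizon, fs=1.0):
    """Count predicted seizures (alarm within `horizon` samples before onset) and false alarms."""
    alarms = threshold_alarms(feature, threshold)
    predicted = sum(any(onset - horizon <= a < onset for a in alarms) for onset in seizure_onsets)
    false_alarms = sum(all(not (onset - horizon <= a < onset) for onset in seizure_onsets)
                       for a in alarms)
    sensitivity = predicted / max(len(seizure_onsets), 1)
    fpr = false_alarms / (len(feature) / fs)              # false predictions per unit time
    return sensitivity, fpr

# Illustrative synthetic run: a hypothetical pre-ictal rise before a single seizure onset
rng = np.random.default_rng(0)
feat = rng.normal(size=10_000)
feat[7_000:7_050] += 3.0
print(evaluate(feat, threshold=2.5, seizure_onsets=[7_060], horizon=120))
```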

  18. Does computer use pose a hazard for future long-term sickness absence?

    DEFF Research Database (Denmark)

    Andersen, JH; Mikkelsen, Sigurd

    2010-01-01

    The aim of the study was to investigate if weekly duration of computer use predicted sickness absence for more than two weeks at a later time. A cohort of 2146 frequent computer users filled in a questionnaire at baseline and was followed for one year with continuous recording of the duration of computer use, and furthermore followed for 300 weeks in a central register of sickness absence for more than 2 weeks. 147 of the 2,146 participants (6.9%) became first-time sick listed in the follow-up period. Overall, mean weekly computer use did not turn out to be a risk factor for later sickness absence. We have earlier found that computer use did not predict persistent pain in the neck and upper limb, and it seems that computer use does not predict future long-term sickness absence of all causes either.

  19. Spectral model for long-term computation of thermodynamics and potential evaporation in shallow wetlands

    Science.gov (United States)

    de la Fuente, Alberto; Meruane, Carolina

    2017-09-01

    Altiplanic wetlands are unique ecosystems located in the elevated plateaus of Chile, Argentina, Peru, and Bolivia. These ecosystems are under threat due to changes in land use, groundwater extractions, and climate change that will modify the water balance through changes in precipitation and evaporation rates. Long-term prediction of the fate of aquatic ecosystems imposes computational constraints that make finding a solution impossible in some cases. In this article, we present a spectral model for long-term simulations of the thermodynamics of shallow wetlands in the limit case when the water depth tends to zero. This spectral model solves for water and sediment temperature, as well as heat, momentum, and mass exchanged with the atmosphere. The parameters of the model (water depth, thermal properties of the sediments, and surface albedo) and the atmospheric downscaling were calibrated using the MODIS product of the land surface temperature. Moreover, the performance of the daily evaporation rates predicted by the model was evaluated against daily pan evaporation data measured between 1964 and 2012. The spectral model was able to correctly represent both seasonal fluctuation and climatic trends observed in daily evaporation rates. It is concluded that the spectral model presented in this article is a suitable tool for assessing the global climate change effects on shallow wetlands whose thermodynamics is forced by heat exchanges with the atmosphere and modulated by the heat-reservoir role of the sediments.
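
    As a hedged sketch of the kind of bulk heat budget such a zero-depth model integrates (the symbols and the exact terms here are assumptions, not taken from the paper), the water temperature T_w might obey

```latex
\rho c_p h \,\frac{\mathrm{d}T_w}{\mathrm{d}t}
  = (1-\alpha)\,R_{sw} + R_{lw\downarrow} - \varepsilon\sigma T_w^{4}
    - H_{sens} - L_v E + G_{sed},
\qquad
E = f(U)\,\bigl(q_{sat}(T_w) - q_a\bigr),
```

    where R_sw is incoming shortwave radiation, α the surface albedo, H_sens the sensible heat flux, L_v E the latent heat of evaporation (driving the potential-evaporation estimate), and G_sed the heat exchanged with the sediments acting as a heat reservoir.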

  20. A computational analysis of the long-term regulation of arterial pressure.

    Science.gov (United States)

    Beard, Daniel A; Pettersen, Klas H; Carlson, Brian E; Omholt, Stig W; Bugenhagen, Scott M

    2013-01-01

    The asserted dominant role of the kidneys in the chronic regulation of blood pressure and in the etiology of hypertension has been debated since the 1970s. At the center of the theory is the observation that the acute relationships between arterial pressure and urine production-the acute pressure-diuresis and pressure-natriuresis curves-physiologically adapt to perturbations in pressure and/or changes in the rate of salt and volume intake. These adaptations, modulated by various interacting neurohumoral mechanisms, result in chronic relationships between water and salt excretion and pressure that are much steeper than the acute relationships. While the view that renal function is the dominant controller of arterial pressure has been supported by computer models of the cardiovascular system known as the "Guyton-Coleman model", no unambiguous description of a computer model capturing chronic adaptation of acute renal function in blood pressure control has been presented. Here, such a model is developed with the goals of: 1. representing the relevant mechanisms in an identifiable mathematical model; 2. identifying model parameters using appropriate data; 3. validating model predictions in comparison to data; and 4. probing hypotheses regarding the long-term control of arterial pressure and the etiology of primary hypertension. The developed model reveals: long-term control of arterial blood pressure is primarily through the baroreflex arc and the renin-angiotensin system; and arterial stiffening provides a sufficient explanation for the etiology of primary hypertension associated with ageing. Furthermore, the model provides the first consistent explanation of the physiological response to chronic stimulation of the baroreflex.

  1. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services will be covered. Emphasis is on CMS data management and workload management. (authors)

  2. Grid Computing Education Support

    Energy Technology Data Exchange (ETDEWEB)

    Steven Crumb

    2008-01-15

    The GGF Student Scholar program gave GGF the opportunity to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.

  3. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  4. A scalable computational framework for establishing long-term behavior of stochastic reaction networks.

    Directory of Open Access Journals (Sweden)

    Ankit Gupta

    2014-06-01

    Full Text Available Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed.
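
    The framework above certifies ergodicity analytically via linear programs; as a complementary, purely empirical illustration (not the paper's method), a minimal Gillespie stochastic-simulation sketch of a birth-death network whose long-term behaviour can be inspected directly:

```python
import random

def gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=1_000.0, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of X -> X+1 (rate k_birth) and
    X -> X-1 (rate k_death * X): an ergodic network with a Poisson steady state."""
    rng = random.Random(seed)
    t, x, samples = 0.0, x0, []
    while t < t_end:
        a1, a2 = k_birth, k_death * x      # propensities of the two reactions
        a0 = a1 + a2
        t += rng.expovariate(a0)           # waiting time to the next reaction
        if rng.random() * a0 < a1:
            x += 1                         # birth event
        else:
            x -= 1                         # death event
        samples.append(x)
    return samples

traj = gillespie_birth_death()
print("long-run mean population:", sum(traj) / len(traj))  # approx k_birth / k_death = 10
```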

  5. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. Conclusion By extending the concept of the grid from computing grid to knowledge grid, a grid can be used not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  6. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ... of Grid computing systems.
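
    As a hedged sketch of one common Bayesian approach to incremental trust of the kind compared in the study (a Beta-reputation update with a forgetting factor; the class and parameter names are assumptions, not the paper's algorithms):

```python
class BetaReputation:
    """Incremental Bayesian trust: Beta(alpha, beta) over the probability of good behaviour,
    with an exponential forgetting factor so that recent interactions dominate."""

    def __init__(self, forgetting=0.95):
        self.alpha = 1.0            # prior pseudo-count of positive outcomes
        self.beta = 1.0             # prior pseudo-count of negative outcomes
        self.forgetting = forgetting

    def update(self, positive: bool):
        self.alpha = self.forgetting * self.alpha + (1.0 if positive else 0.0)
        self.beta = self.forgetting * self.beta + (0.0 if positive else 1.0)

    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)   # expected probability of good behaviour

rep = BetaReputation()
for outcome in [True, True, False, True, True]:        # illustrative interaction history
    rep.update(outcome)
print(round(rep.trust(), 3))
```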

  7. Swiss Solutions for Providing Electrical Power in Cases of Long-Term Black-Out of the Grid

    International Nuclear Information System (INIS)

    Altkind, Franz; Schmid, Daniel

    2015-01-01

    A better understanding of nuclear power plant electrical system robustness and defence-in-depth may be derived from comparing design and operating practices in member countries. In pursuing this goal, the current paper will focus on Switzerland. It will present in general the protective measures implemented in the Swiss nuclear power plants to ensure power supply, which comply with the 'Defence-in-depth' principle by means of several layers of protection. In particular it will present the measures taken in case of a total station blackout. The different layers supplying electricity may be summed up as follows. The first layer consists of the external main grid, which the plant generators feed into. The second layer is the auxiliary power supply when the power plant is in island mode in case of a failure of the main grid. A third layer is provided by the external reserve grid in case of both a failure of the external main grid and of the auxiliary power supply in island mode. As a fourth layer there exists an emergency electrical power supply. This is supplied either from an emergency diesel generator or a direct feed from a hydroelectric power plant. In the fifth layer, the special emergency electrical power supply from bunkered emergency diesel generators power the special emergency safety system and is activated upon the loss of all external feeds. A sixth layer consists of accident management equipment. Since the Fukushima event, the sixth layer has been reinforced and a seventh layer with off-site accident management equipment has been newly added. The Swiss nuclear safety regulator has analysed the accident. It reviewed the Swiss plants' protection against earthquakes as well as flooding and demanded increased precautionary measures from the Swiss operators in the hypothetical case of a total station blackout, when all the first five layers of supply would fail. In the immediate, a centralized storage with severe accident management equipment

  8. Southampton uni's computer whizzes develop "mini" grid

    CERN Multimedia

    Sherriff, Lucy

    2006-01-01

    "In a bid to help its students explore the potential of grid computing, the University of Southampton's Computer Science department has developed what it calls a "lightweight grid". The system has been designed to allow students to experiment with grid technology without the complexity of inherent security concerns of the real thing. (1 page)

  9. Computer and internet access for long-term care residents: perceived benefits and barriers.

    Science.gov (United States)

    Tak, Sunghee H; Beck, Cornelia; McMahon, Ed

    2007-05-01

    In this study, the authors examined residents' computer and Internet access, as well as benefits and barriers to access in nursing homes. Administrators of 64 nursing homes in a national chain completed surveys. Fourteen percent of the nursing homes provided computers for residents to use, and 11% had Internet access. Some residents owned personal computers in their rooms. Administrators perceived the benefits of computer and Internet use for residents as facilitating direct communication with family and providing mental exercise, education, and enjoyment. Perceived barriers included cost and space for computer equipment and residents' cognitive and physical impairments. Implications of residents' computer activities were discussed for nursing care. Further research is warranted to examine therapeutic effects of computerized activities and their cost effectiveness.

  10. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements of the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of Grid computing, the R&D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in the Chinese high energy physics community is introduced at last. (authors)

  11. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    Full-text: The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process. (author)

  12. Long-term internal thoracic artery bypass graft patency and geometry assessed by multidetector computed tomography

    DEFF Research Database (Denmark)

    Zacho, Mette; Damgaard, Sune; Lilleoer, Nikolaj Thomas

    2012-01-01

    The left internal thoracic artery (LITA) undergoes vascular remodelling when used for coronary artery bypass grafting. In this study we tested the hypothesis that the extent of the LITA remodelling late after coronary artery bypass grafting assessed by multidetector computed tomography is related...

  13. Long-term changes of information environments and computer anxiety of nurse administrators in Japan.

    Science.gov (United States)

    Majima, Yukie; Izumi, Takako

    2013-01-01

    In Japan, medical information systems, including electronic medical records, are increasingly being introduced in medical and nursing fields. Nurse administrators, who are involved in the introduction of medical information systems and must make proper judgments, are particularly required to have at least minimal knowledge of computers and networks and the ability to think about easy-to-use medical information systems. However, few of the current generation of nurse administrators studied information science subjects in their basic education curriculum. It can be said that information education for nurse administrators has become a pressing issue. Consequently, in this study, we conducted a survey of participants taking the first-level program of the education course for Japanese certified nurse administrators to ascertain their actual conditions, such as the information environments nurse administrators work in and their anxiety toward computers. Comparisons over the seven years since 2004 revealed that although the introduction of electronic medical records in hospitals was progressing, there was little change in the attributes of participants taking the course, such as computer anxiety.

  14. Improved visibility computation on massive grid terrains

    NARCIS (Netherlands)

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are
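
    As a rough illustration of what a viewshed computation does (a brute-force in-memory line-of-sight check, not the I/O-efficient algorithms for massive terrains that the paper targets):

```python
import numpy as np

def viewshed(elev, vr, vc, observer_height=2.0, samples=64):
    """Brute-force viewshed: a cell is visible if the line of sight from the viewpoint
    clears the terrain sampled at every point along the ray to that cell."""
    rows, cols = elev.shape
    z0 = elev[vr, vc] + observer_height
    visible = np.zeros_like(elev, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            ts = np.linspace(0.0, 1.0, samples, endpoint=False)[1:]
            rr, cc = vr + ts * (r - vr), vc + ts * (c - vc)
            terrain = elev[np.round(rr).astype(int), np.round(cc).astype(int)]
            sight = z0 + ts * (elev[r, c] - z0)          # line of sight to the target cell
            visible[r, c] = np.all(sight >= terrain)
    return visible

# Tiny synthetic terrain with a blocking ridge (illustrative)
elev = np.zeros((50, 50))
elev[:, 25] = 10.0
print(viewshed(elev, 25, 0).sum(), "of", elev.size, "cells visible")
```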

  15. Cloud Computing and Smart Grids

    Directory of Open Access Journals (Sweden)

    Janina POPEANGĂ

    2012-10-01

    Full Text Available Increasing concern about energy consumption is leading to infrastructure that supports real-time, two-way communication between utilities and consumers, and allows software systems at both ends to control and manage power use. To manage communications to millions of endpoints in a secure, scalable and highly-available environment and to achieve the twin goals of 'energy conservation' and 'demand response', utilities must extend the same communication network management processes and tools used in the data center to the field. This paper proposes that cloud computing technology, because of its low cost, flexible and redundant architecture and fast response time, has the functionality needed to provide the security, interoperability and performance required for large-scale smart grid applications.

  16. Portal circulation following the Warren procedure. Long-term follow-up by computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gostner, P; Fugazzola, C; Martin, F; Marzoli, G P

    1986-06-01

    Computed tomography with contrast injection was carried out in 18 patients who had undergone a Warren procedure for portal hypertension due to cirrhosis of the liver more than five years previously. The results show that it is not possible to drain only a part of the venous portal territory. The portal circulation does not consist of two portions, with different pressure relationships. Pressure difference across the splenorenal anastomosis is greater than that into the mediastinal veins. Postoperative development of a hepatofugal circulation continues for a long period and is not confined to the early phase only. This phenomenon is, however, not uniform. In particular, there are variations in the extent of the collateral circulation and in the maintenance of liver blood flow.

  17. Long-term prognostic performance of low-dose coronary computed tomography angiography with prospective electrocardiogram triggering

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, Olivier F.; Kaufmann, Basil P.; Possner, Mathias; Liga, Riccardo; Vontobel, Jan; Mikulicic, Fran; Graeni, Christoph; Benz, Dominik C.; Fuchs, Tobias A.; Stehli, Julia; Pazhenkottil, Aju P.; Gaemperli, Oliver; Kaufmann, Philipp A.; Buechel, Ronny R. [University Hospital Zurich, Cardiac Imaging, Department of Nuclear Medicine, Zurich (Switzerland)

    2017-11-15

    To assess long-term prognosis after low-dose 64-slice coronary computed tomography angiography (CCTA) using prospective electrocardiogram-triggering. We included 434 consecutive patients with suspected or known coronary artery disease referred for low-dose CCTA. Patients were classified as normal, with non-obstructive or obstructive lesions, or previously revascularized. Coronary artery calcium score (CACS) was assessed in 223 patients. Follow-up was obtained regarding major adverse cardiac events (MACE): cardiac death, myocardial infarction and elective revascularization. We performed Kaplan-Meier analysis and Cox regressions. Mean effective radiation dose was 1.7 ± 0.6 mSv. At baseline, 38% of patients had normal arteries, 21% non-obstructive lesions, 32% obstructive stenosis and 8% were revascularized. Twenty-nine patients (7%) were lost to follow-up. After a median follow-up of 6.1 ± 0.6 years, MACE occurred in 0% of patients with normal arteries, 6% with non-obstructive lesions, 30% with obstructive stenosis and 39% of those revascularized. MACE occurrence increased with increasing CACS (P < 0.001), but 4% of patients with CACS = 0 experienced MACE. Multivariate Cox regression identified obstructive stenosis, lesion burden in CCTA and CACS as independent MACE predictors (P ≤ 0.001). Low-dose CCTA with prospective electrocardiogram-triggering has an excellent long-term prognostic performance with a warranty period >6 years for patients with normal coronary arteries. (orig.)
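
    As a hedged illustration of the survival analysis described (a hand-rolled Kaplan-Meier estimator on purely illustrative follow-up data, not the study's dataset or its Cox models):

```python
def kaplan_meier(times, events):
    """Return (time, survival) steps; `events` is 1 for MACE, 0 for censored follow-up."""
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)  # events at t
        n = sum(1 for ti in times if ti >= t)                               # still at risk at t
        if d:
            s *= (1.0 - d / n)
            curve.append((t, s))
    return curve

# Illustrative follow-up times in years (1 = MACE, 0 = censored)
times  = [1.2, 2.5, 3.0, 4.1, 5.0, 6.0, 6.1, 6.1]
events = [0,   1,   0,   1,   0,   0,   1,   0  ]
print(kaplan_meier(times, events))
```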

  18. Long-term stress distribution patterns of the ankle joint in varus knee alignment assessed by computed tomography osteoabsorptiometry.

    Science.gov (United States)

    Onodera, Tomohiro; Majima, Tokifumi; Iwasaki, Norimasa; Kamishima, Tamotsu; Kasahara, Yasuhiko; Minami, Akio

    2012-09-01

    The stress distribution of an ankle under various physiological conditions is important for long-term survival of total ankle arthroplasty. The aim of this study was to measure subchondral bone density across the distal tibial joint surface in patients with malalignment/instability of the lower limb. We evaluated subchondral bone density across the distal tibial joint in patients with malalignment/instability of the knee by computed tomography (CT) osteoabsorptiometry from ten ankles as controls and from 27 ankles with varus deformity/instability of the knee. The quantitative analysis focused on the location of the high-density area at the articular surface, to determine the resultant long-term stress on the ankle joint. The area of maximum density of subchondral bone was located in the medial part in all subjects. The pattern of maximum density in the anterolateral area showed stepwise increases with the development of varus deformity/instability of the knee. Our results should prove helpful for designing new prostheses and determining clinical indications for total ankle arthroplasty.

  19. Long-term prognostic performance of low-dose coronary computed tomography angiography with prospective electrocardiogram triggering

    International Nuclear Information System (INIS)

    Clerc, Olivier F.; Kaufmann, Basil P.; Possner, Mathias; Liga, Riccardo; Vontobel, Jan; Mikulicic, Fran; Graeni, Christoph; Benz, Dominik C.; Fuchs, Tobias A.; Stehli, Julia; Pazhenkottil, Aju P.; Gaemperli, Oliver; Kaufmann, Philipp A.; Buechel, Ronny R.

    2017-01-01

    To assess long-term prognosis after low-dose 64-slice coronary computed tomography angiography (CCTA) using prospective electrocardiogram-triggering. We included 434 consecutive patients with suspected or known coronary artery disease referred for low-dose CCTA. Patients were classified as normal, with non-obstructive or obstructive lesions, or previously revascularized. Coronary artery calcium score (CACS) was assessed in 223 patients. Follow-up was obtained regarding major adverse cardiac events (MACE): cardiac death, myocardial infarction and elective revascularization. We performed Kaplan-Meier analysis and Cox regressions. Mean effective radiation dose was 1.7 ± 0.6 mSv. At baseline, 38% of patients had normal arteries, 21% non-obstructive lesions, 32% obstructive stenosis and 8% were revascularized. Twenty-nine patients (7%) were lost to follow-up. After a median follow-up of 6.1 ± 0.6 years, MACE occurred in 0% of patients with normal arteries, 6% with non-obstructive lesions, 30% with obstructive stenosis and 39% of those revascularized. MACE occurrence increased with increasing CACS (P < 0.001), but 4% of patients with CACS = 0 experienced MACE. Multivariate Cox regression identified obstructive stenosis, lesion burden in CCTA and CACS as independent MACE predictors (P ≤ 0.001). Low-dose CCTA with prospective electrocardiogram-triggering has an excellent long-term prognostic performance with a warranty period >6 years for patients with normal coronary arteries. (orig.)

  20. Operating the worldwide LHC computing grid: current and future challenges

    International Nuclear Information System (INIS)

    Molina, J Flix; Forti, A; Girone, M; Sciaba, A

    2014-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long-term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team as a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  1. User's guide for simplified computer models for the estimation of long-term performance of cement-based materials

    International Nuclear Information System (INIS)

    Plansky, L.E.; Seitz, R.R.

    1994-02-01

    This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These are intended to be used when the subroutines are used as subroutines in a larger computer code

  2. Grid computing faces IT industry test

    CERN Multimedia

    Magno, L

    2003-01-01

    Software company Oracle Corp. unveiled its Oracle 10g grid computing platform at the annual OracleWorld user convention in San Francisco. It gave concrete examples of how grid computing can be a viable option outside the scientific community where the concept was born (1 page).

  3. Grid computing infrastructure, service, and applications

    CERN Document Server

    Jie, Wei; Chen, Jinjun

    2009-01-01

    Offering a comprehensive discussion of advances in grid computing, this book summarizes the concepts, methods, technologies, and applications. It covers topics such as philosophy, middleware, architecture, services, and applications. It also includes technical details to demonstrate how grid computing works in the real world

  4. The LHC Computing Grid Project

    CERN Multimedia

    Åkesson, T

    In the last ATLAS eNews I reported on the preparations for the LHC Computing Grid Project (LCGP). Significant LCGP resources were mobilized during the summer, and there have been numerous iterations on the formal paper to put forward to the CERN Council to establish the LCGP. ATLAS, and also the other LHC-experiments, has been very active in this process to maximally influence the outcome. Our main priorities were to ensure that the global aspects are properly taken into account, that the CERN non-member states are also included in the structure, that the experiments are properly involved in the LCGP execution and that the LCGP takes operative responsibility during the data challenges. A Project Launch Board (PLB) was active from the end of July until the 10th of September. It was chaired by Hans Hoffmann and had the IT division leader as secretary. Each experiment had a representative (me for ATLAS), and the large CERN member states were each represented while the smaller were represented as clusters ac...

  5. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  6. Improving long-term operation of power sources in off-grid hybrid systems based on renewable energy, hydrogen and battery

    Science.gov (United States)

    García, Pablo; Torreglosa, Juan P.; Fernández, Luis M.; Jurado, Francisco

    2014-11-01

    This paper presents two novel hourly energy supervisory controls (ESC) for improving long-term operation of off-grid hybrid systems (HS) integrating renewable energy sources (wind turbine and photovoltaic solar panels), a hydrogen system (fuel cell, hydrogen tank and electrolyzer) and a battery. The first ESC tries to improve the power supplied by the HS and the power stored in the battery and/or in the hydrogen tank, whereas the second one tries to minimize the number of elements needed (batteries, fuel cells and electrolyzers) throughout the expected life of the HS (25 years). Moreover, in both ESC, the battery state-of-charge (SOC) and the hydrogen tank level are controlled and maintained between optimum operating margins. Finally, a comparative study between the controls is carried out using models of the commercially available components of the HS under study in this work. These ESC are also compared with a third ESC, already published by the authors, based on reducing the utilization costs of the energy storage devices. The comparative study demonstrates the correct performance of the ESC and the differences between them.
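
    An hourly supervisory control of this kind typically reduces to a rule set that keeps the battery SOC and the hydrogen tank level inside operating margins while balancing generation against demand; a much-simplified, hedged sketch (thresholds, names and the decision order are assumptions, not the authors' ESC):

```python
def supervise(net_power_kw, soc, h2_level,
              soc_min=0.3, soc_max=0.9, h2_min=0.1, h2_max=0.95):
    """One hourly decision: net_power_kw = renewables - load (kW).
    Returns which device absorbs a surplus or covers a deficit."""
    if net_power_kw >= 0:                   # surplus: store it
        if soc < soc_max:
            return "charge_battery"
        if h2_level < h2_max:
            return "run_electrolyzer"       # convert the surplus to hydrogen
        return "curtail_renewables"
    else:                                   # deficit: cover it
        if soc > soc_min:
            return "discharge_battery"
        if h2_level > h2_min:
            return "run_fuel_cell"          # hydrogen back to electricity
        return "shed_load"

# Illustrative hours: (net power, SOC, H2 level)
for hour in [(5.0, 0.95, 0.4), (5.0, 0.5, 0.4), (-3.0, 0.25, 0.5), (-3.0, 0.25, 0.05)]:
    print(hour, "->", supervise(*hour))
```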

  7. Cost- and reliability-oriented aggregation point association in long-term evolution and passive optical network hybrid access infrastructure for smart grid neighborhood area network

    Science.gov (United States)

    Cheng, Xiao; Feng, Lei; Zhou, Fanqin; Wei, Lei; Yu, Peng; Li, Wenjing

    2018-02-01

    With the rapid development of the smart grid, the data aggregation point (AP) in the neighborhood area network (NAN) is becoming increasingly important for forwarding information between the home area network and the wide area network. Due to the limited budget, a single access technology cannot meet the ongoing requirements on AP coverage. This paper first introduces a wired and wireless hybrid access network integrating long-term evolution (LTE) and a passive optical network (PON) for the NAN, which allows a good trade-off among cost, flexibility, and reliability. Then, based on the already existing wireless LTE network, an AP association optimization model is proposed to make the PON serve as many APs as possible, considering both economic efficiency and network reliability. Moreover, given the features of the constraints and variables of this NP-hard problem, a hybrid intelligent optimization algorithm is proposed, combining genetic, ant colony and dynamic greedy algorithms. By comparing with other published methods, simulation results verify the performance of the proposed method in improving AP coverage and the performance of the proposed algorithm in terms of convergence.
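
    The hybrid metaheuristic mentioned above includes a dynamic greedy component; as a hedged sketch of the greedy piece alone (costs, capacities and function names are illustrative assumptions), APs are attached to the PON in order of fiber cost until capacity or budget is exhausted, with the remainder falling back to LTE:

```python
def greedy_ap_association(ap_fiber_costs, splitter_capacity, budget):
    """Greedily attach the cheapest-to-reach APs to the PON until the splitter
    capacity or the fiber budget is exhausted; the rest stay on LTE."""
    pon, lte, spent = [], [], 0.0
    for ap, cost in sorted(ap_fiber_costs.items(), key=lambda kv: kv[1]):
        if len(pon) < splitter_capacity and spent + cost <= budget:
            pon.append(ap)
            spent += cost
        else:
            lte.append(ap)
    return pon, lte, spent

# Hypothetical APs with fiber-deployment costs (arbitrary units)
costs = {"AP1": 3.0, "AP2": 1.5, "AP3": 4.0, "AP4": 0.8, "AP5": 2.2}
print(greedy_ap_association(costs, splitter_capacity=3, budget=5.0))
```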

  8. Central tarsal bone fractures in horses not used for racing: Computed tomographic configuration and long-term outcome of lag screw fixation

    OpenAIRE

    Gunst, S; Del Chicca, Francesca; Fürst, Anton; Kuemmerle, Jan M

    2016-01-01

    REASONS FOR PERFORMING STUDY: There are no reports on the configuration of equine central tarsal bone fractures based on cross-sectional imaging and clinical and radiographic long-term outcome after internal fixation. OBJECTIVES: To report clinical, radiographic and computed tomographic findings of equine central tarsal bone fractures and to evaluate the long-term outcome of internal fixation. STUDY DESIGN: Retrospective case series. METHODS: All horses diagnosed with a central tarsa...

  9. A computational simulation of long-term synaptic potentiation inducing protocol processes with model of CA3 hippocampal microcircuit.

    Science.gov (United States)

    Świetlik, D; Białowąs, J; Kusiak, A; Cichońska, D

    2018-01-01

    An experimental study of a computational model of the CA3 region presents cognitive and behavioural functions of the hippocampus. The main property of the CA3 region is plastic recurrent connectivity, where the connections allow it to behave as an auto-associative memory. The computer simulations showed that the CA3 model performs efficient long-term synaptic potentiation (LTP) induction and a high rate of sub-millisecond coincidence detection. The average frequency of the CA3 pyramidal cell model was substantially higher in simulations with the LTP induction protocol than without it. The entropy of pyramidal cells with LTP seemed to be significantly higher than without the LTP induction protocol (p = 0.0001). There was a depression of entropy, caused by an increase of the forgetting coefficient, in pyramidal cell simulations without LTP (R = -0.88, p = 0.0008), whereas such a correlation did not appear in the LTP simulation (p = 0.4458). Our biologically inspired model of the CA3 hippocampal formation microcircuit helps in understanding neurophysiological data. (Folia Morphol 2018; 77, 2: 210-220).

  10. A Worldwide Production Grid Service Built on EGEE and OSG Infrastructures – Lessons Learnt and Long-term Requirements

    CERN Document Server

    Shiers, J; Dimou, M; CERN. Geneva. IT Department

    2007-01-01

    Using the Grid Infrastructures provided by EGEE, OSG and others, a worldwide production service has been built that provides the computing and storage needs for the 4 main physics collaborations at CERN's Large Hadron Collider (LHC). The large number of users, their geographical distribution and the very high service availability requirements make this experience of Grid usage worth studying for the sake of a solid and scalable future operation. This service must cater for the needs of thousands of physicists in hundreds of institutes in tens of countries. A 24x7 service with availability of up to 99% is required with major service responsibilities at each of some ten "Tier1" and of the order of one hundred "Tier2" sites. Such a service - which has been operating for some 2 years and will be required for at least an additional decade - has required significant manpower and resource investments from all concerned and is considered a major achievement in the field of Grid computing. We describe the main lessons...

  11. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang; Germain, Cécile; Sebag, Michèle

    2010-01-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the application data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting

  12. EU grid computing effort takes on malaria

    CERN Multimedia

    Lawrence, Stacy

    2006-01-01

    Malaria is the world's most common parasitic infection, affecting more than 500 million people annually and killing more than 1 million. In order to help combat malaria, CERN has launched a grid computing effort (1 page)

  13. VIP visit of LHC Computing Grid Project

    CERN Multimedia

    Krajewski, Yann Tadeusz

    2015-01-01

    VIP visit of LHC Computing Grid Project with Dr.-Ing. Tarek Kamel [Senior Advisor to the President for Government Engagement, ICANN Geneva Office] and Dr Nigel Hickson [VP, IGO Engagement, ICANN Geneva Office]

  14. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  15. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, test and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  16. Bulk density of an alfisol under cultivation systems in a long-term experiment evaluated with gamma ray computed tomography

    International Nuclear Information System (INIS)

    Bamberg, Adilson Luis; Silva, Thiago Rech da; Pauletto, Eloy Antonio; Pinto, Luiz Fernando Spinelli; Lima, Ana Claudia Rodrigues de; Timm, Luis Carlos

    2009-01-01

    The sustainability of irrigated rice (Oryza sativa L.) in lowland soils is based on the use of crop rotation and succession, which are essential for the control of red and black rice. The effects on the soil properties deserve studies, particularly on soil compaction. The objective of this study was to identify compacted layers in an albaqualf under different cultivation and tillage systems, by evaluating the soil bulk density (Ds) with Gamma Ray Computed Tomography (TC). The analysis was carried out in a long-term experiment, from 1985 to 2004, at an experimental station of EMBRAPA Clima Temperado, Capao do Leao, RS, Brazil, in a random block design with seven treatments, with four replications (T1 - one year rice with conventional tillage followed by two years fallow; T2 - continuous rice under conventional tillage; T4 - rice and soybean (Glycine max L.) rotation under conventional tillage; T5 - rice, soybean and corn (Zea mays L.) rotation under conventional tillage; T6 - rice under no-tillage in the summer in succession to rye-grass (Lolium multiflorum L.) in the winter; T7 - rice under no-tillage and soybean under conventional tillage rotation; T8 - control: uncultivated soil). The Gamma Ray Computed Tomography method did not identify compacted soil layers under no-tillage rice in succession to rye-grass; two fallow years in the irrigated rice production system did not prevent the formation of a compacted layer at the soil surface; and in the rice, soybean and corn rotation under conventional tillage two compacted layers were identified (0.0 to 1.5 cm and 11 to 14 cm), indicating that they may restrict the agricultural production in this cultivation system on Albaqualf soils. (author)

  17. Insightful Workflow For Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Charles Earl

    2008-10-09

    We developed a workflow adaptation and scheduling system for Grid workflows. The system currently interfaces with and uses the Karajan workflow system. We developed machine learning agents that provide the planner/scheduler with the information needed to make decisions about when and how to replan. The Kubrick system restructures workflows at runtime, making it unique among workflow scheduling systems. The existing Kubrick system provides a platform on which to integrate additional quality of service constraints and in which to explore the use of an ensemble of scheduling and planning algorithms. This will be the principal thrust of our Phase II work.
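
    As a generic illustration of the kind of signal such a learning agent could feed a scheduler, the sketch below compares observed task runtimes against the planned ones and recommends replanning when the slowdown exceeds a tolerance. The function name, the tolerance and the toy numbers are assumptions made for illustration; this is not the Kubrick/Karajan implementation.

    # Toy replanning trigger: recommend a replan when completed tasks are
    # running much slower than planned (a stand-in for the learned signal).
    def should_replan(planned_runtimes, observed_runtimes, tolerance=1.5):
        finished = len(observed_runtimes)
        if finished == 0:
            return False
        slowdown = sum(observed_runtimes) / sum(planned_runtimes[:finished])
        return slowdown > tolerance

    plan = [10.0, 20.0, 15.0, 30.0]     # planned task runtimes (seconds)
    observed = [12.0, 41.0]             # runtimes of the tasks completed so far
    if should_replan(plan, observed):
        print("trigger replanning for the remaining tasks")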

  18. Computing Flows Using Chimera and Unstructured Grids

    Science.gov (United States)

    Liou, Meng-Sing; Zheng, Yao

    2006-01-01

    DRAGONFLOW is a computer program that solves the Navier-Stokes equations of flows in complexly shaped three-dimensional regions discretized by use of a direct replacement of arbitrary grid overlapping by nonstructured (DRAGON) grid. A DRAGON grid is a combination of a chimera grid (a composite of structured subgrids) and a collection of unstructured subgrids. DRAGONFLOW incorporates modified versions of two prior Navier-Stokes-equation-solving programs: OVERFLOW, which is designed to solve on chimera grids; and USM3D, which is used to solve on unstructured grids. A master module controls the invocation of individual modules in the libraries. At each time step of a simulated flow, OVERFLOW is invoked on the chimera portion of the DRAGON grid in alternation with USM3D, which is invoked on the unstructured subgrids of the DRAGON grid. The USM3D and OVERFLOW modules then immediately exchange their solutions and other data. As a result, USM3D and OVERFLOW are coupled seamlessly.
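
    The alternating invocation and data exchange described above can be sketched with a much simpler stand-in problem. In the sketch below, two sub-solvers each advance an overlapping piece of a 1-D diffusion problem and then exchange their overlap values every step; the explicit diffusion update, the array names and the overlap indices are illustrative assumptions, not the OVERFLOW/USM3D interfaces.

    # Two overlapping subgrids advanced in alternation with an overlap exchange,
    # mimicking the coupling pattern (the physics is a trivial 1-D diffusion).
    import numpy as np

    nx, nu, dx = 101, 1.0, 1.0 / 100
    dt = 0.4 * dx * dx / nu                    # stable explicit time step
    x = np.linspace(0.0, 1.0, nx)
    uA = np.sin(np.pi * x[:61]).copy()         # subgrid A: global indices 0..60
    uB = np.sin(np.pi * x[50:]).copy()         # subgrid B: global indices 50..100

    def advance(u):
        """One explicit diffusion step; stands in for a real flow solver."""
        un = u.copy()
        un[1:-1] = u[1:-1] + nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        return un

    for step in range(500):
        uA = advance(uA)                       # "solver A" advances its subgrid
        uB = advance(uB)                       # "solver B" advances its subgrid
        uA[51:] = uB[1:11]                     # A receives B's overlap values (51..60)
        uB[:10] = uA[50:60]                    # B receives A's overlap values (50..59)
    print(uA[50], uB[0])                       # the overlap solutions should agree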

  19. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving fault tolerance. Basically, the Mobile Grid Computing Model focuses on service composition and resource sharing processes. In order to increase the performance of the system, fault recovery plays a vital role. As the recovery point in our proposed system, a Surrogate Object based Checkpoint Recovery Model is introduced. This Checkpoint Recovery Model depends on the Surrogate Object and the Fau...
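
    The checkpoint-and-recovery idea can be illustrated with a generic sketch: a long-running task periodically saves its state and, after a simulated failure, a supervisor restarts it from the last saved state. The file-based store, function names and failure probability below are assumptions for illustration; the surrogate-object protocol of the record places the recovery state on a proxy host rather than in a local file.

    # Generic checkpoint/restart pattern for a long-running grid task.
    import os, pickle, random

    STATE_FILE = "task_checkpoint.pkl"          # stand-in for the surrogate store

    def load_checkpoint():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "partial_sum": 0.0}  # fresh start

    def save_checkpoint(state):
        with open(STATE_FILE, "wb") as f:
            pickle.dump(state, f)

    def run_task(total_steps=1000, checkpoint_every=100, fail_prob=0.002):
        state = load_checkpoint()               # resume from the last recovery point
        while state["step"] < total_steps:
            if random.random() < fail_prob:
                raise RuntimeError("simulated node failure")
            state["partial_sum"] += state["step"] ** 0.5
            state["step"] += 1
            if state["step"] % checkpoint_every == 0:
                save_checkpoint(state)          # new recovery point
        return state["partial_sum"]

    while True:                                 # simple supervisor: retry until done
        try:
            print("result:", round(run_task(), 2))
            break
        except RuntimeError:
            pass                                # only work since the last checkpoint is redone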

  20. Developing Long-Term Computing Skills among Low-Achieving Students via Web-Enabled Problem-Based Learning and Self-Regulated Learning

    Science.gov (United States)

    Tsai, Chia-Wen; Lee, Tsang-Hsiung; Shen, Pei-Di

    2013-01-01

    Many private vocational schools in Taiwan have taken to enrolling students with lower levels of academic achievement. The authors re-designed a course and conducted a series of quasi-experiments to develop students' long-term computing skills, and examined the longitudinal effects of web-enabled, problem-based learning (PBL) and self-regulated…

  1. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  2. Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2007-01-01

    The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…

  3. Discovery Mondays: 'The Grid: a universal computer'

    CERN Multimedia

    2006-01-01

    How can one store and analyse the 15 million billion pieces of data that the LHC will produce each year with a computer that isn't the size of a sky-scraper? The IT experts have found the answer: the Grid, which will harness the power of tens of thousands of computers in the world by putting them together on one network and making them work like a single computer, achieving a power that has not yet been matched. The Grid, inspired by the Web, already exists - in fact, several of them exist in the field of science. The European EGEE project, led by CERN, contributes not only to the study of particle physics but to medical research as well, notably in the study of malaria and avian flu. The next Discovery Monday invites you to explore this futuristic computing technology. The 'Grid Masters' of CERN have prepared lively animations to help you understand how the Grid works. Children can practice saving the planet on the Grid video game. You will also discover other applications such as UNOSAT, a United Nations...

  4. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  5. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry

    2009-01-01

    "… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" Jose Lloret, Computing Reviews, March 2010

  6. Medulloblastoma: long-term results for patients treated with definitive radiation therapy during the computed tomography era

    International Nuclear Information System (INIS)

    Merchant, Thomas E.; Wang, M.-H.; Haida, Toni; Lindsley, Karen L.; Finlay, Jonathan; Dunkel, Ira J.; Rosenblum, Marc K.; Leibel, Steven A.

    1996-01-01

    , M stage and the extent of resection were prognostic factors. Ventriculoperitoneal shunting and the use of chemotherapy were associated with a poor outcome; however, these results were confounded by the positive impact of chemotherapy in decreasing the risk of extraneural metastases and the use of these therapies in the more advanced patients. Conclusion: These long-term follow-up data represent one of the largest series of patients with complete follow-up who were treated with a consistent radiation therapy treatment policy during the CT era. Local failure in patients with localized disease, the persistent risk of late failures, treatment-related toxicity, and the ever-present risk of secondary malignancies demonstrate the limitations of standard therapies. Strategies used to increase the total dose to the primary site should be pursued along with other adjuvant therapies such as intensive chemotherapy

  7. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  8. Financial Derivatives Market for Grid Computing

    CERN Document Server

    Aubert, David; Lindset, Snorre; Huuse, Henning

    2007-01-01

    This Master thesis studies the feasibility and properties of a financial derivatives market on Grid computing, a service for sharing computing resources over a network such as the Internet. For the European Organization for Nuclear Research (CERN) to perform research with the world's largest and most complex machine, the Large Hadron Collider (LHC), Grid computing was developed to handle the information created. In accordance with the mandate of the CERN Technology Transfer (TT) group, this thesis is part of CERN's dissemination of the Grid technology. The thesis gives a brief overview of the use of the Grid technology and where it is heading. IT trend analysts and large-scale IT vendors see this technology as key in transforming the world of IT. They predict that in a matter of years, IT will be bought as a service, instead of a good. Commoditization of IT, delivered as a service, is a paradigm shift that will have a broad impact on all parts of the IT market, as well as on the society as a whole. Political, e...

  9. Computer Simulation of the UMER Gridded Gun

    CERN Document Server

    Haber, Irving; Friedman, Alex; Grote, D P; Kishek, Rami A; Reiser, Martin; Vay, Jean-Luc; Zou, Yun

    2005-01-01

    The electron source in the University of Maryland Electron Ring (UMER) injector employs a grid 0.15 mm from the cathode to control the current waveform. Under nominal operating conditions, the grid voltage during the current pulse is sufficiently positive relative to the cathode potential to form a virtual cathode downstream of the grid. Three-dimensional computer simulations have been performed that use the mesh refinement capability of the WARP particle-in-cell code to examine a small region near the beam center in order to illustrate some of the complexity that can result from such a gridded structure. These simulations have been found to reproduce the hollowed velocity space that is observed experimentally. The simulations also predict a complicated time-dependent response to the waveform applied to the grid during the current turn-on. This complex temporal behavior appears to result directly from the dynamics of the virtual cathode formation and may therefore be representative of the expected behavior in...

  10. Bringing Federated Identity to Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Teheran, Jeny [Fermilab

    2016-03-04

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.
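
    The credential flow just described can be summarised as a short sequence of steps. The sketch below uses stub functions with invented names purely to show the order of operations (federated sign-on, short-lived certificate issuance, credential storage, job submission); it does not reproduce the actual InCommon, CILogon or MyProxy interfaces.

    # Illustrative sequence of the federated credential flow (all stubs).
    def authenticate_with_home_institution(user):
        """Federated web single sign-on; returns an assertion (stand-in)."""
        return f"assertion-for-{user}"

    def request_short_lived_certificate(assertion):
        """An online CA exchanges the assertion for a short-lived X.509 certificate."""
        return f"x509-cert-from-{assertion}"

    def store_in_credential_repository(cert, lifetime_hours=12):
        """The certificate is deposited so grid services can later obtain a proxy."""
        return {"cert": cert, "lifetime_hours": lifetime_hours}

    def submit_grid_job(credential, job_description):
        """The submission system retrieves the delegated credential and runs the job."""
        return f"job '{job_description}' submitted using {credential['cert']}"

    assertion = authenticate_with_home_institution("alice@university.example")
    certificate = request_short_lived_certificate(assertion)
    credential = store_in_credential_repository(certificate)
    print(submit_grid_job(credential, "reconstruction_run_42"))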

  11. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a viewable PDF of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  12. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  13. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    Directory of Open Access Journals (Sweden)

    Jaebeom Lee

    2018-05-01

    Full Text Available Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.

  14. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    Science.gov (United States)

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
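
    The Gaussian-process construction described in the two records above (multiple kernels, hyperparameters identified from training data, then a predictive mean with a 95% interval) can be sketched generically. The synthetic data, kernel choice and variable names below are illustrative assumptions, and scikit-learn is used only as one convenient implementation; this is not the authors' actual bridge model.

    # Minimal Gaussian-process sketch: fit on (time, temperature) inputs and
    # report a predictive mean with a 95% interval for future inputs.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(0)

    # Hypothetical training data standing in for the vision-based measurements.
    t = np.linspace(0, 365, 120)                                  # days in service
    temp = 12 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)
    X = np.column_stack([t, temp])
    y = -0.5 * np.log1p(t / 30) + 0.02 * temp + rng.normal(0, 0.05, t.size)  # deflection (mm)

    # Multiple kernels combined: a smooth trend term plus observation noise.
    kernel = ConstantKernel(1.0) * RBF(length_scale=[60.0, 5.0]) + WhiteKernel(0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)          # hyperparameters identified by maximising the marginal likelihood

    # Predictive mean and 95% interval for the following months.
    X_new = np.column_stack([np.linspace(365, 500, 50), np.full(50, 15.0)])
    mean, std = gp.predict(X_new, return_std=True)
    lower, upper = mean - 1.96 * std, mean + 1.96 * std
    print(np.round(mean[:3], 3), np.round(lower[:3], 3), np.round(upper[:3], 3))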

  15. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  16. Long-term collections

    CERN Multimedia

    Collectes à long terme

    2007-01-01

    The Committee of the Long Term Collections (CLT) asks for your attention for the following message from a young Peruvian scientist, following the earthquake which devastated part of her country a month ago.

  17. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally a few applications using this package will be discussed
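
    The core partition-and-send idea is simple to sketch: split a payload into N pieces, push them over N concurrent channels, and reassemble them in order on arrival. In the sketch below the "channel" is a trivial in-process stub rather than a real TCP stream, and the function names are invented for illustration; JPARSS itself manages real sockets and adds the certificate/SSL handling mentioned above.

    # Partition a payload and "transfer" the pieces concurrently, then reassemble.
    from concurrent.futures import ThreadPoolExecutor

    def split(payload: bytes, n: int):
        size = -(-len(payload) // n)              # ceiling division
        return [(i, payload[i * size:(i + 1) * size]) for i in range(n)]

    def send_over_channel(part):
        index, chunk = part
        return index, bytes(chunk)                # stand-in for one TCP stream

    def parallel_transfer(payload: bytes, n_streams: int = 4) -> bytes:
        with ThreadPoolExecutor(max_workers=n_streams) as pool:
            received = list(pool.map(send_over_channel, split(payload, n_streams)))
        received.sort(key=lambda item: item[0])   # restore the original order
        return b"".join(chunk for _, chunk in received)

    data = bytes(range(256)) * 1000
    assert parallel_transfer(data) == data
    print("transferred", len(data), "bytes over 4 simulated streams")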

  18. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang

    2010-10-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting the changes in a grid system can help to alarm the anomalies, clean the noises, and report the new patterns. In this paper, we proposed an approach of self-adaptive change detection based on the Page-Hinkley statistic test. It handles the non-stationary distribution without the assumption of data distribution and the empirical setting of parameters. We validate the approach on the EGEE streaming jobs, and report its better performance, achieving higher accuracy compared to the other change detection methods. Meanwhile this change detection process could help to discover a device fault which was not reported in the system logs. © 2010 IEEE.
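
    For reference, the standard (non-adaptive) Page-Hinkley test on which the approach builds has a compact form: it accumulates the deviations of each observation from the running mean and raises an alarm when the accumulated statistic rises too far above its running minimum. The sketch below is a generic textbook implementation with fixed, illustrative parameters, not the self-adaptive variant proposed in the record.

    # Generic Page-Hinkley test for detecting an increase in the mean of a stream.
    # delta is the tolerated change magnitude, lam the detection threshold.
    def page_hinkley(stream, delta=0.005, lam=5.0):
        n, mean, m_t, m_min = 0, 0.0, 0.0, 0.0
        alarms = []
        for t, x in enumerate(stream, start=1):
            n += 1
            mean += (x - mean) / n               # running mean of the observations
            m_t += x - mean - delta              # cumulative deviation statistic
            m_min = min(m_min, m_t)
            if m_t - m_min > lam:                # change detected
                alarms.append(t)
                n, mean, m_t, m_min = 0, 0.0, 0.0, 0.0   # restart after an alarm
        return alarms

    # Example: a stream whose mean jumps from 0 to 1 at t = 200.
    import random
    random.seed(1)
    data = [random.gauss(0, 0.5) for _ in range(200)] + [random.gauss(1, 0.5) for _ in range(200)]
    print(page_hinkley(data))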

  19. Long term contaminant migration and impacts from uranium mill tailings. Comparison of computer models using a hypothetical dataset

    International Nuclear Information System (INIS)

    Camus, H.

    1995-11-01

    The Uranium Mill Tailings Working Group of BIOMOVS II was initiated in Vienna in 1991 with the primary objective of comparing models which can be used to assess the long term impact of radioactive releases from uranium mill tailings, involving multiple pathways, multiple contaminants and multiple environmental receptors. A secondary objective was to examine how these models can be used to assess the fate of stable toxic elements. This is an interim report of the Working Group describing: development of a basic scenario describing a tailings system; application of models in deterministic calculations of contaminant concentrations in biosphere media, and related radiation doses, contaminant intakes and health risks; comparison of model results and review of the modelling. A hypothetical scenario has been developed for contaminant releases from a uranium mill tailings facility. The assumptions for the tailings facility and its environs have been chosen to facilitate the evaluation of potentially important processes incorporated into models. The site description is therefore idealised and does not represent any particular facility or type of facility. Atmospheric and groundwater release source terms have been chosen to facilitate comparison of models and should not be considered realistic. The time and effort taken over derivation of the scenario description and the associated preliminary modelling has been an important and valuable learning exercise. It also reflects the importance of gaining a clear picture of what is being modelled so that comparisons of model results are meaningful. Work within the exercise has contributed to new model development and to improvements and extensions to existing models. The scenario is a simplified description of a real facility and the releases which might occur. No allowance has been made for engineered features on the tailings disposal system which might reduce releases. The source terms have been chosen so as to test the models

  20. Long term contaminant migration and impacts from uranium mill tailings. Comparison of computer models using a hypothetical dataset

    Energy Technology Data Exchange (ETDEWEB)

    Camus, H [CEA Centre d' Etudes Nucleaires de Fontenay-aux-Roses (France). Inst. de Protection et de Surete Nucleaire; and others

    1995-11-01

    The Uranium Mill Tailings Working Group of BIOMOVS II was initiated in Vienna in 1991 with the primary objective of comparing models which can be used to assess the long term impact of radioactive releases from uranium mill tailings, involving multiple pathways, multiple contaminants and multiple environmental receptors. A secondary objective was to examine how these models can be used to assess the fate of stable toxic elements. This is an interim report of the Working Group describing: development of a basic scenario describing a tailings system; application of models in deterministic calculations of contaminant concentrations in biosphere media, and related radiation doses, contaminant intakes and health risks; comparison of model results and review of the modelling. A hypothetical scenario has been developed for contaminant releases from a uranium mill tailings facility. The assumptions for the tailings facility and its environs have been chosen to facilitate the evaluation of potentially important processes incorporated into models. The site description is therefore idealised and does not represent any particular facility or type of facility. Atmospheric and groundwater release source terms have been chosen to facilitate comparison of models and should not be considered realistic. The time and effort taken over derivation of the scenario description and the associated preliminary modelling has been an important and valuable learning exercise. It also reflects the importance of gaining a clear picture of what is being modelled so that comparisons of model results are meaningful. Work within the exercise has contributed to new model development and to improvements and extensions to existing models. The scenario is a simplified description of a real facility and the releases which might occur. No allowance has been made for engineered features on the tailings disposal system which might reduce releases. The source terms have been chosen so as to test the models

  1. [Computer cardiokymography. On its way to long-term noninvasive monitoring of cardiac performance in daily life].

    Science.gov (United States)

    Khaiutin, V M; Lukoshkova, E V; Sheroziia, G G

    2004-05-01

    stop veloergometry at lower loads, thus increasing the safety of the test. Since for large medical insurance companies very simple and inexpensive cardiokymographs are quite unprofitable, their commercial production in the USA and in Germany has been stopped. However, the goal of cardiokymography: a real-time, beat-to-beat, long-term monitoring of cardiac function in daily life, remains the major factor determining the future of the method.

  2. IBM announces global Grid computing solutions for banking, financial markets

    CERN Multimedia

    2003-01-01

    "IBM has announced a series of Grid projects around the world as part of its Grid computing program. They include IBM new Grid-based product offerings with business intelligence software provider SAS and other partners that address the computer-intensive needs of the banking and financial markets industry (1 page)."

  3. Central tarsal bone fractures in horses not used for racing: Computed tomographic configuration and long-term outcome of lag screw fixation.

    Science.gov (United States)

    Gunst, S; Del Chicca, F; Fürst, A E; Kuemmerle, J M

    2016-09-01

    There are no reports on the configuration of equine central tarsal bone fractures based on cross-sectional imaging, or on the clinical and radiographic long-term outcome after internal fixation. To report clinical, radiographic and computed tomographic findings of equine central tarsal bone fractures and to evaluate the long-term outcome of internal fixation. Retrospective case series. All horses diagnosed with a central tarsal bone fracture at our institution in 2009-2013 were included. Computed tomography and internal fixation using lag screw technique were performed in all patients. Medical records and diagnostic images were reviewed retrospectively. A clinical and radiographic follow-up examination was performed at least 1 year post operatively. A central tarsal bone fracture was diagnosed in 6 horses. Five were Warmbloods used for showjumping and one was a Quarter Horse used for reining. All horses had sagittal slab fractures that began dorsally, ran in a plantar or plantaromedial direction and exited the plantar cortex at the plantar or plantaromedial indentation of the central tarsal bone. Marked sclerosis of the central tarsal bone was diagnosed in all patients. At long-term follow-up, 5/6 horses were sound and used as intended, although mild osteophyte formation at the distal intertarsal joint was commonly observed. Central tarsal bone fractures in nonracehorses had a distinct configuration, but radiographically subtle additional fracture lines can occur. A chronic stress-related aetiology seems likely. Internal fixation of these fractures based on an accurate diagnosis of the individual fracture configuration resulted in a very good prognosis. © 2015 EVJ Ltd.

  4. Evaluating long term forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Lady, George M. [Department of Economics, College of Liberal Arts, Temple University, Philadelphia, PA 19122 (United States)

    2010-03-15

    The U.S. Department of Energy's Energy Information Administration (EIA), and its predecessor organizations, has published projections of U.S. energy production, consumption, distribution and prices annually for over 30 years. A natural issue to raise in evaluating the projections is an assessment of their accuracy compared to eventual outcomes. A related issue is the determination of the sources of 'error' in the projections that are due to differences between the assumed versus realized values of the associated assumptions. One way to do this would be to run the computer-based model from which the projections are derived at the time the projected values are realized, using actual rather than assumed values for model assumptions, and compare these results to the original projections. For long term forecasts, this approach would require that the model's software and hardware configuration be archived and available for many years, possibly decades, into the future. Such archiving creates many practical problems and, in general, it is not being done. This paper reports on an alternative approach for evaluating the projections. In the alternative approach, the model is run many times for cases in which important assumptions are changed individually and in combinations. A database is assembled from the solutions and a regression analysis is conducted for each important projected variable with the associated assumptions chosen as exogenous variables. When actual data are eventually available, the regression results are then used to estimate the sources of the differences in the projections of the endogenous variables compared to their eventual outcomes. The results presented here are for residential and commercial sector natural gas and electricity consumption. (author)
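
    The attribution step described above can be sketched with a toy model in place of the real forecasting system. The two-assumption model, the sampled ranges and the variable names below are invented for illustration; only the pattern (many perturbed runs, a regression of the projected variable on the assumptions, then attribution of the error once actual assumption values are known) follows the record.

    # Regression-based attribution of projection error to assumption differences.
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(gdp_growth, gas_price):
        """Stand-in for the forecasting model: projected gas consumption."""
        return 20.0 + 4.0 * gdp_growth - 1.5 * gas_price

    # 1. Run the model many times with the assumptions varied.
    assumptions = rng.uniform([1.0, 2.0], [4.0, 8.0], size=(200, 2))  # (gdp_growth, gas_price)
    outputs = np.array([toy_model(g, p) for g, p in assumptions])

    # 2. Regress the projected variable on the assumptions.
    X = np.column_stack([np.ones(len(outputs)), assumptions])
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)

    # 3. Once actual values are known, estimate the error due to assumption differences.
    assumed = np.array([3.0, 4.0])    # values used when the projection was published
    actual = np.array([2.0, 6.0])     # values eventually realized
    error_from_assumptions = coef[1:] @ (actual - assumed)
    print("estimated error due to assumption differences:", round(float(error_from_assumptions), 2))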

  5. Mesoscale Climate Evaluation Using Grid Computing

    Science.gov (United States)

    Campos Velho, H. F.; Freitas, S. R.; Souto, R. P.; Charao, A. S.; Ferraz, S.; Roberti, D. R.; Streck, N.; Navaux, P. O.; Maillard, N.; Collischonn, W.; Diniz, G.; Radin, B.

    2012-04-01

    The CLIMARS project is focused on establishing an operational environment for seasonal climate prediction for the Rio Grande do Sul state, Brazil. The dynamical downscaling will be performed with the use of several software platforms and hardware infrastructure to carry out the investigation on the mesoscale of the global change impact. Grid computing takes advantage of geographically spread out computer systems, connected by the internet, to enhance the power of computation. Ensemble climate prediction is an appropriate application for processing on grid computing, because the integration of each ensemble member does not depend on information from the other ensemble members. Grid processing is employed to compute the 20-year climatology and the long range simulations under the ensemble methodology. BRAMS (Brazilian Regional Atmospheric Model) is a mesoscale model developed from a version of RAMS (from the Colorado State University - CSU, USA). The BRAMS model is the tool for carrying out the dynamical downscaling from the IPCC scenarios. Long range BRAMS simulations will provide data for climate (data) analysis, and supply data for numerical integration of different models: (a) Regime of the extreme events for temperature and precipitation fields: statistical analysis will be applied to the BRAMS data; (b) CCATT-BRAMS (Coupled Chemistry Aerosol Tracer Transport - BRAMS) is an environmental prediction system that will be used to evaluate whether the new patterns of temperature, rain regime, and wind field have a significant impact on pollutant dispersion in the analyzed regions; (c) MGB-IPH (Portuguese acronym for the Large Basin Model (MGB), developed by the Hydraulic Research Institute (IPH) of the Federal University of Rio Grande do Sul (UFRGS), Brazil) will be employed to simulate the alteration of the river flux under new climate patterns. Important meteorological input variables for the MGB-IPH are the precipitation (most relevant
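
    Because the ensemble members are mutually independent, they map naturally onto separate grid workers with no inter-member communication. The sketch below illustrates only that dispatch pattern; the per-member function is a trivial stand-in for one long-range model integration, and all names and numbers are invented for illustration.

    # Independent ensemble members dispatched to parallel workers.
    from concurrent.futures import ProcessPoolExecutor
    import statistics

    def run_member(seed: int) -> float:
        """Stand-in for one model integration; returns, say, mean seasonal rainfall."""
        import random
        random.seed(seed)
        return sum(random.gauss(5.0, 1.0) for _ in range(10_000)) / 10_000

    if __name__ == "__main__":
        seeds = range(16)                       # one seed per ensemble member
        with ProcessPoolExecutor() as pool:     # members run concurrently, no coupling
            results = list(pool.map(run_member, seeds))
        print("ensemble mean:", round(statistics.mean(results), 3),
              "spread:", round(statistics.stdev(results), 3))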

  6. Overcoming the socio-technical divide: A long-term source of hope in feminist studies of computer science

    Directory of Open Access Journals (Sweden)

    Corinna Bath

    2008-07-01

    Full Text Available The dichotomy of the technical and the social is strongly gendered in western thought. Therefore, potential dissolutions of the socio-technical divide have always been a source of hope from a feminist point of view. The starting point of this contribution is recent trends in the computer science discipline, such as the new interaction paradigm and the concept of ‘social machines’, which seem to challenge the borderline of the technical as opposed to the social and, thereby, refresh promises for changes in the gender-technology relationship. The paper primarily explores the entanglement between the socio-technical divide and the structural-symbolic gender order on the basis of historical academic discourses in German computer science. Thereby, traditions of critical thinking in the German computer science discipline and related feminist voices are introduced. A reflection of these historical discourses indicates that ‘interaction’ and ‘social machines’ are contested zones, which call for feminist intervention.

  7. Long-Term Collections

    CERN Multimedia

    Comité des collectes à long terme

    2011-01-01

    It is the time of the year when our firemen colleagues go around the laboratory for their traditional calendar sale. Part of the proceeds from the sales will be donated to the long-term collections. We hope that you will welcome them warmly.

  8. Impact of sirolimus-eluting stent fractures without early cardiac events on long-term clinical outcomes: A multislice computed tomography study

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Tsuyoshi [Toyohashi Heart Center, Oyama-cho, Toyohashi (Japan); Nagoya City University Graduate School of Medical Sciences, Department of Cardio-Renal Medicine and Hypertension, Nagoya (Japan); Kimura, Masashi; Ehara, Mariko; Terashima, Mitsuyasu; Nasu, Kenya; Kinoshita, Yoshihisa; Habara, Maoto; Tsuchikane, Etsuo; Suzuki, Takahiko [Toyohashi Heart Center, Oyama-cho, Toyohashi (Japan)

    2014-05-15

    This study sought to evaluate the impact of sirolimus-eluting stent (SES) fractures on long-term clinical outcomes using multislice computed tomography (MSCT). In this study, 528 patients undergoing 6- to 18-month follow-up 64-slice MSCT after SES implantation without early clinical events were followed clinically (the median follow-up interval was 4.6 years). A CT-detected stent fracture was defined as a complete gap with Hounsfield units (HU) <300 at the site of separation. The major adverse cardiac events (MACEs), including cardiac death, stent thrombosis, and target lesion revascularisation, were compared according to the presence of stent fracture. Stent fractures were observed in 39 patients (7.4 %). MACEs were more common in patients with CT-detected stent fractures than in those without (46 % vs. 7 %, p < 0.01). Univariate Cox regression analysis indicated a significant relationship between MACE and stent fracture [hazard ratio (HR) 7.65; p < 0.01], age (HR 1.03; p = 0.04), stent length (HR 1.03; p < 0.01), diabetes mellitus (HR 1.77; p = 0.04), and chronic total occlusion (HR 2.54; p = 0.01). In the multivariate model, stent fracture (HR 5.36; p < 0.01) and age (HR 1.03; p = 0.04) remained significant predictors of MACE. An SES fracture detected by MSCT without early clinical events was associated with long-term clinical adverse events. (orig.)

  9. The WECHSL-Mod2 code: A computer program for the interaction of a core melt with concrete including the long term behavior

    International Nuclear Information System (INIS)

    Reimann, M.; Stiefel, S.

    1989-06-01

    The WECHSL-Mod2 code is a mechanistic computer code developed for the analysis of the thermal and chemical interaction of initially molten LWR reactor materials with concrete in a two-dimensional, axisymmetrical concrete cavity. The code performs calculations from the time of initial contact of a hot molten pool, through the start of solidification processes, until long term basemat erosion over several days, with the possibility of basemat penetration. The code assumes that the metallic phases of the melt pool form a layer at the bottom, overlaid by the oxide melt. Heat generation in the melt is by decay heat and chemical reactions from metal oxidation. Energy is lost to the melting concrete and to the upper containment by radiation or evaporation of sumpwater possibly flooding the surface of the melt. Thermodynamic and transport properties as well as criteria for heat transfer and solidification processes are internally calculated for each time step. Heat transfer is modelled taking into account the high gas flux from the decomposing concrete and the heat conduction in the crusts possibly forming in the long term at the melt/concrete interface. The WECHSL code in its present version was validated by the BETA experiments. The test samples include a typical BETA post test calculation and a WECHSL application to a reactor accident. (orig.) [de

  10. From testbed to reality grid computing steps up a gear

    CERN Multimedia

    2004-01-01

    "UK plans for Grid computing changed gear this week. The pioneering European DataGrid (EDG) project came to a successful conclusion at the end of March, and on 1 April a new project, known as Enabling Grids for E-Science in Europe (EGEE), begins" (1 page)

  11. A computational analysis of the long-term regulation of arterial pressure [v1; ref status: indexed, http://f1000r.es/1xq

    Directory of Open Access Journals (Sweden)

    Daniel A. Beard

    2013-10-01

    Full Text Available The asserted dominant role of the kidneys in the chronic regulation of blood pressure and in the etiology of hypertension has been debated since the 1970s. At the center of the theory is the observation that the acute relationships between arterial pressure and urine production—the acute pressure-diuresis and pressure-natriuresis curves—physiologically adapt to perturbations in pressure and/or changes in the rate of salt and volume intake. These adaptations, modulated by various interacting neurohumoral mechanisms, result in chronic relationships between water and salt excretion and pressure that are much steeper than the acute relationships. While the view that renal function is the dominant controller of arterial pressure has been supported by computer models of the cardiovascular system known as the “Guyton-Coleman model”, no unambiguous description of a computer model capturing chronic adaptation of acute renal function in blood pressure control has been presented. Here, such a model is developed with the goals of: 1. capturing the relevant mechanisms in an identifiable mathematical model; 2. identifying model parameters using appropriate data; 3. validating model predictions in comparison to data; and 4. probing hypotheses regarding the long-term control of arterial pressure and the etiology of primary hypertension. The developed model reveals: long-term control of arterial blood pressure is primarily through the baroreflex arc and the renin-angiotensin system; and arterial stiffening provides a sufficient explanation for the etiology of primary hypertension associated with ageing. Furthermore, the model provides the first consistent explanation of the physiological response to chronic stimulation of the baroreflex.

  12. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the consequent experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the dispatch and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computing centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for making a copy of the raw data and for processing it in order to recover relevant data with a physical meaning and to transfer the results to the 150 Tier 2 centres. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of the simulations. Tier 3 centres are at the level of the laboratories; they provide a complementary and local resource to the Tier 2 centres in terms of data analysis. (A.C.)

  13. Long-term Risedronate Treatment Normalizes Mineralization and Continues to Preserve Trabecular Architecture: Sequential Triple Biopsy Studies with Micro-Computed Tomography

    International Nuclear Information System (INIS)

    Borah, B.; Dufresne, T.; Ritman, E.; Jorgensen, S.; Liu, S.; Chmielewski, P.; Phipps, R.; Zhou, X.; Sibonga, J.; Turner, R.

    2006-01-01

    The objective of the study was to assess the time course of changes in bone mineralization and architecture using sequential triple biopsies from women with postmenopausal osteoporosis (PMO) who received long-term treatment with risedronate. Transiliac biopsies were obtained from the same subjects (n = 7) at baseline and after 3 and 5 years of treatment with 5 mg daily risedronate. Mineralization was measured using 3-dimensional (3D) micro-computed tomography (CT) with synchrotron radiation and was compared to levels in healthy premenopausal women (n = 12). Compared to the untreated PMO women at baseline, the premenopausal women had higher average mineralization (Avg-MIN) and peak mineralization (Peak-MIN) by 5.8% (P = 0.003) and 8.0% (P = 0.003), respectively, and lower ratio of low to high-mineralized bone volume (BMR-V) and surface area (BMR-S) by 73.3% (P = 0.005) and 61.7% (P = 0.003), respectively. Relative to baseline, 3 years of risedronate treatment significantly increased Avg-MIN (4.9 ± 1.1%, P = 0.016) and Peak-MIN (6.2 ± 1.5%, P = 0.016), and significantly decreased BMR-V (-68.4 ± 7.3%, P = 0.016) and BMR-S (-50.2 ± 5.7%, P = 0.016) in the PMO women. The changes were maintained at the same level when treatment was continued up to 5 years. These results are consistent with the significant reduction of turnover observed after 3 years of treatment, which was similarly maintained through 5 years of treatment. Risedronate restored the degree of mineralization and the ratios of low- to high-mineralized bone to premenopausal levels after 3 years of treatment, suggesting that treatment reduced bone turnover in PMO women to healthy premenopausal levels. Conventional micro-CT analysis further demonstrated that bone volume (BV/TV) and trabecular architecture did not change from baseline up to 5 years of treatment, suggesting that risedronate provided long-term preservation of trabecular architecture in the PMO women. Overall, risedronate provided sustained

  14. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide virtually unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic of computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the developments of grids in Europe, the status of the so-called national grid initiatives as well as the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  15. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    Science.gov (United States)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
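
    As a generic flavour of how algebraic grid generation gives explicit control of spacing, the sketch below builds a simple channel grid whose points are clustered near one wall by a hyperbolic-tangent stretching of the parametric coordinate. The stretching function, dimensions and parameter names are illustrative assumptions; this is not the control point form (CPF) machinery or the adaptation scheme discussed in the record.

    # Algebraic grid for a rectangular channel with wall clustering via stretching.
    import numpy as np

    def stretched_grid(nx=41, ny=21, beta=2.0, height=1.0, length=4.0):
        xi = np.linspace(0.0, 1.0, nx)          # uniform parametric coordinates
        eta = np.linspace(0.0, 1.0, ny)
        # tanh stretching clusters grid lines near the lower wall (eta = 0).
        y1d = height * (1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta))
        x, y = np.meshgrid(length * xi, y1d, indexing="ij")
        return x, y

    x, y = stretched_grid()
    print("first wall-normal coordinates:", np.round(y[0, :4], 4))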

  16. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use and allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  17. The WECHSL-Mod3 code: A computer program for the interaction of a core melt with concrete including the long term behavior. Model description and user's manual

    International Nuclear Information System (INIS)

    Foit, J.J.; Adroguer, B.; Cenerino, G.; Stiefel, S.

    1995-02-01

    The WECHSL-Mod3 code is a mechanistic computer code developed for the analysis of the thermal and chemical interaction of initially molten reactor materials with concrete in a two-dimensional as well as in a one-dimensional, axisymmetrical concrete cavity. The code performs calculations from the time of initial contact of a hot molten pool over start of solidification processes until long term basemat erosion over several days with the possibility of basemat penetration. It is assumed that an underlying metallic layer exists covered by an oxidic layer or that only one oxidic layer is present which can contain a homogeneously dispersed metallic phase. Heat generation in the melt is by decay heat and chemical reactions from metal oxidation. Energy is lost to the melting concrete and to the upper containment by radiation or evaporation of sumpwater possibly flooding the surface of the melt. Thermodynamic and transport properties as well as criteria for heat transfer and solidification processes are internally calculated for each time step. Heat transfer is modelled taking into account the high gas flux from the decomposing concrete and the heat conduction in the crusts possibly forming in the long term at the melt/concrete interface. The CALTHER code (developed at CEA, France) which models the radiative heat transfer from the upper surface of the corium melt to the surrounding cavity is implemented in the present WECHSL version. The WECHSL code in its present version was validated by the BETA, ACE and SURC experiments. The test samples include a BETA and the SURC2 post test calculations and a WECHSL application to a reactor accident. (orig.) [de

  18. The MicroGrid: A Scientific Tool for Modeling Computational Grids

    Directory of Open Access Journals (Sweden)

    H.J. Song

    2000-01-01

    Full Text Available The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on micro-benchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2% to 4%).

  19. Long-Term Collections

    CERN Multimedia

    Staff Association

    2016-01-01

    45 years helping in developing countries! CERN personnel have been helping the least fortunate people on the planet since 1971. How? With the Long-Term Collections! Dear Colleagues, The Staff Association’s Long-Term Collections (LTC) Committee is delighted to share this important milestone in the life of our Laboratory with you. Indeed, whilst the name of CERN is known worldwide for scientific discoveries, it also shines in the many humanitarian projects which have been supported by the LTC since 1971. Several schools and clinics, far and wide, carry its logo... Over the past 45 years, 74 projects have been supported (9 of which are still ongoing). This all came from a group of colleagues who wanted to share a little of what life offered them here at CERN, in this haven of mutual understanding, peace and security, with those who were less fortunate elsewhere. Thus, the LTC were born... Since then, we have worked as a team to maintain the dream of these visionaries, with the help of regular donat...

  20. Long-Term Collection

    CERN Multimedia

    Staff Association

    2016-01-01

    Dear Colleagues, As previously announced in Echo (No. 254), your delegates took action to draw attention to the projects of the Long-Term Collections (LTC), the humanitarian body of the CERN Staff Association. On Tuesday, 11 October, at noon, small Z-Cards were widely distributed at the entrances of CERN restaurants and we thank you all for your interest. We hope to have achieved an important part of our goal, which was to inform you, convince you and find new supporters among you. We will find out in the next few days! An exhibition of the LTC was also set up in the Main Building for the entire week. The Staff Association wants to celebrate the occasion of the Long-Term Collection’s 45th anniversary at CERN because, ever since 1971, CERN personnel have showed great support in helping the least fortunate people on the planet in a variety of ways according to their needs. On a regular basis, joint fundraising appeals are made with the Directorate to help the victims of natural disasters around th...

  1. Collectes à long terme

    CERN Multimedia

    Collectes à long terme

    2014-01-01

    As the end of 2014 fast approaches, the Long-Term Collections (CLT) Committee warmly thanks its faithful and regular donors for their contributions to our actions in favour of the most deprived people on our planet. Being able to count on your steady support is very important to our Committee. For more than 40 years now, the CLT model has been based mainly on long-term actions (typically support for 4-5 years per project, sometimes longer depending on circumstances), and its planning requires great regularity from its financial supporters. A big THANK YOU to you! Other donations reach us during the year, and they are also most welcome. In particular, we would like to thank...

  2. Removal of apparent singularity in grid computations

    International Nuclear Information System (INIS)

    Jakubovics, J.P.

    1993-01-01

    A self-consistency test for magnetic domain wall models was suggested by Aharoni. The test consists of evaluating the ratio S = ε_wall/ε′_wall, where ε_wall is the wall energy and ε′_wall is the integral of a certain function of the direction cosines of the magnetization, α, β, γ, over the volume occupied by the domain wall. If the computed configuration is a good approximation to one corresponding to an energy minimum, the ratio is close to 1. The integrand of ε′_wall contains terms that are inversely proportional to γ. Since γ passes through zero at the centre of the domain wall, these terms have a singularity at these points. The integral is finite and its evaluation does not usually present any problems when the direction cosines are known in terms of continuous functions. In many cases, significantly better results for magnetization configurations of domain walls can be obtained by computations using finite element methods. The direction cosines are then only known at a set of discrete points, and integration over the domain wall is replaced by summation over these points. Evaluation of ε′_wall becomes inaccurate if the terms in the summation are taken to be the values of the integrand at the grid points, because of the large contribution of points close to where γ changes sign. The self-consistency test has recently been generalised to a larger number of cases. The purpose of this paper is to suggest a method of improving the accuracy of the evaluation of integrals in such cases. Since the self-consistency test has so far only been applied to two-dimensional magnetization configurations, the problem and its solution will be presented for that specific case. Generalisation to three or more dimensions is straightforward
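
    The difficulty described above can be reproduced with a small numerical experiment. The Python sketch below is not the method proposed in the paper; it is only a generic illustration of why a plain sum over grid points of an integrand containing a 1/γ-type term is dominated by the points nearest the zero of γ, and of how a standard singularity-subtraction trick (splitting off the singular part and integrating it analytically) restores convergence. The integrand, the interval and the position of the zero crossing are all invented for the demonstration.

        import numpy as np

        # Toy model of the problem: the integrand contains a term proportional to
        # 1/gamma, with gamma(x) = x - a changing sign inside the interval [0, 1].
        # The (principal-value) integral is finite, but a plain sum over grid points
        # is dominated by the points closest to the zero of gamma.
        a = 1.0 / np.pi          # position of the zero crossing (invented)
        f = np.cos               # smooth numerator (invented)

        def naive_sum(n):
            """Midpoint-rule sum of f(x)/(x - a) over a uniform grid on [0, 1]."""
            x = (np.arange(n) + 0.5) / n
            return np.sum(f(x) / (x - a)) / n

        def subtracted_sum(n):
            """Singularity subtraction: f(x)/(x-a) = (f(x)-f(a))/(x-a) + f(a)/(x-a).
            The first part is regular; the second integrates analytically over [0, 1]
            (as a principal value) to f(a) * log((1-a)/a)."""
            x = (np.arange(n) + 0.5) / n
            regular = np.where(np.isclose(x, a),
                               -np.sin(a),                 # limit of the regular part at x = a
                               (f(x) - f(a)) / (x - a))
            return np.sum(regular) / n + f(a) * np.log((1.0 - a) / a)

        for n in (100, 200, 400, 800):
            print(n, naive_sum(n), subtracted_sum(n))
        # The subtracted estimate settles quickly as n grows; the naive one jumps
        # around, because the cells nearest x = a contribute huge, poorly cancelled terms.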

  3. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is applying the resources of many computers in a network to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources; (2) as a way to solve problems that cannot be approached without an enormous amount of computing power; and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  4. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  5. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    Science.gov (United States)

    The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. This dataset is associated with the following publication

  6. Olfactory neuroblastoma: the long-term outcome and late toxicity of multimodal therapy including radiotherapy based on treatment planning using computed tomography

    International Nuclear Information System (INIS)

    Mori, Takashi; Onimaru, Rikiya; Onodera, Shunsuke; Tsuchiya, Kazuhiko; Yasuda, Koichi; Hatakeyama, Hiromitsu; Kobayashi, Hiroyuki; Terasaka, Shunsuke; Homma, Akihiro; Shirato, Hiroki

    2015-01-01

    Olfactory neuroblastoma (ONB) is a rare tumor originating from the olfactory epithelium. Here we retrospectively analyzed the long-term treatment outcomes and toxicity of radiotherapy for ONB patients for whom computed tomography (CT) and three-dimensional treatment planning were conducted, in order to reappraise the role of radiotherapy in the light of recent advanced technology and chemotherapy. Seventeen patients with ONB treated between July 1992 and June 2013 were included. Three patients were Kadish stage B and 14 were stage C. All patients were treated with radiotherapy with or without surgery or chemotherapy. The radiation dose ranged from 50 Gy to 66 Gy, except for one patient who received 40 Gy preoperatively. The median follow-up time was 95 months (range 8–173 months). The 5-year overall survival (OS) and relapse-free survival (RFS) rates were estimated at 88% and 74%, respectively. Five patients with stage C disease had recurrence, with a median time to recurrence of 59 months (range 7–115 months). Late adverse events equal to or above Grade 2 in CTCAE v4.03 were observed in three patients. Multimodal therapy including radiotherapy with precise treatment planning based on CT simulation achieved an excellent local control rate with acceptable toxicity and reasonable overall survival for patients with ONB.

  7. Use of clinical and computed tomography findings to assess long-term unsatisfactory outcome after femoral head and neck ostectomy in four large breed dogs.

    Science.gov (United States)

    Ober, Ciprian; Pestean, Cosmin; Bel, Lucia; Taulescu, Marian; Milgram, Joshua; Todor, Adrian; Ungur, Rodica; Leșu, Mirela; Oana, Liviu

    2018-05-10

    Femoral head and neck ostectomy (FHNO) is a salvage surgical procedure intended to eliminate pain associated with hip joint laxity in the immature dog, or pain due to secondary osteoarthritis in the mature dog. The outcome of the procedure is associated with the size of the dog, but the cause of the generally poorer outcome in larger breeds has not been determined. The objective of this study was to assess the long-term results of FHNO associated with unsatisfactory functional outcome by means of clinical examination and computed tomography (CT) scanning. Four large mixed breed dogs underwent FHNO in different veterinary clinics. Clinical and CT evaluations were carried out a long time after the procedures had been done. Hip pain, muscle atrophy, decreased range of motion and chronic lameness were observed at clinical examination. Extensive remodelling, unacceptable bone-on-bone contact with bony proliferation involving the femoral neck and acetabulum, as well as excessive bone removal with lysis, were observed by CT scanning. Revision osteotomy was performed in one dog. Deep gluteal muscle interposition was used, but no improvements were observed postoperatively. This is the first report on the evaluation of three-dimensional CT reconstructions of the late bone remodelling associated with poor clinical outcome in large dogs. The study shows that FHNO can lead to severe functional deficits in large breed dogs. An extensive follow-up study is necessary to more accurately determine the frequency of such complications.

  8. Cone Beam Computed Tomography Evaluation of the Diagnosis, Treatment Planning, and Long-Term Followup of Large Periapical Lesions Treated by Endodontic Surgery: Two Case Reports

    Directory of Open Access Journals (Sweden)

    Vijay Shekhar

    2013-01-01

    Full Text Available The aim of this case report is to present two cases where cone beam computed tomography (CBCT) was used for the diagnosis, treatment planning, and followup of large periapical lesions in relation to maxillary anterior teeth treated by endodontic surgery. Periapical disease may be detected sooner using CBCT, and their true size, extent, nature, and position can be assessed. It allows the clinician to select the most relevant views of the area of interest, resulting in improved detection of periapical lesions. CBCT scan may provide a better, more accurate, and faster method to differentially diagnose a solid (granuloma) from a fluid-filled lesion or cavity (cyst). In the present case report, endodontic treatment was performed for both cases, followed by endodontic surgery. Biopsy was done to establish the confirmatory histopathological diagnosis of the periapical lesions. Long-term assessment of the periapical healing following surgery was done in all three dimensions using CBCT and was found to be more accurate than IOPA radiography. It was concluded that CBCT was a useful modality in making the diagnosis and treatment plan and assessing the outcome of endodontic surgery for large periapical lesions.

  9. Cone Beam Computed Tomography Evaluation of the Diagnosis, Treatment Planning, and Long-Term Followup of Large Periapical Lesions Treated by Endodontic Surgery: Two Case Reports

    Science.gov (United States)

    Shekhar, Vijay; Shashikala, K.

    2013-01-01

    The aim of this case report is to present two cases where cone beam computed tomography (CBCT) was used for the diagnosis, treatment planning, and followup of large periapical lesions in relation to maxillary anterior teeth treated by endodontic surgery. Periapical disease may be detected sooner using CBCT, and their true size, extent, nature, and position can be assessed. It allows the clinician to select the most relevant views of the area of interest, resulting in improved detection of periapical lesions. CBCT scan may provide a better, more accurate, and faster method to differentially diagnose a solid (granuloma) from a fluid-filled lesion or cavity (cyst). In the present case report, endodontic treatment was performed for both cases, followed by endodontic surgery. Biopsy was done to establish the confirmatory histopathological diagnosis of the periapical lesions. Long-term assessment of the periapical healing following surgery was done in all three dimensions using CBCT and was found to be more accurate than IOPA radiography. It was concluded that CBCT was a useful modality in making the diagnosis and treatment plan and assessing the outcome of endodontic surgery for large periapical lesions. PMID:23762646

  10. ATLAS grid compute cluster with virtualized service nodes

    International Nuclear Information System (INIS)

    Mejia, J; Stonjek, S; Kluth, S

    2010-01-01

    The ATLAS Computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly for maintenance reasons, computer centres install the same operating system and version on all computers. This might lead to problems with the Grid middleware if the local version is different from the one for which it has been developed. At RZG we partly solved this conflict by using virtualization technology for the service nodes. We will present the setup used at RZG and show how it helped to solve the problems described above. In addition we will illustrate the additional advantages gained by the above setup.

  11. Introduction: Long term prediction

    International Nuclear Information System (INIS)

    Beranger, G.

    2003-01-01

    Making the right choice of a material appropriate to a given application should be based on taking into account several parameters, as follows: cost, standards, regulations, safety, recycling, chemical properties, supply, transformation, forming, assembly, mechanical and physical properties, as well as the behaviour in practical conditions. Data taken from a private communication (J.H. Davidson) are reproduced, presenting the lifetime range of materials from a couple of minutes to half a million hours, corresponding to applications from missile technology up to high-temperature nuclear reactors or steam turbines. In the case of deep storage of nuclear waste the time required is completely different from these values, since we have to ensure the integrity of the storage system for several thousand years. The vitrified nuclear wastes should be stored in metallic canisters made of iron and carbon steels, stainless steels, copper and copper alloys, nickel alloys or titanium alloys. Some of these materials are passivating metals, i.e. they develop a thin protective film, 2 or 3 nm thick - the so-called passive films. These films prevent general corrosion of the metal over a large range of chemical conditions of the environment. Under some specific conditions, localized corrosion, such as pitting, occurs. Consequently, it is absolutely necessary to determine these chemical conditions and their stability over time in order to understand the behaviour of a given material. In other words, the corrosion system is constituted by the complex material/surface/medium. For high-level nuclear wastes the main issues to be resolved concern: geological disposal; deep storage in clay; the waste metallic canister; the backfill mixture (clay-gypsum) or concrete; long-term behaviour; the data needed for modelling and for predicting; the choice of an appropriate solution among several metallic candidates. The analysis of the complex material/surface/medium is of great importance

  12. Long-Term Symbolic Learning

    National Research Council Canada - National Science Library

    Kennedy, William G; Trafton, J. G

    2007-01-01

    What are the characteristics of long-term learning? We investigated the characteristics of long-term, symbolic learning using the Soar and ACT-R cognitive architectures running cognitive models of two simple tasks...

  13. Grid Computing Das wahre Web 2.0?

    CERN Document Server

    2008-01-01

    'Grid computing is a further development of the World Wide Web, the next generation, so to speak,' said (1) Franz-Josef Pfreundt (Fraunhofer-Institut für Techno- und Wirtschaftsmathematik) as early as CeBIT 2003, pointing to NASA as the Grid avant-garde.

  14. Colgate one of first to build global computing grid

    CERN Multimedia

    Magno, L

    2003-01-01

    "Colgate-Palmolive Co. has become one of the first organizations in the world to build an enterprise network based on the grid computing concept. Since mid-August, the consumer products firm has been working to connect approximately 50 geographically dispersed Unix servers and storage devices in an enterprise grid network" (1 page).

  15. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  16. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. Then, we obtain a prototype Manufacturing Grid application system working on the single personal computer, and can carry out experiments on this foundation. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method has the advantages of the known methods, such as low cost, simple operation, and easily obtained, trustworthy experimental results. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  17. LONG TERM COLLECTIONS

    CERN Multimedia

    STAFF ASSOCIATION

    2010-01-01

    ACKNOWLEDGMENTS The Long-Term Collections (CLT) committee would like to warmly thank its faithful donors who, year after year, support our actions all over the world. Without you, all this would not be possible. We would like to thank, in particular, the CERN Firemen’s Association who donated 5000 CHF in the spring thanks to the sale of their traditional calendar, and the generosity of the CERN community. A huge thank you to the firemen for their devotion to our cause. And thank you to all those who have opened their door, their heart, and their purses! Similarly, we warmly thank the CERN Yoga Club once again for its wonderful donation of 2000 CHF we recently received. We would also like to tell you that all our projects are running well. Just to remind you, we are currently supporting the activities of the «Réflexe-Partage» Association in Mali; the training centre of «Education et Développement» in Abomey, Benin; and the orphanage and ...

  18. Grid computing : enabling a vision for collaborative research

    International Nuclear Information System (INIS)

    von Laszewski, G.

    2002-01-01

    In this paper the authors provide a motivation for Grid computing based on a vision to enable a collaborative research environment. The authors' vision goes beyond the connection of hardware resources. They argue that with an infrastructure such as the Grid, new modalities for collaborative research are enabled. They provide an overview showing why Grid research is difficult, and they present a number of management-related issues that must be addressed to make Grids a reality. They list projects that provide solutions to subsets of these issues.

  19. Fault tolerance in computational grids: perspectives, challenges, and issues.

    Science.gov (United States)

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware and software based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled at various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  20. Security Implications of Typical Grid Computing Usage Scenarios

    International Nuclear Information System (INIS)

    Humphrey, Marty; Thompson, Mary R.

    2001-01-01

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  1. Security Implications of Typical Grid Computing Usage Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.

    2001-06-05

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  2. Taiwan links up to world's first LHC computing grid project

    CERN Multimedia

    2003-01-01

    "Taiwan's Academia Sinica was linked up to the Large Hadron Collider (LHC) Computing Grid Project last week to work jointly with 12 other countries to construct the world's largest and most powerful particle accelerator" (1/2 page).

  3. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    This work developed and simulated a mathematical model for a mobile wireless computational Grid ... which mobile nodes will process the tasks ... evaluation are analytical modelling, simulation ... MATLAB 7.10.0.

  4. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays nuclear power is becoming a main source of energy. To make its usage more efficient, scientists have created complicated simulation models, which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  5. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at the LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further testing and deployment of a production grid are also described.

  6. The 20 Tera flop Erasmus Computing Grid (ECG).

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  7. The 20 Tera flop Erasmus Computing Grid (ECG)

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2009-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  8. [Indication for limited surgery on small lung cancer tumors measuring 1cm or less in diameter on preoperative computed tomography and long-term results].

    Science.gov (United States)

    Togashi, K; Koike, T; Emura, I; Usuda, H

    2008-07-01

    Non-invasive lung cancers showed a good prognosis after limited surgery, but the outcome for invasive lung cancers is still uncertain. We investigated the indications for limited surgery for small lung cancer tumors measuring 1 cm or less in diameter on preoperative computed tomography (CT). This study retrospectively analyzed 1,245 patients who underwent complete resection of lung cancer between 1989 and 2004 in our hospital. Sixty-two patients (5%) had tumors measuring 1 cm or less in diameter. The probability of survival was calculated using the Kaplan-Meier method. All diseases were detected by medical checkup; 52% of the patients were not definitively diagnosed with lung cancer before surgery. Adenocarcinoma was histologically diagnosed in 49 patients (79%). Other histologic types included squamous cell carcinoma (8), large cell carcinoma (1), small cell carcinoma (1), carcinoid (2), and adenosquamous cell carcinoma (1). Fifty-seven patients (92%) had pathologic stage IA disease. The other stages were IB (2), IIA (1), and IIIB (2). There were 14 bronchioloalveolar carcinomas (25% of IA diseases). The 5-year survival rate of IA patients was 90%. The 5-year survival rate of patients with tumors measuring 1 cm or less in diameter was 91% after lobectomy or pneumonectomy, and 90% after wedge resection or segmentectomy. There were 3 deaths from cancer recurrence, while there were no deaths among the 14 patients with bronchioloalveolar carcinoma. After limited surgery, non-invasive cancer showed good long-term results, while invasive cancer showed a recurrence rate of 2.3% to 79% even though the tumor measured 1 cm or less in diameter on preoperative CT.
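
    For readers unfamiliar with the survival analysis mentioned above, the short Python sketch below shows how a Kaplan-Meier survival curve is computed from follow-up times and event indicators. The follow-up numbers are invented for illustration and have nothing to do with the patient data of this study.

        # Minimal Kaplan-Meier estimator: S(t) is the product over event times t_i <= t
        # of (1 - d_i / n_i), where n_i is the number at risk just before t_i and d_i
        # is the number of events (deaths/recurrences) at t_i.
        def kaplan_meier(times, events):
            """times: follow-up in months; events: 1 = event occurred, 0 = censored."""
            data = sorted(zip(times, events))
            n_at_risk = len(data)
            survival = 1.0
            curve = [(0.0, 1.0)]
            i = 0
            while i < len(data):
                t = data[i][0]
                d = sum(1 for tt, e in data[i:] if tt == t and e == 1)   # events at time t
                c = sum(1 for tt, e in data[i:] if tt == t)              # all subjects leaving at t
                if d > 0:
                    survival *= 1.0 - d / n_at_risk
                    curve.append((t, survival))
                n_at_risk -= c
                i += c
            return curve

        # Invented follow-up data (months, event indicator), purely illustrative.
        times  = [8, 20, 35, 35, 48, 60, 60, 72, 90, 95]
        events = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        for t, s in kaplan_meier(times, events):
            print(f"t = {t:5.1f} months  S(t) = {s:.2f}")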

  9. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy
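
    The matchmaking and ranking problem sketched in this abstract can be illustrated with a few lines of Python. The resource attributes, requirement fields and ranking expression below are invented for the example; they are not the actual DataGrid or Condor ClassAd schema, only a toy of the same idea: filter resources by the job's requirements, then rank the survivors using status information that may well be stale.

        # Toy resource broker: match a job against resource descriptions and rank them.
        # Attribute names (free_slots, queued, mean_job_minutes, software) are invented.
        resources = [
            {"name": "ce01.example.org", "free_slots": 12, "queued": 4,  "mean_job_minutes": 55, "software": {"geant4"}},
            {"name": "ce02.example.org", "free_slots": 0,  "queued": 30, "mean_job_minutes": 40, "software": {"geant4", "root"}},
            {"name": "ce03.example.org", "free_slots": 3,  "queued": 1,  "mean_job_minutes": 90, "software": {"root"}},
        ]

        job = {"needs_software": "geant4", "min_free_slots": 1}

        def matches(res, job):
            return (job["needs_software"] in res["software"]
                    and res["free_slots"] >= job["min_free_slots"])

        def estimated_wait(res):
            # Crude rank: queued work divided by free capacity; the status used here
            # may already be out of date, which is exactly the difficulty the broker
            # has to live with.
            return res["queued"] * res["mean_job_minutes"] / max(res["free_slots"], 1)

        candidates = [r for r in resources if matches(r, job)]
        best = min(candidates, key=estimated_wait) if candidates else None
        print("submit to:", best["name"] if best else "no matching resource")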

  10. Long-term usability

    International Nuclear Information System (INIS)

    Holmgren, M.

    1998-01-01

    Longitudinal studies of the usability of a computer-based system for process control have been performed. The whole life-cycle of one system has been of interest, as the problems vary over time. Repeated measurements of the operators' experiences of this system have been made during 15 years. Of main interest have been the advantages/disadvantages of computer-based systems for process control compared with conventional instrumentation. The results showed, among other things, that (1) computer-based systems made it easier to control the process and increased security; (2) age differences could be explained by other factors; (3) expectations before the installation may influence the attitudes and use of equipment for quite some time. In some questions the attitudes changed over time, while in others they were quite stable, for example in questions concerning preference for type of equipment. What do operators appreciate in a computer-based system for process control? About 300 operators were asked about that. The main factor seems to include a reliable system with valid information that functions during disturbances. Hopefully new systems are designed with that in mind. (author)

  11. Long Term Financing of Infrastructure

    OpenAIRE

    Sinha, Sidharth

    2014-01-01

    Infrastructure projects, given their long life, require long-term financing. The main sources of long-term financing are insurance and pension funds, which seek long-term investments with low credit risk. However, in India household financial savings are mainly invested in bank deposits. Insurance and pension funds account for only a small percentage of household financial savings. In addition, most infrastructure projects do not qualify for investment by insurance and pension funds because of t...

  12. Workflow Support for Advanced Grid-Enabled Computing

    OpenAIRE

    Xu, Fenglian; Eres, M.H.; Tao, Feng; Cox, Simon J.

    2004-01-01

    The Geodise project brings computer scientists' and engineers' skills together to build a service-oriented computing environment for engineers to perform complicated computations in a distributed system. The workflow tool is a front-end GUI that provides a full life cycle of workflow functions for Grid-enabled computing. The full life cycle of workflow functions has been enhanced based on our initial research and development. The life cycle starts with a composition of a workflow, followed by an ins...

  13. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.
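
    To make the scheduling problem concrete, here is a small Python sketch of a group-based evolutionary search for an assignment of independent tasks to machines that minimizes the makespan. It is only loosely inspired by the group-leader idea summarized above and is not the published GLOA; the task lengths, machine speeds and all algorithm parameters are invented.

        import random

        random.seed(1)
        task_len = [random.randint(5, 50) for _ in range(40)]   # invented task lengths
        speed    = [1.0, 1.5, 2.0, 3.0]                          # invented machine speeds

        def makespan(assign):
            """Completion time of the busiest machine for a task->machine assignment."""
            load = [0.0] * len(speed)
            for t, m in enumerate(assign):
                load[m] += task_len[t] / speed[m]
            return max(load)

        def random_assign():
            return [random.randrange(len(speed)) for _ in task_len]

        # Population split into groups; each group follows its best member (its "leader").
        groups = [[random_assign() for _ in range(10)] for _ in range(5)]

        for _ in range(300):
            for group in groups:
                leader = min(group, key=makespan)
                for i, member in enumerate(group):
                    child = member[:]
                    for g in range(len(child)):
                        r = random.random()
                        if r < 0.5:
                            child[g] = leader[g]                        # pull toward the group leader
                        elif r < 0.55:
                            child[g] = random.randrange(len(speed))     # small random mutation
                    if makespan(child) < makespan(member):
                        group[i] = child

        best = min((m for g in groups for m in g), key=makespan)
        print("best makespan found:", round(makespan(best), 2))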

  14. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machin...

  15. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    Full Text Available Soil erosion estimation is an important part of a land consolidation process. The Universal Soil Loss Equation (USLE) was presented by Wischmeier and Smith. The USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. The L and S factors are usually combined into one LS factor – the topographic factor. The single factors are determined from several sources, such as the DTM (Digital Terrain Model), the BPEJ soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable degradation of precision. Too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the source's precision, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying was done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. In such cases, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify higher-erosion spots. The second step then uses a smaller cell size but performs the computation only for the area identified in the previous step. This decomposition allows a
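
    As a companion to the description above, the following NumPy sketch shows the grid-based form of the USLE, where the annual soil loss G is computed cell by cell as the product of the factor rasters, and a simple mask marks the higher-erosion spots that the two-step method would recompute at a finer resolution. All raster values, the grid size and the threshold are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (200, 200)                        # e.g. 200 x 200 cells of 40 m (invented)

        # Factor rasters; in practice these come from rainfall data, soil maps (BPEJ),
        # the DTM (for the LS topographic factor), and land-use / management maps.
        R  = np.full(shape, 40.0)                 # rainfall factor (invented constant)
        K  = rng.uniform(0.2, 0.6, shape)         # soil erodibility
        LS = rng.uniform(0.1, 4.0, shape)         # topographic factor (slope length x gradient)
        C  = rng.uniform(0.005, 0.4, shape)       # cropping management factor
        P  = np.full(shape, 1.0)                  # erosion control management factor

        # Grid-based USLE: annual soil loss per cell, G = R * K * LS * C * P
        G = R * K * LS * C * P

        threshold = 4.0                           # invented limit
        hotspots = G > threshold                  # cells the finer-resolution second step would revisit

        print("mean soil loss:", G.mean().round(2))
        print("cells above threshold:", int(hotspots.sum()), "of", G.size)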

  16. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  17. Short-term and long-term effects of a minimally invasive transilial vertebral blocking procedure on the lumbosacral morphometry in dogs measured by computed tomography.

    Science.gov (United States)

    Müller, Friedrich; Schenk, Henning C; Forterre, Franck

    2017-04-01

    To determine the effects of a minimally invasive transilial vertebral (MTV) blocking procedure on the computed tomographic (CT) appearance of the lumbosacral (L7/S1) junction of dogs with degenerative lumbosacral stenosis (DLSS). Prospective study. 59 client-owned dogs with DLSS. Lumbosacral CT images were acquired with hyperextended pelvic limbs before and after MTV in all dogs. Clinical follow-up was obtained after 1 year, including a neurologic status classified in 4 grades, and, if possible, CT. Morphometric measurements (mean ± SEM), including foraminal area, endplate distance at L7/S1 and lumbosacral angle, were obtained on sets of reformatted parasagittal and sagittal CT images. The mean foraminal area (ForL) increased from 32.5 ± 1.7 mm² to 59.7 ± 1.9 mm² on the left and from 31.1 ± 1.4 mm² to 59.1 ± 2.0 mm² on the right (ForR) side after MTV. The mean endplate distance (EDmd) between L7/S1 increased from 3.7 ± 0.1 mm to 6.0 ± 0.1 mm, and the mean lumbosacral angle (LSa) from 148.0 ± 1.1° to 170.0 ± 1.1° after MTV. CT measurements were available 1 year postoperatively in 12 cases: ForL: 41.2 ± 3.1 mm²; ForR: 37.9 ± 3.1 mm²; EDmd: 4.3 ± 0.4 mm; and LSa: 157.6 ± 2.1° (values are mean ± SEM). All 39 dogs with long-term follow-up improved by at least 1 neurologic grade, 9/39 improving by 3 grades, 15/39 by 2 grades, and 15/39 by 1 grade. MTV results in clinical improvement and morphometric enlargement of the foraminal area in dogs with variable degrees of foraminal stenosis. MTV may be a valuable minimally invasive option for treatment of dogs with DLSS. © 2017 The American College of Veterinary Surgeons.

  18. Dynamic grid refinement for partial differential equations on parallel computers

    International Nuclear Information System (INIS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs

  19. Lecture 7: Worldwide LHC Computing Grid Overview

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This presentation will introduce, in an informal but technically correct way, the challenges linked to the needs of massively distributed computing architectures in the context of LHC offline computing. The topics include technological and organizational aspects touching many areas of LHC computing, from data access, to maintenance of large databases and huge collections of files, to the organization of computing farms and monitoring. Fabrizio Furano holds a Ph.D. in Computer Science and has worked in the field of Computing for High Energy Physics for many years. Some of his preferred topics include application architectures, system design and project management, with focus on performance and scalability of data access. Fabrizio has experience in a wide variety of environments, from private companies to academic research, in particular in object-oriented methodologies, mainly using C++. He also has teaching experience at university level in Software Engineering and C++ Programming.

  20. Long-term urethral catheterisation.

    Science.gov (United States)

    Turner, Bruce; Dickens, Nicola

    This article discusses long-term urethral catheterisation, focusing on the relevant anatomy and physiology, indications for the procedure, catheter selection and catheter care. It is important that nurses have a good working knowledge of long-term catheterisation as the need for this intervention will increase with the rise in chronic health conditions and the ageing population.

  1. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communication. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
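
    The algebraic, homotopy-based construction mentioned above can be illustrated with a very small NumPy sketch: a structured grid is produced by blending an inner and an outer boundary curve, and because each column of the grid depends only on its own boundary points, the index range can be split across processes with essentially no communication. The boundary shapes and the block decomposition below are invented for illustration and have nothing to do with the blended wing-body case of the paper.

        import numpy as np

        ni, nj = 64, 17                          # grid dimensions (invented)
        theta = np.linspace(0.0, 2.0 * np.pi, ni)

        # Inner and outer boundaries (invented shapes): a unit circle and a 3:2 ellipse.
        inner = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        outer = np.stack([3.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)

        def generate_block(i_start, i_end):
            """Algebraic (homotopic) blend between the two boundaries for one i-range:
            grid[i, j] = (1 - s_j) * inner[i] + s_j * outer[i], with s_j in [0, 1]."""
            s = np.linspace(0.0, 1.0, nj)[None, :, None]     # blending parameter
            return (1.0 - s) * inner[i_start:i_end, None, :] + s * outer[i_start:i_end, None, :]

        # Emulate a distributed-memory split: each "rank" owns a slice of the i index.
        n_ranks = 4
        blocks = [generate_block(r * ni // n_ranks, (r + 1) * ni // n_ranks) for r in range(n_ranks)]
        grid = np.concatenate(blocks, axis=0)    # shape (ni, nj, 2); no halo exchange was needed
        print(grid.shape)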

  2. IrLaW an OGC compliant infrared thermography measurement system developed on mini PC with real time computing capabilities for long term monitoring of transport infrastructures

    Science.gov (United States)

    Dumoulin, J.; Averty, R.

    2012-04-01

    real site for long term monitoring. It can be remotely controlled in wired or wireless communication mode, depending on the context of measurement and the degree of accessibility to the system when it is running on a real site. To conclude, thanks to the development of a high-level library and to the deployment of a daemon, our measurement system was tuned to be compatible with OGC standards. Complementary functionalities were also developed to allow the system to self-declare to 52North. For that, a specific plugin was developed to be inserted beforehand at the 52North level. Finally, data are also accessible by tasking the system when required, for instance by using the web portal developed in the ISTIMES Framework. ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663.

  3. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  4. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  5. Computing challenges in HEP for WLHC grid

    CERN Document Server

    Muralidharan, Servesh

    2017-01-01

    As CERN moves towards preparation for increasing the luminosity of the particle beam towards the HL-LHC, predictions show that computing demand will outgrow our conservative scaling estimates by over ten times. Fortunately, we are talking about a time scale of roughly ten years to develop new techniques and novel solutions to address this gap in compute resources. Experiments at CERN face a unique scenario wherein they need to scale both latency-sensitive workloads, such as data acquisition from the detectors, and throughput-based ones, such as simulation and reconstruction of high-level events and physics processes. In this talk we cover some of the ongoing research at Tier-0 at CERN which investigates several aspects of throughput-sensitive workloads that consume significant compute cycles.

  6. Long term complications of diabetes

    Science.gov (United States)

    ... medlineplus.gov/ency/patientinstructions/000327.htm Long-term complications of diabetes ... other tests. All these may help you keep complications of diabetes away. You will need to check your blood ...

  7. Computation of Asteroid Proper Elements on the Grid

    Science.gov (United States)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.
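
    The gridification step described here amounts, at its simplest, to cutting the asteroid catalogue into independent chunks and submitting one job per chunk to the Workload Management System. The Python sketch below only illustrates that chunking; the chunk size, the identifiers and the job-description fields are placeholders, not the actual setup used for the proper-elements computation.

        # Split a catalogue of asteroid identifiers into independent Grid jobs.
        # The chunk size and the job-description fields are placeholders.
        CHUNK_SIZE = 500

        def chunked(seq, size):
            for start in range(0, len(seq), size):
                yield seq[start:start + size]

        def prepare_jobs(asteroid_ids):
            jobs = []
            for n, chunk in enumerate(chunked(asteroid_ids, CHUNK_SIZE)):
                # In a real setup each chunk would be written to an input file and a job
                # description handed to the Workload Management System; here we only
                # record the intent so the decomposition is visible.
                jobs.append({"job": n, "first": chunk[0], "last": chunk[-1], "n_objects": len(chunk)})
            return jobs

        asteroid_ids = [str(i) for i in range(1, 70001)]    # roughly 70,000 objects, as in the abstract
        jobs = prepare_jobs(asteroid_ids)
        print(len(jobs), "independent jobs,", jobs[0])       # 140 jobs of 500 asteroids each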

  8. Computation of Asteroid Proper Elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković, B.

    2009-12-01

    Full Text Available A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  9. Grid computing and e-science: a view from inside

    Directory of Open Access Journals (Sweden)

    Stefano Cozzini

    2008-06-01

    Full Text Available My intention is to analyze how, where and if grid computing technology is truly enabling a new way of doing science (so-called ‘e-science’). I will base my views on the experiences accumulated thus far in a number of scientific communities, which we have provided with the opportunity of using grid computing. I shall first define some basic terms and concepts and then discuss a number of specific cases in which the use of grid computing has actually made possible a new method for doing science. I will then present a case in which this did not result in a change in research methods. I will try to identify the reasons for these failures and analyze the future evolution of grid computing. I will conclude by introducing and commenting on the concept of ‘cloud computing’, the approach offered and provided by major industrial actors (Google/IBM and Amazon being among the most important) and what impact this technology might have on the world of research.

  10. Computation of asteroid proper elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković B.

    2009-01-01

    Full Text Available A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  11. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using queuing network theory. This was in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...
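
    Readers who want to see what the queuing-theory metrics mentioned above look like in practice can start from the simplest single-server case. The sketch below computes utilization, mean response time and mean number in system for an M/M/1 queue; it is a generic textbook formula, not the three-tier hierarchical model developed in the paper, and the arrival and service rates are invented.

        # M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
        # Standard results (valid only for lam < mu):
        #   utilization         rho = lam / mu
        #   mean response time  T   = 1 / (mu - lam)
        #   mean number in sys  L   = rho / (1 - rho)     (Little's law: L = lam * T)
        def mm1_metrics(lam, mu):
            if lam >= mu:
                raise ValueError("queue is unstable: arrival rate must be below service rate")
            rho = lam / mu
            T = 1.0 / (mu - lam)
            L = rho / (1.0 - rho)
            return {"utilization": rho, "mean_response_time": T, "mean_in_system": L}

        # Invented rates: 8 task arrivals per second, a node serves 10 tasks per second.
        print(mm1_metrics(lam=8.0, mu=10.0))
        # {'utilization': 0.8, 'mean_response_time': 0.5, 'mean_in_system': 4.0}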

  12. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Science.gov (United States)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
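
    A minimal sketch of the behaviour-monitoring idea, assuming scikit-learn is available: collect a few per-process features from normal job payloads, train an unsupervised anomaly detector, and flag processes that deviate. The feature set and all numbers are invented; this is not the framework described in the paper, only an illustration of the machine-learning approach it mentions.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(42)

        # Invented per-process features: [syscalls/s, outbound connections, child processes, files written]
        normal_jobs = np.column_stack([
            rng.normal(300, 40, 500),      # typical analysis payloads
            rng.poisson(2, 500),
            rng.poisson(1, 500),
            rng.poisson(5, 500),
        ]).astype(float)

        # Fit an unsupervised anomaly detector on behaviour observed for normal payloads.
        detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)

        # A suspicious payload: very chatty on the network and spawning many children.
        suspect = np.array([[900.0, 60.0, 25.0, 200.0]])
        print(detector.predict(suspect))   # -1 means "anomalous", 1 means "normal"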

  13. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    International Nuclear Information System (INIS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)

  14. WEKA-G: Parallel data mining on computational grids

    Directory of Open Access Journals (Sweden)

    PIMENTA, A.

    2009-12-01

    Full Text Available Data mining is a technology that can extract useful information from large amounts of data. However, mining a database often requires high computational power. To address this problem, this paper presents a tool (Weka-G) that runs in parallel the algorithms used in the data mining process. As the execution environment, we use a computational grid, adding several features within a WAN.

  15. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  16. [Long-term psychiatric hospitalizations].

    Science.gov (United States)

    Plancke, L; Amariei, A

    2017-02-01

    Long-term hospitalizations in psychiatry raise the question of desocialisation of the patients and the inherent costs. Individual indicators were extracted from a medical administrative database containing full-time psychiatric hospitalizations for the period 2011-2013 of people over 16 years old living in the French region of Nord-Pas-de-Calais. We calculated the proportion of people who had experienced a hospitalization with a duration of 292 days or more during the study period. A bivariate analysis was conducted, then ecological data (level of health-care offer, the deprivation index and the size of the municipalities of residence) were included into a multilevel regression model in order to identify the factors significantly related to variability of long-term hospitalization rates. Among hospitalized individuals in psychiatry, 2.6% had had at least one hospitalization of 292 days or more during the observation period; the number of days in long-term hospitalization represented 22.5% of the total of days of full-time hospitalization in psychiatry. The bivariate analysis revealed that seniority in the psychiatric system was strongly correlated with long hospitalization rates. In the multivariate analysis, the individual indicators most related to an increased risk of long-term hospitalization were: total lack of autonomy (OR=9.0; 95% CI: 6.7-12.2; P<.001); diagnoses of psychological development disorders (OR=9.7; CI95%: 4.5-20.6; P<.001); mental retardation (OR=4.5; CI95%: 2.5-8.2; P<.001); schizophrenia (OR=3.0; CI95%: 1.7-5.2; P<.001); compulsory hospitalization (OR=1.7; CI95%: 1.4-2.1; P<.001); having experienced therapeutic isolation (OR=1.8; CI95%: 1.5-2.1; P<.001). Variations of long-term hospitalization rates depending on the type of establishment were very high, but the density of hospital beds or intensity of ambulatory activity services were not significantly linked to long-term hospitalization. The inhabitants of small urban units had

  17. The extended RBAC model based on grid computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Jian-gang; WANG Ru-chuan; WANG Hai-yan

    2006-01-01

    This article proposes the extended role-based access control (RBAC) model for solving dynamic and multidomain problems in grid computing. A formal description of the model is provided. The introduction of context and the mapping relations of context-to-role and context-to-permission help the model adapt to the dynamic properties of the grid environment. The multidomain role inheritance relation, established by the authorization agent service, realizes multidomain authorization among autonomous domains. A function is proposed for resolving role inheritance conflicts that arise when the multidomain role inheritance relation is established.
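
To make the context-to-role and context-to-permission idea more concrete, here is a minimal sketch of a context-aware RBAC check with a simple multidomain role inheritance relation. The tables, role names and lookup rules are hypothetical illustrations of the concepts, not the paper's formal model.

```python
# Minimal sketch of a context-aware RBAC check in the spirit of the model
# described above. All tables, names and the inheritance relation are
# hypothetical illustrations, not the paper's formal definitions.

# context -> roles activated in that context
CONTEXT_TO_ROLES = {
    "domainA/analysis": {"analyst"},
    "domainA/admin": {"site_admin"},
}

# (context, role) -> permissions granted to that role in that context
CONTEXT_TO_PERMISSIONS = {
    ("domainA/analysis", "analyst"): {"submit_job", "read_data"},
    ("domainA/admin", "site_admin"): {"submit_job", "manage_queue"},
}

# multidomain role inheritance: a role may inherit a role defined elsewhere
ROLE_INHERITANCE = {"site_admin": {"analyst"}}


def expand_roles(roles):
    """Follow inheritance edges transitively."""
    expanded, frontier = set(roles), list(roles)
    while frontier:
        for inherited in ROLE_INHERITANCE.get(frontier.pop(), ()):
            if inherited not in expanded:
                expanded.add(inherited)
                frontier.append(inherited)
    return expanded


def is_permitted(context, permission):
    direct_roles = CONTEXT_TO_ROLES.get(context, set())
    roles = expand_roles(direct_roles)
    for role in roles:
        for (ctx, r), perms in CONTEXT_TO_PERMISSIONS.items():
            # a directly held role grants permissions in the requesting context;
            # an inherited role also brings the permissions of its home context
            if r == role and (ctx == context or role not in direct_roles):
                if permission in perms:
                    return True
    return False


print(is_permitted("domainA/analysis", "submit_job"))    # True: direct role
print(is_permitted("domainA/admin", "read_data"))        # True: inherited across domains
print(is_permitted("domainA/analysis", "manage_queue"))  # False
```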

  18. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day

  19. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  20. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  1. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B.; Baranovski, A.; Diesburg, M.; Garzoglio, G.; Kurca, T.; Mhashilkar, P.

    2007-01-01

    High energy physics experiments periodically reprocess data, in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of Gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid, such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project

  2. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B; Baranovski, A; Diesburg, M; Garzoglio, G; Mhashilkar, P; Kurca, T

    2008-01-01

    High energy physics experiments periodically reprocess data, in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of Gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid, such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project

  3. Markers of sarcopenia quantified by computed tomography predict adverse long-term outcome in patients with resected oesophageal or gastro-oesophageal junction cancer

    International Nuclear Information System (INIS)

    Tamandl, Dietmar; Baltzer, Pascal A.; Ba-Ssalamah, Ahmed; Paireder, Matthias; Asari, Reza; Schoppmann, Sebastian F.

    2016-01-01

    To assess the impact of sarcopenia and alterations in body composition parameters (BCPs) on survival after surgery for oesophageal and gastro-oesophageal junction cancer (OC). 200 consecutive patients who underwent resection for OC between 2006 and 2013 were selected. Preoperative CTs were used to assess markers of sarcopenia and body composition (total muscle area [TMA], fat-free mass index [FFMi], fat mass index [FMi], subcutaneous, visceral and retrorenal fat [RRF], muscle attenuation). Cox regression was used to assess the primary outcome parameter of overall survival (OS) after surgery. 130 patients (65 %) had sarcopenia based on preoperative CT examinations. Sarcopenic patients showed impaired survival compared to non-sarcopenic individuals (hazard ratio [HR] 1.87, 95 % confidence interval [CI] 1.15-3.03, p = 0.011). Furthermore, low skeletal muscle attenuation (HR 1.91, 95 % CI 1.12-3.28, p = 0.019) and increased FMi (HR 3.47, 95 % CI 1.27-9.50, p = 0.016) were associated with impaired outcome. In the multivariate analysis, including a composite score (CSS) of those three parameters and clinical variables, only CSS, T-stage and surgical resection margin remained significant predictors of OS. Patients who show signs of sarcopenia and alterations in BCPs on preoperative CT images have impaired long-term outcome after surgery for OC. (orig.)

  4. Markers of sarcopenia quantified by computed tomography predict adverse long-term outcome in patients with resected oesophageal or gastro-oesophageal junction cancer

    Energy Technology Data Exchange (ETDEWEB)

    Tamandl, Dietmar; Baltzer, Pascal A.; Ba-Ssalamah, Ahmed [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Paireder, Matthias; Asari, Reza; Schoppmann, Sebastian F. [Medical University of Vienna, Department of Surgery, Upper-GI-Service, Comprehensive Cancer Center GET-Unit, Vienna (Austria)

    2016-05-15

    To assess the impact of sarcopenia and alterations in body composition parameters (BCPs) on survival after surgery for oesophageal and gastro-oesophageal junction cancer (OC). 200 consecutive patients who underwent resection for OC between 2006 and 2013 were selected. Preoperative CTs were used to assess markers of sarcopenia and body composition (total muscle area [TMA], fat-free mass index [FFMi], fat mass index [FMi], subcutaneous, visceral and retrorenal fat [RRF], muscle attenuation). Cox regression was used to assess the primary outcome parameter of overall survival (OS) after surgery. 130 patients (65 %) had sarcopenia based on preoperative CT examinations. Sarcopenic patients showed impaired survival compared to non-sarcopenic individuals (hazard ratio [HR] 1.87, 95 % confidence interval [CI] 1.15-3.03, p = 0.011). Furthermore, low skeletal muscle attenuation (HR 1.91, 95 % CI 1.12-3.28, p = 0.019) and increased FMi (HR 3.47, 95 % CI 1.27-9.50, p = 0.016) were associated with impaired outcome. In the multivariate analysis, including a composite score (CSS) of those three parameters and clinical variables, only CSS, T-stage and surgical resection margin remained significant predictors of OS. Patients who show signs of sarcopenia and alterations in BCPs on preoperative CT images have impaired long-term outcome after surgery for OC. (orig.)

  5. Long-term safety assessment of trench-type surface repository at Chernobyl, Ukraine - computer model and comparison with results from simplified models

    International Nuclear Information System (INIS)

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-01-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with such long-term calculations. In addition to their value with regard to verification of more complex models, which is important to increase confidence in the overall results, such simplified models can also offer the possibility to carry out time-consuming calculations like probabilistic calculations or detailed sensitivity analysis in an economic manner. (authors)

  6. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that could have taken more than 3 years of processing on a single computer. This work has been achieved thanks to a generic object-oriented toolbox called DistMe which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random number generator statuses. (authors)
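
The precaution highlighted in this record, giving every replication its own independent pseudo-random stream, can be sketched with NumPy's SeedSequence spawning. This is a generic 'Multiple Replications In Parallel' toy (estimating pi), not the DistMe toolbox or a GATE simulation.

```python
# Minimal sketch of the Multiple Replications In Parallel (MRIP) idea:
# each replication gets its own, statistically independent random stream.
# Generic illustration with NumPy; not the DistMe/GATE tooling.
from concurrent.futures import ProcessPoolExecutor
import numpy as np


def one_replication(seed_seq, n_samples=100_000):
    """A toy Monte Carlo estimate of pi using an independent stream."""
    rng = np.random.default_rng(seed_seq)
    xy = rng.random((n_samples, 2))
    return 4.0 * np.mean(np.sum(xy**2, axis=1) <= 1.0)


if __name__ == "__main__":
    n_replications = 8
    # Spawn non-overlapping child streams from a single master seed.
    children = np.random.SeedSequence(20090301).spawn(n_replications)
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(one_replication, children))
    print("per-replication estimates:", estimates)
    print("combined estimate:", float(np.mean(estimates)))
```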

  7. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm and model of the software running on computer hardware included in the Grid network, which will make it possible to implement a cloud computing environment using Grid technologies.

  8. gLExec: gluing grid computing to the Unix world

    Science.gov (United States)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
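
The credential-mapping step described above, translating a grid identity into a local Unix identity, can be sketched as a grid-mapfile lookup followed by a uid/gid resolution. This is a simplified stand-in for what gLExec does via frameworks such as LCAS/LCMAPS or GUMS; the mapfile entries and pool-account names are hypothetical.

```python
# Minimal sketch of grid-identity -> Unix-identity mapping (a simplified
# stand-in for gLExec/LCMAPS, with hypothetical mapfile contents).
# Uses the Unix-only 'pwd' module to resolve the local account.
import pwd

# grid-mapfile-like entries: certificate subject DN -> local pool account
MAPFILE_LINES = [
    '"/DC=org/DC=example/O=VO/CN=Alice Analyst" voalice001',
    '"/DC=org/DC=example/O=VO/CN=Bob Builder" vobob002',
]


def parse_mapfile(lines):
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = line.rsplit(" ", 1)
        mapping[dn.strip('"')] = account
    return mapping


def map_grid_identity(subject_dn, mapping):
    account = mapping.get(subject_dn)
    if account is None:
        raise PermissionError(f"no mapping for {subject_dn!r}")
    try:
        entry = pwd.getpwnam(account)          # resolve local uid/gid
    except KeyError:
        raise PermissionError(f"mapped account {account!r} does not exist")
    return account, entry.pw_uid, entry.pw_gid


if __name__ == "__main__":
    mapping = parse_mapfile(MAPFILE_LINES)
    try:
        print(map_grid_identity("/DC=org/DC=example/O=VO/CN=Alice Analyst", mapping))
    except PermissionError as err:
        # expected on a machine without the hypothetical pool account
        print("mapping refused:", err)
```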

  9. gLExec: gluing grid computing to the Unix world

    International Nuclear Information System (INIS)

    Groep, D; Koeroo, O; Venekamp, G

    2008-01-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system

  10. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    Science.gov (United States)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  11. Nuclear Energy, Long Term Requirements

    International Nuclear Information System (INIS)

    Knapp, V.

    2006-01-01

    There are serious warnings about depletion of oil and gas and even more serious warnings about dangers of climate change caused by emission of carbon dioxide. Should developed countries be called to replace CO2 emitting energy sources as soon as possible, and the time available may not be longer than a few decades, can nuclear energy answer the call and what are the requirements? Assuming an optimistic contribution of renewable energy sources, can nuclear energy expand to several times the present level in order to replace a large part of fossil fuel use? The paper considers intermediate and long-term requirements. The future of nuclear power depends on satisfactory answers to several questions. The first group of questions are those important for the near and intermediate future. They deal with the economics and safety of nuclear power stations in the first place. On the same time scale a generally accepted concept for radioactive waste disposal is also required. All these issues are in the focus of present research and development. Safer and more economical reactors are targets of international efforts in the Generation IV and INPRO projects, but aiming further ahead these innovative projects are also addressing issues such as waste reduction and proliferation resistance. However, even assuming successful technical development of these projects, and there is no reason to doubt it, long-term and large-scale nuclear power use is thereby not yet secured. If nuclear power is to play an essential role in long-term future energy production and in the reduction of CO2 emissions, then several additional questions must be answered. These questions will deal with long-term nuclear fuel sufficiency, with the necessary contribution of nuclear power in the sectors of transport and industrial processes, and with nuclear proliferation safety. This last issue is more political than technical, and thus sometimes neglected by nuclear engineers, yet it will have an essential role for the long-term prospects of nuclear power. The

  12. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    Science.gov (United States)

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  13. Analysing long term discursive processes

    DEFF Research Database (Denmark)

    Horsbøl, Anders

    What do timescales - the notion that processes take place or can be viewed within a shorter or longer temporal range (Lemke 2005) - mean for the analysis of discourse? What are the methodological consequences of analyzing discourse at different timescales? It may be argued that discourse analysis in general has favored either the analysis of short term processes such as interviews, discussions, and lessons, or the analysis of non-processual entities such as (multimodal) texts, arguments, discursive repertoires, and discourses (in a Foucaultian sense). In contrast, the analysis of long term processes which extend beyond the single interaction, for instance negotiations or planning processes, seems to have played a less important role, with studies such as Iedema 2001 and Wodak 2000 as exceptions. These long term processes, however, are central to the constitution and workings of organizations...

  14. Comparing long term energy scenarios

    International Nuclear Information System (INIS)

    Cumo, M.; Simbolotti, G.

    2001-01-01

    Major projection studies by international organizations and senior analysts have been compared with reference to individual key parameters (population, energy demand/supply, resources, technology, emissions and global warming) to understand trends and implications of the different scenarios. Then, looking at the long term (i.e., 2050 and beyond), parameters and trends have been compared together to understand and quantify whether and when possible crises or market turbulence might occur due to shortages of resources or environmental problems

  15. Long term radioactive waste management

    International Nuclear Information System (INIS)

    Lavie, J.M.

    1984-01-01

    In France, waste management, a sensitive issue in terms of public opinion, is developing quickly and, owing to twenty years of experience, is now reaching maturity. With the launching of the French nuclear programme and the use of radioactive sources in radiotherapy and industry, waste management has become an industrial activity. Waste management is an integrated system dealing with the wastes from their production to long-term disposal, including their identification, sorting, treatment, packaging, collection and transport. This system aims at guaranteeing the protection of present and future populations with available technology. With regard to their long-term management and the design of disposals, radioactive wastes are divided into three categories. This classification takes into account the different radioisotopes contained, their half-life and their total activity. Presently, short-lived wastes are stored in the shallow-land disposal of the ''Centre de la Manche''. Set up within the French Atomic Energy Commission (CEA), the National Agency for waste management (ANDRA) is responsible, within the framework of legislative and regulatory provisions, for long-term waste management in France

  16. Dynamic stability calculations for power grids employing a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K

    1982-06-01

    The aim of dynamic contingency calculations in power systems is to estimate the effects of assumed disturbances, such as loss of generation. Due to the large dimensions of the problem these simulations require considerable computing time and costs, to the effect that they are at present only used in the planning stage but not for routine checks in power control stations. In view of the homogeneity of the problem, where a multitude of equal generator models, having different parameters, are to be integrated simultaneously, the use of a parallel computer looks very attractive. The results of this study employing a prototype parallel computer (SMS 201) are presented. It consists of up to 128 equal microcomputers bus-connected to a control computer. Each of the modules is programmed to simulate a node of the power grid. Generators with their associated control are represented by models of 13 states each. Passive nodes are complemented by 'phantom'-generators, so that the whole power grid is homogeneous, thus removing the need for load-flow iterations. Programming of the microcomputers is essentially performed in FORTRAN.
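
The node-per-processor idea can be made concrete with a toy integration of classical swing equations, one identical model per generator node with different parameters; in the SMS 201 each such node model would run on its own microcomputer, whereas the sketch below simply steps them in a loop. All parameters and the network coupling are illustrative, not the 13-state models of the study.

```python
# Toy sketch of the homogeneous node-model idea: every generator node is
# integrated with the same code, only its parameters differ. Parameters and
# the network coupling are illustrative, not the 13-state models of the study.
import numpy as np

N_GEN = 4
H = np.array([4.0, 3.5, 5.0, 4.2])        # inertia constants (s)
D = np.array([1.0, 0.8, 1.2, 1.0])        # damping coefficients (p.u.)
P_MECH0 = np.array([0.8, 0.6, 0.9, 0.7])  # mechanical input power (p.u.)
OMEGA_S = 2.0 * np.pi * 50.0              # synchronous speed (rad/s)
COUPLING = 1.5                            # toy synchronizing-power coefficient (p.u.)


def electrical_power(delta):
    """Toy network: every machine swings against the average rotor angle."""
    return P_MECH0.mean() + COUPLING * np.sin(delta - delta.mean())


def simulate(t_end=5.0, dt=1e-3):
    delta = np.zeros(N_GEN)               # rotor angles (rad)
    omega = np.zeros(N_GEN)               # speed deviations (rad/s)
    p_mech = P_MECH0.copy()
    for step in range(int(t_end / dt)):
        if step == 1000:                  # disturbance: loss of generation at t = 1 s
            p_mech[0] *= 0.5
        p_e = electrical_power(delta)
        # classical swing equation, per unit:
        #   (2H / omega_s) d(omega)/dt = P_mech - P_elec - D * omega / omega_s
        domega = (OMEGA_S / (2.0 * H)) * (p_mech - p_e - D * omega / OMEGA_S)
        delta = delta + dt * omega
        omega = omega + dt * domega
    return delta, omega


final_delta, final_omega = simulate()
print("final rotor angles (rad):", np.round(final_delta, 3))
print("final speed deviations (rad/s):", np.round(final_omega, 3))
```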

  17. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to the way ATLAS processes and simulates data. We dramatically decreased the total wall time thanks to job (re)submission automation and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
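
The split-run-merge pattern described in this record can be sketched generically as below. The chunking of a FASTQ-like file, the per-chunk 'payload' and the merge step are stand-ins run with a local process pool; they are not the actual PanDA or PALEOMIX interfaces.

```python
# Generic split -> run-in-parallel -> merge sketch of the approach described
# above. The chunking, the per-chunk "payload" and the merge are stand-ins;
# they are not the actual PanDA or PALEOMIX interfaces.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def split_reads(input_path: Path, work_dir: Path, reads_per_chunk: int = 4):
    """Split a FASTQ-like file (4 lines per read) into chunk files."""
    lines = input_path.read_text().splitlines(keepends=True)
    records = [lines[i:i + 4] for i in range(0, len(lines), 4)]
    chunk_paths = []
    for n, start in enumerate(range(0, len(records), reads_per_chunk)):
        chunk = work_dir / f"chunk_{n:04d}.fastq"
        chunk.write_text("".join(l for rec in records[start:start + reads_per_chunk] for l in rec))
        chunk_paths.append(chunk)
    return chunk_paths


def run_payload(chunk_path: Path) -> Path:
    """Stand-in for one grid job running the pipeline on a single chunk."""
    out = chunk_path.with_suffix(".out")
    n_reads = sum(1 for line in chunk_path.read_text().splitlines() if line.startswith("@"))
    out.write_text(f"{chunk_path.name}: processed {n_reads} reads\n")
    return out


def merge(outputs, merged_path: Path):
    merged_path.write_text("".join(p.read_text() for p in sorted(outputs)))


if __name__ == "__main__":
    work = Path("work"); work.mkdir(exist_ok=True)
    sample = work / "sample.fastq"
    sample.write_text("".join(f"@read{i}\nACGT\n+\nIIII\n" for i in range(10)))

    chunks = split_reads(sample, work, reads_per_chunk=4)
    with ProcessPoolExecutor() as pool:          # on the grid these would be separate jobs
        outputs = list(pool.map(run_payload, chunks))
    merge(outputs, work / "merged.out")
    print((work / "merged.out").read_text())
```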

  18. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.
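
The client-side connection management mentioned above can be illustrated by a very small pool that multiplexes logical sessions over a bounded set of physical connections. This is a generic sketch using sqlite3 and a shared in-memory database, not the CORAL implementation.

```python
# Very small illustration of client-side connection management: many logical
# "sessions" are multiplexed over a bounded set of physical connections.
# Generic sketch using sqlite3, not the CORAL implementation.
import sqlite3
from contextlib import contextmanager
from queue import Queue

# Shared in-memory database so all pooled connections see the same data.
DSN = "file:coral_demo?mode=memory&cache=shared"


class ConnectionPool:
    def __init__(self, dsn: str, size: int = 2):
        self._idle = Queue()
        for _ in range(size):
            self._idle.put(sqlite3.connect(dsn, uri=True))

    @contextmanager
    def session(self):
        conn = self._idle.get()      # blocks if every physical connection is busy
        try:
            yield conn
        finally:
            self._idle.put(conn)     # hand the connection back for reuse


pool = ConnectionPool(DSN, size=2)

with pool.session() as db:
    db.execute("CREATE TABLE IF NOT EXISTS conditions (tag TEXT, value REAL)")
    db.execute("INSERT INTO conditions VALUES ('calib_v1', 1.23)")
    db.commit()

with pool.session() as db:
    # This logical session may reuse either physical connection.
    print(db.execute("SELECT tag, value FROM conditions").fetchall())
```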

  19. Long-term biodosimetry Redux

    International Nuclear Information System (INIS)

    Simon, Steven L.; Bouville, Andre

    2016-01-01

    This paper revisits and reiterates the needs, purposes and requirements of bio-dosimetric assays for long-term dose and health risk assessments. While the most crucial need for bio-dosimetric assays is to guide medical response for radiation accidents, the value of such techniques for improving our understanding of radiation health risk by supporting epidemiological (long-term health risk) studies is significant. As new cohorts of exposed persons are identified and new health risk studies are undertaken with the hopes that studying the exposed will result in a deeper understanding of radiation risk, the value of reliable dose reconstruction is underscored. The ultimate application of biodosimetry in long-term health risk studies would be to completely replace model-based dose reconstruction-a complex suite of methods for retrospectively estimating dose that is commonly fraught with large uncertainties due to the absence of important exposure-related information, as well as imperfect models. While biodosimetry could potentially supplant model-based doses, there are numerous limitations of presently available techniques that constrain their widespread application in health risk research, including limited ability to assess doses received far in the past, high cost, great inter-individual variability, invasiveness, higher than preferred detection limits and the inability to assess internal dose (for the most part). These limitations prevent the extensive application of biodosimetry to large cohorts and should be considered a challenge to researchers to develop new and more flexible techniques that meet the demands of long-term health risk research. Events in recent years, e.g. the Fukushima reactor accident and the increased threat of nuclear terrorism, underscore that any event that results in significant radiation exposures of a group of people will also produce a much larger population, exposed at lower levels, but that likewise needs (or demands) an exposure

  20. Multiobjective Variable Neighborhood Search algorithm for scheduling independent jobs on computational grid

    Directory of Open Access Journals (Sweden)

    S. Selvi

    2015-07-01

    Full Text Available Grid computing solves high-performance and high-throughput computing problems through sharing resources ranging from personal computers to supercomputers distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an investigation on implementing a Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is carried out. The performance of the proposed algorithm has been evaluated against the Min–Min algorithm, Simulated Annealing (SA) and the Greedy Randomized Adaptive Search Procedure (GRASP) algorithm. Simulation results show that the MVNS algorithm generally performs better than other metaheuristic methods.
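
To show the flavour of a neighbourhood-search scheduler for independent grid jobs, the sketch below implements a basic, single-objective Variable Neighborhood Search that minimizes makespan over a random expected-time-to-compute matrix. The data are toy values and the multiobjective weighting of the paper is not reproduced.

```python
# Basic Variable Neighborhood Search (single-objective, makespan only) for
# scheduling independent jobs on heterogeneous machines. Toy data; the
# multiobjective MVNS of the paper is not reproduced here.
import random

random.seed(1)
N_JOBS, N_MACHINES = 20, 5
# expected time to compute job j on machine m (toy values)
ETC = [[random.uniform(5, 50) for _ in range(N_MACHINES)] for _ in range(N_JOBS)]


def makespan(assign):
    loads = [0.0] * N_MACHINES
    for job, machine in enumerate(assign):
        loads[machine] += ETC[job][machine]
    return max(loads)


def shake(assign, k):
    """Neighborhood N_k: reassign k random jobs to random machines."""
    new = assign[:]
    for job in random.sample(range(N_JOBS), k):
        new[job] = random.randrange(N_MACHINES)
    return new


def local_search(assign):
    """Move single jobs to their best machine until no improvement."""
    improved = True
    while improved:
        improved = False
        for job in range(N_JOBS):
            best = min(range(N_MACHINES),
                       key=lambda m: makespan(assign[:job] + [m] + assign[job + 1:]))
            if best != assign[job]:
                assign = assign[:job] + [best] + assign[job + 1:]
                improved = True
    return assign


def vns(k_max=4, iterations=100):
    current = local_search([random.randrange(N_MACHINES) for _ in range(N_JOBS)])
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(current, k))
            if makespan(candidate) < makespan(current):
                current, k = candidate, 1        # improvement: restart from N_1
            else:
                k += 1                           # try a larger neighborhood
    return current


best = vns()
print("best makespan:", round(makespan(best), 2))
```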

  1. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

    The smart grid is the emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid. (paper)
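
As a concrete illustration of economic power dispatch (independent of any cloud deployment), the sketch below solves a classic quadratic-cost dispatch by bisection on the system incremental cost; the generator data are illustrative only.

```python
# Classic economic dispatch with quadratic costs C_i(P) = a_i + b_i*P + c_i*P^2,
# solved by bisection on the system incremental cost (lambda).
# Illustrative generator data; independent of any cloud deployment.

# (a, b, c, Pmin, Pmax) for each unit
UNITS = [
    (100.0, 20.0, 0.050, 10.0, 200.0),
    (120.0, 18.0, 0.040, 10.0, 250.0),
    (80.0,  22.0, 0.070, 10.0, 150.0),
]
DEMAND = 400.0  # MW


def output_at_lambda(lmbda):
    """Each unit runs where marginal cost b + 2cP equals lambda, within limits."""
    outputs = []
    for _a, b, c, pmin, pmax in UNITS:
        p = (lmbda - b) / (2.0 * c)
        outputs.append(min(max(p, pmin), pmax))
    return outputs


def dispatch(demand, tol=1e-6):
    lo, hi = 0.0, 200.0                  # bracket for lambda ($/MWh)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(output_at_lambda(mid)) < demand:
            lo = mid                     # need a higher incremental cost
        else:
            hi = mid
    return output_at_lambda(0.5 * (lo + hi))


P = dispatch(DEMAND)
cost = sum(a + b * p + c * p * p for (a, b, c, _lo, _hi), p in zip(UNITS, P))
print("dispatch (MW):", [round(p, 1) for p in P], "total:", round(sum(P), 1))
print("total cost ($/h):", round(cost, 1))
```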

  2. Long term stability of power systems

    Energy Technology Data Exchange (ETDEWEB)

    Kundur, P; Gao, B [Powertech Labs. Inc., Surrey, BC (Canada)

    1994-12-31

    Power system long term stability is still a developing subject. In this paper we provide our perspectives and experiences related to long term stability. The paper begins with a description of the nature of the long term stability problem, followed by a discussion of issues related to the modeling and solution techniques of tools for long term stability analysis. Case studies are presented to illustrate the voltage stability aspect and the plant dynamics aspect of long term stability. (author) 20 refs., 11 figs.

  3. New software for computer-assisted dental-data matching in Disaster Victim Identification and long-term missing persons investigations: "DAVID Web".

    Science.gov (United States)

    Clement, J G; Winship, V; Ceddia, J; Al-Amad, S; Morales, A; Hill, A J

    2006-05-15

    In 1997 an internally supported but unfunded pilot project at the Victorian Institute of Forensic Medicine (VIFM) Australia led to the development of a computer system which closely mimicked Interpol paperwork for the storage, later retrieval and tentative matching of the many AM and PM dental records that are often needed for rapid Disaster Victim Identification. The program was called "DAVID" (Disaster And Victim IDentification). It combined the skills of the VIFM Information Technology systems manager (VW), an experienced odontologist (JGC) and an expert database designer (JC); all current authors on this paper. Students did much of the writing of software to prescription from Monash University. The student group involved won an Australian Information Industry Award in recognition of the contribution the new software could have made to the DVI process. Unfortunately, the potential of the software was never realized because paradoxically the federal nature of Australia frequently thwarts uniformity of systems across the entire country. As a consequence, the final development of DAVID never took place. Given the recent problems encountered post-tsunami by the odontologists who were obliged to use the Plass Data system (Plass Data Software, Holbaek, Denmark) and with the impending risks imposed upon Victoria by the decision to host the Commonwealth Games in Melbourne during March 2006, funding was sought and obtained from the state government to update counter disaster preparedness at the VIFM. Some of these funds have been made available to upgrade and complete the DAVID project. In the wake of discussions between leading expert odontologists from around the world held in Geneva during July 2003 at the invitation of the International Committee of the Red Cross significant alterations to the initial design parameters of DAVID were proposed. This was part of broader discussions directed towards developing instruments which could be used by the ICRC's "The Missing

  4. Grid: From EGEE to EGI and from INFN-Grid to IGI

    International Nuclear Information System (INIS)

    Giselli, A.; Mazzuccato, M.

    2009-01-01

    In the last fifteen years the approach of the computational Grid has changed the way computing resources are used. Grid computing has raised interest worldwide in academia, industry, and government, with fast development cycles. Great efforts, huge funding and resources have been made available through national, regional and international initiatives aiming at providing Grid infrastructures, Grid core technologies, Grid middleware and Grid applications. The Grid software layers reflect the architecture of the services developed so far by the most important European and international projects. In this paper the Grid e-Infrastructure story is given, detailing European, Italian and international projects such as EGEE, INFN-Grid and NAREGI. In addition, the sustainability issue in the long-term perspective is described, presenting the plans of the European and Italian communities with EGI and IGI.

  5. Navigating Long-Term Care

    Directory of Open Access Journals (Sweden)

    James D. Holt MD

    2017-03-01

    Full Text Available Americans over age 65 constitute a larger percentage of the population each year: from 14% in 2010 (40 million elderly to possibly 20% in 2030 (70 million elderly. In 2015, an estimated 66 million people provided care to the ill, disabled, and elderly in the United States. In 2000, according to the Centers for Disease Control and Prevention (CDC, 15 million Americans used some form of long-term care: adult day care, home health, nursing home, or hospice. In all, 13% of people over 85 years old, compared with 1% of those ages 65 to 74, live in nursing homes in the United States. Transitions of care, among these various levels of care, are common: Nursing home to hospital transfer, one of the best-studied transitions, occurs in more than 25% of nursing home residents per year. This article follows one patient through several levels of care.

  6. Comparison of long-term results of computer-assisted anti-stigma education and reading anti-stigma educational materials.

    Science.gov (United States)

    Finkelstein, Joseph; Lapshin, Oleg; Wasserman, Evgeny

    2007-10-11

    Professionals working with psychiatric patients very often have negative beliefs and attitudes about their clients. We designed our study to investigate the effectiveness of anti-stigma interventions among university students who are trained to provide special education. The objective of our study was to compare the sustainability of the effect of two anti-stigma education programs. We enrolled 91 college students from the School of Special Education at the Herzen Russian State Pedagogic University (St Petersburg, Russia). Of those, 36 read two articles and a World Health Organization brochure (reading group, RG) devoted to the problem of psychiatric stigma, and 32 studied an anti-stigma web-based program (program group, PG). Twenty-three students were in a control group (CG) and received no intervention. The second study visit in six months was completed by 65 students. To measure the level of stigma we used the Community Attitudes toward the Mentally Ill (CAMI) questionnaire. The web-based program was based on the Computer-assisted Education system (CO-ED) which we described previously. The CO-ED system provides self-paced interactive education driven by adult learning theories. At the time of their first visit the age of the study participants was 19.0+/-1.2 years; of them, 99% were females. After the intervention, the level of stigma assessed by CAMI decreased in PG from 24.0+/-5.0 to 15.8+/-4.6 points (p<…); in RG, stigma dropped from 24.1+/-6.1 to 20.3+/-6.4 points (p<…). At the follow-up visit, the level of stigma in PG was significantly lower than in CG and RG (20.2+/-6.2 in CG, 21.3+/-6.5 in RG, and 18.7+/-4.9 in PG, p<…). Anti-stigma materials could be effective in reducing psychiatric stigma among university students. The effect of interactive web-based education based on adult learning theories was more stable as assessed in six months.

  7. Evaluation and analysis of long-term operation data for a grid connected PV generation system; Nippon de hajimete no gyakuchoryu ari kojin jutaku taiyoko hatsuden system no choki unten jisseki no hyoka kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Ishida, T.; Kozuma, S.; Hagihara, R.; Kishi, H.; Uchihashi, K.; Tsuda, S.; Nakano, S. [Sanyo Denki Co. Ltd., Tokyo (Japan)

    1997-11-25

    Long-term operation of a photovoltaic power generation system installed in a private residence in Osaka in 1992 is evaluated. Since the sale of power by backflow was approved five years ago, it has been working continuously without trouble. The evaluation covers the array output coefficient, inverter performance, system output coefficient, and power generation and sales track records. The findings obtained are mentioned below. Regular seasonal changes are observed in the array output coefficient, high in winter and low in summer, but the variation is smaller in amorphous arrays than in polycrystalline arrays. The monthly level of inverter performance is in almost all months higher than the 0.90 specified for standard operation. The overall system output coefficient is 0.749, which is higher than the average value in NEDO's field test business report. A total of 7852 kWh has been generated since the system started operation five years ago, of which 3787 kWh or 48% has been sold. 3 refs., 9 figs., 5 tabs.
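
A common way to express the system output coefficient (performance ratio) evaluated in this record is the ratio of the actual AC yield to the yield expected from the in-plane irradiation at the STC rating. The sketch below computes it from monthly totals; the numbers are placeholders and the exact definition used in the study may differ.

```python
# Performance ratio (often called system output coefficient):
#   PR = E_ac / (P_stc * H_poa / G_stc)
# where E_ac is the AC energy yield, P_stc the rated array power,
# H_poa the in-plane irradiation and G_stc = 1 kW/m^2.
# Placeholder numbers; the exact definition in the study may differ.

P_STC_KW = 3.0          # rated array power (kWp), hypothetical
G_STC = 1.0             # reference irradiance (kW/m^2)

# (month, in-plane irradiation kWh/m^2, measured AC yield kWh) - placeholders
MONTHLY = [
    ("Jan", 95.0, 215.0),
    ("Jul", 140.0, 300.0),
]

for month, h_poa, e_ac in MONTHLY:
    reference_yield = P_STC_KW * h_poa / G_STC   # kWh the array would give at STC efficiency
    pr = e_ac / reference_yield
    print(f"{month}: PR = {pr:.3f}")

annual_pr = sum(e for _, _, e in MONTHLY) / sum(P_STC_KW * h / G_STC for _, h, _ in MONTHLY)
print(f"combined PR = {annual_pr:.3f}")
```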

  8. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  9. Long Term Incentives for Residential Customers Using Dynamic Tariff

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Nielsen, Arne Hejde

    2015-01-01

    This paper reviews several grid tariff schemes, including flat tariff, time-of-use, time-varying tariff, demand charge and dynamic tariff (DT), from the perspective of long term incentives. Long term incentives can motivate the owners of flexible demands to change their energy consumption behavior in such a way that power system operation issues, such as system balance and congestion, can be alleviated. From the comparison study, including analysis and a case study, the DT scheme outperforms the other tariff schemes in terms of cost saving and improving network operation conditions.
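
The long-term incentive can be illustrated by comparing the cost of a shiftable load under a flat tariff and under a time-varying (dynamic) tariff: only the latter rewards moving consumption to low-price hours. Prices and load values below are made up for illustration.

```python
# Illustration of the long-term incentive: a shiftable load (e.g. EV charging
# of 10 kWh) pays the same under a flat tariff wherever it is placed, but is
# rewarded for moving to low-price hours under a dynamic tariff.
# Prices and load values are made up for illustration.

FLAT_PRICE = 0.30                         # currency/kWh, all hours
DYNAMIC_PRICE = [0.20] * 6 + [0.45] * 4 + [0.30] * 8 + [0.50] * 4 + [0.25] * 2  # 24 hourly prices
LOAD_KWH = 10.0                           # flexible energy to place in a 2-hour block
HOURS_NEEDED = 2


def cheapest_block(prices, hours):
    costs = [sum(prices[h:h + hours]) for h in range(len(prices) - hours + 1)]
    start = min(range(len(costs)), key=costs.__getitem__)
    return start, costs[start] / hours    # average price over the block


start, avg_price = cheapest_block(DYNAMIC_PRICE, HOURS_NEEDED)
flat_cost = LOAD_KWH * FLAT_PRICE
dynamic_cost = LOAD_KWH * avg_price

print(f"flat tariff cost:    {flat_cost:.2f}")
print(f"dynamic tariff cost: {dynamic_cost:.2f} (charging starts at hour {start})")
print(f"saving from shifting: {flat_cost - dynamic_cost:.2f}")
```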

  10. Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: a case study.

    Science.gov (United States)

    Holz, Elisa Mira; Botrel, Loic; Kaufmann, Tobias; Kübler, Andrea

    2015-03-01

    Despite intense brain-computer interface (BCI) research for >2 decades, BCIs have hardly been established at patients' homes. The current study aimed at demonstrating expert-independent BCI home use by a patient in the locked-in state and the effect it has on quality of life. In this case study, the P300 BCI-controlled application Brain Painting was facilitated and installed at the patient's home. Family and caregivers were trained in setting up the BCI system. After every BCI session, the end user indicated subjective level of control, loss of control, level of exhaustion, satisfaction, frustration, and enjoyment. To monitor BCI home use, evaluation data of every session were automatically sent and stored on a remote server. Satisfaction with the BCI as an assistive device and subjective workload was indicated by the patient. In accordance with the user-centered design, usability of the BCI was evaluated in terms of its effectiveness, efficiency, and satisfaction. The influence of the BCI on quality of life of the end user was assessed. At the patient's home. A 73-year-old patient with amyotrophic lateral sclerosis in the locked-in state. Not applicable. The BCI has been used by the patient independently of experts for >14 months. The patient painted in about 200 BCI sessions (1-3 times per week) with a mean painting duration of 81.86 minutes (SD=52.15, maximum: 230.41). The BCI improved the quality of life of the patient. In most of the BCI sessions the end user's satisfaction was high (mean=7.4, SD=3.24; range, 0-10). Dissatisfaction occurred mostly because of technical problems at the beginning of the study or varying BCI control. The subjective workload was moderate (mean=40.61; range, 0-100). The end user was highly satisfied with all components of the BCI (mean 4.42-5.0; range, 1-5). A perfect match between the user and the BCI technology was achieved (mean: 4.8; range, 1-5). Brain Painting had a positive impact on the patient's life on all three dimensions: competence

  11. An Offload NIC for NASA, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James

    2013-01-01

    This work addresses distributed data management and access: dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales in n×10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers for the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet are 32 to 40 Gbps full duplex at best. The best of the ultra-high-speed offload engines use expensive ASICs (application specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs that are also in the planning stages for future portability to ASICs and software to accommodate data rates at 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on requirements of the host, motherboard, or carrier-grade equipment. The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems

  12. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  13. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  14. LHCb: The Evolution of the LHCb Grid Computing Model

    CERN Multimedia

    Arrabito, L; Bouvet, D; Cattaneo, M; Charpentier, P; Clarke, P; Closier, J; Franchini, P; Graciani, R; Lanciotti, E; Mendez, V; Perazzini, S; Nandkumar, R; Remenska, D; Roiser, S; Romanovskiy, V; Santinelli, R; Stagni, F; Tsaregorodtsev, A; Ubeda Garcia, M; Vedaee, A; Zhelezov, A

    2012-01-01

    The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. Taking advantage of these changed conditions, LHCb ran with a higher pileup as well as a much larger charm physics programme, introducing a bigger event size and longer processing times. These changes led to shortages in the offline distributed data processing resources: an increased need of CPU capacity by a factor of 2 for reconstruction, 70% higher storage needs at T1 sites, and subsequently problems with data throughput for file access from the storage elements. To accommodate these changes the online running conditions and the Computing Model for offline data processing had to be adapted accordingly. This paper describes the changes implemented for the offline data processing on the Grid, relaxing the MONARC model in a first step and going beyond it subsequently. It further describes other operational issues discovered and solved during 2011, and presents the ...

  15. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  16. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Directory of Open Access Journals (Sweden)

    Olivier Terzo

    2012-10-01

    This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfall worldwide, which can be used by the WFP or other humanitarian assistance organizations to evaluate the event and understand the potentially floodable areas where their assistance is needed. The system is based on precipitation analysis and uses satellite rainfall data with worldwide coverage. The project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for monitoring current rainfall conditions over the world. Considering the large amount of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture in terms of performance for this specific purpose.
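
    The record above describes threshold-based alerting over a global gridded rainfall product, with the processing distributed for performance. A minimal sketch of that pattern is shown below; the threshold, tile size and the use of Python's process pool are illustrative assumptions, not details taken from ERDS.

```python
# Hypothetical sketch: flag grid cells whose accumulated rainfall exceeds a
# threshold, splitting the (lat, lon) grid across worker processes.
# THRESHOLD_MM and TILE_ROWS are illustrative values, not ERDS parameters.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

THRESHOLD_MM = 100.0          # illustrative 24 h accumulation threshold
TILE_ROWS = 100               # latitude rows handled per task

def scan_tile(args):
    """Return (row, col) indices of cells in one latitude band above the threshold."""
    row0, tile = args
    hits = np.argwhere(tile > THRESHOLD_MM)
    return [(row0 + r, c) for r, c in hits]

def detect_extremes(rain_grid):
    """rain_grid: 2-D array of accumulated rainfall (mm) on a lat/lon grid."""
    tasks = [(r, rain_grid[r:r + TILE_ROWS])
             for r in range(0, rain_grid.shape[0], TILE_ROWS)]
    alerts = []
    with ProcessPoolExecutor() as pool:
        for result in pool.map(scan_tile, tasks):
            alerts.extend(result)
    return alerts

if __name__ == "__main__":
    demo = np.random.gamma(2.0, 10.0, size=(400, 1440))   # synthetic rainfall field
    print(len(detect_extremes(demo)), "cells above threshold")
```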

  17. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  19. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  20. Long term study of mechanical

    Directory of Open Access Journals (Sweden)

    Ahmed M. Diab

    2016-06-01

    In this study, properties of limestone cement concrete containing different replacement levels of limestone powder were examined. It includes 0%, 5%, 10%, 15%, 20% and 25% of limestone powder as a partial replacement of cement. Silica fume was incorporated with limestone powder in some mixes to enhance the concrete properties. Compressive strength, splitting tensile strength and modulus of elasticity were determined. Also, the durability of limestone cement concrete with different C3A contents was examined. The weight loss, length change and cube compressive strength loss were measured for concrete attacked by 5% sodium sulfate using an accelerated test up to the age of 525 days. The corrosion resistance was measured through an accelerated corrosion test using time to first crack, crack width and steel reinforcement weight loss. Consequently, for the short and long term, the use of limestone up to 10% did not cause a significant reduction in concrete properties. It is not recommended to use blended limestone cement in the case of sulfate attack. The use of limestone cement containing up to 25% limestone has an insignificant effect on corrosion resistance before cracking.

  1. Long-term competence restoration.

    Science.gov (United States)

    Morris, Douglas R; DeYoung, Nathaniel J

    2014-01-01

    While the United States Supreme Court's Jackson v. Indiana decision and most state statutes mandate determinations of incompetent defendants' restoration probabilities, courts and forensic clinicians continue to lack empirical evidence to guide these determinations and do not yet have a consensus regarding whether and under what circumstances incompetent defendants are restorable. The evidence base concerning the restoration likelihood of those defendants who fail initial restoration efforts is even further diminished and has largely gone unstudied. In this study, we examined the disposition of a cohort of defendants who underwent long-term competence restoration efforts (greater than six months) and identified factors related to whether these defendants were able to attain restoration and adjudicative success. Approximately two-thirds (n = 52) of the 81 individuals undergoing extended restoration efforts were eventually deemed restored to competence. Lengths of hospitalization until successful restoration are presented with implications for the reasonable length of time that restoration efforts should persist. Older individuals were less likely to be restored and successfully adjudicated, and individuals with more severe charges and greater factual legal understanding were more likely to be restored and adjudicated. The significance of these findings for courts and forensic clinicians is discussed.

  2. Uranium ... long-term confidence

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    Halfway through 1983 the outlook for the world's uranium producers was far from bright if one takes a short term view. The readily accessible facts present a gloomy picture. The spot price of uranium over the past few years decreased from a high of $42-$43/lb to a low of $17 in 1982. It now hovers between $23 and $24. The contract prices negotiated between producers and consumers are not so accessible but they do not reflect the spot price. The reasons why contractual uranium prices do not follow the usual dictates of supply and demand are related to the position in which uranium and the associated power industries find themselves. There is public reaction with strong emotional overtones as well as much reduced expectations about the electric power needs of the world. Furthermore, the supply of uranium is not guaranteed despite present over-production. However, the people in the industry, taking the medium- and long-term view, are not despondent.

  3. Long-term corrosion studies

    International Nuclear Information System (INIS)

    Gdowski, G.

    1998-01-01

    The scope of this activity is to assess the long-term corrosion properties of metallic materials under consideration for fabricating waste package containers. Three classes of metals are to be assessed: corrosion resistant, intermediate corrosion resistant, and corrosion allowance. Corrosion properties to be evaluated are general, pitting and crevice corrosion, stress-corrosion cracking, and galvanic corrosion. The performance of these materials will be investigated under conditions that are considered relevant to the potential emplacement site. Testing in four aqueous solutions, and in the vapor phases above them, at two temperatures is planned for this activity. (The environmental conditions, test metals, and matrix are described in detail in Section 3.0.) The purpose and objective of this activity is to obtain kinetic and mechanistic information on the degradation of metallic alloys currently being considered for waste package containers. This information will be used to provide assistance to (1) waste package design (metal barrier selection) (E-20-90 to E-20-92), (2) waste package performance assessment activities (SIP-PA-2), (3) model development (E-20-75 to E-20-89), and (4) repository license application.

  4. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  5. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has widely evolved over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products, but rather it comprises a set of capabilities virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  6. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.
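
    The GaMPI system mentioned above combines MPI-based parallelism with grid job submission. The internals of GaMPI and the ARC/Ganga setup are not shown here; the following is only a minimal sketch of an MPI-parallel random walker of the kind described, assuming the mpi4py bindings are available.

```python
# Minimal sketch of an MPI-parallel random walk (not the actual GaMPI code):
# each rank runs independent walkers and the mean squared displacement is
# reduced to rank 0. Launch with e.g. `mpirun -n 8 python walker.py`.
from mpi4py import MPI
import random

STEPS = 10_000
WALKERS_PER_RANK = 100

def squared_displacement(steps):
    x = 0
    for _ in range(steps):
        x += random.choice((-1, 1))
    return x * x

comm = MPI.COMM_WORLD
local_sum = sum(squared_displacement(STEPS) for _ in range(WALKERS_PER_RANK))
total = comm.reduce(local_sum, op=MPI.SUM, root=0)   # gather partial sums on rank 0

if comm.Get_rank() == 0:
    walkers = WALKERS_PER_RANK * comm.Get_size()
    print("mean squared displacement:", total / walkers)
```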

  7. Kids at CERN Grids for Kids programme leads to advanced computing knowledge.

    CERN Multimedia

    2008-01-01

    Children as young as 10 learned computing skills, such as middleware, parallel processing and supercomputing, at CERN, the European Organisation for Nuclear Research, last week. The initiative for 10- to 12-year-olds is part of the Grids for Kids programme, which aims to introduce Grid computing as a tool for research.

  8. Grid computing in Pakistan: opening to Large Hadron Collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The Grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed employing mainly local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), which is one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  9. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment increasingly relies on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture and its glidein mechanism of pilot jobs is ideal for abstracting the Grid computing by making a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.
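
    The pilot (glidein) mechanism described above decouples resource acquisition from job execution: a pilot lands on a worker node and then pulls user tasks until none remain. The sketch below illustrates only that generic pattern; it is not Condor or glideinWMS code, and the thread-based queue is a stand-in for the real batch infrastructure.

```python
# Generic pilot-job pattern (illustration only, not the glideinWMS implementation):
# each pilot repeatedly fetches user tasks from a central queue instead of
# exiting after a single job, forming a virtual private pool on top of the
# underlying resources.
import queue
import threading

def pilot(task_queue, results, pilot_id):
    """One pilot: keep fetching and running tasks until the queue is drained."""
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return
        results.append((pilot_id, task()))

if __name__ == "__main__":
    tasks = queue.Queue()
    for i in range(20):
        tasks.put(lambda i=i: i * i)        # stand-ins for user analysis jobs

    results = []
    pilots = [threading.Thread(target=pilot, args=(tasks, results, p)) for p in range(4)]
    for p in pilots:
        p.start()
    for p in pilots:
        p.join()
    print(f"{len(results)} tasks executed by 4 pilots")
```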

  10. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop various scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to obtain, for each task, the average of its sorted list of completion times. Then, the maximum average is determined. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.

  11. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop various scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to obtain, for each task, the average of its sorted list of completion times. Then, the maximum average is determined. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
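
    The steps quoted in the two records above are specific enough to sketch. The following is one possible reading of that description, not the authors' reference implementation: repeatedly pick the unassigned task with the largest average completion time and place it on the machine where its completion time is smallest.

```python
# Sketch of the Sort-Mid steps as described above (an interpretation, not the
# authors' code): completion time of a task on a machine is the machine's
# ready time plus the task's execution time on that machine.
def sort_mid(exec_time):
    """exec_time[t][m]: running time of task t on machine m."""
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines              # when each machine becomes free
    unassigned = set(range(n_tasks))
    schedule = {}

    while unassigned:
        # completion time of every remaining task on every machine
        completion = {t: [ready[m] + exec_time[t][m] for m in range(n_machines)]
                      for t in unassigned}
        # the task with the maximum average completion time is placed first
        task = max(unassigned, key=lambda t: sum(completion[t]) / n_machines)
        machine = min(range(n_machines), key=lambda m: completion[task][m])
        ready[machine] = completion[task][machine]
        schedule[task] = machine
        unassigned.remove(task)

    return schedule, max(ready)             # assignment and resulting makespan

times = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 6]]
print(sort_mid(times))
```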

  12. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
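
    The pipeline abstraction described in this record (single-task processes with input and output ports, wired together in sequence) can be illustrated with a few lines of code. The stage names below are invented for illustration; the real system chains registered binaries on a cluster rather than Python callables.

```python
# Minimal illustration of a process pipeline: each process performs one task
# and passes its output to the next stage. This mirrors the structure described
# above, not the actual PHP/cluster implementation.
class Process:
    def __init__(self, name, func):
        self.name, self.func = name, func

    def run(self, image):
        result = self.func(image)
        print(f"{self.name}: done")
        return result

class Pipeline:
    def __init__(self, processes):
        self.processes = processes

    def run(self, image):
        for proc in self.processes:
            image = proc.run(image)     # output of one stage feeds the next
        return image

pipeline = Pipeline([
    Process("extract_attributes", lambda img: {**img, "dims": (256, 256, 180)}),
    Process("intensity_standardization", lambda img: {**img, "normalized": True}),
    Process("nonlinear_registration", lambda img: {**img, "registered": True}),
])
print(pipeline.run({"subject": "demo_scan"}))
```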

  13. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grids framework enables jobs that are submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper will highlight 18 months of operational experiences creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.

  14. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    The expansion and adoption of Grid technologies is prevented by the lack of a standard programming paradigm to port existing applications among different environments. The Distributed Resource Management Application API (DRMAA) has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability to express typical scientific applications, like High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As case studies, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
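
    To make the DRMAA idea above concrete, the sketch below submits a bulk array of jobs and waits for their completion using the Python drmaa bindings. The paper itself targets the DRMAA API on top of GridWay; this example only assumes that some DRMAA-compliant resource manager is configured on the submit host, and `worker.sh` is a hypothetical payload script.

```python
# Illustrative High-Throughput submission via DRMAA (Python `drmaa` bindings).
# Assumes a DRMAA-compliant DRMS is configured; worker.sh is a placeholder.
import drmaa

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sh"
    jt.args = ["worker.sh"]                 # hypothetical worker script
    jt.joinFiles = True                     # merge stdout and stderr

    # one job per input index: a simple High-Throughput (parameter sweep) pattern
    job_ids = session.runBulkJobs(jt, 1, 100, 1)

    # block until every task finishes and report its exit status
    for jid in job_ids:
        info = session.wait(jid, drmaa.Session.TIMEOUT_WAIT_FOREVER)
        print(jid, "exit status", info.exitStatus)

    session.deleteJobTemplate(jt)
```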

  15. Two-dimensional speckle-tracking strain echocardiography in long-term heart transplant patients: a study comparing deformation parameters and ejection fraction derived from echocardiography and multislice computed tomography.

    Science.gov (United States)

    Syeda, Bonni; Höfer, Peter; Pichler, Philipp; Vertesich, Markus; Bergler-Klein, Jutta; Roedler, Susanne; Mahr, Stephane; Goliasch, Georg; Zuckermann, Andreas; Binder, Thomas

    2011-07-01

    Longitudinal strain determined by speckle tracking is a sensitive parameter to detect systolic left ventricular dysfunction. In this study, we assessed regional and global longitudinal strain values in long-term heart transplants and compared deformation indices with ejection fraction as determined by transthoracic echocardiography (TTE) and multislice computed tomographic coronary angiography (MSCTA). TTE and MSCTA were prospectively performed in 31 transplant patients (10.6 years post-transplantation) and in 42 control subjects. Grey-scale apical views were recorded for speckle tracking (EchoPAC 7.0, GE) of the 16 segments of the left ventricle. The presence of coronary artery disease (CAD) was assessed by MSCTA. Strain analysis was performed in 1168 segments [496 in transplant patients (42.5%), 672 in control subjects (57.7%)]. Global longitudinal peak systolic strain was significantly lower in the transplant recipients than in the healthy population (-13.9 ± 4.2 vs. -17.4 ± 5.8%). Ejection fraction (Simpson's method) was 60.7 ± 10.1%/60.2 ± 6.7% in transplant recipients vs. 64.7 ± 6.4%/63.0 ± 6.2% in the healthy population (P = ns). Even though 'healthy' heart transplants without CAD exhibit a normal ejection fraction, deformation indices are reduced in this population when compared with control subjects. Our findings suggest that strain analysis is more sensitive than assessment of ejection fraction for the detection of abnormalities of systolic function.

  16. Computational Needs for the Next Generation Electric Grid Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Birman, Kenneth; Ganesh, Lakshmi; Renessee, Robbert van; Ferris, Michael; Hofmann, Andreas; Williams, Brian; Sztipanovits, Janos; Hemingway, Graham; University, Vanderbilt; Bose, Anjan; Stivastava, Anurag; Grijalva, Santiago; Grijalva, Santiago; Ryan, Sarah M.; McCalley, James D.; Woodruff, David L.; Xiong, Jinjun; Acar, Emrah; Agrawal, Bhavna; Conn, Andrew R.; Ditlow, Gary; Feldmann, Peter; Finkler, Ulrich; Gaucher, Brian; Gupta, Anshul; Heng, Fook-Luen; Kalagnanam, Jayant R; Koc, Ali; Kung, David; Phan, Dung; Singhee, Amith; Smith, Basil

    2011-10-05

    The April 2011 DOE workshop, 'Computational Needs for the Next Generation Electric Grid', was the culmination of a year-long process to bring together some of the Nation's leading researchers and experts to identify computational challenges associated with the operation and planning of the electric power system. The attached papers provide a journey into these experts' insights, highlighting a class of mathematical and computational problems relevant for potential power systems research. While each paper defines a specific problem area, there were several recurrent themes. First, the breadth and depth of power system data has expanded tremendously over the past decade. This provides the potential for new control approaches and operator tools that can enhance system efficiencies and improve reliability. However, the large volume of data poses its own challenges, and could benefit from application of advances in computer networking and architecture, as well as data base structures. Second, the computational complexity of the underlying system problems is growing. Transmitting electricity from clean, domestic energy resources in remote regions to urban consumers, for example, requires broader, regional planning over multi-decade time horizons. Yet, it may also mean operational focus on local solutions and shorter timescales, as reactive power and system dynamics (including fast switching and controls) play an increasingly critical role in achieving stability and ultimately reliability. The expected growth in reliance on variable renewable sources of electricity generation places an exclamation point on both of these observations, and highlights the need for new focus in areas such as stochastic optimization to accommodate the increased uncertainty that is occurring in both planning and operations. Application of research advances in algorithms (especially related to optimization techniques and uncertainty quantification) could accelerate power

  17. On the long-term analysis with finite elements

    International Nuclear Information System (INIS)

    Argyris, J.H.; Szimmat, J.; Willam, K.J.

    1975-01-01

    Following a presentation of concrete creep, a brief summary of the direct and incremental calculation methods on long-term behaviour is given. This is followed by a survey of the method of the inner state variables, which on the one hand gives a uniform framework for the various formulations of concrete creep, and on the other hand leads to a computer-ready calculation process. Two examples on long-term behaviour illustrate the regions of application of the computer methods. (orig./LH) [de

  18. Financing long term liabilities (Germany)

    International Nuclear Information System (INIS)

    2003-01-01

    charges and fees levied from the waste producers. Altogether, financial resources for decommissioning are needed for the following steps: the post-operational phase in which the facility is prepared for dismantling after its final shut-down, dismantling of the radioactive part of the facility, management, storage and disposal of the radioactive waste, restoration of the site, and licensing and regulatory supervision of all these steps. Additional means are necessary for the management, storage and disposal of the spent fuel. The way in which the availability of financial resources is secured differs between publicly owned installations and installations of the private power utilities. In Germany, past practices have resulted in singular contaminated sites of limited extent, mainly during the first half of the 20th century. Those contaminated sites have been or are being cleaned up and redeveloped. In large areas of Saxony and Thuringia, the geological formations permitted the surface and underground mining of uranium ore. Facilities of the former Soviet-German WISMUT Ltd., where ore was mined and processed from 1946 until the early 1990s, can be found at numerous sites. In the course of the re-unification of Germany, the Soviet shares of WISMUT were taken over by the Federal Republic of Germany and the closure of the WISMUT facilities was initiated. In that phase the extent of the damage to the environment and of the necessary remediation work became clear. All mining and milling sites are now closed and under decommissioning. A comprehensive remediation concept covers all WISMUT sites. Heaps and mill-tailings ponds are being transferred into a long-term stable condition. The area of the facilities to be remediated amounts to more than 30 km². Heaps cover a total area of ca. 15.5 km², and the tailings ponds, in which the tailings resulting from the uranium production are stored as sludges, cover 6.3 km². In total, the remediation issues are very complex and without precedent. The

  19. Long term radiological impact of thorium extraction

    International Nuclear Information System (INIS)

    Menard, S.; Schapira, J.P.

    1995-01-01

    Thorium extraction produces a certain amount of radioactive waste. The potential long term radiological impact of these residues has been calculated using the recent ICRP-68 ingestion dose factors in connection with the computing code DECAY, developed at Orsay and described in this work. This code solves the well-known Bateman equations which govern the time dependence of a set of coupled radioactive nuclei. Monazites will very likely be the first minerals to be exploited in the case of an extensive use of thorium as nuclear fuel. Because monazites contain uranium as well, mining residues will contain not only the descendants of 232Th and a certain proportion of non-extracted thorium (taken here to be 5%), but also this uranium, if left in the wastes for economic reasons. If no uranium were present at all in the mineral, the potential radiotoxicity would strongly decrease in approximately 60 years, at the pace of the 5.8-year period of 228Ra, which becomes the longest-lived radionuclide of the 4n radioactive family in the residues. Moreover, there is no risk due to radon exhalation, because of the very short period of 220Rn. These significant differences between uranium and thorium mining have to be considered in view of some estimated real long term radiological impacts due to uranium residues, which could reach a value of the order of 1 mSv/year, the dose limit recommended for the public by the recent ICRP-60. (authors). 15 refs., 4 figs., 3 tabs., 43 appendices
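
    For reference, the Bateman equations mentioned in this record take the standard textbook form below (this is the general statement for a linear decay chain, not an excerpt from the paper or from the DECAY code).

```latex
% Decay chain N_1 -> N_2 -> ... -> N_n with decay constants \lambda_i:
\begin{align}
  \frac{dN_1}{dt} &= -\lambda_1 N_1(t), \\
  \frac{dN_i}{dt} &= \lambda_{i-1} N_{i-1}(t) - \lambda_i N_i(t), \qquad i = 2, \dots, n.
\end{align}
% Closed-form solution for an initially pure parent (N_i(0) = 0 for i > 1):
\begin{equation}
  N_n(t) = \frac{N_1(0)}{\lambda_n} \sum_{i=1}^{n} \lambda_i c_i \, e^{-\lambda_i t},
  \qquad
  c_i = \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{\lambda_j}{\lambda_j - \lambda_i}.
\end{equation}
```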

  20. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real-time computing for handling large volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.

  1. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing, a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two.... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values....

  2. Application of Near-Surface Remote Sensing and computer algorithms in evaluating impacts of agroecosystem management on Zea mays (corn) phenological development in the Platte River - High Plains Aquifer Long Term Agroecosystem Research Network field sites.

    Science.gov (United States)

    Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.

    2017-12-01

    The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort that focuses on conducting research that addresses current and emerging issues in agriculture related to sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems that comprise abundant corn, soybeans, managed grasslands and beef production. As part of the national LTAR network, PR-HPA participates and contributes near-surface remotely sensed imagery of corn, soybean and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both Infrared and Red-Green-Blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques proved very instrumental in revealing crop phenological changes such as green-up and tasseling in corn. This poster also reports the suitability and applicability of corn-derived computer-based algorithms for evaluating the phenological development of sorghum, since both crops have similarities in their phenology, with sorghum panicles being similar to corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
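
    The record does not name the specific algorithms applied to the camera imagery. One index widely used with PhenoCam-style RGB imagery to track green-up is the green chromatic coordinate, GCC = G / (R + G + B), averaged over a region of interest; the sketch below shows that metric only, as an illustration of the kind of analysis such algorithms perform.

```python
# Green chromatic coordinate (GCC) for an RGB canopy image; a rising daily GCC
# series typically marks green-up. This is a common PhenoCam-style metric, not
# necessarily the algorithm used in the poster.
import numpy as np

def gcc(image, roi=None):
    """image: H x W x 3 array of R, G, B digital numbers; roi: optional boolean mask."""
    img = image.astype(float)
    pixels = img[roi] if roi is not None else img.reshape(-1, 3)
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    total = r + g + b
    valid = total > 0                      # ignore fully dark pixels
    return float(np.mean(g[valid] / total[valid]))

if __name__ == "__main__":
    demo = np.random.randint(0, 256, size=(480, 640, 3))
    print("GCC:", round(gcc(demo), 3))
```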

  3. A Long-term Plan for Kalk

    DEFF Research Database (Denmark)

    2017-01-01

    In this case, the author demonstrates together with the owner-manager of KALK A/S, Mr Rasmus Jorgensen, how to use the Family Business Map to frame a constructive discussion about long-term planning. The Family Business Map is a tool for long-term planning in family firms developed by Professor...

  4. Virtual Models of Long-Term Care

    Science.gov (United States)

    Phenice, Lillian A.; Griffore, Robert J.

    2012-01-01

    Nursing homes, assisted living facilities and home-care organizations, use web sites to describe their services to potential consumers. This virtual ethnographic study developed models representing how potential consumers may understand this information using data from web sites of 69 long-term-care providers. The content of long-term-care web…

  5. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  6. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    Directory of Open Access Journals (Sweden)

    Watthanai Pinthong

    2016-07-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.
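
    A key step in such a BOINC deployment is partitioning the query data into work units small enough to send to the desktop clients. The sketch below covers only that partitioning step (splitting a large FASTA query file into fixed-size chunks); server-side registration of the work units and the BLAST wrapper application are not shown, and the chunk size is an illustrative assumption.

```python
# Split a large FASTA query file into chunks, each of which would become one
# work unit for the distributed BLAST runs. Pure standard library; the
# reads_per_chunk value is illustrative.
from pathlib import Path

def split_fasta(path, reads_per_chunk=10_000, out_dir="workunits"):
    Path(out_dir).mkdir(exist_ok=True)
    chunk, count, index = [], 0, 0
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if count == reads_per_chunk:        # current chunk is full
                    _write_chunk(out_dir, index, chunk)
                    chunk, count, index = [], 0, index + 1
                count += 1
            chunk.append(line)
    if chunk:
        _write_chunk(out_dir, index, chunk)
        index += 1
    return index                                    # number of work units written

def _write_chunk(out_dir, index, lines):
    with open(f"{out_dir}/chunk_{index:05d}.fasta", "w") as out:
        out.writelines(lines)

# Example: n_units = split_fasta("reads.fasta")
```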

  7. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes by parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.

  8. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  9. Consolidation of long-term memory: Evidence and alternatives.

    NARCIS (Netherlands)

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how

  10. Input reduction for long-term morphodynamic simulations

    NARCIS (Netherlands)

    Walstra, D.J.R.; Ruessink, G.; Hoekstra, R.; Tonnon, P.K.

    2013-01-01

    Input reduction is imperative to long-term (> years) morphodynamic simulations to avoid excessive computation times. Here, we discuss the input-reduction framework for wave-dominated coastal settings introduced by Walstra et al. (2013). The framework comprised 4 steps, viz. (1) the selection of the

  11. 26 CFR 1.460-1 - Long-term contracts.

    Science.gov (United States)

    2010-04-01

    ... the manufacture of personal property is a manufacturing contract. In contrast, a contract for the... performance of engineering and design services, and the production of components and subassemblies that are..., enters into a single long-term contract to design and manufacture a satellite and to develop computer...

  12. Computational Fluid Dynamic (CFD) Analysis of a Generic Missile With Grid Fins

    National Research Council Canada - National Science Library

    DeSpirito, James

    2000-01-01

    This report presents the results of a study demonstrating an approach for using viscous computational fluid dynamic simulations to calculate the flow field and aerodynamic coefficients for a missile with grid fin...

  13. Taiwan links up to world's 1st LHC Computing Grid Project

    CERN Multimedia

    2003-01-01

    Taiwan's Academia Sinica was linked up to the Large Hadron Collider (LHC) Computing Grid Project to work jointly with 12 other countries to construct the world's largest and most powerful particle accelerator

  14. Long-term creep test with finite elements

    International Nuclear Information System (INIS)

    Argyris, J.H.; Szimmat, J.; Willam, K.J.

    1975-01-01

    Following a presentation of concrete creep, a brief summary of the direct and incremental calculation methods for long-term creep behaviour is given. In addition, a survey on the methods of the inner state variables is given which, on the one hand, gives a uniform framework for the various formulations of concrete creep, and on the other hand leads to a computable calculation method. Two examples on long-term creep behaviour illustrate the application field of the calculation method. (orig./LH) [de

  15. Software, component, and service deployment in computational Grids

    International Nuclear Information System (INIS)

    von Laszewski, G.; Blau, E.; Bletzinger, M.; Gawor, J.; Lane, P.; Martin, S.; Russell, M.

    2002-01-01

    Grids comprise an infrastructure that enables scientists to use a diverse set of distributed remote services and resources as part of complex scientific problem-solving processes. We analyze some of the challenges involved in deploying software and components transparently in Grids. We report on three practical solutions used by the Globus Project. Lessons learned from this experience lead us to believe that it is necessary to support a variety of software and component deployment strategies. These strategies are based on the hosting environment

  16. Task-and-role-based access-control model for computational grid

    Institute of Scientific and Technical Information of China (English)

    LONG Tao; HONG Fan; WU Chi; SUN Ling-li

    2007-01-01

    Access control in a grid environment is a challenging issue because the heterogeneous nature and independent administration of geographically dispersed resources in a grid require access control to use fine-grained policies. We established a task-and-role-based access-control model for the computational grid (CG-TRBAC model), integrating the concepts of role-based access control (RBAC) and task-based access control (TBAC). In this model, condition restrictions are defined, and concepts specifically tailored to workflow management systems are simplified or omitted so that role assignment and security administration fit the computational grid better than in traditional models; permissions are mutable with the task status and system variables, and can be dynamically controlled. The CG-TRBAC model is shown to be flexible and extendible. It can implement different control policies. It embodies the security principle of least privilege and executes active dynamic authorization. A task attribute can be extended to satisfy different requirements in a real grid system.
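
    The core idea of CG-TRBAC (a permission holds only if the user's role allows it, the task is in an appropriate state, and the system condition is satisfied) can be illustrated with a toy check. The rule structure below is invented for illustration and is not the authors' formal model.

```python
# Toy task-and-role-based access check: a permission is granted only when the
# role allows it, the task status is one in which the permission is active,
# and an extra condition on system variables holds. Illustrative only.
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Rule:
    role: str
    permission: str
    task_states: FrozenSet[str]          # task statuses in which the permission holds
    condition: Callable[[dict], bool]    # restriction on system variables

RULES = [
    Rule("job_owner", "cancel_job", frozenset({"queued", "running"}),
         lambda env: True),
    Rule("site_admin", "drain_node", frozenset({"running"}),
         lambda env: env.get("maintenance_window", False)),
]

def check(role, permission, task_state, env):
    return any(r.role == role and r.permission == permission
               and task_state in r.task_states and r.condition(env)
               for r in RULES)

print(check("job_owner", "cancel_job", "running", {}))                              # True
print(check("site_admin", "drain_node", "running", {"maintenance_window": False}))  # False
```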

  17. CERN Services for Long Term Data Preservation

    CERN Document Server

    Shiers, Jamie; Blomer, Jakob; Ganis, Gerardo; Dallmeier-Tiessen, Sunje; Simko, Tibor; Cancio Melia, German; CERN. Geneva. IT Department

    2016-01-01

    In this paper we describe the services that are offered by CERN for Long Term preservation of High Energy Physics (HEP) data, with the Large Hadron Collider (LHC) as a key use case. Data preservation is a strategic goal for European High Energy Physics (HEP), as well as for the HEP community worldwide, and we position our work in this global context. Specifically, we target the preservation of the scientific data, together with the software, documentation and computing environment needed to process, (re-)analyse or otherwise (re-)use the data. The target data volumes range from hundreds of petabytes (PB – 10^15 bytes) to hundreds of exabytes (EB – 10^18 bytes) for a target duration of several decades. The Use Cases driving data preservation are presented together with metrics that allow us to measure how close we are to meeting our goals, including the possibility for formal certification for at least part of this work. Almost all of the services that we describe are fully generic – the exception being A...

  18. Robotics for Long-Term Monitoring

    International Nuclear Information System (INIS)

    Shahin, Sarkis; Duran, Celso

    2002-01-01

    While long-term monitoring and stewardship means many things to many people, DOE has defined it as "the physical controls, institutions, information, and other mechanisms needed to ensure protection of people and the environment at sites where DOE has completed or plans to complete cleanup (e.g., landfill closures, remedial actions, and facility stabilization)." Across the United States, there are thousands of contaminated sites with multiple contaminants released from multiple sources where contaminants have been transported and commingled. The U.S. government and U.S. industry are responsible for most of the contamination and are landowners of many of these contaminated properties. These sites must be surveyed periodically for various criteria including structural deterioration, water intrusion, integrity of storage containers, atmospheric conditions, and hazardous substance release. The surveys, however, are intrusive, time-consuming, and expensive and expose survey personnel to radioactive contamination. In long-term monitoring, there's a need for an automated system that will gather and report data from sensors without costly human labor. In most cases, a SCADA (Supervisory Control and Data Acquisition) unit is used to collect and report data from a remote location. A SCADA unit consists of an embedded computer with data acquisition capabilities. The unit can be configured with various sensors placed in different areas of the site to be monitored. A system of this type is static, i.e., the sensors, once placed, cannot be moved to other locations within the site. For those applications where the number of sampling locations would require too many sensors, or where the exact location of future problems is unknown, a mobile sensing platform is an ideal solution. In many facilities that undergo regular inspections, the number of video cameras and air monitors required to eliminate the need for human inspections is very large and far too costly. HCET's remote harsh

  19. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations make it possible to distribute the processing load, achieving accurate speedup values.

  20. Long term wet spent nuclear fuel storage

    International Nuclear Information System (INIS)

    1987-04-01

    The meeting showed that there is continuing confidence in the use of wet storage for spent nuclear fuel and that long-term wet storage of fuel clad in zirconium alloys can be readily achieved. The importance of maintaining good water chemistry has been identified. The long-term wet storage behaviour of sensitized stainless steel clad fuel involves, as yet, some uncertainties. However, great reliance will be placed on long-term wet storage of spent fuel into the future. The following topics were treated to some extent: oxidation of the external surface of fuel cladding, rod consolidation, radiation protection, optimum methods of treating spent fuel storage water, physical radiation effects, and the behaviour of spent fuel assemblies under long-term wet storage conditions. A number of papers on national experience are included.

  1. Industrial Foundations as Long-Term Owners

    DEFF Research Database (Denmark)

    Thomsen, Steen; Poulsen, Thomas; Børsting, Christa Winther

    Short-termism has become a serious concern for corporate governance, and this has inspired a search for institutional arrangements to promote long-term decision-making. In this paper, we call attention to long-term ownership by industrial foundations, which is common in Northern Europe but little known in the rest of the world. We use a unique Danish data set to document that industrial foundations are long-term owners that practice long-term governance. We show that foundation ownership is highly stable compared to other ownership structures. Foundation-owned companies replace managers less frequently. They have conservative capital structures with low financial leverage. They score higher on an index of long-termism in finance, investment, and employment. They survive longer. Overall, our paper supports the hypothesis that corporate time horizons are influenced by ownership structures...

  2. Coping with PH over the Long Term

    Science.gov (United States)

    ... a job, a volunteer commitment, or even a hobby can take a toll on long-term survivors ... people find solace in meditation, faith, humor, writing, hobbies and more. Find an outlet that you enjoy ...

  3. Long term effects of radiation in man

    International Nuclear Information System (INIS)

    Tso Chih Ping; Idris Besar

    1984-01-01

    An overview of the long term effects of radiation in man is presented, categorizing into somatic effects, genetic effects and teratogenic effects, and including an indication of the problems that arise in their determination. (author)

  4. Long term liquidity analysis of the firm

    Directory of Open Access Journals (Sweden)

    Jaroslav Gonos

    2009-09-01

    Liquidity control is a very difficult and important function. If the business is not liquid in the long term, it is under threat of bankruptcy, and on the other hand a surplus of cash in hand threatens its future efficiency, because cash in hand is a source of only limited profitability. Long term liquidity is related to the ability to pay short term and long term liabilities. The article points out the monitoring and analysis of long term liquidity in a concrete business, in this case a printing industry company. At the end of the article, the monitored and analysed liquidity is evaluated over a five-year period.

  5. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  6. Sleep facilitates long-term face adaptation

    OpenAIRE

    Ditye, Thomas; Javadi, Amir Homayoun; Carbon, Claus-Christian; Walsh, Vincent

    2013-01-01

    Adaptation is an automatic neural mechanism supporting the optimization of visual processing on the basis of previous experiences. While the short-term effects of adaptation on behaviour and physiology have been studied extensively, perceptual long-term changes associated with adaptation are still poorly understood. Here, we show that the integration of adaptation-dependent long-term shifts in neural function is facilitated by sleep. Perceptual shifts induced by adaptation to a distorted imag...

  7. A priori modeling of chemical reactions on computational grid platforms: Workflows and data models

    International Nuclear Information System (INIS)

    Rampino, S.; Monari, A.; Rossi, E.; Evangelisti, S.; Laganà, A.

    2012-01-01

    Graphical abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS assembled on the European Grid allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Highlights: ► The grid-based GEMS simulator accurately models small chemical systems. ► Q5Cost and D5Cost file formats provide interoperability in the workflow. ► Benchmark runs on H + H2 highlight the Grid empowering. ► Calculated k(T)'s for O + O2 and N + N2 fall within the error bars of the experiment. - Abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS has been assembled on the segment of the European Grid devoted to the Computational Chemistry Virtual Organization. The related grid-based workflow allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Interoperability between computational codes across the different stages of the workflow was made possible by the use of the common data formats Q5Cost and D5Cost. Illustrative benchmark runs have been performed on the prototype H + H2, N + N2 and O + O2 gas phase exchange reactions, and thermal rate coefficients have been calculated for the last two. Results are discussed in terms of the modeling of the interaction, and the advantages of using the Grid are highlighted.

  8. Comparative Analysis of Stability to Induced Deadlocks for Computing Grids with Various Node Architectures

    Directory of Open Access Journals (Sweden)

    Tatiana R. Shmeleva

    2018-01-01

    Full Text Available In this paper, we consider the classification and applications of switching methods, their advantages and disadvantages. A model of a computing grid was constructed in the form of a colored Petri net with a node which implements cut-through packet switching. The model consists of packet switching nodes, traffic generators and guns that form malicious traffic disguised as usual user traffic. The characteristics of the grid model were investigated under a working load with different intensities. The influence of malicious traffic, such as a traffic duel, on the quality-of-service parameters of the grid was estimated. A comparative analysis of the stability of computing grids was carried out with nodes which implement the store-and-forward and cut-through switching technologies. It is shown that the grids' performance is approximately the same under working load conditions, and that under peak load conditions the grid with the node implementing the store-and-forward technology is more stable. The grid with nodes implementing SAF technology comes to a complete deadlock through an additional load which is less than 10 percent. After a detailed study, it is shown that the traffic duel configuration does not affect the grid with cut-through nodes if the workload is increased to the peak load, at which the grid comes to a complete deadlock. The execution intensity of the guns which generate malicious traffic is determined by a random function with the Poisson distribution. The modeling system CPN Tools is used for constructing models and measuring parameters. Grid performance and average package delivery time are estimated in the grid under various load options.
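
    A loose, much-simplified illustration of the behaviour described above (this is not the paper's colored Petri net model; the rates, buffer size and simulation horizon below are invented): a single store-and-forward node is driven by Poisson user traffic plus a Poisson "gun" of malicious packets, and the average delivery time and the number of packets lost to buffer exhaustion are reported.

```python
import random

random.seed(1)

SERVICE_RATE = 1.0     # mean forwarding rate of the node (packets per time unit)
USER_RATE = 0.6        # Poisson intensity of normal user traffic (assumed)
GUN_RATE = 0.6         # Poisson intensity of the malicious "gun" (assumed)
BUFFER_SIZE = 50       # finite store-and-forward buffer
SIM_TIME = 10_000.0

def poisson_arrivals(rate, horizon):
    """Arrival times of a Poisson process with the given rate, up to `horizon`."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

arrivals = sorted(poisson_arrivals(USER_RATE, SIM_TIME) +
                  poisson_arrivals(GUN_RATE, SIM_TIME))

server_free_at = 0.0   # time at which the node finishes its current packet
in_buffer = []         # departure times of packets still held by the node
delays, dropped = [], 0

for t in arrivals:
    in_buffer = [d for d in in_buffer if d > t]   # forwarded packets free their slots
    if len(in_buffer) >= BUFFER_SIZE:
        dropped += 1   # buffer overflow: a crude stand-in for congestion collapse
        continue
    start = max(t, server_free_at)
    departure = start + random.expovariate(SERVICE_RATE)
    server_free_at = departure
    in_buffer.append(departure)
    delays.append(departure - t)

print(f"average delivery time: {sum(delays) / len(delays):.2f}")
print(f"dropped packets: {dropped} of {len(arrivals)}")
```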

  9. Grid computing and collaboration technology in support of fusion energy sciences

    International Nuclear Information System (INIS)

    Schissel, D.P.

    2005-01-01

    Science research in general and magnetic fusion research in particular continue to grow in size and complexity resulting in a concurrent growth in collaborations between experimental sites and laboratories worldwide. The simultaneous increase in wide area network speeds has made it practical to envision distributed working environments that are as productive as traditionally collocated work. In computing power, it has become reasonable to decouple production and consumption resulting in the ability to construct computing grids in a similar manner as the electrical power grid. Grid computing, the secure integration of computer systems over high speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. For human interaction, advanced collaborative environments are being researched and deployed to have distributed group work that is as productive as traditional meetings. The DOE Scientific Discovery through Advanced Computing Program initiative has sponsored several collaboratory projects, including the National Fusion Collaboratory Project, to utilize recent advances in grid computing and advanced collaborative environments to further research in several specific scientific domains. For fusion, the collaborative technology being deployed is being used in present day research and is also scalable to future research, in particular, to the International Thermonuclear Experimental Reactor experiment that will require extensive collaboration capability worldwide. This paper briefly reviews the concepts of grid computing and advanced collaborative environments and gives specific examples of how these technologies are being used in fusion research today

  10. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have quickly become established as breakthrough paradigms for solving complex problems by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large-scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud And Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  11. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    Kennedy, J; Walker, R; Olszewski, A; Nderitu, S; Serfon, C; Duckeck, G

    2010-01-01

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and is currently comprised of 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services, we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore, we have defined good channels of communication between ATLAS, the T1 and the T2s, and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  12. Minimizing the negative effects of device mobility in cell-based ad-hoc wireless computational grids

    CSIR Research Space (South Africa)

    Mudali, P

    2006-09-01

    Full Text Available This paper provides an outline of research being conducted to minimize the disruptive effects of device mobility in wireless computational grid networks. The proposed wireless grid framework uses the existing GSM cellular architecture, with emphasis...

  13. Very long-term sequelae of craniopharyngioma.

    Science.gov (United States)

    Wijnen, Mark; van den Heuvel-Eibrink, Marry M; Janssen, Joseph A M J L; Catsman-Berrevoets, Coriene E; Michiels, Erna M C; van Veelen-Vincent, Marie-Lise C; Dallenga, Alof H G; van den Berge, J Herbert; van Rij, Carolien M; van der Lely, Aart-Jan; Neggers, Sebastian J C M M

    2017-06-01

    Studies investigating long-term health conditions in patients with craniopharyngioma are limited by short follow-up durations and generally do not compare long-term health effects according to initial craniopharyngioma treatment approach. In addition, studies comparing long-term health conditions between patients with childhood- and adult-onset craniopharyngioma report conflicting results. The objective of this study was to analyse a full spectrum of long-term health effects in patients with craniopharyngioma according to initial treatment approach and age group at craniopharyngioma presentation. Cross-sectional study based on retrospective data. We studied a single-centre cohort of 128 patients with craniopharyngioma treated from 1980 onwards (63 patients with childhood-onset disease). Median follow-up since craniopharyngioma presentation was 13 years (interquartile range: 5-23 years). Initial craniopharyngioma treatment approaches included gross total resection (n = 25), subtotal resection without radiotherapy (n = 44), subtotal resection with radiotherapy (n = 25), cyst aspiration without radiotherapy (n = 8), and 90-Yttrium brachytherapy (n = 21). Pituitary hormone deficiencies (98%), visual disturbances (75%) and obesity (56%) were the most common long-term health conditions observed. Different initial craniopharyngioma treatment approaches resulted in similar long-term health effects. Patients with childhood-onset craniopharyngioma experienced significantly more growth hormone deficiency, diabetes insipidus, panhypopituitarism, morbid obesity, epilepsy and psychiatric conditions compared with patients with adult-onset disease. Recurrence-/progression-free survival was significantly lower after initial craniopharyngioma treatment with cyst aspiration compared with other therapeutic approaches. Survival was similar between patients with childhood- and adult-onset craniopharyngioma. Long-term health conditions were comparable after

  14. Grid Computing at GSI for ALICE and FAIR - present and future

    International Nuclear Information System (INIS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-01-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall into a category that could currently not be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE-CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware 'AliEn', the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  15. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For some years now, the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies like Cloud Computing, virtualization and high-performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  16. Long-term prisoner in prison isolation

    Directory of Open Access Journals (Sweden)

    Karolina Grudzińska

    2013-06-01

    Full Text Available The long-term prisoner belongs to a particular category of people held in prisons. On the one hand, this group often contains heavily demoralized people who have committed the most serious crimes; on the other hand, it is a group of prisoners for whom the rehabilitation programme should be especially well thought out. The situation of a person confined for years is complicated not only for the prisoners but also for the entire prison staff, who have to make sure that prison isolation does not leave convicts in a state of learned helplessness, lacking the skills for independent planning and decision-making. In addition, when planning the rehabilitation of long-term prisoners it should not be forgotten that these prisoners will, sooner or later, return to the community outside prison; the negative effects of long-term imprisonment should therefore be prevented. This article presents the main issues related to the execution of imprisonment for long-term prisoners. It is an attempt to systematize the knowledge about this category of people living in prison isolation.

  17. CheckDen, a program to compute quantum molecular properties on spatial grids.

    Science.gov (United States)

    Pacios, Luis F; Fernandez, Alberto

    2009-09-01

    CheckDen, a program to compute quantum molecular properties on a variety of spatial grids, is presented. The program reads as its only input wavefunction files written by standard quantum packages and calculates the electron density rho(r), promolecule and density difference function, gradient of rho(r), Laplacian of rho(r), information entropy, electrostatic potential, kinetic energy densities G(r) and K(r), electron localization function (ELF), and localized orbital locator (LOL) function. These properties can be calculated on a wide range of one-, two-, and three-dimensional grids that can be processed by widely used graphics programs to render high-resolution images. CheckDen also offers other options, such as extracting separate atom contributions to the computed property, converting grid output data into CUBE and OpenDX volumetric data formats, and performing arithmetic combinations with grid files in all the recognized formats.
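
    A rough sketch of the kind of grid evaluation such a program performs (this is not CheckDen's implementation; the single s-type Gaussian orbital, its exponent, and the grid spacing and extent are arbitrary assumptions): evaluate the density on a regular 3D grid and check its numerical integral.

```python
import numpy as np

# Hypothetical single s-type Gaussian primitive: rho(r) = |phi(r)|^2
alpha = 1.2                                   # arbitrary exponent (assumption)
norm = (2.0 * alpha / np.pi) ** 0.75          # normalization of the primitive

# Regular 3D grid with 0.1 bohr spacing, spanning [-4, 4] bohr per axis
axis = np.arange(-4.0, 4.0 + 1e-9, 0.1)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r2 = x**2 + y**2 + z**2

phi = norm * np.exp(-alpha * r2)              # orbital value on the grid
rho = phi**2                                  # electron density rho(r)

# Numerical integral of rho should be close to 1 electron
dv = 0.1**3
print("integrated density:", rho.sum() * dv)
```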

  18. Reliable multicast for the Grid: a case study in experimental computer science.

    Science.gov (United States)

    Nekovee, Maziar; Barcellos, Marinho P; Daw, Michael

    2005-08-15

    In its simplest form, multicast communication is the process of sending data packets from a source to multiple destinations in the same logical multicast group. IP multicast allows the efficient transport of data through wide-area networks, and its potentially great value for the Grid has been highlighted recently by a number of research groups. In this paper, we focus on the use of IP multicast in Grid applications, which require high-throughput reliable multicast. These include Grid-enabled computational steering and collaborative visualization applications, and wide-area distributed computing. We describe the results of our extensive evaluation studies of state-of-the-art reliable-multicast protocols, which were performed on the UK's high-speed academic networks. Based on these studies, we examine the ability of current reliable multicast technology to meet the Grid's requirements and discuss future directions.
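
    Reliable multicast protocols of the kind evaluated in the paper add loss detection and retransmission on top of plain IP multicast. As a minimal sketch of that underlying primitive only (the group address and port are arbitrary, and no reliability layer is included), a receiver can join a multicast group as follows:

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"   # assumed administratively scoped multicast group
MCAST_PORT = 5007         # arbitrary port

# Receiver: join the multicast group and print incoming datagrams.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Tell the kernel to add this host to the multicast group on any interface.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(addr, data[:64])
```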

  19. Long-term follow-up study and long-term care of childhood cancer survivors

    Directory of Open Access Journals (Sweden)

    Hyeon Jin Park

    2010-04-01

    Full Text Available The number of long-term survivors is increasing in the western countries due to remarkable improvements in the treatment of childhood cancer. The long-term complications of childhood cancer survivors in these countries were brought to light by the childhood cancer survivor studies. In Korea, the 5-year survival rate of childhood cancer patients is approaching 70%; therefore, it is extremely important to undertake similar long-term follow-up studies and comprehensive long-term care for our population. On the basis of the experiences of childhood cancer survivorship care of the western countries and the current Korean status of childhood cancer survivors, long-term follow-up study and long-term care systems need to be established in Korea in the near future. This system might contribute to the improvement of the quality of life of childhood cancer survivors through effective intervention strategies.

  20. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.

  1. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)
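
    The re-weighting idea mentioned in the abstract can be shown with a toy example (this is not the analysis code of the thesis; the two Gaussian distributions, the bin edges and the sample sizes are arbitrary): events drawn from a "source" distribution receive per-event weights equal to the target-to-source density ratio of their bin, so that the weighted sample reproduces the target distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Source" Monte Carlo sample and the "target" distribution we want to match
source = rng.normal(loc=80.4, scale=2.5, size=100_000)   # arbitrary shapes
target = rng.normal(loc=91.2, scale=2.6, size=100_000)

bins = np.linspace(70, 100, 61)
h_src, _ = np.histogram(source, bins=bins, density=True)
h_tgt, _ = np.histogram(target, bins=bins, density=True)

# Per-event weight = ratio of target to source density in the event's bin
ratio = np.divide(h_tgt, h_src, out=np.zeros_like(h_tgt), where=h_src > 0)
idx = np.clip(np.digitize(source, bins) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

# The weighted source histogram now approximates the target histogram
h_rw, _ = np.histogram(source, bins=bins, weights=weights, density=True)
print("max residual:", np.abs(h_rw - h_tgt).max())
```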

  2. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  3. Erasmus Computing Grid: Building a 20 Tera-FLOPS Virtual Supercomputer.

    NARCIS (Netherlands)

    L.V. de Zeeuw (Luc); T.A. Knoch (Tobias); J.H. van den Berg (Jan); F.G. Grosveld (Frank)

    2007-01-01

    The Erasmus Medical Center and the Hogeschool Rotterdam started a collaboration in 2005 in order to make the roughly 95% of unused computing capacity of their computers available for research and education. This collaboration has led to the Erasmus Computing GRID (ECG),

  4. The GLOBE-Consortium: The Erasmus Computing Grid and The Next Generation Genome Viewer

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  5. Qualities of Grid Computing that can last for Ages | Asagba | Journal ...

    African Journals Online (AJOL)

    Grid computing has emerged as an important new field, distinguished from conventional distributed computing by its ability to support large-scale resource sharing and services. It is likely to become even more popular because of the benefits it can offer over traditional supercomputers and other forms of distributed ...

  6. Secure grid-based computing with social-network based trust management in the semantic web

    Czech Academy of Sciences Publication Activity Database

    Špánek, Roman; Tůma, Miroslav

    2006-01-01

    Roč. 16, č. 6 (2006), s. 475-488 ISSN 1210-0552 R&D Projects: GA AV ČR 1ET100300419; GA MŠk 1M0554 Institutional research plan: CEZ:AV0Z10300504 Keywords : semantic web * grid computing * trust management * reconfigurable networks * security * hypergraph model * hypergraph algorithms Subject RIV: IN - Informatics, Computer Science

  7. Asia Federation Report on International Symposium on Grid Computing (ISGC) 2010

    Science.gov (United States)

    Grey, Francois; Lin, Simon C.

    This report provides an overview of developments in the Asia-Pacific region, based on presentations made at the International Symposium on Grid Computing 2010 (ISGC 2010), held 5-12 March at Academia Sinica, Taipei. The document includes a brief overview of the EUAsiaGrid project as well as progress reports by representatives of 13 Asian countries presented at ISGC 2010. In alphabetical order, these are: Australia, China, India, Indonesia, Japan, Malaysia, Pakistan, Philippines, Singapore, South Korea, Taiwan, Thailand and Vietnam.

  8. Asia Federation Report on International Symposium on Grid Computing 2009 (ISGC 2009)

    Science.gov (United States)

    Grey, Francois

    This report provides an overview of developments in the Asia-Pacific region, based on presentations made at the International Symposium on Grid Computing 2009 (ISGC 09), held 21-23 April. This document contains 14 sections, including a progress report on general Asia-EU Grid activities as well as progress reports by representatives of 13 Asian countries presented at ISGC 09. In alphabetical order, these are: Australia, China, India, Indonesia, Japan, Malaysia, Pakistan, Philippines, Singapore, South Korea, Taiwan, Thailand and Vietnam.

  9. Long-Term Prognosis of Plantar Fasciitis

    DEFF Research Database (Denmark)

    Hansen, Liselotte; Krogh, Thøger Persson; Ellingsen, Torkell

    2018-01-01

    , exercise-induced symptoms, bilateral heel pain, fascia thickness, and presence of a heel spur) could predict long-term outcomes, (3) to assess the long-term ultrasound (US) development in the fascia, and (4) to assess whether US-guided corticosteroid injections induce atrophy of the heel fat pad. Study....... The risk was significantly greater for women (P heel...... regardless of symptoms and had no impact on prognosis, and neither did the presence of a heel spur. Only 24% of asymptomatic patients had a normal fascia on US at long-term follow-up. A US-guided corticosteroid injection did not cause atrophy of the heel fat pad. Our observational study did not allow us...

  10. Long-term dependence in exchange rates

    Directory of Open Access Journals (Sweden)

    A. Karytinos

    2000-01-01

    Full Text Available The extent to which exchange rates of four major currencies against the Greek Drachma exhibit long-term dependence is investigated using an R/S analysis testing framework. We show that both classic R/S analysis and the modified R/S statistic, if enhanced by bootstrapping techniques, can prove to be very reliable tools to this end. Our findings support persistence and long-term dependence with non-periodic cycles for the Deutsche Mark and the French Franc series. In addition, a noisy-chaos explanation is favored over fractional Brownian motion. By contrast, the US Dollar and the British Pound were found to exhibit much more random behavior and a lack of any long-term structure.
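
    A bare-bones sketch of the classic rescaled-range idea referred to above (illustrative only; the paper's framework, including the modified R/S statistic and the bootstrap enhancement, is more involved, and the synthetic i.i.d. series below stands in for actual exchange-rate returns): the Hurst exponent is estimated from the slope of log(R/S) against log(window size).

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of a 1-D series: range of the cumulative
    mean-adjusted sum divided by the standard deviation."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std(ddof=0)
    return r / s if s > 0 else np.nan

rng = np.random.default_rng(42)
returns = rng.normal(size=4096)        # synthetic i.i.d. "returns" (assumption)

sizes = [16, 32, 64, 128, 256, 512]
rs_means = []
for n in sizes:
    chunks = returns[: len(returns) // n * n].reshape(-1, n)
    rs_means.append(np.mean([rescaled_range(c) for c in chunks]))

# Hurst exponent ~ slope of log(R/S) vs log(n); about 0.5 for an i.i.d. series
hurst = np.polyfit(np.log(sizes), np.log(rs_means), 1)[0]
print(f"estimated Hurst exponent: {hurst:.2f}")
```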

  11. The Electrification of Energy: Long-Term Trends and Opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Tsao, Jeffrey Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fouquet, Roger [London School of Economics and Political Science (United Kingdom); Schubert, E. Fred [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-11-01

    Here, we present and analyze three powerful long-term historical trends in energy, particularly electrical energy, as well as the opportunities and challenges associated with these trends. The first trend is from a world containing a diversity of energy currencies to one whose predominant currency is electricity, driven by electricity’s transportability, exchangeability, and steadily decreasing cost. The second trend is from electricity generated from a diversity of sources to electricity generated predominantly by free-fuel sources, driven by their steadily decreasing cost and long-term abundance. These trends necessitate a just-emerging third trend: from a grid in which electricity is transported uni-directionally, traded at near-static prices, and consumed under direct human control; to a grid in which electricity is transported bi-directionally, traded at dynamic prices, and consumed under human-tailored agential control. Early acceptance and appreciation of these trends will accelerate their remaking of humanity’s energy landscape into one in which energy is much more affordable, abundant and efficiently deployed than it is today; with major economic, geo-political, and environmental benefits to human society.

  12. Long term planning for wind energy development

    International Nuclear Information System (INIS)

    Trinick, M.

    1995-01-01

    In a planning system intended to be governed primarily by policies in statutory plans a reasonable horizon for long term planning is 10 years or longer. Because of statutory requirements, developers have no option but to pay due regard to, and take a full part in, long term planning. The paper examines the type of policies which have emerged in the last few years to cater for wind energy development. It canvasses the merits of different types of policies. Finally, it discusses the policy framework which may emerge to cater for development outside NFFO. (Author)

  13. Long-term characteristics of nuclear emulsion

    International Nuclear Information System (INIS)

    Naganawa, N; Kuwabara, K

    2010-01-01

    Long-term characteristics of the nuclear emulsion, the so-called 'OPERA film' used in the neutrino oscillation experiment OPERA, have been studied for 8 years since its production or subsequent refreshing. The results show that it is excellent in sensitivity, amount of random noise, and refreshing characteristics. The retention capacity of the latent image of tracks was also studied. The result will open the way to the recycling of the 7,000,000 emulsion films which will remain undeveloped after 5 years of OPERA's run, and to other long-term experiments with emulsion.

  14. Long-term characteristics of nuclear emulsion

    Science.gov (United States)

    Naganawa, N.; Kuwabara, K.

    2010-02-01

    Long-term characteristics of the nuclear emulsion, the so-called "OPERA film" used in the neutrino oscillation experiment OPERA, have been studied for 8 years since its production or subsequent refreshing. The results show that it is excellent in sensitivity, amount of random noise, and refreshing characteristics. The retention capacity of the latent image of tracks was also studied. The result will open the way to the recycling of the 7,000,000 emulsion films which will remain undeveloped after 5 years of OPERA's run, and to other long-term experiments with emulsion.

  15. Long-term home care scheduling

    DEFF Research Database (Denmark)

    Gamst, Mette; Jensen, Thomas Sejr

    In several countries, home care is provided for certain citizens living at home. The long-term home care scheduling problem is to generate work plans spanning several days such that a high quality of service is maintained and the overall cost is kept as low as possible. A solution to the problem provides detailed information on visits and visit times for each employee on each of the covered days. We propose a branch-and-price algorithm for the long-term home care scheduling problem. The pricing problem generates one-day plans for an employee, and the master problem merges the plans with respect...

  16. Long term storage techniques for 85Kr

    International Nuclear Information System (INIS)

    Foster, B.A.; Pence, D.T.; Staples, B.A.

    1975-01-01

    As new nuclear fuel reprocessing plants go on stream, the collection of fission product 85 Kr will be required to avoid potential local release problems and long-term atmospheric buildup. Storage of the collected 85 Kr for a period of at least 100 years will be necessary to allow approximately 99.9 percent decay before it is released. A program designed to develop and evaluate proposed methods for long-term storage of 85 Kr is discussed, and the results of a preliminary evaluation of three methods, high pressure steel cylinders, zeolite encapsulation, and clathrate inclusion are presented. (U.S.)

  17. Backfilling the Grid with Containerized BOINC in the ATLAS computing

    CERN Document Server

    Wu, Wenjing; The ATLAS collaboration

    2018-01-01

    Virtualization is a commonly used solution for utilizing opportunistic computing resources in the HEP field, as it provides the unified software and OS layer that HEP computing tasks require over heterogeneous opportunistic computing resources. However, there is always a performance penalty with virtualization; especially for short jobs, which are the norm for volunteer computing tasks, the overhead of virtualization becomes a big portion of the wall time and leads to low CPU efficiency of the jobs. With the wide usage of containers in HEP computing, we explore the possibility of adopting the container technology into the ATLAS BOINC project. We therefore implemented a Native version in BOINC, which uses the Singularity container or direct usage of the target OS to replace VirtualBox. In this paper, we will discuss 1) the implementation and workflow of the Native version in ATLAS BOINC; 2) the performance measurement of the Native version compared to the previous virtualization version; 3)...

  18. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called dirac. To evaluate the dirac scheduling for high-throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare it with a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system called dirac in a high-throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
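
    For reference, the mean response time of the M/M/4 queue used for validation can be computed analytically with the standard Erlang-C formulas; the sketch below does exactly that, with arbitrary placeholder arrival and service rates rather than values from the paper.

```python
from math import factorial

def mmc_mean_response_time(lam, mu, c):
    """Mean response time (waiting + service) of an M/M/c queue."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    a = lam / mu
    # Probability of an empty system
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # P(wait > 0)
    wq = erlang_c / (c * mu - lam)                        # mean waiting time
    return wq + 1.0 / mu

# Example: 4 servers, arbitrary arrival/service rates (assumptions)
print(mmc_mean_response_time(lam=3.0, mu=1.0, c=4))
```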

  19. Consolidation of long-term memory: Evidence and alternatives.

    OpenAIRE

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of amnesia, but they have not elucidated how consolidation must be envisaged. Here findings are reviewed that shed light on how consolidation may be implemented...

  20. Long-term predictive capability of erosion models

    Science.gov (United States)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid impingement erosion and of modeling methods proposed by different investigators, including the curve-fit approach. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.

  1. Long-term Stable Conservative Multiscale Methods for Vortex Flows

    Science.gov (United States)

    2017-10-31

    Final Report (reporting period 1 August 2014 to 31 July 2017; report date 31 October 2017): Long-term Stable Conservative Multiscale Methods for Vortex Flows. The material has been given an OPSEC review and determined to be non-sensitive; distribution is unlimited. Dissemination noted in the report documentation includes the Computing Department, Florida State (January 2016), and L. Rebholz, SIAM Southeast 2016, special session on recent advances in fluid flow.

  2. How to build a high-performance compute cluster for the Grid

    CERN Document Server

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Currently, much effort is spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  3. Experiences of long-term tranquillizer use

    DEFF Research Database (Denmark)

    Skinhoj, K T; Larsson, S; Helweg-Joergensen, S

    2001-01-01

    , the psychodynamic perspective is integrated within a multi-dimensional model that considers biological, cognitive, identity, gender and social learning factors. The analysis reveals the possibility of achieving a detailed understanding of the dynamic processes involved in the development of long-term tranquillizer...

  4. Long-Term Orientation in Trade

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, D.

    2008-01-01

    Trust does not work in the same way across cultures. This paper presents an agent model of behavior in trade across Hofstede's cultural dimension of long-term vs. short-term orientation. The situation is based on a gaming simulation, the Trust and Tracing game. The paper investigates the

  5. Safety of long-term PPI therapy

    DEFF Research Database (Denmark)

    Reimer, Christina

    2013-01-01

    Proton pump inhibitors have become the mainstay of medical treatment of acid-related disorders. Long-term use is becoming increasingly common, in some cases without a proper indication. A large number of mainly observational studies on a very wide range of possible associations have been published ... to a careful evaluation of the indication for PPI treatment.

  6. Long term consequences of early childhood malnutrition

    NARCIS (Netherlands)

    Kinsey, B.H.; Hoddinott, J; Alderman, H.

    2006-01-01

    This paper examines the impact of pre-school malnutrition on subsequent human capital formation in rural Zimbabwe using a maternal fixed effects - instrumental variables (MFE-IV) estimator with a long term panel data set. Representations of civil war and drought shocks are used to identify

  7. Financial Incentives in Long-Term Care

    NARCIS (Netherlands)

    P.L.H. Bakx (Pieter)

    2015-01-01

    Long-term care (ltc) aims to help individuals to cope with their impairments. In my thesis, I describe ltc financing alternatives and their consequences for the allocation of ltc. This thesis consists of two parts. In the first part, I investigate how alternative ways

  8. Long-term outcomes of patellofemoral arthroplasty.

    NARCIS (Netherlands)

    Jonbergen, J.P.W. van; Werkman, D.M.; Barnaart, L.F.; Kampen, A. van

    2010-01-01

    The purpose of this study was to correlate the long-term survival of patellofemoral arthroplasty with primary diagnosis, age, sex, and body mass index. One hundred eighty-five consecutive Richards type II patellofemoral arthroplasties were performed in 161 patients with isolated patellofemoral

  9. Long-Term Memory and Learning

    Science.gov (United States)

    Crossland, John

    2011-01-01

    The English National Curriculum Programmes of Study emphasise the importance of knowledge, understanding and skills, and teachers are well versed in structuring learning in those terms. Research outcomes into how long-term memory is stored and retrieved provide support for structuring learning in this way. Four further messages are added to the…

  10. The 2013 Long-Term Budget Outlook

    Science.gov (United States)

    2013-09-01

    number of years, leading to substantial additional federal spending. For example, the nation could experience a massive earthquake or a nuclear meltdown ... budget surpluses remaining after paying down publicly held debt available for redemption ... For comparison with the current long-term projections, CBO

  11. Long-term effects of ionizing radiation

    International Nuclear Information System (INIS)

    Kaul, Alexander; Burkart, Werner; Grosche, Bernd; Jung, Thomas; Martignoni, Klaus; Stephan, Guenther

    1997-01-01

    This paper approaches the long-term effects of ionizing radiation considering the common thought that killing of cells is the basis for deterministic effects and that the subtle changes in genetic information are important in the development of radiation-induced cancer, or genetic effects if these changes are induced in germ cells

  12. Pituitary diseases : long-term psychological consequences

    NARCIS (Netherlands)

    Tiemensma, Jitske

    2012-01-01

    Nowadays, pituitary adenomas can be appropriately treated, but patients continue to report impaired quality of life (QoL) despite long-term remission or cure. In patients with Cushing’s disease, Cushing’s syndrome or acromegaly, doctors should be aware of subtle cognitive impairments and the

  13. The long term stability of lidar calibrations

    DEFF Research Database (Denmark)

    Courtney, Michael; Gayle Nygaard, Nicolai

    Wind lidars are now used extensively for wind resource measurements. One of the requirements for the data to be accepted in support of project financing (so-called 'bankability') is to demonstrate the long-term stability of lidar calibrations. Calibration results for six Leosphere WindCube li...

  14. Rebalancing for Long-Term Investors

    NARCIS (Netherlands)

    Driessen, Joost; Kuiper, Ivo

    2017-01-01

    In this study we show that the rebalance frequency of a multi-asset portfolio has only limited impact on the utility of a long-term passive investor. Although continuous rebalancing is optimal, the loss of a suboptimal strategy corresponds to up to only 30 basis points of the initial wealth of the

  15. Status of the Grid Computing for the ALICE Experiment in the Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Hampl, J; Chudoba, J; Kouba, T; Svec, J; Mendez, Lorenzo P; Saiz, P

    2010-01-01

    The Czech Republic (CR) has been participating in the LHC Computing Grid project (LCG) since 2003, and a middle-sized Tier-2 center has gradually been built in Prague, delivering computing services for national HEP experiment groups, including the ALICE project at the LHC. We present a brief overview of the computing activities and services being performed in the CR for the ALICE experiment.

  16. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    Science.gov (United States)

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources to expedite analysis and reporting. Cloud-based computing environments can be set up with a fraction of the time and effort required by traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.
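
    A rough illustration of the Grid Engine piece of such a setup (not taken from the tutorial itself; the job name, resource requests and the PsN/NONMEM command line are placeholders): generate a submit script and hand it to qsub.

```python
import subprocess

# Hypothetical Grid Engine job script: the resource requests and the PsN
# command line below are placeholders, not values taken from the tutorial.
job_script = """#!/bin/bash
#$ -N nonmem_run1
#$ -cwd
#$ -pe smp 4
#$ -o run1.out
#$ -e run1.err

execute run1.mod   # PsN wrapper around NONMEM (placeholder model file)
"""

with open("run1.sh", "w") as fh:
    fh.write(job_script)

# Submit to Grid Engine; assumes qsub is available on a cluster node.
subprocess.run(["qsub", "run1.sh"], check=True)
```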

  17. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the customer level. The large number of users and the high frequency of job requests in the consumer market make this challenging. Clearly, the current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture is proposed to realize a highly scalable and flexible platform for Grids. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  18. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    Science.gov (United States)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/ approaches/applications, and software systems.

  19. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    Science.gov (United States)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses and recent lessons learned need to be integrated into the next generation of Grid middleware.

  20. Densidade de um planossolo sob sistemas de cultivo avaliada por meio da tomografia computadorizada de raios gama (Bulk density of an alfisol under cultivation systems in a long-term experiment evaluated with gamma ray computed tomography)

    Directory of Open Access Journals (Sweden)

    Adilson Luís Bamberg

    2009-10-01

    lowland soils is based on the use of crop rotation and succession, which are essential for the control of red and black rice. The effects on the soil properties deserve studies, particularly on soil compaction. The objective of this study was to identify compacted layers in an Albaqualf under different cultivation and tillage systems, by evaluating the soil bulk density (Ds) with Gamma Ray Computed Tomography (TC). The analysis was carried out in a long-term experiment, from 1985 to 2004, at an experimental station of Embrapa Clima Temperado, Capão do Leão, RS, Brazil, in a random block design with seven treatments and four replications (T1 - one year of rice with conventional tillage followed by two years of fallow; T2 - continuous rice under conventional tillage; T4 - rice and soybean (Glycine max L.) rotation under conventional tillage; T5 - rice, soybean and corn (Zea mays L.) rotation under conventional tillage; T6 - rice under no-tillage in the summer in succession to rye-grass (Lolium multiflorum L.) in the winter; T7 - rice under no-tillage and soybean under conventional tillage rotation; T8 - control: uncultivated soil). The Gamma Ray Computed Tomography method did not identify compacted soil layers under no-tillage rice in succession to rye-grass; two fallow years in the irrigated rice production system did not prevent the formation of a compacted layer at the soil surface; and in the rice, soybean and corn rotation under conventional tillage two compacted layers were identified (0.0 to 1.5 cm and 11 to 14 cm), indicating that they may restrict the agricultural production in this cultivation system on Albaqualf soils.

  1. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  2. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud is assigned precisely to the coordinates of one layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
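
    A highly simplified sketch of the layer-oriented calculation described above (not the authors' implementation; the wavelength, pixel pitch, number of depth layers and the plain angular-spectrum transfer function are all assumptions): points are binned into depth layers, each layer is propagated to the hologram plane with an FFT, and the contributions are summed.

```python
import numpy as np

# Assumed optical parameters (placeholders)
wavelength = 532e-9                          # metres
pitch = 8e-6                                 # hologram pixel pitch, metres
N = 512                                      # hologram is N x N pixels
layer_depths = np.linspace(0.10, 0.20, 8)    # 8 depth layers, metres

rng = np.random.default_rng(0)
# Synthetic point cloud: (ix, iy, depth, amplitude)
points = [(rng.integers(0, N), rng.integers(0, N),
           rng.uniform(0.10, 0.20), 1.0) for _ in range(200)]

# Angular-spectrum transfer function for propagation over a distance z
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))

half_step = (layer_depths[1] - layer_depths[0]) / 2
hologram = np.zeros((N, N), dtype=complex)
for z in layer_depths:
    layer = np.zeros((N, N), dtype=complex)
    for ix, iy, pz, amp in points:
        # grid each point onto its nearest depth layer
        if abs(pz - z) <= half_step:
            layer[ix, iy] += amp
    # propagate the whole layer at once with an FFT
    hologram += np.fft.ifft2(np.fft.fft2(layer) * np.exp(1j * kz * z))

phase_cgh = np.angle(hologram)   # phase-only CGH of the gridded point cloud
print(phase_cgh.shape, phase_cgh.dtype)
```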

  3. Porting of Bio-Informatics Tools for Plant Virology on a Computational Grid

    International Nuclear Information System (INIS)

    Lanzalone, G.; Lombardo, A.; Muoio, A.; Iacono-Manno, M.

    2007-01-01

    The goal of the Tri Grid Project and PI2S2 is the creation of the first Sicilian regional computational Grid. In particular, they aim to build various software-hardware interfaces between the infrastructure and some scientific and industrial applications. In this context, we have integrated some of the most innovative computing applications in virology research into this Grid infrastructure. In particular, we have implemented, in a complete workflow, various tools for pairwise or multiple sequence alignment and phylogenetic tree construction (ClustalW-MPI), phylogenetic networks (Splits Tree), detection of recombination by phylogenetic methods (TOPALi) and prediction of DNA or RNA secondary consensus structures (KnetFold). This work shows how the ported applications decrease the execution time of the analysis programs, improve the accessibility of the data storage system and allow the use of metadata for data processing. (Author)

  4. Long-term selenium status in humans

    International Nuclear Information System (INIS)

    Baskett, C.K.; Spate, V.L.; Mason, M.M.; Nichols, T.A.; Williams, A.; Dubman, I.M.; Gudino, A.; Denison, J.; Morris, J.S.

    2001-01-01

    The association of sub-optimal selenium status with increased risk factors for some cancers has been reported in two recent epidemiological studies. In both studies the same threshold in selenium status was observed, below which cancer incidence increased. To assess the use of nails as a biologic monitor to measure long-term selenium status, an eight-year longitudinal study was undertaken with a group of 11 adult subjects, 5 women and 6 men. Selenium was measured by instrumental neutron activation analysis. Differences between fingernails and toenails will be discussed. In addition, the results will be discussed in the context of the long-term stability of the nail monitor to measure selenium status during those periods when selenium determinants are static, and the changes that occur as a result of selenium supplementation. (author)

  5. Influenza in long-term care facilities.

    Science.gov (United States)

    Lansbury, Louise E; Brown, Caroline S; Nguyen-Van-Tam, Jonathan S

    2017-09-01

    Long-term care facility environments and the vulnerability of their residents provide a setting conducive to the rapid spread of influenza virus and other respiratory pathogens. Infections may be introduced by staff, visitors or new or transferred residents, and outbreaks of influenza in such settings can have devastating consequences for individuals, as well as placing extra strain on health services. As the population ages over the coming decades, increased provision of such facilities seems likely. The need for robust infection prevention and control practices will therefore remain of paramount importance if the impact of outbreaks is to be minimised. In this review, we discuss the nature of the problem of influenza in long-term care facilities, and approaches to preventive and control measures, including vaccination of residents and staff, and the use of antiviral drugs for treatment and prophylaxis, based on currently available evidence. © 2017 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  6. Long-Term Ownership by Industrial Foundations

    DEFF Research Database (Denmark)

    Børsting, Christa Winther; Kuhn, Johan Moritz; Poulsen, Thomas

    2016-01-01

    in Denmark. Industrial foundations are independent legal entities without owners or members, typically with the dual objective of preserving the company and using excess profits for charity. We use a unique Danish data set to examine the governance of foundation-owned companies. We show that they are long-term in several respects. Foundations hold on to their shares for longer. Foundation-owned companies replace managers less frequently. They have more conservative capital structures with less leverage. Their companies survive longer. Their business decisions appear to be more long term. This paper supports the hypothesis that time horizons are influenced by ownership structures and particularly that industrial foundations promote long-termism. Policymakers who are interested in promoting long-termism should allow and perhaps even encourage the creation of industrial foundations. More generally, they should consider...

  7. Analysis of long-term energy scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lemming, J.; Morthorst, P.E.

    1998-09-01

    When addressing the role of fusion energy in the 21st century, the evaluation of possible future structures in the electricity market and the energy sector as a whole can be a useful tool. Because fusion energy still needs demonstration, commercialized fusion energy is not likely to be a reality within the next few decades. Therefore, long-term scenarios are needed that describe the energy markets of which fusion energy will eventually be part. This report analyses two of the most detailed existing long-term scenarios describing possible futures of the energy system. The aim is to clarify the frames within which the global energy demand, as well as the structure of the energy system, can be expected to develop towards the year 2100. (au) 19 refs.

  8. Long-term effects of radiation

    International Nuclear Information System (INIS)

    Smith, J.; Smith, T.

    1981-01-01

    It is pointed out that sources of long-term damage from radiation are two-fold. People who have been exposed to doses of radiation from initial early fallout but have recovered from the acute effects may still suffer long-term damage from their exposure. Those who have not been exposed to early fallout may be exposed to delayed fallout, the hazards from which are almost exclusively from ingesting strontium, caesium and carbon isotopes present in food; the damage caused is relatively unimportant compared with that caused by the brief doses from initial radiation and early fallout. A brief discussion is presented of the distribution of delayed long-lived isotope fallout, and an outline is sketched of late biological effects, such as malignant disease, cataracts, retarded development, infertility and genetic effects. (U.K.)

  9. Proceedings of the second workshop of LHC Computing Grid, LCG-France

    International Nuclear Information System (INIS)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin

    2007-03-01

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchange of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event helped clarify the place of the LHC computing task within the framework of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to the users, while the tasks for tightening the links between the sites and the experiments were definitely achieved. The IN2P3 leadership expressed

  10. Long term economic relationships from cointegration maps

    Science.gov (United States)

    Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor

    2007-07-01

    We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.
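    As background to the abstract above, the sketch below illustrates the general idea of turning pairwise cointegration tests into a dissimilarity matrix between time series. It is only a rough stand-in: the record describes a Bayesian cointegration measure and SPIN-based sorting, neither of which is reproduced here; an Engle-Granger test p-value from statsmodels and synthetic data are used instead, and all names are illustrative.

    ```python
    # Illustrative sketch only: a dissimilarity matrix built from pairwise
    # cointegration tests. The Bayesian measure and SPIN sorting described in
    # the abstract are NOT implemented; an Engle-Granger p-value stands in.
    import numpy as np
    from statsmodels.tsa.stattools import coint

    def cointegration_dissimilarity(series):
        """series: 2-D array with one time series per column."""
        n = series.shape[1]
        d = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                _, pvalue, _ = coint(series[:, i], series[:, j])
                # Small p-value -> strong long-term relationship -> small distance.
                d[i, j] = d[j, i] = pvalue
        return d

    # Synthetic example: two random walks sharing a common trend plus one
    # independent walk; the first two should appear "close" in the matrix.
    rng = np.random.default_rng(0)
    common = np.cumsum(rng.normal(size=500))
    data = np.column_stack([common + rng.normal(size=500),
                            common + rng.normal(size=500),
                            np.cumsum(rng.normal(size=500))])
    print(cointegration_dissimilarity(data).round(3))
    ```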

  11. Murine model of long term obstructive jaundice

    Science.gov (United States)

    Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A.; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki

    2016-01-01

    Background With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is needed. Here, we investigated the characteristics of 3 murine models of obstructive jaundice. Methods C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile ducts with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and collagen expression. Results 70% (7/10) of tCL mice died by Day 7, whereas the majority, 67% (10/15), of pCL mice survived with loss of jaundice. 19% (3/16) of LMHL mice died; however, jaundice continued beyond Day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both the pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 days after ligation, but jaundice rapidly decreased by Day 7. The LMHL group developed portal hypertension as well as severe fibrosis by Day 14 in addition to prolonged jaundice. Conclusion The standard tCL model is too unstable, with high mortality, for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified as the most feasible model to study the effect of long-term obstructive jaundice. PMID:27916350

  12. Long-term course of opioid addiction.

    Science.gov (United States)

    Hser, Yih-Ing; Evans, Elizabeth; Grella, Christine; Ling, Walter; Anglin, Douglas

    2015-01-01

    Opioid addiction is associated with excess mortality, morbidities, and other adverse conditions. Guided by a life-course framework, we review the literature on the long-term course of opioid addiction in terms of use trajectories, transitions, and turning points, as well as other factors that facilitate recovery from addiction. Most long-term follow-up studies are based on heroin addicts recruited from treatment settings (mostly methadone maintenance treatment), many of whom are referred by the criminal justice system. Cumulative evidence indicates that opioid addiction is a chronic disorder with frequent relapses. Longer treatment retention is associated with a greater likelihood of abstinence, whereas incarceration is negatively related to subsequent abstinence. Over the long term, the mortality rate of opioid addicts (overdose being the most common cause) is about 6 to 20 times greater than that of the general population; among those who remain alive, the prevalence of stable abstinence from opioid use is low (less than 30% after 10-30 years of observation), and many continue to use alcohol and other drugs after ceasing to use opioids. Histories of sexual or physical abuse and comorbid mental disorders are associated with the persistence of opioid use, whereas family and social support, as well as employment, facilitates recovery. Maintaining opioid abstinence for at least five years substantially increases the likelihood of future stable abstinence. Recent advances in pharmacological treatment options (buprenorphine and naltrexone) include depot formulations offering longer duration of medication; their impact on the long-term course of opioid addiction remains to be assessed.

  13. Long-term economic outlook. Annual review

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    This review provides economic growth forecast tables for Ontario, Canada, the US, Western Europe, and Japan. Economic growth, government policy, the long-term prospects for inflation, interest rates and foreign exchange rates, trends in the Canadian dollar, and energy markets and prices are also reviewed. Data generally cover 1965-2025. Appendices give a summary of historical and forecast data. 18 figs., 16 tabs.

  14. Long-term data storage in diamond

    OpenAIRE

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV-) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV- optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multic...

  15. French Approach for Long Term Storage Safety

    International Nuclear Information System (INIS)

    Marciano, Jacob; Carreton, Jean-Pierre; Lizot, Marie Therese; Lhomme, Veronique

    2014-01-01

    IRSN presents its statement regarding long-term storage facilities; in France, the regulatory documents do not define the long-term duration. The storage facility lifetime can only be appreciated according to the needs and the materials stored therein. However, the magnitude of the long term can be estimated at a few hundred years, compared to a few decades for current storage. Usually, in France, construction of storage facilities is driven by various necessities linked to the management of radioactive material (e.g. spent fuel) and to the management of radioactive waste. Because of the variety of 'stored materials and objects' (fission product solutions, plutonium oxide powders, activated solids, drums containing technological waste, spent fuel...), a great number of storage facility design solutions have been developed worldwide (surface, subsurface areas, dry or wet conditions...). After describing the main functions of a storage facility, IRSN sets out the safety principles and the associated design principles. The specific design principles applied to particular storage (dry or wet spent fuel storage, depleted uranium or reprocessed uranium storage, plutonium storage, storage of waste containing tritium, HLW and ILLW storage...) are also presented. Finally, the concerns raised by long-duration storage and the related safety assessment are developed. After discussing these issues, IRSN presents its statement. The authorization procedures governing the facility lifetime are similar to those of any basic nuclear installation, the continuation of the facility operation remaining subject to periodic safety reviews (in France, every 10 years). The applicant's safety cases have to show that the safety requirements are always met; this requires, at minimum, taking comfortable design margins into account at the design stage. (author)

  16. Accounting of Long-Term Biological Assets

    OpenAIRE

    Valeriy Mossakovskyy; Vasyl Korytnyy

    2015-01-01

    The article is devoted to the generalization of experience in the valuation of long-term biological assets in plant-growing and animal-breeding, and to the preparation of suggestions concerning improvement of accounting in this field. Recommendations concerning the accounting of such assets are given based on the study of accounting practice at a specific agricultural company over a long period of time. The authors believe that fair value is applicable only if price level for agricultural products is fixed by the gov...

  17. Optimal long-term contracting with learning

    OpenAIRE

    He, Zhiguo; Wei, Bin; Yu, Jianfeng; Gao, Feng

    2016-01-01

    We introduce uncertainty into Holmstrom and Milgrom (1987) to study optimal long-term contracting with learning. In a dynamic relationship, the agent's shirking not only reduces current performance but also increases the agent's information rent due to the persistent belief manipulation effect. We characterize the optimal contract using the dynamic programming technique in which information rent is the unique state variable. In the optimal contract, the optimal effort is front-loaded and decr...

  18. Timber joints under long-term loading

    DEFF Research Database (Denmark)

    Feldborg, T.; Johansen, M.

    This report describes tests and results from stiffness and strength testing of splice joints under long-term loading. During the two years of loading, the specimens were exposed to cyclically changing relative humidity. After the loading period the specimens were short-term tested. The connectors were...... integral nail-plates and nailed steel and plywood gussets. The report is intended for designers and researchers in timber engineering....

  19. Inflation Hedging for Long-Term Investors

    OpenAIRE

    Shaun K. Roache; Alexander P. Attie

    2009-01-01

    Long-term investors face a common problem-how to maintain the purchasing power of their assets over time and achieve a level of real returns consistent with their investment objectives. While inflation-linked bonds and derivatives have been developed to hedge the effects of inflation, their limited supply and liquidity lead many investors to continue to rely on the indirect hedging properties of traditional asset classes. In this paper, we assess these properties over different time horizons,...

  20. Long term evolution 4G and beyond

    CERN Document Server

    Yacoub, Michel; Figueiredo, Fabrício; Tronco, Tania

    2016-01-01

    This book focuses on Long Term Evolution (LTE) and beyond. The chapters describe different aspects of research and development in LTE, LTE-Advanced (4G systems) and LTE-450 MHz, such as the telecommunications regulatory framework, voice over LTE, link adaptation, power control, interference mitigation mechanisms, performance evaluation for different types of antennas, cognitive mesh networks, integration of LTE and satellite networks, test environments, power amplifiers and so on. It is useful for researchers in the field of mobile communications.

  1. Long-Term Care Services for Veterans

    Science.gov (United States)

    2017-02-14

    Includes but is not limited to home physical, occupational, or speech therapy; wound care; and intravenous (IV) care. A VA physician determines that a... restoring/rehabilitating the veteran's health, such as skilled nursing care, physical therapy, occupational therapy, and IV therapy. Same as HBPC... geriatric evaluation, palliative care, adult day health care, homemaker/home health aide care, respite care.

  2. Long term adequacy of uranium resources

    International Nuclear Information System (INIS)

    Steyn, J.

    1990-01-01

    This paper examines the adequacy of world economic uranium resources to meet requirements in the very long term, that is until at least 2025 and beyond. It does so by analysing current requirements forecasts, existing and potential production centre supply capability schedules and national resource estimates. It takes into account lead times from resource discovery to production and production rate limitations. The institutional and political issues surrounding the question of adequacy are reviewed. (author)

  3. Survey of Energy Computing in the Smart Grid Domain

    OpenAIRE

    Rajesh Kumar; Arun Agarwala

    2013-01-01

    Resource optimization with advanced computing tools improves the efficient use of energy resources. Renewable energy resources are instantaneous and need to be conserved at the same time. Optimizing real-time processes requires a complex design that includes resource planning and control for effective utilization. Advances in information and communication technology tools enable data formatting and analysis, resulting in optimized use of renewable resources for sustainable energy solution on s...

  4. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  5. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  6. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu materials easily. An experimental approach to revealing the properties of Pu materials is often hampered by difficulties such as the radiotoxicity of actinides. Since a computational approach reveals new aspects to researchers without such radioactive facilities, we address an MD computation. In order to obtain more realistic results about, e.g., the melting point or thermal conductivity, we need large-scale parallel computations. Most application users who do not have supercomputers at their institutes must use a remote supercomputer. For such users, we have developed a portable and secure grid-enabled computing system to utilize the grid computing infrastructure provided by the Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). In particular, it enables seamless file access from the GUI. Furthermore, monitoring of standard output and standard error is available to follow the progress of an executed program. Since the system provides functionalities which are useful for parallel computing on a remote supercomputer, application users can concentrate on their research. (author)
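    The ITBL client described above is not reproduced in this record, but the kind of job-output monitoring it mentions can be illustrated generically. The snippet below is only a stand-in under the assumption that the remote machine is reachable over plain SSH; the host name and file path are hypothetical placeholders, not part of the ITBL system.

    ```python
    # Generic stand-in for the "monitor standard output of an executed program"
    # functionality mentioned in the abstract; it simply polls the tail of a
    # remote log file over SSH. Host and path below are hypothetical.
    import subprocess
    import time

    def tail_remote_output(host, remote_path, lines=20, interval=60, polls=5):
        """Periodically print the last lines of a remote file via ssh."""
        for _ in range(polls):
            result = subprocess.run(
                ["ssh", host, "tail", "-n", str(lines), remote_path],
                capture_output=True, text=True, check=False)
            print(result.stdout or result.stderr)
            time.sleep(interval)

    # Example call (hypothetical host and path):
    # tail_remote_output("user@remote-supercomputer.example.org", "/work/md_run/stdout.log")
    ```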

  7. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    Science.gov (United States)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of necessary resources, facilities, and specialized personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  8. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  9. User's Manual for FOMOCO Utilities-Force and Moment Computation Tools for Overset Grids

    Science.gov (United States)

    Chan, William M.; Buning, Pieter G.

    1996-01-01

    In the numerical computations of flows around complex configurations, accurate calculations of force and moment coefficients for aerodynamic surfaces are required. When overset grid methods are used, the surfaces on which force and moment coefficients are sought typically consist of a collection of overlapping surface grids. Direct integration of flow quantities on the overlapping grids would result in the overlapped regions being counted more than once. The FOMOCO Utilities is a software package for computing flow coefficients (force, moment, and mass flow rate) on a collection of overset surfaces with accurate accounting of the overlapped zones. FOMOCO Utilities can be used in stand-alone mode or in conjunction with the Chimera overset grid compressible Navier-Stokes flow solver OVERFLOW. The software package consists of two modules corresponding to a two-step procedure: (1) hybrid surface grid generation (MIXSUR module), and (2) flow quantities integration (OVERINT module). Instructions on how to use this software package are described in this user's manual. Equations used in the flow coefficients calculation are given in Appendix A.
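    To make the double-counting issue above concrete, the following sketch integrates a force coefficient over surface panels in which panels lying in an overlapped region carry fractional weights, so the overlap is counted only once. This is purely a conceptual illustration under assumed inputs; FOMOCO itself instead builds a non-overlapping hybrid surface grid (MIXSUR) before integrating (OVERINT), and the numbers below are invented.

    ```python
    # Conceptual illustration of overlap-aware force integration; NOT FOMOCO.
    # Panels shared by two overlapping surface grids are given weight 0.5 so
    # that the overlapped region contributes exactly once to the total.
    import numpy as np

    def integrate_force_coefficient(areas, pressures, normals, weights,
                                    p_inf, q_inf, ref_area):
        """Sum of w * -(p - p_inf) * n * A over panels, normalized by q_inf * S_ref."""
        gauge = pressures - p_inf
        forces = (weights * -gauge * areas)[:, None] * normals
        return forces.sum(axis=0) / (q_inf * ref_area)

    # Two panels unique to one grid (weight 1.0) and two shared panels (weight 0.5).
    areas = np.array([1.0, 1.0, 0.5, 0.5])                          # panel areas
    pressures = np.array([101000.0, 100800.0, 100900.0, 100900.0])  # invented values
    normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))            # unit normals
    weights = np.array([1.0, 1.0, 0.5, 0.5])
    print(integrate_force_coefficient(areas, pressures, normals, weights,
                                      p_inf=100000.0, q_inf=5000.0, ref_area=3.0))
    ```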

  10. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments constitute a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
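    As a toy illustration of the deadline- and cost-based brokering mentioned above (and not the actual Nimrod/G scheduler), the sketch below picks, among resources with posted prices, the cheapest one that can still finish the remaining jobs before the deadline. All resource names, prices and throughputs are invented for the example.

    ```python
    # Toy cost-optimisation broker, loosely inspired by the deadline/cost-based
    # scheduling described in the abstract; it is NOT the Nimrod/G implementation.
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        price_per_job: float   # posted price, arbitrary currency units
        jobs_per_hour: float   # sustained throughput

    def pick_cost_optimal(resources, jobs_remaining, hours_to_deadline):
        """Cheapest single resource able to meet the deadline, or None."""
        feasible = [r for r in resources
                    if r.jobs_per_hour * hours_to_deadline >= jobs_remaining]
        if not feasible:
            return None  # a real broker would split work across several resources
        return min(feasible, key=lambda r: r.price_per_job)

    pool = [Resource("cheap-cluster", 0.5, 20.0),
            Resource("fast-cluster", 2.0, 200.0)]
    print(pick_cost_optimal(pool, jobs_remaining=500, hours_to_deadline=4.0))
    ```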

  11. Sexuality and Physical Intimacy in Long Term Care: Sexuality, long term care, capacity assessment

    OpenAIRE

    Lichtenberg, Peter A.

    2014-01-01

    Sexuality and sexual needs in older adults remains a neglected area of clinical intervention, particularly so in long term care settings. Because older adults in medical rehabilitation and long term care beds present with significant frailties, and often significant neurocognitive disorders it makes it difficult for occupational therapists and other staff to evaluate the capacity of an older adult resident to participate in sexual relationships. The current paper reviews the current literatur...

  12. High resolution numerical investigation on the effect of convective instability on long term CO2 storage in saline aquifers

    International Nuclear Information System (INIS)

    Lu, C; Lichtner, P C

    2007-01-01

    CO2 sequestration (capture, separation, and long-term storage) in various geologic media including depleted oil reservoirs, saline aquifers, and oceanic sediments is being considered as a possible solution to reduce greenhouse gas emissions. Dissolution of supercritical CO2 in formation brines is considered an important storage mechanism to prevent possible leakage. Accurate prediction of the plume dissolution rate and migration is essential. Analytical analysis and numerical experiments have demonstrated that convective instability (Rayleigh instability) has a crucial effect on the dissolution behavior and subsequent mineralization reactions. Global stability analysis indicates that a certain grid resolution is needed to capture the features of density-driven fingering phenomena. For 3-D field-scale simulations, high resolution leads to large numbers of grid nodes, unfeasible for a single workstation. In this study, we investigate the effects of convective instability on geologic sequestration of CO2 by taking advantage of parallel computing using the code PFLOTRAN, a massively parallel 3-D reservoir simulator for modeling subsurface multiphase, multicomponent reactive flow and transport based on continuum-scale mass and energy conservation equations. The onset, development and long-term fate of a supercritical CO2 plume will be resolved with high-resolution numerical simulations to investigate the rate of plume dissolution caused by fingering phenomena.
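    For readers unfamiliar with the convective (Rayleigh) instability invoked above, the standard porous-medium onset criterion is often written as below. These expressions are quoted as general textbook background (the Horton-Rogers-Lapwood problem) and are not taken from the record itself.

    ```latex
    % Standard solutal Rayleigh number for a porous layer; background only.
    \[
      Ra \;=\; \frac{\Delta\rho \, g \, k \, H}{\phi \, \mu \, D},
      \qquad \text{with onset of convection for } Ra \gtrsim 4\pi^{2} \approx 39.5,
    \]
    % where $\Delta\rho$ is the density increase of CO$_2$-saturated brine,
    % $g$ gravity, $k$ permeability, $H$ layer thickness, $\phi$ porosity,
    % $\mu$ brine viscosity and $D$ the effective diffusivity.
    ```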

  13. Grid today, clouds on the horizon

    Science.gov (United States)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 degrees Kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy ( 7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing" are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.

  14. Computer experiments with a coarse-grid hydrodynamic climate model

    International Nuclear Information System (INIS)

    Stenchikov, G.L.

    1990-01-01

    A climate model is developed on the basis of the two-level Mintz-Arakawa general circulation model of the atmosphere and a bulk model of the upper layer of the ocean. A detailed model of the spectral transport of shortwave and longwave radiation is used to investigate the radiative effects of greenhouse gases. The radiative fluxes are calculated at the boundaries of five layers, each with a pressure thickness of about 200 mb. The results of the climate sensitivity calculations for mean-annual and perpetual seasonal regimes are discussed. The CCAS (Computer Center of the Academy of Sciences) climate model is used to investigate the climatic effects of anthropogenic changes of the optical properties of the atmosphere due to increasing CO 2 content and aerosol pollution, and to calculate the sensitivity to changes of land surface albedo and humidity

  15. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  16. Long-term uranium supply-demand analyses

    International Nuclear Information System (INIS)

    1986-12-01

    It is the intention of this study to investigate the long-term uranium supply-demand situation using a number of supply- and demand-related assumptions. For supply, these assumptions as used in the Resources and Production Projection (RAPP) model include country economic development status and consequent lead times for exploration and development, uranium development status, country infrastructure, and uranium resources including the Reasonably Assured (RAR), Estimated Additional, Categories I and II (EAR-I and II), and Speculative Resource categories. The demand assumptions were based on the ''pure'' reactor strategies developed by the NEA Working Party on Nuclear Fuel Cycle Requirements for the 1986 OECD (NEA)/IAEA report ''Nuclear Energy and its Fuel Cycle: Prospects to 2025''. In addition, for this study, a mixed strategy case was computed using the averages of the Plutonium (Pu)-burning LWR high case and the improved LWR low case. It is understandable that such a long-term analysis cannot present hard facts, but it can show which variables may in fact influence the long-term supply-demand situation. It is hoped that results of this study will provide valuable information for planners in the uranium supply and demand fields. Periodic re-analyses with updated data bases will be needed from time to time

  17. LONG-TERM OUTCOME IN PEDIATRIC TRICHOTILLOMANIA.

    Science.gov (United States)

    Schumer, Maya C; Panza, Kaitlyn E; Mulqueen, Jilian M; Jakubovski, Ewgeni; Bloch, Michael H

    2015-10-01

    To examine long-term outcome in children with trichotillomania. We conducted follow-up clinical assessments an average of 2.8 ± 0.8 years after baseline evaluation in 30 of 39 children who previously participated in a randomized, double-blind, placebo-controlled trial of N-acetylcysteine (NAC) for pediatric trichotillomania. Our primary outcome was change in hairpulling severity on the Massachusetts General Hospital Hairpulling Scale (MGH-HPS) between the end of the acute phase and follow-up evaluation. We also obtained secondary measures examining styles of hairpulling, comorbid anxiety and depressive symptoms, as well as continued treatment utilization. We examined both correlates and predictors of outcome (change in MGH-HPS score) using linear regression. None of the participants continued to take NAC at the time of follow-up assessment. No significant changes in hairpulling severity were reported over the follow-up period. Subjects reported significantly increased anxiety and depressive symptoms but improvement in automatic pulling symptoms. Increased hairpulling symptoms during the follow-up period were associated with increased depression and anxiety symptoms and increased focused pulling. Older age and greater focused pulling at baseline assessment were associated with poor long-term prognosis. Our findings suggest that few children with trichotillomania experience a significant improvement in trichotillomania symptoms if behavioral treatments are inaccessible or have failed to produce adequate symptom relief. Our findings also confirm results of previous cross-sectional studies that suggest an increased risk of depression and anxiety symptoms with age in pediatric trichotillomania. Increased focused pulling and older age among children with trichotillomania symptoms may be associated with poorer long-term prognosis. © 2015 Wiley Periodicals, Inc.

  18. Long-term EEG in children.

    Science.gov (United States)

    Montavont, A; Kaminska, A; Soufflet, C; Taussig, D

    2015-03-01

    Long-term video-EEG corresponds to a recording ranging from 1 to 24 h or even longer. It is indicated in the following situations: diagnosis of epileptic syndromes or unclassified epilepsy, pre-surgical evaluation for drug-resistant epilepsy, follow-up of epilepsy or in cases of paroxysmal symptoms whose etiology remains uncertain. There are some specificities related to paediatric care: a dedicated pediatric unit; continuous monitoring covering at least a full 24-hour period, especially in the context of pre-surgical evaluation; the requirement of presence by the parents, technician or nurse; and stronger attachment of electrodes (cup electrodes), the number of which is adapted to the age of the child. The chosen duration of the monitoring also depends on the frequency of seizures or paroxysmal events. The polygraphy must be adapted to the type and topography of movements. It is essential to have at least an electrocardiography (ECG) channel, respiratory sensor and electromyography (EMG) on both deltoids. There is no age limit for performing long-term video-EEG even in newborns and infants; nevertheless because of scalp fragility, strict surveillance of the baby's skin condition is required. In the specific context of pre-surgical evaluation, long-term video-EEG must record all types of seizures observed in the child. This monitoring is essential in order to develop hypotheses regarding the seizure onset zone, based on electroclinical correlations, which should be adapted to the child's age and the psychomotor development. Copyright © 2015. Published by Elsevier SAS.

  19. Long-term governance for sustainability

    International Nuclear Information System (INIS)

    Martell, M.

    2007-01-01

    Meritxell Martell spoke of the long-term aspects of radioactive waste management. She pointed out that decision-making processes need to be framed within the context of sustainability, which means that a balance should be sought between scientific considerations, economic aspects and structural conditions. Focusing on structural aspects, Working Group 3 of COWAM-Spain came to the conclusion that the activity of the regulator is a key factor of long-term management. Another finding is that, from a sustainability perspective, multi-level governance is more effective for coping with the challenges of radioactive waste management than one tier of government making decisions. The working group also felt that the current Local Information Committees need to evolve towards more institutionalized and legitimized mechanisms for long-term involvement. Ms. Martell introduced a study comparing the efficiency of economic instruments to advance sustainable development in nuclear communities vs. municipalities in mining areas. The study found that funds transferred to nuclear zones had become a means to facilitate local acceptance of nuclear facilities rather than a means to promote socio-economic development. Another finding is that economic instruments are not sufficient guarantees of sustainable development by themselves; additional preconditions include leadership, vision and entrepreneurship on the part of community leaders, and private or public investments, among others. Finally, Ms. Martell summarised the challenges faced by the Spanish radioactive waste management programme, which include the need for strategic thinking, designing the future in a participatory fashion, and working with local and regional governments and citizens to devise mechanisms for social learning, economic development and environmental protection. (author)

  20. Long term aspects of uranium tailings management

    International Nuclear Information System (INIS)

    Bragg, K.

    1980-05-01

    This paper sets out the background issues which lead to the development of interim close-out criteria for uranium mill tailings. It places the current state-of-the-art for tailings management into both a national and international perspective and shows why such interim criteria are needed now. There are seven specific criteria proposed dealing with the need to have: passive barriers, limits on surface water recharge, durable systems, long term performance guarantees, limits to access, controls on water and airborne releases and finally to have a knowledge of exposure pathways. This paper is intended to serve as a focus for subsequent discussions with all concerned parties. (auth)

  1. Human Behaviour in Long-Term Missions

    Science.gov (United States)

    1997-01-01

    In this session, Session WP1, the discussion focuses on the following topics: Psychological Support for International Space Station Mission; Psycho-social Training for Man in Space; Study of the Physiological Adaptation of the Crew During A 135-Day Space Simulation; Interpersonal Relationships in Space Simulation, The Long-Term Bed Rest in Head-Down Tilt Position; Psychological Adaptation in Groups of Varying Sizes and Environments; Deviance Among Expeditioners, Defining the Off-Nominal Act in Space and Polar Field Analogs; Getting Effective Sleep in the Space-Station Environment; Human Sleep and Circadian Rhythms are Altered During Spaceflight; and Methodological Approach to Study of Cosmonauts Errors and Its Instrumental Support.

  2. Optimal Long-Term Financial Contracting

    OpenAIRE

    Peter M. DeMarzo; Michael J. Fishman

    2007-01-01

    We develop an agency model of financial contracting. We derive long-term debt, a line of credit, and equity as optimal securities, capturing the debt coupon and maturity; the interest rate and limits on the credit line; inside versus outside equity; dividend policy; and capital structure dynamics. The optimal debt-equity ratio is history dependent, but debt and credit line terms are independent of the amount financed and, in some cases, the severity of the agency problem. In our model, the ag...

  3. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on the intensification of thermal and mass exchange which were performed by the National Research Centre ''Kurchatov Institute'' over the past years. Recently, many designs of heat and mass transfer intensifier grids have been proposed. NRC ''Kurchatov Institute'' has accomplished a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of the experimental investigations can be used in the verification of computational models and codes. On the basis of experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated into the subchannel code SC-INT.

  4. A Global Computing Grid for LHC; Una red global de computacion para LHC

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Calama, J. M.; Colino Arriero, N.

    2013-06-01

    An innovative computing infrastructure has played an instrumental role in the recent discovery of the Higgs boson in the LHC and has enabled scientists all over the world to store, process and analyze enormous amounts of data in record time. The Grid computing technology has made it possible to integrate computing center resources spread around the planet, including the CIEMAT, into a distributed system where these resources can be shared and accessed via Internet on a transparent, uniform basis. A global supercomputer for the LHC experiments. (Author)

  5. Long-term environmental behaviour of radionuclides

    International Nuclear Information System (INIS)

    Brechignac, F.; Moberg, L.; Suomela, M.

    2000-04-01

    The radioactive pollution of the environment results from the atmospheric nuclear weapons testing (during the mid-years of twentieth century), from the development of the civilian nuclear industry and from accidents such as Chernobyl. Assessing the resulting radiation that humans might receive requires a good understanding of the long-term behaviour of radionuclides in the environment. This document reports on a joint European effort to advance this understanding, 3 multinational projects have been coordinated: PEACE, EPORA and LANDSCAPE. This report proposes an overview of the results obtained and they are presented in 6 different themes: i) redistribution in the soil-plant system, ii) modelling, iii) countermeasures, iv) runoff v) spatial variations, and vi) dose assessment. The long term behaviour of the radionuclides 137 Cs, 90 Sr and 239-240 Pu is studied through various approaches, these approaches range from in-situ experiments designed to exploit past contamination events to laboratory simulations. A broad scope of different ecosystems ranging from arctic and boreal regions down to mediterranean ones has been considered. (A.C.)

  6. Craniopharyngioma in Children: Long-term Outcomes

    Science.gov (United States)

    STEINBOK, Paul

    2015-01-01

    The survival rate for childhood craniopharyngioma has been improving, with more long-term survivors. Unfortunately it is rare for the patient to be normal, either from the disease itself or from the effects of treatment. Long-term survivors of childhood craniopharyngioma suffer a number of impairments, which include visual loss, endocrinopathy, hypothalamic dysfunction, cerebrovascular problems, neurologic and neurocognitive dysfunction. Pituitary insufficiency is present in almost 100%. Visual and hypothalamic dysfunction is common. There is a high risk of metabolic syndrome and increased risk of cerebrovascular disease, including stroke and Moyamoya syndrome. Cognitive, psychosocial, and emotional problems are prevalent. Finally, there is a higher risk of premature death among survivors of craniopharyngioma, and often this is not from tumor recurrence. It is important to consider craniopharyngioma as a chronic disease. There is no perfect treatment. The treatment has to be tailored to the individual patient to minimize dysfunction caused by tumor and treatments. So “cure” of the tumor does not mean a normal patient. The management of the patient and family needs multidisciplinary evaluation and should involve ophthalmology, endocrinology, neurosurgery, oncology, and psychology. Furthermore, it is also important to address emotional issues and social integration. PMID:26345668

  7. Institutionalization and Organizational Long-term Success

    Directory of Open Access Journals (Sweden)

    Denise L. Fleck

    2007-05-01

    Institutionalization processes have an ambivalent effect on organizational long-term success. Even though they foster organizational stability and permanence, they also bring about rigidity and resistance to change. As a result, successful organizations are likely to lose their competitive advantage over time. The paper addresses this issue through the investigation of the institutionalization processes of two long-lived companies: General Electric, a firm that has been a long-term success, and its rival, Westinghouse, which was broken up after eleven decades of existence. The longitudinal, multilevel analysis of firms and industry has identified two different modes of organizational institutionalization. The reactive mode gives rise to rigidity and change resistance, much like institutional theory predicts; the proactive mode, on the other hand, neutralizes those negative effects of institutionalization processes. In the reactive mode, structure predominates. In the proactive mode, agency plays a major role in organizational institutionalization, and in managing the organization's relations with the environment, clearly contributing to environmental institutionalization.

  8. Long term testing of PSI-membranes

    Energy Technology Data Exchange (ETDEWEB)

    Huslage, J; Brack, H P; Geiger, F; Buechi, F N; Tsukada, A; Scherer, G G [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Long-term tests of PSI membranes based on radiation-grafted FEP and ETFE films were carried out, and FEP-based membranes were evaluated by monitoring the in-situ membrane area resistance measured by a current pulse method. By modifying our irradiation procedure and using the double crosslinking concept we obtain reproducible membrane cell lifetimes (in terms of in-situ membrane resistance) of greater than 5000 hours at 60-65{sup o}C. Preliminary tests at 80-85{sup o}C with lifetimes of greater than 2500 hours demonstrate the potential long-term stability of PSI proton exchange membranes based on FEP over the whole operating temperature range of low-temperature polymer electrolyte fuel cells. Radiation-grafted PSI membranes based on ETFE have better mechanical properties than those of the FEP membranes. Mechanical properties are particularly important in large-area cells and fuel cell stacks. ETFE membranes have been tested successfully for approximately 1000 h in a 2-cell stack (100 cm{sup 2} active area each cell). (author) 4 figs., 4 refs.

  9. Neo bladder long term follow-up

    International Nuclear Information System (INIS)

    Fakhr, I.; Mohamed, A. M.; Moustafa, A.; Al-Sherbiny, M.; Salama, M.

    2013-01-01

    One of the commonest forms of orthotopic bladder substitution for bladder cancer survivors used in our institute is the ileocecal segment. Sometimes, the need for an Indiana pouch heterotopic continent diversion arises. Aim: To compare the long-term effects of the orthotopic ileocecal bladder and the heterotopic Indiana pouch following radical cystectomy in bladder cancer patients. Patients and methods: Between January 2008 and December 2011, 91 patients underwent radical cystectomy/anterior pelvic exenteration with orthotopic ileocecal bladder reconstruction (61 patients), or an Indiana pouch (30 patients) when orthotopic diversion was not technically or oncologically feasible. Results: Convalescence was uneventful in most patients. All minor and major urinary leakage cases, in both diversion groups, were successfully treated conservatively. Only one patient in the ileocecal group, with a major urinary leak, required re-exploration with successful revision of the uretero-colonic anastomosis. Only one patient in the Indiana pouch group had an incidentally discovered sub-centimetric stone, which was simply expelled. The overall survival proportion of the ileocecal group was 100% compared to 80% in the Indiana pouch group (p < 0.001). The disease-free survival proportion of the ileocecal group was 90.8% compared to 80% in the Indiana pouch group (p = 0.076). Daytime and nighttime urinary continence, as well as renal function deterioration, did not differ significantly between the two reconstruction types. Conclusion: Both the ileocecal bladder and the Indiana pouch are safe procedures with regard to long-term effects on kidney function following radical cystectomy

  10. Toward a comprehensive long term nicotine policy.

    Science.gov (United States)

    Gray, N; Henningfield, J E; Benowitz, N L; Connolly, G N; Dresler, C; Fagerstrom, K; Jarvis, M J; Boyle, P

    2005-06-01

    Global tobacco deaths are high and rising. Tobacco use is primarily driven by nicotine addiction. Overall tobacco control policy is relatively well agreed upon but a long term nicotine policy has been less well considered and requires further debate. Reaching consensus is important because a nicotine policy is integral to the target of reducing tobacco caused disease, and the contentious issues need to be resolved before the necessary political changes can be sought. A long term and comprehensive nicotine policy is proposed here. It envisages both reducing the attractiveness and addictiveness of existing tobacco based nicotine delivery systems as well as providing alternative sources of acceptable clean nicotine as competition for tobacco. Clean nicotine is defined as nicotine free enough of tobacco toxicants to pass regulatory approval. A three phase policy is proposed. The initial phase requires regulatory capture of cigarette and smoke constituents liberalising the market for clean nicotine; regulating all nicotine sources from the same agency; and research into nicotine absorption and the role of tobacco additives in this process. The second phase anticipates clean nicotine overtaking tobacco as the primary source of the drug (facilitated by use of regulatory and taxation measures); simplification of tobacco products by limitation of additives which make tobacco attractive and easier to smoke (but tobacco would still be able to provide a satisfying dose of nicotine). The third phase includes a progressive reduction in the nicotine content of cigarettes, with clean nicotine freely available to take the place of tobacco as society's main nicotine source.

  11. Long term ground movement of TRISTAN synchrotron

    International Nuclear Information System (INIS)

    Endo, K.; Ohsawa, Y.; Miyahara, M.

    1989-01-01

    Long-term ground movement is estimated through a geological survey before a big accelerator is planned. In the case of TRISTAN-MR (main ring), the site was surveyed so that the underground information could be reflected in the building design prior to construction. The movement of a synchrotron magnet mainly results from the structure of the tunnel. If the individual movement of a magnet exceeds a certain threshold limit, it has a significant effect on the particle behavior in a synchrotron. The heights of the quadrupole magnets were observed periodically during the past two years at TRISTAN-MR, and their height differences along the 3 km circumference of the accelerator ring were decomposed into Fourier components depicting the causes of the movements. The results show the movement of the tunnel foundation, which was also observed by the simultaneous measurement of both the magnets and fiducial marks on the tunnel wall. The long-term movement of the magnets is summarized together with the geological survey made prior to construction. 1 ref., 6 figs., 1 tab

  12. Long-term environmental behaviour of radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Brechignac, F.; Moberg, L.; Suomela, M

    2000-04-01

    The radioactive pollution of the environment results from the atmospheric nuclear weapons testing (during the mid-years of twentieth century), from the development of the civilian nuclear industry and from accidents such as Chernobyl. Assessing the resulting radiation that humans might receive requires a good understanding of the long-term behaviour of radionuclides in the environment. This document reports on a joint European effort to advance this understanding, 3 multinational projects have been coordinated: PEACE, EPORA and LANDSCAPE. This report proposes an overview of the results obtained and they are presented in 6 different themes: (i) redistribution in the soil-plant system, (ii) modelling, (iii) countermeasures, (iv) runoff (v) spatial variations, and (vi) dose assessment. The long term behaviour of the radionuclides {sup 137}Cs, {sup 90}Sr and {sup 239-240}Pu is studied through various approaches, these approaches range from in-situ experiments designed to exploit past contamination events to laboratory simulations. A broad scope of different ecosystems ranging from arctic and boreal regions down to mediterranean ones has been considered. (A.C.)

  13. Long-term preservation of anammox bacteria.

    Science.gov (United States)

    Rothrock, Michael J; Vanotti, Matias B; Szögi, Ariel A; Gonzalez, Maria Cruz Garcia; Fujii, Takao

    2011-10-01

    Deposit of useful microorganisms in culture collections requires long-term preservation and successful reactivation techniques. The goal of this study was to develop a simple preservation protocol for the long-term storage and reactivation of the anammox biomass. To achieve this, anammox biomass was frozen or lyophilized at two different freezing temperatures (-60°C and in liquid nitrogen (-200°C)) in skim milk media (with and without glycerol), and the reactivation of anammox activity was monitored after a 4-month storage period. Of the different preservation treatments tested, only anammox biomass preserved via freezing in liquid nitrogen followed by lyophilization in skim milk media without glycerol achieved stoichiometric ratios for the anammox reaction similar to the biomass in both the parent bioreactor and in the freshly harvested control treatment. A freezing temperature of -60°C alone, or in conjunction with lyophilization, resulted in the partial recovery of the anammox bacteria, with an equal mixture of anammox and nitrifying bacteria in the reactivated biomass. To our knowledge, this is the first report of the successful reactivation of anammox biomass preserved via sub-zero freezing and/or lyophilization. The simple preservation protocol developed from this study could be beneficial to accelerate the integration of anammox-based processes into current treatment systems through a highly efficient starting anammox biomass.

  14. Andra long term memory project - 59277

    International Nuclear Information System (INIS)

    Charton, Patrick; Boissier, Fabrice; Martin, Guillaume

    2012-01-01

    Document available in abstract form only. Full text of publication follows: Long-term memory of repositories is required by safety, reversibility and social expectations. Andra has therefore run a long-term memory project since 2010 to reinforce and diversify its current arrangements in this field, as well as to explore opportunities for extending memory keeping over thousands of years. The project includes opportunity studies of dedicated facilities. The 'Ecotheque' and 'Geotheque' projects contribute to memory through the preservation of environmental and geological samples, respectively. The options of creating (i) an archive centre for Andra's interim and permanent archives, (ii) an artist centre to study the contribution of the arts to memory preservation, and (iii) a museum of radioactive waste disposal history and technology (radium industry..., sea disposal, current solutions...) are being considered. Other studies within the project examine our heritage. This includes the continuity of languages and symbolic systems, the continuity of writing and engraving methods, the archaeology of landscapes (memory of the earth's evolution, multi-century memory of industrial and agricultural landscapes), archaeology practices (how might a future archaeologist be interested in our current activity?), the preservation of historical sites and industrial memory, the continuity of institutional organizations, and the memory and history of science as well as broader history

  15. Long term creep behavior of concrete

    International Nuclear Information System (INIS)

    Kennedy, T.W.

    1975-01-01

    This report presents the findings of an experimental investigation to evaluate the long-term creep behavior of concrete subjected to sustained uniaxial loads for an extended period of time at 75°F. The factors investigated were (1) curing time (90, 183, and 365 days); (2) curing history (as-cast and air-dried); and (3) uniaxial stress (600 and 2400 psi). The experimental investigation applied uniaxial compressive loads to cylindrical concrete specimens and measured strains with vibrating wire strain gages cast into the concrete specimens along the axial and radial axes. Specimens cured for 90 days prior to loading were subjected to a sustained load for a period of one year, at which time the loads were removed; the specimens cured for 183 or 365 days, however, were not unloaded and have been under load for 5 and 4.5 years, respectively. The effect of each of the above factors on the instantaneous and creep behavior is discussed and the long-term creep behavior of the specimens cured for 183 or 365 days is evaluated. The findings of these evaluations are summarized. (17 figures, 10 tables) (U.S.)

  16. Sleep facilitates long-term face adaptation.

    Science.gov (United States)

    Ditye, Thomas; Javadi, Amir Homayoun; Carbon, Claus-Christian; Walsh, Vincent

    2013-10-22

    Adaptation is an automatic neural mechanism supporting the optimization of visual processing on the basis of previous experiences. While the short-term effects of adaptation on behaviour and physiology have been studied extensively, perceptual long-term changes associated with adaptation are still poorly understood. Here, we show that the integration of adaptation-dependent long-term shifts in neural function is facilitated by sleep. Perceptual shifts induced by adaptation to a distorted image of a famous person were larger in a group of participants who had slept (experiment 1) or merely napped for 90 min (experiment 2) during the interval between adaptation and test compared with controls who stayed awake. Participants' individual rapid eye movement sleep duration predicted the size of post-sleep behavioural adaptation effects. Our data suggest that sleep prevented decay of adaptation in a way that is qualitatively different from the effects of reduced visual interference known as 'storage'. In the light of the well-established link between sleep and memory consolidation, our findings link the perceptual mechanisms of sensory adaptation--which are usually not considered to play a relevant role in mnemonic processes--with learning and memory, and at the same time reveal a new function of sleep in cognition.

  17. The Barrier code for predicting long-term concrete performance

    International Nuclear Information System (INIS)

    Shuman, R.; Rogers, V.C.; Shaw, R.A.

    1989-01-01

    There are numerous features incorporated into a LLW disposal facility that deal directly with critical safety objectives required by the NRC in 10 CFR 61. Engineered barriers or structures incorporating concrete are commonly considered for waste disposal facilities. The Barrier computer code calculates the long-term degradation of concrete structures in LLW disposal facilities. It couples this degradation with water infiltration into the facility, nuclide leaching from the waste, contaminated water release from the facility, and the associated doses to members of the critical population group. The concrete degradation methodology of Barrier is described.

  18. The performance model of dynamic virtual organization (VO) formations within grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. Within the grid computing context, a successful dynamic VO formation means that a number of individuals and institutions associated with certain resources join together and form a new VO in order to execute tasks effectively within given time steps. To date, while the concept of VOs has been accepted, little research has been done on the impact of effective dynamic virtual organization formations. In this paper, we develop a performance model of dynamic VO formation and analyze the effect of different complex organizational structures and their statistical parameters on dynamic VO formations from three aspects: (1) the probability of a successful VO formation under different organizational structures and changes in statistical parameters, e.g. the average degree; (2) the effect of task complexity on dynamic VO formations; (3) the impact of network scale on dynamic VO formations. The experimental results show that the proposed model can be used to understand the dynamic VO formation performance of the simulated organizations. The work provides a good path towards understanding how to effectively schedule and utilize resources over a complex grid network and therefore improve overall performance within a grid environment.
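
    The abstract does not give the model's equations, so the sketch below is only a rough, hypothetical illustration of this kind of analysis rather than the authors' model: it estimates the probability that a randomly chosen organization together with its neighbours in an Erdős–Rényi-style network can cover a task's resource requirements, and shows how that probability varies with the average degree. All names, resource types and parameters are invented.

        import random
        import itertools

        def make_network(n, avg_degree, seed=0):
            """Undirected Erdos-Renyi-style graph with a target average degree."""
            rng = random.Random(seed)
            p = avg_degree / (n - 1)
            edges = {i: set() for i in range(n)}
            for i, j in itertools.combinations(range(n), 2):
                if rng.random() < p:
                    edges[i].add(j)
                    edges[j].add(i)
            return edges

        def vo_formation_success(edges, resources, task, trials, rng):
            """Fraction of trials in which a node plus its neighbours cover the task's needs."""
            n = len(edges)
            successes = 0
            for _ in range(trials):
                initiator = rng.randrange(n)
                members = {initiator} | edges[initiator]
                pooled = set().union(*(resources[m] for m in members))
                if task <= pooled:          # every required resource type is available
                    successes += 1
            return successes / trials

        rng = random.Random(1)
        resource_types = list("ABCDEF")
        n_orgs = 200
        resources = [set(rng.sample(resource_types, 2)) for _ in range(n_orgs)]
        task = {"A", "B", "C"}              # hypothetical task needing three resource types

        for avg_degree in (2, 4, 8, 16):
            net = make_network(n_orgs, avg_degree)
            p_ok = vo_formation_success(net, resources, task, trials=2000, rng=rng)
            print(f"average degree {avg_degree:2d}: P(successful VO formation) ~ {p_ok:.2f}")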

  19. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments are connected over a local network to form a grid-based, cluster-to-cluster distributed computing environment. To perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural tasks are distributed among the clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework supporting this multi-scale structural simulation approach is also presented. The program architecture allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing multi-scale structural analysis simulations.
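
    The paper's own software framework is not reproduced in the abstract; purely as an illustration of the decomposition idea (a simplified global model whose boundary results drive several detailed component models evaluated in parallel), the sketch below farms out hypothetical component analyses with Python's standard concurrent.futures. The functions and numbers are invented placeholders, not the authors' code.

        from concurrent.futures import ProcessPoolExecutor

        def analyse_global(load):
            """Coarse global model: returns boundary forces for each component (toy numbers)."""
            return {"joint-A": 0.4 * load, "joint-B": 0.35 * load, "joint-C": 0.25 * load}

        def analyse_component(name, boundary_force):
            """Detailed component model on a refined scale; here just a placeholder calculation."""
            peak_stress = boundary_force * 1.8          # pretend stress-concentration factor
            return name, peak_stress

        def run_multiscale(load):
            boundary = analyse_global(load)             # level 1: simplified global model
            with ProcessPoolExecutor() as pool:         # level 2: component models in parallel
                results = dict(pool.map(analyse_component,
                                        boundary.keys(), boundary.values()))
            return results

        if __name__ == "__main__":
            for joint, stress in run_multiscale(load=1000.0).items():
                print(f"{joint}: peak stress ~ {stress:.1f}")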

  20. Long term data preservation for CDF at INFN-CNAF

    International Nuclear Information System (INIS)

    Amerio, S; Chiarelli, L; Dell'Agnello, L; Girolamo, D De; Gregori, D; Pezzi, M; Prosperini, A; Ricci, P; Rosso, F; Zani, S

    2014-01-01

    Long-term preservation of experimental data (both raw and derived formats) is one of the emerging requirements coming from scientific collaborations. Within the High Energy Physics community, the Data Preservation in High Energy Physics (DPHEP) group coordinates this effort. CNAF is not only one of the Tier-1s for the LHC experiments, it is also a computing centre providing computing and storage resources to many other HEP and non-HEP scientific collaborations, including the CDF experiment. After the end of data taking in 2011, CDF is now facing the challenge of both preserving the large amount of data produced during several years of data taking and retaining the ability to access and reuse it in the future. CNAF is heavily involved in the CDF Data Preservation activities, in collaboration with the Fermi National Accelerator Laboratory (FNAL) computing sector. At the moment about 4 PB of data (raw data and analysis-level ntuples) are starting to be copied from FNAL to the CNAF tape library, and the framework to subsequently access the data is being set up. In parallel to the data access system, a data analysis framework is being developed which allows the complete CDF analysis chain to be run in the long-term future, from raw data reprocessing to analysis-level ntuple production. In this contribution we illustrate the technical solutions we put in place to address the issues encountered as we proceeded in this activity.

  1. The QUANTGRID Project (RO)—Quantum Security in GRID Computing Applications

    Science.gov (United States)

    Dima, M.; Dulea, M.; Petre, M.; Petre, C.; Mitrica, B.; Stoica, M.; Udrea, M.; Sterian, R.; Sterian, P.

    2010-01-01

    The QUANTGRID Project, financed through the National Center for Programme Management (CNMP-Romania), is the first attempt at using Quantum Crypted Communications (QCC) in large-scale operations, such as GRID computing, and conceivably, in the years ahead, in the banking sector and other security-tight communications. In connection with the GRID activities of the Center for Computing & Communications (Nat.'l Inst. Nucl. Phys.—IFIN-HH), the Quantum Optics Lab. (Nat.'l Inst. Plasma and Lasers—INFLPR) and the Physics Dept. (University Polytechnica—UPB), the project will build a demonstrator infrastructure for this technology. The status of the project in its incipient phase is reported, featuring tests of communications in classical security mode: socket-level communications under AES (Advanced Encryption Std.), both in proprietary C++ code. An outline of the planned undertaking of the project is given, highlighting its impact on quantum physics, coherent optics and information technology.
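
    The project's demonstrator is proprietary C++ code, so nothing of it is shown here; the sketch below is only a loose Python analogue of the 'socket-level communications under AES' test, using the third-party cryptography package's Fernet recipe (AES-based authenticated encryption) over a loopback TCP socket. The port number and payload are arbitrary, and in a QCC setting the key material would come from the quantum channel rather than Fernet.generate_key().

        import socket
        import threading
        import time
        from cryptography.fernet import Fernet   # third-party: pip install cryptography

        key = Fernet.generate_key()    # pre-shared key; stands in for quantum-distributed key material
        cipher = Fernet(key)           # Fernet is an AES-based authenticated-encryption recipe
        ready = threading.Event()
        PORT = 50007                   # arbitrary loopback port

        def server():
            with socket.socket() as srv:
                srv.bind(("127.0.0.1", PORT))
                srv.listen(1)
                ready.set()
                conn, _ = srv.accept()
                with conn:
                    token = conn.recv(65536)
                    print("server decrypted:", cipher.decrypt(token).decode())

        threading.Thread(target=server, daemon=True).start()
        ready.wait()

        with socket.socket() as cli:
            cli.connect(("127.0.0.1", PORT))
            cli.sendall(cipher.encrypt(b"grid job payload"))   # classical channel carries only ciphertext

        time.sleep(0.5)   # give the server thread time to print before the script exits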

  2. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    Science.gov (United States)

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum-likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  3. Remote data access in computational jobs on the ATLAS data grid

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration; Lassnig, Mario

    2018-01-01

    This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data movement and stage-in approaches it is well suited for data transfers which are asynchronous with respect to the job execution. Hence, it can be used for optimization of data access patterns based on various policies. In this study, remote data access is realized with the HTTP and WebDAV protocols, and is investigated in the context of intra- and inter-computing site data transfers. In both cases, the typical scenarios for application of remote data access are identified. The paper also presents an analysis of parameters influencing the data goodput between heterogeneous storage element - worker node pairs on the grid.
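
    As a minimal illustration of the remote-access idea (streaming only the byte ranges a job needs over HTTP/WebDAV instead of staging in the whole file first), the sketch below issues a standard HTTP Range request with Python's urllib. The replica URL is a placeholder, and real ATLAS jobs go through their own data-management tooling rather than this client.

        import urllib.request

        def read_remote_range(url, start, length):
            """Fetch bytes [start, start+length) of a remote file over HTTP/WebDAV."""
            req = urllib.request.Request(
                url, headers={"Range": f"bytes={start}-{start + length - 1}"})
            with urllib.request.urlopen(req, timeout=30) as resp:
                # 206 Partial Content means the server honoured the Range header
                if resp.status not in (200, 206):
                    raise IOError(f"unexpected HTTP status {resp.status}")
                return resp.read()

        if __name__ == "__main__":
            url = "https://storage.example.org/webdav/dataset/events.root"  # placeholder replica URL
            chunk = read_remote_range(url, start=0, length=1024)
            print(f"read {len(chunk)} bytes without staging the full file")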

  4. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with the cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
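
    The abstract does not give the TE/continuation formulation, so the sketch below only illustrates the curve-extrapolation step in spirit: it fits a cubic polynomial (standing in for the cubic spline) through a few hypothetical continuation-power-flow points of a P–V curve and takes the maximum of the extrapolated curve as a crude estimate of the nose point, from which a load power margin follows. All sample values are invented.

        import numpy as np

        # Hypothetical continuation-power-flow samples: bus voltage (p.u.) vs load (MW)
        V = np.array([1.00, 0.97, 0.93, 0.88, 0.82])
        P = np.array([800., 950., 1080., 1180., 1240.])

        # Fit a cubic polynomial P(V) through the samples (stand-in for cubic spline extrapolation)
        P_of_V = np.poly1d(np.polyfit(V, P, 3))

        # Search a fine voltage grid below the last computed point for the nose (maximum loadability)
        v_grid = np.linspace(0.60, V[-1], 500)
        p_grid = P_of_V(v_grid)
        nose_idx = int(np.argmax(p_grid))
        P_max, V_nose = p_grid[nose_idx], v_grid[nose_idx]

        P_operating = P[0]
        print(f"estimated nose point: {P_max:.0f} MW at {V_nose:.2f} p.u.")
        print(f"estimated voltage stability margin: {P_max - P_operating:.0f} MW")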

  5. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing.

    Science.gov (United States)

    González, Roberto; Zato, Carolina; Benito, Rocío; Bajo, Javier; Hernández, Jesús M; De Paz, Juan F; Vera, Vicente; Corchado, Juan M

    2012-12-01

    Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated with each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  6. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing

    Directory of Open Access Journals (Sweden)

    González Roberto

    2012-12-01

    Full Text Available Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated with each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  7. Long-term stability of salivary cortisol

    DEFF Research Database (Denmark)

    Garde, A H; Hansen, Åse Marie

    2005-01-01

    The measurement of salivary cortisol provides a simple, non-invasive, and stress-free measure frequently used in studies of the hypothalamic-pituitary-adrenal axis activity. In research projects, samples are often required to be stored for longer periods of time either because of the protocol of the project or because of lack of funding for analysis. The aim of the present study was to explore the effects of long-term storage of samples on the amounts of measurable cortisol. Ten pools of saliva were collected on polyester Salivette tampons from five subjects. After centrifugation the samples were either stored in small vials or spiked to polyester Salivette tampons before analysis for cortisol using Spectria RIA kits. The effects of storage were evaluated by a linear regression model (mixed procedure) on a logarithmic scale. No effects on cortisol concentrations were found after storage of saliva...

  8. Long-term cryogenic space storage system

    Science.gov (United States)

    Hopkins, R. A.; Chronic, W. L.

    1973-01-01

    Discussion of the design, fabrication and testing of a 225-cu ft spherical cryogenic storage system for long-term subcritical applications under zero-g conditions in storing subcritical cryogens for space vehicle propulsion systems. The insulation system design, the analytical methods used, and the correlation between the performance test results and analytical predictions are described. The best available multilayer insulation materials and state-of-the-art thermal protection concepts were applied in the design, providing a boiloff rate of 0.152 lb/hr, or 0.032% per day, and an overall heat flux of 0.066 Btu/sq ft hr based on a 200 sq ft surface area. A six to eighteen month cryogenic storage is provided by this system for space applications.

  9. Long-term control of root growth

    Science.gov (United States)

    Burton, Frederick G.; Cataldo, Dominic A.; Cline, John F.; Skiens, W. Eugene

    1992-05-26

    A method and system for long-term control of root growth without killing the plants bearing those roots involves incorporating a 2,6-dinitroaniline in a polymer and disposing the polymer in an area in which root control is desired. This results in controlled release of the substituted aniline herbicide over a period of many years. Herbicides of this class have the property of preventing root elongation without translocating into other parts of the plant. The herbicide may be encapsulated in the polymer or mixed with it. The polymer-herbicide mixture may be formed into pellets, sheets, pipe gaskets, pipes for carrying water, or various other forms. The invention may be applied to the protection of buried hazardous wastes, the protection of underground pipes, the prevention of root intrusion beneath slabs, the dwarfing of trees or shrubs, and other applications. The preferred herbicide is 4-difluoromethyl-N,N-dipropyl-2,6-dinitro-aniline, commonly known as trifluralin.

  10. Long term performance of radon mitigation systems

    International Nuclear Information System (INIS)

    Prill, R.; Fisk, W.J.

    2002-01-01

    Researchers installed radon mitigation systems in 12 houses in Spokane, Washington, and Coeur d'Alene, Idaho, during the 1985--1986 heating season and continued to monitor indoor radon quarterly and annually for ten years. The mitigation systems included active sub-slab ventilation, basement over-pressurization, and crawlspace isolation and ventilation. The occupants reported various operational problems with these early mitigation systems. The long-term radon measurements were essential to track the effectiveness of the mitigation systems over time. All 12 homes were visited during the second year of the study, while a second set of 5 homes was visited during the fifth year to determine the cause(s) of increased radon in the homes. During these visits, the mitigation systems were inspected and measurements of system performance were made. Maintenance and modifications were performed to improve system performance in these homes.

  11. Rising Long-term Interest Rates

    DEFF Research Database (Denmark)

    Hallett, Andrew Hughes

    Rather than chronicle recent developments in European long-term interest rates as such, this paper assesses the impact of increases in those interest rates on economic performance and inflation. That puts us in a position to evaluate the economic pressures for further rises in those rates, the first question posed in this assignment, and the scope for overshooting (the second question), and then to make some illustrative predictions of future interest rates in the euro area. We find a wide range of effects from rising interest rates, mostly small and mostly negative, focused on investment... till the emerging European recovery is on a firmer basis and capable of overcoming increases in the cost of borrowing and shrinking fiscal space. There is also an implication that worries about rising/overshooting interest rates often reflect the fact that inflation risks are unequally distributed...

  12. Prediction of long-term creep curves

    International Nuclear Information System (INIS)

    Oikawa, Hiroshi; Maruyama, Kouichi

    1992-01-01

    This paper discusses how to predict long-term irradiation-enhanced creep properties from short-term tests. The predictive method based on the θ concept was examined using creep data for ferritic steels. The method was successful in predicting creep curves, including the tertiary creep stage, as well as rupture lifetimes. Some material constants involved in the method are insensitive to the irradiation environment, and their values obtained in thermal creep are applicable to irradiation-enhanced creep. The creep mechanisms of most engineering materials change distinctly at the athermal yield stress in the non-creep regime. One should be aware that short-term tests must be carried out at stresses lower than the athermal yield stress in order to predict the creep behavior of structural components correctly. (orig.)
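
    The abstract does not spell out the model, so the following assumes the commonly used θ-projection form ε(t) = θ1(1 − e^(−θ2·t)) + θ3(e^(θ4·t) − 1) and shows, on synthetic data, how the four θ parameters could be fitted to a short-term creep curve and then used to extrapolate strain to longer times. The form is an assumption, not taken from the paper, and the numbers are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def theta_projection(t, th1, th2, th3, th4):
            """Theta-projection creep curve: primary (decaying) + tertiary (accelerating) terms."""
            return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

        # Synthetic short-term creep data (time in hours, strain dimensionless)
        t_short = np.linspace(0.0, 1000.0, 40)
        true_params = (0.004, 0.01, 0.0005, 0.0004)
        strain = theta_projection(t_short, *true_params) \
                 + np.random.default_rng(0).normal(0.0, 5e-5, t_short.size)

        # Fit the four theta parameters to the short-term portion of the curve
        p0 = (0.003, 0.005, 0.0003, 0.0003)
        params, _ = curve_fit(theta_projection, t_short, strain, p0=p0,
                              bounds=(0.0, [0.05, 1.0, 0.01, 0.01]))

        # Extrapolate to longer service times
        for t in (2000.0, 5000.0, 10000.0):
            print(f"t = {t:6.0f} h : predicted creep strain = {theta_projection(t, *params):.4f}")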

  13. Hanford grout: predicting long-term performance

    International Nuclear Information System (INIS)

    Sewart, G.H.; Mitchell, D.H.; Treat, R.L.; McMakin, A.H.

    1987-01-01

    Grouted disposal is being planned for the low-level portion of liquid radioactive wastes at the Hanford site in Washington state. The performance of the disposal system must be such that it will protect people and the environment for thousands of years after disposal. To predict whether a specific grout disposal system will comply with existing and foreseen regulations, a performance assessment (PA) is performed. Long-term PAs are conducted for a range of performance conditions. Performance assessment is an inexact science. Quantifying projected impacts is especially difficult when only scant data exist on the behavior of certain components of the disposal system over thousands of years. To develop defensible results, we are honing the models and obtaining experimental data. The combination of engineered features and PA refinements is being used to ensure that Hanford grout will meet its principal goal: to protect people and the environment in the future

  14. The discovery of long-term potentiation.

    Science.gov (United States)

    Lømo, Terje

    2003-04-29

    This paper describes circumstances around the discovery of long-term potentiation (LTP). In 1966, I had just begun independent work for the degree of Dr medicinae (PhD) in Per Andersen's laboratory in Oslo after an eighteen-month apprenticeship with him. Studying the effects of activating the perforant path to dentate granule cells in the hippocampus of anaesthetized rabbits, I observed that brief trains of stimuli resulted in increased efficiency of transmission at the perforant path-granule cell synapses that could last for hours. In 1968, Tim Bliss came to Per Andersen's laboratory to learn about the hippocampus and field potential recording for studies of possible memory mechanisms. The two of us then followed up my preliminary results from 1966 and did the experiments that resulted in a paper that is now properly considered to be the basic reference for the discovery of LTP.

  15. Long-term opioid therapy in Denmark

    DEFF Research Database (Denmark)

    Birke, H; Ekholm, Ola; Sjøgren, P

    2017-01-01

    ...,145). A nationally representative subsample of individuals (n = 2015) completed the self-administered questionnaire in both 2000 and 2013. Collected information included chronic pain (≥6 months), health behaviour, self-rated health, and pain interference with work activities and physical activities. Long-term users were... ...significantly associated with initiation of L-TOT in individuals with CNCP at baseline. During follow-up, L-TOT in CNCP patients increased the likelihood of negative changes in pain interference with work (OR 9.2; 95% CI 1.9-43.6) and in moderate activities (OR 3.7; 95% CI 1.1-12.6). The analysis of all... individuals indicated a dose-response relationship between longer treatment duration and the risk of experiencing negative changes. CONCLUSIONS: Individuals on L-TOT seemed not to achieve the key goals of opioid therapy: pain relief, improved quality of life and functional capacity. SIGNIFICANCE: Long...

  16. Long Term Planning at IQ Metal

    DEFF Research Database (Denmark)

    2017-01-01

    This is a Danish version. This case about long-term planning at the owner-managed manufacturing firm IQ Metal shows how the future management and ownership may be organized to utilize owner assets and minimize roadblocks. Initially, the owner-manager Bo Fischer Larsen explains how he acquired a stake in the company in 2007, at which time it was named Braendstrup Maskinfabrik. He furthermore explains how he has developed the company based on a strategic plan focusing on professionalization and outsourcing. Next, the video shows how Bo Fischer Larsen's replies to the questions in the Owner Strategy Map are typed into the questionnaire available on www.ejerstrategi-kortet.dk. Lastly, the Owner Strategy Map's recommendation on how to organize the future management and ownership of IQ Metal is explained.

  17. Long-term Consequences of Early Parenthood

    DEFF Research Database (Denmark)

    Johansen, Eva Rye; Nielsen, Helena Skyt; Verner, Mette

    Having children at an early age is known to be associated with unfavorable economic outcomes, such as lower education, employment and earnings. In this paper, we study the long-term consequences of early parenthood for mothers and fathers. Our study is based on rich register-based data that, importantly, merges all childbirths to the children’s mothers and fathers, allowing us to study the consequences of early parenthood for both parents. We perform a sibling fixed effects analysis in order to account for unobserved family attributes that are possibly correlated with early parenthood... ...(and to a lesser extent employment), as fathers appear to support the family, especially when early parenthood is combined with cohabitation with the mother and the child. Heterogeneous effects reveal that individuals with a more favorable socioeconomic background are affected more severely than...

  18. Grid computing

    CERN Multimedia

    2007-01-01

    "Some of today's large-scale scientific activities - modelling climate change, Earth observation, studying the human genome and particle physics experiments - involve handling millions of bytes of data very rapidly." (1 page)

  19. Managing Records for the Long Term - 12363

    Energy Technology Data Exchange (ETDEWEB)

    Montgomery, John V. [U.S. Department of Energy, Office of Legacy Management, Morgantown, West Virginia (United States); Gueretta, Jeanie [U.S. Department of Energy, Office of Legacy Management, Grand Junction, Colorado (United States)

    2012-07-01

    The U.S. Department of Energy (DOE) is responsible for managing vast amounts of information documenting historical and current operations. This information is critical to the operations of the DOE Office of Legacy Management. Managing legacy records and information is challenging in terms of accessibility and changing technology. The Office of Legacy Management is meeting these challenges by making records and information management an organizational priority. The Office of Legacy Management mission is to manage DOE post-closure responsibilities at former Cold War weapons sites to ensure the future protection of human health and the environment. These responsibilities include environmental stewardship and long-term preservation and management of operational and environmental cleanup records associated with each site. A primary organizational goal for the Office of Legacy Management is to 'Preserve, Protect, and Share Records and Information'. Managing records for long-term preservation is an important responsibility. Adequate and dedicated resources and management support are required to perform this responsibility successfully. Records tell the story of an organization and may be required to defend an organization in court, provide historical information, identify lessons learned, or provide valuable information for researchers. Loss of records or the inability to retrieve records because of poor records management processes can have serious consequences and even lead to an organisation's downfall. Organizations must invest time and resources to establish a good records management program because of its significance to the organization as a whole. The Office of Legacy Management will continue to research and apply innovative ways of doing business to ensure that the organization stays at the forefront of effective records and information management. DOE is committed to preserving records that document our nation's Cold War legacy, and the

  20. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems.

    Directory of Open Access Journals (Sweden)

    Hajara Idris

    Full Text Available The Grid scheduler schedules user jobs on the best available resource, in terms of resource characteristics, by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, in order to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, together with a checkpoint-based rollback recovery strategy. Checkpointing aims to reduce the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time.
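
    The paper's algorithm is not reproduced in the abstract; the sketch below is only a rough illustration of the general idea of biasing an ACO-style scheduler away from failure-prone resources: the desirability of a resource combines pheromone, speed and (1 − failure rate), and a checkpoint interval limits how much work is redone when a simulated failure occurs. All names and numbers are invented.

        import random

        rng = random.Random(42)

        resources = [  # name, relative speed, historical failure rate per job
            {"name": "R1", "speed": 1.0, "fail_rate": 0.02},
            {"name": "R2", "speed": 1.6, "fail_rate": 0.25},
            {"name": "R3", "speed": 0.8, "fail_rate": 0.01},
        ]
        pheromone = {r["name"]: 1.0 for r in resources}

        def choose_resource(alpha=1.0, beta=2.0):
            """Probabilistic ACO-style choice: pheromone^alpha * (speed * reliability)^beta."""
            weights = [pheromone[r["name"]] ** alpha *
                       (r["speed"] * (1.0 - r["fail_rate"])) ** beta for r in resources]
            return rng.choices(resources, weights=weights, k=1)[0]

        def run_job(length, checkpoint_every=10):
            """Run a job of `length` work units with checkpointing; return (resource, time spent)."""
            res = choose_resource()
            done, elapsed = 0, 0.0
            while done < length:
                step = min(checkpoint_every, length - done)
                elapsed += step / res["speed"]
                if rng.random() < res["fail_rate"] / 10:   # chance of a failure in this interval
                    continue                               # roll back to last checkpoint, redo step
                done += step                               # checkpoint reached, progress is saved
            # reinforce pheromone in proportion to how quickly the resource finished
            pheromone[res["name"]] += length / elapsed * 0.1
            return res["name"], elapsed

        for name, t in (run_job(length=50) for _ in range(20)):
            print(f"job ran on {name}, time {t:5.1f}")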

  1. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its flow control algorithm, which allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for every type of application, for example bulk data transfer over high-speed, long-distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for numerous reasons it cannot efficiently deal with today's growing technologies (such as wide-area Grid computing and optical-fiber networks). This work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over future high-speed Grid computing and optical-fiber networks. It also presents simulations comparing the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high-speed networks. The simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve the overall performance of a network and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)
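
    The paper's simulations are not reproduced here; as a flavour of why FAST TCP behaves differently from loss-driven TCP, the sketch below iterates what is commonly cited as the FAST window update, w ← min{2w, (1 − γ)w + γ(baseRTT/RTT · w + α)}, on a toy fixed-capacity path. The path model and parameter values are invented, and this is not the paper's simulation setup.

        # Toy single-flow model: queueing delay grows with the data in flight
        # beyond the bandwidth-delay product (BDP).
        base_rtt = 0.05          # propagation RTT in seconds
        capacity = 1000.0        # packets per base RTT (so BDP = 1000 packets)
        alpha = 200.0            # target number of packets buffered in the network
        gamma = 0.5              # smoothing factor

        w = 10.0                 # initial congestion window in packets
        for step in range(40):
            queue = max(0.0, w - capacity)              # packets queued beyond the BDP
            rtt = base_rtt * (1.0 + queue / capacity)   # queueing inflates the measured RTT
            w = min(2.0 * w, (1.0 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))
            if step % 5 == 0:
                print(f"step {step:2d}: window = {w:7.1f} packets, RTT = {rtt * 1000:5.1f} ms")
        # The window settles near BDP + alpha packets, i.e. alpha packets kept in the queue.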

  2. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust, yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. The extensible anatomy of a Grid architecture will then be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  3. Solution of Poisson equations for 3-dimensional grid generations. [computations of a flow field over a thin delta wing

    Science.gov (United States)

    Fujii, K.

    1983-01-01

    A method for generating three-dimensional finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
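
    As a much-simplified, two-dimensional illustration of elliptic grid generation (plain Laplace smoothing of interior grid-point coordinates, without the paper's inhomogeneous control terms, spherical variables or ADI scheme), the sketch below relaxes an algebraic initial grid between a curved lower wall and a flat upper boundary. The boundary shape and grid size are arbitrary.

        import numpy as np

        ni, nj = 21, 11
        x = np.zeros((ni, nj))
        y = np.zeros((ni, nj))

        # Boundaries: curved lower wall, flat upper boundary, straight side boundaries
        xi = np.linspace(0.0, 1.0, ni)
        x[:, 0] = x[:, -1] = xi
        y[:, 0] = 0.1 * np.sin(np.pi * xi)        # curved lower wall
        y[:, -1] = 1.0                            # flat upper boundary
        for j, eta in enumerate(np.linspace(0.0, 1.0, nj)):
            x[0, j], x[-1, j] = 0.0, 1.0
            y[0, j] = (1 - eta) * y[0, 0] + eta * 1.0
            y[-1, j] = (1 - eta) * y[-1, 0] + eta * 1.0

        # Initial guess: straight-line interpolation between the lower and upper boundaries
        for j, eta in enumerate(np.linspace(0.0, 1.0, nj)):
            x[1:-1, j] = (1 - eta) * x[1:-1, 0] + eta * x[1:-1, -1]
            y[1:-1, j] = (1 - eta) * y[1:-1, 0] + eta * y[1:-1, -1]

        # Laplace smoothing: each interior point moves to the average of its four neighbours
        for _ in range(500):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])

        print("sample grid line (j = 5):")
        print(np.round(np.column_stack([x[:, 5], y[:, 5]]), 3))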

  4. Modelling the long-term deployment of electricity storage in the global energy system

    International Nuclear Information System (INIS)

    Despres, Jacques

    2015-01-01

    The current development of wind and solar power sources calls for an improvement of long-term energy models. Indeed, high shares of variable wind and solar production have short- and long-term impacts on the power system, requiring the development of flexibility options: fast-reacting power plants, demand response, grid enhancement or electricity storage. Our first main contribution is the modelling of electricity storage and grid expansion in the POLES model (Prospective Outlook on Long-term Energy Systems). We set up new investment mechanisms, where storage development is based on several combined economic values. After categorising the long-term energy models and the power sector modelling tools in a common typology, we showed the need for a better integration of both approaches. Therefore, the second major contribution of our work is the yearly coupling of POLES to a short-term optimisation of power sector operation, the European Unit Commitment and Dispatch model (EUCAD). The two-way data exchange allows the coherent long-term scenarios of POLES to be directly backed by the short-term technical detail of EUCAD. Our results forecast a strong and rather quick development of the cheapest flexibility options: grid interconnections, pumped hydro storage and demand response programs, including electric vehicle charging optimisation and vehicle-to-grid storage. The more expensive battery storage presumably finds enough system value in the second half of the century. A sensitivity analysis shows that improving the fixed costs of batteries has a greater impact on investments than improving their efficiency. We also show the explicit dependency between storage and variable renewable energy sources. (author) [fr]

  5. Computational model for turbulent flow around a grid spacer with mixing vane

    International Nuclear Information System (INIS)

    Tsutomu Ikeno; Takeo Kajishima

    2005-01-01

    Turbulent mixing coefficient and pressure drop are important factors in subchannel analysis for predicting the onset of DNB. However, universal correlations are difficult to obtain, since these factors are significantly affected by the geometry of the subchannel and of the grid spacer with mixing vane. Therefore, we propose a computational model to estimate these factors. Computational model: To represent the effect of the grid spacer geometry in the computational model, we applied a large eddy simulation (LES) technique coupled with an improved immersed-boundary method. In our previous work (Ikeno et al., NURETH-10), detailed properties of turbulence in a subchannel were successfully investigated by developing the immersed-boundary method in LES. In this study, additional improvements are made: a new one-equation dynamic sub-grid scale (SGS) model is introduced to account for the complex geometry without any artificial modification, and higher-order accuracy is maintained by consistent treatment of the boundary conditions for velocity and pressure. NUMERICAL TEST AND DISCUSSION: Turbulent mixing coefficient and pressure drop are strongly affected by the arrangement and inclination of the mixing vane. Therefore, computations are carried out for each of the convolute and periodic arrangements, and for each of the 30 degree and 20 degree inclinations. The difference in turbulent mixing coefficient due to these factors is reasonably predicted by our method. (An example of this numerical test is shown in Fig. 1.) The turbulent flow in this problem includes unsteady separation behind the mixing vane and vortex shedding downstream. An anisotropic distribution of turbulent stress also appears in the rod gap. Therefore, our computational model is well suited for assessing the influence of the arrangement and inclination of the mixing vane. With a coarser computational mesh, one can screen several candidates for spacer design; then, with a finer mesh, more quantitative analysis is possible. By such a scheme, we believe this method is useful

  6. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    Science.gov (United States)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  7. Long-term potentiation and long-term depression: a clinical perspective

    Directory of Open Access Journals (Sweden)

    Timothy V.P. Bliss

    2011-01-01

    Full Text Available Long-term potentiation and long-term depression are enduring changes in synaptic strength, induced by specific patterns of synaptic activity, that have received much attention as cellular models of information storage in the central nervous system. Work in a number of brain regions, from the spinal cord to the cerebral cortex, and in many animal species, ranging from invertebrates to humans, has demonstrated a reliable capacity for chemical synapses to undergo lasting changes in efficacy in response to a variety of induction protocols. In addition to their physiological relevance, long-term potentiation and depression may have important clinical applications. A growing insight into the molecular mechanisms underlying these processes, and technological advances in non-invasive manipulation of brain activity, now puts us at the threshold of harnessing long-term potentiation and depression and other forms of synaptic, cellular and circuit plasticity to manipulate synaptic strength in the human nervous system. Drugs may be used to erase or treat pathological synaptic states and non-invasive stimulation devices may be used to artificially induce synaptic plasticity to ameliorate conditions arising from disrupted synaptic drive. These approaches hold promise for the treatment of a variety of neurological conditions, including neuropathic pain, epilepsy, depression, amblyopia, tinnitus and stroke.

  8. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

    Highlights: • The intelligent battery energy management substantially reduces the interactions of PEVs with parking lots. • The intelligent battery energy management improves the energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles. - Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emissions. PEVs need to draw and store energy from an electrical grid to supply the propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control are imperative for PEVs, as the vehicle operation and even the safety of passengers depend on the battery system. Thus, scheduling grid electricity with parking lots is needed for efficient charging and discharging of PEV batteries. This paper proposes a new intelligent battery energy management and control charging-scheduling service that utilizes cloud computing networks. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary to allow PEV battery energy management systems to operate efficiently when the numbers of PEVs and charging devices are large. Experimental analyses of the proposed scheduling service, as compared to a traditional scheduling service, are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, and predicts the load demand in advance with regard to their limitations. They also show that the intelligent charging-scheduling service using a cloud computing network is more efficient than the traditional scheduling service for battery energy management and control.
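
    The paper's scheduling algorithm is not described in the abstract; the sketch below is only a generic illustration of the deadline-aware charge scheduling such a service must perform: each vehicle's required energy is placed into the cheapest hours before its departure, subject to a per-slot charger power limit. Prices, arrival/departure times and power limits are invented.

        def schedule_charging(vehicles, prices, max_kw_per_slot=7.0):
            """Greedy deadline-aware schedule: fill each vehicle's cheapest feasible hours first."""
            plan = {v["id"]: [0.0] * len(prices) for v in vehicles}
            for v in vehicles:
                slots = range(v["arrival"], v["departure"])          # hours the PEV is plugged in
                needed = v["energy_kwh"]
                for hour in sorted(slots, key=lambda h: prices[h]):   # cheapest hours first
                    charge = min(max_kw_per_slot, needed)             # 1-hour slots -> kW == kWh
                    plan[v["id"]][hour] = charge
                    needed -= charge
                    if needed <= 0:
                        break
                if needed > 0:
                    print(f"warning: {v['id']} cannot be fully charged before departure")
            return plan

        prices = [0.30, 0.28, 0.25, 0.15, 0.12, 0.10, 0.11, 0.14,   # hypothetical hourly tariff
                  0.22, 0.27, 0.31, 0.33]
        vehicles = [
            {"id": "PEV-1", "arrival": 0, "departure": 8, "energy_kwh": 20.0},
            {"id": "PEV-2", "arrival": 2, "departure": 10, "energy_kwh": 15.0},
        ]

        for vid, profile in schedule_charging(vehicles, prices).items():
            print(vid, [round(p, 1) for p in profile])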

  9. WISDOM-II: Screening against multiple targets implicated in malaria using computational grid infrastructures

    Directory of Open Access Journals (Sweden)

    Kenyon Colin

    2009-05-01

    Full Text Available Abstract Background Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Motivation Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. Methods In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. Results On the computational side, a sustained infrastructure has been developed: docking at large scale, use of different strategies in result analysis, storing of the results on the fly into MySQL databases, application of molecular dynamics refinement, and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro experiments are under way for all the targets against which screening was performed. Conclusion The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software
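
    As a toy illustration of the workflow pattern described (embarrassingly parallel docking tasks whose scores are stored on the fly in a database for later rescoring), the sketch below fans out a fake 'docking' function over a list of compound identifiers and writes each score into SQLite as soon as it is available; the real deployment used FlexX on grid resources and MySQL, neither of which is touched here, and every name and score is invented.

        import sqlite3
        import random
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def dock(compound_id, target):
            """Placeholder for a docking run; returns a fake binding score."""
            rng = random.Random(hash((compound_id, target)))
            return compound_id, target, round(-5.0 - 7.0 * rng.random(), 2)

        def screen(compounds, target, db_path="screening.db"):
            db = sqlite3.connect(db_path)
            db.execute("CREATE TABLE IF NOT EXISTS results "
                       "(compound TEXT, target TEXT, score REAL)")
            with ProcessPoolExecutor() as pool:
                futures = [pool.submit(dock, c, target) for c in compounds]
                for fut in as_completed(futures):            # store each result on the fly
                    db.execute("INSERT INTO results VALUES (?, ?, ?)", fut.result())
                    db.commit()
            best = db.execute("SELECT compound, score FROM results "
                              "WHERE target = ? ORDER BY score LIMIT 5", (target,)).fetchall()
            db.close()
            return best

        if __name__ == "__main__":
            hits = screen([f"CPD-{i:04d}" for i in range(200)], target="DHFR")
            print("top candidates for rescoring:", hits)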

  10. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  11. Emulation of long-term changes in global climate: Application to the late Pliocene and future

    KAUST Repository

    Lord, Natalie S.

    2017-04-26

    Multi-millennial transient simulations of climate changes have a range of important applications, such as for investigating key geologic events and transitions for which high-resolution palaeoenvironmental proxy data are available, or for projecting the long-term impacts of future climate evolution on the performance of geological repositories for the disposal of radioactive wastes. However, due to the high computational requirements of current fully coupled general circulation models (GCMs), long-term simulations can generally only be performed with less complex models and/or at lower spatial resolution. In this study, we present novel long-term

  12. Emulation of long-term changes in global climate: application to the late Pliocene and future

    KAUST Repository

    Lord, Natalie S.

    2017-11-16

    Multi-millennial transient simulations of climate changes have a range of important applications, such as for investigating key geologic events and transitions for which high-resolution palaeoenvironmental proxy data are available, or for projecting the long-term impacts of future climate evolution on the performance of geological repositories for the disposal of radioactive wastes. However, due to the high computational requirements of current fully coupled general circulation models (GCMs), long-term simulations can generally only be performed with less complex models and/or at lower spatial resolution. In this study, we present novel long-term

  13. Emulation of long-term changes in global climate: application to the late Pliocene and future

    KAUST Repository

    Lord, Natalie S.; Crucifix, Michel; Lunt, Dan J.; Thorne, Mike C.; Bounceur, Nabila; Dowsett, Harry; O'Brien, Charlotte L.; Ridgwell, Andy

    2017-01-01

    Multi-millennial transient simulations of climate changes have a range of important applications, such as for investigating key geologic events and transitions for which high-resolution palaeoenvironmental proxy data are available, or for projecting the long-term impacts of future climate evolution on the performance of geological repositories for the disposal of radioactive wastes. However, due to the high computational requirements of current fully coupled general circulation models (GCMs), long-term simulations can generally only be performed with less complex models and/or at lower spatial resolution. In this study, we present novel long-term

  14. Emulation of long-term changes in global climate: Application to the late Pliocene and future

    KAUST Repository

    Lord, Natalie S.; Crucifix, Michel; Lunt, Dan J.; Thorne, Mike C.; Bounceur, Nabila; Dowsett, Harry; O'Brien, Charlotte L.; Ridgwell, Andy

    2017-01-01

    Multi-millennial transient simulations of climate changes have a range of important applications, such as for investigating key geologic events and transitions for which high-resolution palaeoenvironmental proxy data are available, or for projecting the long-term impacts of future climate evolution on the performance of geological repositories for the disposal of radioactive wastes. However, due to the high computational requirements of current fully coupled general circulation models (GCMs), long-term simulations can generally only be performed with less complex models and/or at lower spatial resolution. In this study, we present novel long-term

  15. Long-Term Clock Behavior of GPS IIR Satellites

    National Research Council Canada - National Science Library

    Epstein, Marvin; Dass, Todd; Rajan, John; Gilmour, Paul

    2007-01-01

    .... Rubidium clocks, as opposed to cesium clocks, have significant long-term drift. The current literature describes an initial model of drift aging for rubidium atomic clocks followed by a long-term characteristic...

  16. Elevated rheumatoid factor and long term risk of rheumatoid arthritis

    DEFF Research Database (Denmark)

    Nielsen, Sune F; Bojesen, Stig E; Schnohr, Peter

    2012-01-01

    To test whether an elevated concentration of rheumatoid factor is associated with long-term development of rheumatoid arthritis.

  17. The Future of Distributed Computing Systems in ATLAS: Boldly Venturing Beyond Grids

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    The Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes over the past couple of years to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous, at diverse scales and with diverse interfaces. Up to a fifth of the resources available to ATLAS are of such new types and require special techniques for integration into PanDA. In this talk, we present the nature and scale of these resources. We provide an overview of the various challenges faced, spanning infrastructure, software distribution, workload requirements, scaling requirements, workflow management, data management, network provisioning, and associated software and computing facilities. We describe the strategies for integrating these heterogeneous resources into ...

  18. Effect of long-term impact-loading on mass, size, and estimated strength of humerus and radius of female racquet-sports players: a peripheral quantitative computed tomography study between young and old starters and controls.

    Science.gov (United States)

    Kontulainen, Saija; Sievänen, Harri; Kannus, Pekka; Pasanen, Matti; Vuori, Ilkka

    2003-02-01

    % greater in young starters compared with that of the old starters and 14% compared with that in controls, whereas the difference between old starters and controls was 6%, in favor of the former. All these between-group differences were statistically significant. At the distal radius, the player groups differed significantly from controls in the side-to-side bone mineral content, TrD, and aBMD differences only: the young starters' bone mineral content difference was 9% greater, TrD and aBMD differences were 5% greater than those in the controls, and the old starters' TrD and aBMD differences were both 7% greater than those in the controls. In summary, in both of the female player groups, the structural adaptation of the humeral shaft to long-term loading seemed to be achieved through periosteal enlargement of the bone cortex, although this adaptation was clearly better in the young starters. Exercise-induced cortical enlargement was not so clear at the distal radius (a trabecular bone site), and the study suggested that at long bone ends, the trabecular density could be a modifiable factor to built a stronger bone structure. Conventional DXA-based aBMD measurement detected the intergroup differences in the exercise-induced bone gains, although, because it measured two dimensions of bone only, it seemed to underestimate the effect of exercise on the apparent bone strength, especially if the playing had been started during the growing years.

  19. Forecasting Model for Network Throughput of Remote Data Access in Computing Grids

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration

    2018-01-01

    Computing grids are one of the key enablers of eScience. Researchers from many fields (e.g. High Energy Physics, Bioinformatics, Climatology, etc.) employ grids to run computational jobs in a highly distributed manner. The current state of the art approach for data access in the grid is data placement: a job is scheduled to run at a specific data center, and its execution starts only when the complete input data has been transferred there. This approach has two major disadvantages: (1) the jobs are staying idle while waiting for the input data; (2) due to the limited infrastructure resources, the distributed data management system handling the data placement, may queue the transfers up to several days. An alternative approach is remote data access: a job may stream the input data directly from storage elements, which may be located at local or remote data centers. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined...

  20. From the CERN web: grid computing, night shift, ridge effect and more

    CERN Multimedia

    2015-01-01

    This section highlights articles, blog posts and press releases published in the CERN web environment over the past weeks. This way, you won’t miss a thing...   Schoolboy uses grid computing to analyse satellite data 9 December - by David Lugmayer  At just 16, Cal Hewitt, a student at Simon Langton Grammar School for Boys in the United Kingdom became the youngest person to receive grid certification – giving him access to huge grid-computing resources. Hewitt uses these resources to help analyse data from the LUCID satellite detector, which a team of students from the school launched into space last year. Continue to read…    Night shift in the CMS Control Room (Photo: Andrés Delannoy). On Seagull Soup and Coffee Deficiency: Night Shift at CMS 8 December – CMS Collaboration More than half a year, a school trip to CERN, and a round of 13 TeV collisions later, the week-long internship we completed at CMS over E...

  1. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network based on the dispatching and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computer centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching them to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for keeping a copy of the raw data and for processing them in order to extract physically meaningful results and to transfer those results to the 150 Tier 2 centres. A Tier 2 operates at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 sites, located in individual laboratories, provide a complementary, local resource to the Tier 2s for data analysis. (A.C.)

  2. Long term results of mandibular distraction

    Directory of Open Access Journals (Sweden)

    Batra Puneet

    2006-03-01

    Full Text Available Mandibular distraction osteogenesis has become a popular surgical modality due to its many advantages over conventional orthognathic surgical procedures. However, in spite of the technique having been used for over 15 years, no concrete long term results are available regarding the stability of results. We discuss the various studies which have reported either in favour of or against the stability of results after distraction. We report a series of 6 cases (3 unilateral and 3 bilateral) where distraction was carried out before puberty and followed them for up to seven years after removal of the distractors. This case series shows that the results achieved by distraction osteogenesis are unstable or at best unpredictable with respect to producing a permanent size increase in the mandible. The role of distraction osteogenesis in overcoming the pterygomasseteric sling is questionable. We suggest a multicenter study with adequate patient numbers treated with a similar protocol and documented after growth cessation to reach meaningful conclusions in the debate of distraction osteogenesis versus orthognathic surgery.

  3. [Perioperative management of long-term medication].

    Science.gov (United States)

    Vogel Kahmann, I; Ruppen, W; Lurati Buse, G; Tsakiris, D A; Bruggisser, M

    2011-01-01

    Anesthesiologists and surgeons are increasingly faced with patients who are under long-term medication. Some of these drugs can interact with anaesthetics or anaesthesia and/or surgical interventions. As a result, patients may experience complications such as bleeding, ischemia, infection or severe circulatory reactions. On the other hand, perioperative discontinuation of medication is often more dangerous. The proportion of outpatient operations has increased dramatically in recent years and will probably continue to increase. Since the implementation of DRGs (pending in Switzerland, introduced in Germany for some time), the patient enters the hospital the day before operation. This means that the referring physician as well as anesthesiologists and surgeons at an early stage must deal with issues of perioperative pharmacotherapy. This review article is about the management of the major drug classes during the perioperative period. In addition to cardiac and centrally acting drugs and drugs that act on hemostasis and the endocrine system, special cases such as immunosuppressants and herbal remedies are mentioned.

  4. Long term agreements energy efficiency. Progress 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-11-01

    Long Term Agreements (LTAs) on energy efficiency have been contracted with various business sectors since 1992, as part of energy conservation policy: industrial sectors, commercial services, agrarian sectors and non-profit services. LTAs are voluntary agreements between a specific sector and the Minister of Economic Affairs. In some cases, the Minister of Agriculture, Nature Management and Fisheries is also involved. The sector commits to an effort to improve energy efficiency by a particular percentage within an agreed period. As at 31 December 1999, a total of 29 LTAs had been contracted with industrial sectors and 14 with non-industrial ones. This report describes the progress of the LTAs in 1999. It reviews the energy efficiency improvements realised through the LTAs, both overall and in each individual sector. The aim is to make the efforts and results in the various sectors accessible to the general public. Appendix 1 describes the positioning of the LTA instrument. This Appendix provides an insight into the position of the LTAs within the overall set of policy instruments. It also covers the subsidy schemes and fiscal instruments that support the LTAs, the relationships between LTAs and environmental policy and new developments relating to the LTAs in the years ahead. Appendices 2 to 6 contain the reports on the LTAs and a list of abbreviations (Appendix 7)

  5. Long-term outcome of neuroparacoccidioidomycosis treatment

    Directory of Open Access Journals (Sweden)

    Fabio Francesconi

    2011-02-01

    Full Text Available INTRODUCTION: Neuroparacoccidioidomycosis (NPCM) is a term used to describe the invasion of the central nervous system by the pathogenic fungus Paracoccidioides brasiliensis. NPCM has been described sporadically in some case reports and small case series, with little or no focus on treatment outcome and long-term follow-up. METHODS: All patients with NPCM from January 1991 to December 2006 were analyzed and were followed until December 2009. RESULTS: Fourteen (3.8%) cases of NPCM were identified out of 367 patients with paracoccidioidomycosis (PCM). A combination of oral fluconazole and sulfamethoxazole/trimethoprim (SMZ/TMP) was the regimen of choice, with no documented death due to Paracoccidioides brasiliensis infection. Residual neurological deficits were observed in 8 patients. Residual calcification was a common finding in neuroimaging follow-up. CONCLUSIONS: All the patients in this study responded positively to the association of oral fluconazole and sulfamethoxazole/trimethoprim, a regimen that should be considered a treatment option in cases of NPCM. Neurological sequela was a relatively common finding. For proper management of these patients, anticonvulsant treatment and physical therapy support were also needed.

  6. Long term prospects for world gas trade

    International Nuclear Information System (INIS)

    Linder, P.T.

    1991-01-01

    Results are presented from a world gas trade model used to forecast long term gas markets. Assumptions that went into the model are described, including the extent of current proven gas reserves, production ratios, total energy and gas demand, gas supply cost curves for each producing country, available gas liquefaction and transportation facilities, and liquefied natural gas (LNG) shipping costs. The results indicate that even with generally very low supply costs for most gas producing basins, gas trade will continue to be restricted by the relatively high cost of transportation, whether by pipeline or tanker. As a consequence, future gas trade will tend to be regionally oriented. United States gas imports will come mostly from Canada, Venezuela, and Mexico; Western Europe will largely be supplied by the Soviet Union and Africa, and Japan's requirements will generally be met by Pacific Rim producers. Although the Middle East has vast quantities of gas reserves, its export growth will continue to be hampered by its remote location from major markets. 16 figs

  7. Long term results of childhood dysphonia treatment.

    Science.gov (United States)

    Mackiewicz-Nartowicz, Hanna; Sinkiewicz, Anna; Bielecka, Arleta; Owczarzak, Hanna; Mackiewicz-Milewska, Magdalena; Winiarski, Piotr

    2014-05-01

    The aim of this study was to assess the long term results of treatment and rehabilitation of childhood dysphonia. This study included a group of adolescents (n=29) aged from 15 to 20 who were treated due to pediatric hyperfunctional dysphonia and soft vocal fold nodules during their pre-mutational period (i.e. between 5 and 12 years of age). The pre-mutational therapy was comprised of proper breathing pattern training, voice exercises and psychological counseling. Laryngostroboscopic examination and perceptual analysis of voice were performed in each patient before treatment and one to four years after mutation was complete. The laryngostroboscopic findings, i.e. symmetry, amplitude, mucosal wave and vocal fold closure, were graded with NAPZ scale, and the GRBAS scale was used for the perceptual voice analysis. Complete regression of the childhood dysphonia was observed in all male patients (n=14). Voice disorders regressed completely also in 8 out of 15 girls, but symptoms of dysphonia documented on perceptual scale persisted in the remaining seven patients. Complex voice therapy implemented in adolescence should be considered as either the treatment or preventive measure of persistent voice strain, especially in girls. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Long-term data storage in diamond

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center’s charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies. PMID:27819045

  9. Long-term data storage in diamond.

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A

    2016-10-01

    The negatively charged nitrogen vacancy (NV - ) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV - optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV - ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center's charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV - ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies.

  10. Long-term predictions using natural analogues

    International Nuclear Information System (INIS)

    Ewing, R.C.

    1995-01-01

    One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10^3-10^5 years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10^3-10^8 years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments

  11. Long-term effects of islet transplantation.

    Science.gov (United States)

    Holmes-Walker, D Jane; Kay, Thomas W H

    2016-10-01

    Islet transplantation has made great progress in recent years. This is a remarkable technical feat but raises the question of what the long-term benefits and risks are for type I diabetes recipients. Graft survival continues to improve, and recent multicenter studies show that islet transplantation is particularly effective in preventing hypoglycemic events, even in those who do not become insulin-independent, and in achieving excellent glycemic control. Concerns include histocompatibility leucocyte antigen (HLA) sensitization and other risks, including those from immunosuppression, that islet transplantation shares with other forms of allotransplantation. Reversal of hypoglycemia unawareness and protection from severe hypoglycemia events are two of the main benefits of islet transplantation and they persist for the duration of graft function. Islet transplantation compares favorably with other therapies for those with hypoglycemia unawareness, although new technologies have not been tested head-to-head with transplantation. HLA sensitization increases with time after transplantation, especially if immunosuppression is ceased, and is a risk for those who may require future transplantation as well as being associated with loss of graft function.

  12. Containment long-term operational integrity

    International Nuclear Information System (INIS)

    Sammataro, R.F.

    1990-01-01

    Periodic integrated leak rate tests are required to assure that containments continue to meet allowable leakage limits. Although overall performance has been quite good to date, several major containment aging and degradation mechanisms have been identified. Two pilot plant life extension (PLEX) studies serve as models for extending the operational integrity of present containments for light-water cooled nuclear power plants in the United States. One study is for a Boiling-Water Reactor (BWR) and the second is for a Pressurized-Water Reactor (PWR). Research and testing programs for determining the ultimate pressure capacity and failure mechanisms for containments under severe loading conditions and studies for extending the life of current plants beyond the present 40-year licensed lifetime are under way. This paper presents an overview of containment designs in the United States. Also presented are a discussion of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code (ASME Code) and regulatory authority requirements for the design, construction, inservice inspection, leakage testing and repair of steel and concrete containments. Findings for containments from the pilot PLEX studies and continuing containment integrity research and testing programs are discussed. The ASME Code and regulatory requirements together with recommendations from the PLEX studies and containment integrity research and testing provide a basis for continued containment long-term operational integrity. (orig./GL)

  13. Long term results of compression sclerotherapy.

    Science.gov (United States)

    Labas, P; Ohradka, B; Cambal, M; Reis, R; Fillo, J

    2003-01-01

    To compare the short and long term results of different techniques of compression sclerotherapy. In the past 10 years the authors treated 1622 pts for chronic venous insufficiency. There were 3 groups of patients: 1) Pts treated by Sigg's technique using Aethoxysclerol, 2) Pts treated by Fegan's technique with Fibrovein, and 3) Pts treated by Fegan's procedure, but using a combination of both sclerosants. In all cases, the techniques of empty vein, bubble air, uninterrupted 6-week compression and forced mobilisation were used. In the group of pts treated by Sigg's procedure, the average cure rate was 67.47% after 6 months and 60.3% after 5 years of follow-up. In Fegan's group this rate was 83.6% after 6 months and 78.54% at the 5-year assessment. Statistically significant differences were found only for the disappearance of varices and the reduction of pain, in favour of Fegan's technique. In the group of pts treated by Fegan's procedure (Aethoxysclerol + Fibrovein) this rate after 5 years was 86%. The only statistically significant difference was found for the disappearance of varices, in favour of Fegan's technique using a combination of 2 detergent sclerosants. Sclerotherapy is effective when properly executed in any length of vein, no matter how dilated it has become. The recurrences are attributed more to inadequate technique than to shortcomings of the procedure. Sclerotherapy is minimally invasive, with few complications, and can be repeated on an out-patient basis. (Tab. 1, Ref. 22.).

  14. Transuranic waste: long-term planning

    International Nuclear Information System (INIS)

    Young, K.C.

    1985-07-01

    Societal concerns for the safe handling and disposal of toxic waste are behind many of the regulations and the control measures in effect today. Transuranic waste, a specific category of toxic (radioactive) waste, serves as a good example of how regulations and controls impact changes in waste processing - and vice versa. As problems would arise with waste processing, changes would be instituted. These changes improved techniques for handling and disposal of transuranic waste, reduced the risk of breached containment, and were usually linked with regulatory changes. Today, however, we face a greater public awareness of and concern for toxic waste control; thus, we must anticipate potential problems and work on resolving them before they can become real problems. System safety analyses are valuable aids in long-term planning for operations involving transuranic as well as other toxic materials. Examples of specific system safety analytical methods demonstrate how problems can be anticipated and resolution initiated in a timely manner having minimal impacts upon allocation of resource and operational goals. 7 refs., 1 fig

  15. Long-term plutonium storage: Design concepts

    International Nuclear Information System (INIS)

    Wilkey, D.D.; Wood, W.T.; Guenther, C.D.

    1994-01-01

    An important part of the Department of Energy (DOE) Weapons Complex Reconfiguration (WCR) Program is the development of facilities for long-term storage of plutonium. The WCR design goals are to provide storage for metals, oxides, pits, and fuel-grade plutonium, including material being held as part of the Strategic Reserve and excess material. Major activities associated with plutonium storage are sorting the plutonium inventory, material handling and storage support, shipping and receiving, and surveillance of material in storage for both safety evaluations and safeguards and security. A variety of methods for plutonium storage have been used, both within the DOE weapons complex and by external organizations. This paper discusses the advantages and disadvantages of proposed storage concepts based upon functional criteria. The concepts discussed include floor wells, vertical and horizontal sleeves, warehouse storage on vertical racks, and modular storage units. Issues/factors considered in determining a preferred design include operational efficiency, maintenance and repair, environmental impact, radiation and criticality safety, safeguards and security, heat removal, waste minimization, international inspection requirements, and construction and operational costs

  16. Effective Moment Of Inertia And Deflections Of Reinforced Concrete Beams Under Long-Term Loading

    OpenAIRE

    Mahmood, Khalid M.; Ashour, Samir A.; Al-Noury, Soliman I.

    1995-01-01

    The paper presents a method for estimating long-term deflections of reinforced concrete beams by considering creep and shrinkage effects separately. Based on equilibrium and compatibility conditions a method is developed for investigating the properties of a cracked transformed section under sustained load. The concept of effective moment of inertia is extended to predict initial-plus-creep deflections. Long-term deflections computed by the proposed method are compared with the experimental r...
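
    The abstract does not reproduce the governing expression, but the "effective moment of inertia" it extends is conventionally written in the Branson form, quoted here for orientation only (the paper's creep extension modifies how it is applied under sustained load):

```latex
% Branson-type effective moment of inertia (classical form; the paper extends
% this idea to initial-plus-creep deflections under sustained load)
\[
  I_e \;=\; \left(\frac{M_{cr}}{M_a}\right)^{3} I_g
        \;+\; \left[\,1-\left(\frac{M_{cr}}{M_a}\right)^{3}\right] I_{cr}
        \;\le\; I_g ,
\]
% M_cr : cracking moment            M_a  : maximum moment under service load
% I_g  : gross moment of inertia    I_cr : cracked transformed moment of inertia
```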

  17. Data grids a new computational infrastructure for data-intensive science

    CERN Document Server

    Avery, P

    2002-01-01

    Twenty-first-century scientific and engineering enterprises are increasingly characterized by their geographic dispersion and their reliance on large data archives. These characteristics bring with them unique challenges. First, the increasing size and complexity of modern data collections require significant investments in information technologies to store, retrieve and analyse them. Second, the increased distribution of people and resources in these projects has made resource sharing and collaboration across significant geographic and organizational boundaries critical to their success. In this paper I explore how computing infrastructures based on data grids offer data-intensive enterprises a comprehensive, scalable framework for collaboration and resource sharing. A detailed example of a data grid framework is presented for a Large Hadron Collider experiment, where a hierarchical set of laboratory and university resources comprising petaflops of processing power and a multi-petabyte data archive must be ...

  18. Understanding and Mastering Dynamics in Computing Grids Processing Moldable Tasks with User-Level Overlay

    CERN Document Server

    Moscicki, Jakub Tomasz

    Scientific communities are using a growing number of distributed systems, from local batch systems, community-specific services and supercomputers to general-purpose, global grid infrastructures. Increasing the research capabilities for science is the raison d'être of such infrastructures, which provide access to diversified computational, storage and data resources at large scales. Grids are rather chaotic, highly heterogeneous, decentralized systems where unpredictable workloads, component failures and variability of execution environments are commonplace. Understanding and mastering the heterogeneity and dynamics of such distributed systems is prohibitive for end users if they are not supported by appropriate methods and tools. The time cost to learn and use the interfaces and idiosyncrasies of different distributed environments is another challenge. Obtaining more reliable application execution times and boosting parallel speedup are important to increase the research capabilities of scientific communities. L...
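
    The "user-level overlay" in the title refers to a late-binding, pilot-style execution layer: generic agents are submitted to whatever resources are available and pull concrete tasks only once they are actually running, which absorbs much of the heterogeneity and dynamics described above. A self-contained toy sketch of that pattern (thread-based and with invented names; it is not the thesis' actual software, which targets real batch and grid back ends):

```python
import queue
import random
import threading
import time

task_queue: "queue.Queue[int]" = queue.Queue()
results = []
results_lock = threading.Lock()

def pilot(worker_id: int) -> None:
    """A pilot agent: once it starts on some resource, it pulls tasks until none remain."""
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return                                 # no work left, the pilot terminates
        time.sleep(random.uniform(0.01, 0.05))     # stand-in for heterogeneous run times
        with results_lock:
            results.append((worker_id, task, task * task))
        task_queue.task_done()

# Late binding: tasks are bound to workers only at run time, so slow or failed
# resources simply process fewer tasks instead of stalling the whole run.
for t in range(100):
    task_queue.put(t)

pilots = [threading.Thread(target=pilot, args=(i,)) for i in range(8)]
for p in pilots:
    p.start()
for p in pilots:
    p.join()

print(f"{len(results)} tasks completed by {len({w for w, _, _ in results})} pilots")
```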

  19. Essays on long-term mortality and interest rate risk

    NARCIS (Netherlands)

    de Kort, J.P.

    2017-01-01

    This dissertation comprises a study of long-term risks which play a major role in actuarial science. In Part I we analyse long-term mortality risk and its impact on consumption and investment decisions of economic agents, while Part II focuses on the mathematical modelling of long-term interest

  20. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    Science.gov (United States)

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2LYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
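
    The parallel structure being exploited is easy to see in a toy sketch: the 6N displaced-geometry gradients are mutually independent, so each can run on its own worker (a grid node in the paper; a local process pool below). The model potential and all names here are illustrative only and do not reproduce the NUMFREQ@Grid implementation:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gradient(x: np.ndarray) -> np.ndarray:
    """Analytic gradient of a toy quartic potential (stand-in for a QM gradient call)."""
    return 4.0 * x**3 + 2.0 * x

def displaced_gradient(args):
    """One independent task: gradient at a geometry displaced along one coordinate."""
    x, index, sign, step = args
    xd = x.copy()
    xd[index] += sign * step
    return index, sign, gradient(xd)

def numerical_hessian(x: np.ndarray, step: float = 1e-3) -> np.ndarray:
    """Hessian from central differences of gradients; the 2*N displaced-gradient
    evaluations are independent and are farmed out to a pool of workers,
    mimicking one QM gradient job per grid node."""
    n = x.size
    tasks = [(x, i, s, step) for i in range(n) for s in (+1.0, -1.0)]
    plus = np.zeros((n, n))
    minus = np.zeros((n, n))
    with ProcessPoolExecutor() as pool:
        for index, sign, g in pool.map(displaced_gradient, tasks):
            (plus if sign > 0 else minus)[index] = g
    hessian = (plus - minus) / (2.0 * step)
    return 0.5 * (hessian + hessian.T)   # symmetrise away numerical noise

if __name__ == "__main__":
    x0 = np.array([0.3, -0.1, 0.7])
    print(numerical_hessian(x0))
```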

  1. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  2. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  3. Impact of energy conservation policy measures on innovation, investment and long-term development of the Swiss economy. Results from the computable induced technical change and energy (CITE) model - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bretschger, L.; Ramer, R.; Schwark, F.

    2010-09-15

    This comprehensive final report for the Swiss Federal Office of Energy (SFOE) presents the results of a study carried out using the Computable Induced Technical Change and Energy (CITE) model. The authors note that, in the past two centuries, the Swiss economy experienced an unprecedented increase in living standards. At the same time, the stock of various natural resources declined and the environmental conditions changed substantially. The evaluation of the sustainability of a low energy and low carbon society, as well as an optimum transition to this state, is discussed. An economic analysis is made and the CITE and CGE (Computable General Equilibrium) numerical simulation models are discussed. The results obtained are presented and discussed.

  4. Scientific Understanding from Long Term Observations: Insights from the Long Term Ecological Research (LTER) Program

    Science.gov (United States)

    Gosz, J.

    2001-12-01

    The network dedicated to Long Term Ecological Research (LTER) in the United States has grown to 24 sites since it was formed in 1980. Long-term research and monitoring are performed on parameters that are basic to all ecosystems and are required to understand patterns, processes, and relationships to change. Collectively, the sites in the LTER Network provide opportunities to contrast marine, coastal, and continental regions, the full range of climatic gradients existing in North America, and aquatic and terrestrial habitats in a range of ecosystem types. The combination of common core areas and long-term research and monitoring in many habitats has allowed unprecedented abilities to understand and compare complex temporal and spatial dynamics associated with issues like climate change, effects of pollution, biodiversity and land use. For example, McMurdo Dry Valley in the Antarctic has demonstrated an increase in glacier mass since 1993 which coincides with a period of cooler than normal summers and more than average snowfall. In contrast, the Bonanza Creek and Toolik Lake sites in Alaska have recorded a warming period unprecedented in the past 200 years. Nitrogen deposition effects have been identified through long-term watershed studies on biogeochemical cycles, especially at Coweeta Hydrological Lab, Harvard Forest, and the Hubbard Brook Experimental Forest. In aquatic systems, such as the Northern Temperate Lakes site, long-term data revealed time lags in effects of invaders and disturbance on lake communities. Biological recovery from an effect such as lake acidification was shown to lag behind chemical recovery. The long-term changes documented over 2 decades have been instrumental in influencing management practices in many of the LTER areas. In Puerto Rico, the Luquillo LTER demonstrated that dams obstruct migrations of fish and freshwater shrimp and water abstraction at low flows can completely obliterate downstream migration of juveniles and damage

  5. Energy in 2010 - 2020. Long term challenges; Energie 2010-2020. Les defis du long terme

    Energy Technology Data Exchange (ETDEWEB)

    Dessus, Benjamin [ed.] [Centre National de la Recherche Scientifique (CNRS), 75 - Paris (France)

    2000-02-02

    This report presents the results of a workshop intended to anticipate the long-term challenges, to better guide short-term energy options, to clarify the political, economic and technical assumptions underlying the prospective world situation, and to give some strategic hints on the necessary transition. The difficult issue the workshop tried to tackle was how we should prepare to meet the energy challenge posed by the development of the eight to ten billion inhabitants of our planet in the next century without jeopardizing its existence. Energy problems, at the heart of international concerns about both growth and the environment, as recently evidenced by the climate conference in Kyoto, have always received particular attention from the General Commissariat of Plan. The commission 'Energy in 2010 - 2020' was therefore set up in April 1996 to update the work done in 1990 - 1991 by the commission 'Energy 2010'. This new commission was soon given the task of illuminating its work with a long-term (2050 - 2100) world prospective analysis of the challenges and problems linked to energy, growth and the environment. In conclusion, this document tried to answer questions such as: what risks does the growth in energy consumption entail? Can we control them through appropriate urban planning and transport policies or through technological innovation? Four options for immediate action are suggested: energy efficiency should become a priority objective of policies; coping with the long-term challenges requires acting now; the transition between governmental leadership and the market must be built; and all possible synergies between short- and long-term planning should be exploited.

  6. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid

    International Nuclear Information System (INIS)

    Thiam, Ch.O.

    2007-10-01

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.) which needs to be precise and fast. The G.A.T.E. platform for Monte-Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel a validation of the G.A.T.E. platform for the modelling of low-energy electron and photon sources and the optimized use of grid infrastructures to reduce simulation computing times. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made on the energy deposit during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine 125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show a good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary for the computing of a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G
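
    The grid speed-up rests on the fact that a Monte-Carlo dose calculation is trivially divisible: the total number of primary particles can be split into independent sub-jobs with distinct random seeds, and the resulting dose maps are simply summed. A toy sketch of that split-and-merge logic (a local process pool stands in for E.G.E.E. job submission, and the "physics" is deliberately fake):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

VOXELS = (32, 32, 32)          # toy dose grid
TOTAL_PRIMARIES = 2_000_000    # primaries to simulate in total
N_JOBS = 20                    # number of independent jobs

def simulate_job(args):
    """Stand-in for one Monte-Carlo job: deposits energy with its own RNG seed."""
    seed, n_primaries = args
    rng = np.random.default_rng(seed)
    dose = np.zeros(VOXELS)
    # Fake 'physics': each primary deposits one unit of energy in a random voxel.
    idx = rng.integers(0, VOXELS[0], size=(n_primaries, 3))
    np.add.at(dose, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return dose

def run_split_simulation():
    per_job = TOTAL_PRIMARIES // N_JOBS
    jobs = [(seed, per_job) for seed in range(N_JOBS)]   # distinct seed per job
    with ProcessPoolExecutor() as pool:
        partial_doses = list(pool.map(simulate_job, jobs))
    return sum(partial_doses)      # merge step: partial dose maps simply add up

if __name__ == "__main__":
    total_dose = run_split_simulation()
    print("total deposited energy:", total_dose.sum())
```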

  7. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D has been conducted to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called the Information Technology Based Laboratory (ITBL), leading to the construction of an intelligent infrastructure for atomic energy research called the Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer Aided Science (CASC) to promote the Atomic Energy Research and Investigation (AERI). This article reviews the achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  8. Monitoring long-term oral corticosteroids.

    Science.gov (United States)

    Mundell, Lewis; Lindemann, Roberta; Douglas, James

    2017-01-01

    Corticosteroids are synthetic analogues of human hormones normally produced by the adrenal cortex. They have both glucocorticoid and mineralocorticoid properties. The glucocorticoid components are anti-inflammatory, immunosuppressive, anti-proliferative and vasoconstrictive. They influence the metabolism of carbohydrate and protein, in addition to playing a key role in the body's stress response. The mineralocorticoids' main significance is in the balance of salt and water concentrations. Due to the combination of these effects, corticosteroids can cause many adverse effects. Oral corticosteroids are absorbed systemically and are therefore more likely to cause adverse effects than topical or inhaled corticosteroids. Furthermore, it is assumed that greater duration of treatment will lead to a greater number of adverse effects, and therefore the most at-risk group are those taking high dose, long-term oral corticosteroids (LTOC). High dose is defined as a prescription of >5 mg oral prednisolone and long term as a duration of treatment >1 month (based on National Institute for Health and Care Excellence guidance for patients 'at risk' of systemic side effects). Parameters to be monitored in primary care include weight, blood pressure, triglycerides, glucose and urea and electrolytes. From clinical experience within the general practice setting, the authors propose that these patients do not receive adequate baseline monitoring before starting corticosteroids, nor are these markers monitored consistently thereafter. This project intended to evidence this claim, evaluate the adverse effect profile and improve monitoring in this patient group. The initial audit of 22 patients, within a single general practice, detected at least one documented adverse effect in 64% of patients, while 41% reported more than one adverse effect. 45% had recorded weight gain, 18% had recorded osteoporosis, 18% had at least one recorded cataract, 14% had recorded hypertension, 14% had recorded

  9. A security/safety survey of long term care facilities.

    Science.gov (United States)

    Acorn, Jonathan R

    2010-01-01

    What are the major security/safety problems of long term care facilities? What steps are being taken by some facilities to mitigate such problems? Answers to these questions can be found in a survey of IAHSS members involved in long term care security conducted for the IAHSS Long Term Care Security Task Force. The survey, the author points out, focuses primarily on long term care facilities operated by hospitals and health systems. However, he believes, it does accurately reflect the security problems most long term facilities face, and presents valuable information on security systems and practices which should be also considered by independent and chain operated facilities.

  10. HLW Long-term Management Technology Development

    International Nuclear Information System (INIS)

    Choi, Jong Won; Kang, C. H.; Ko, Y. K.

    2010-02-01

    Permanent disposal of the spent nuclear fuel arising from power generation is considered the only way to protect human beings and nature, now and in the future. Despite the spent fuel it produces, nuclear energy is expected to continue to play a role in Korea, given recent trends in the gap between energy supply and demand, rising energy prices and the need to reduce carbon dioxide emissions. This means that a new nuclear fuel cycle concept is needed to solve the spent fuel problem. The concept of an advanced nuclear fuel cycle including PYRO processing and the SFR was presented at the 255th meeting of the Atomic Energy Commission. Under this concept, actinides and long-lived fissile nuclides can be consumed in the SFR, after which it becomes possible to dispose of short-term decay wastes without bearing great risk. Many efforts have been made to develop the KRS for the direct disposal of spent nuclear fuel in the representative geology of Korea, but if the advanced nuclear fuel cycle is adopted, the disposal of PYRO wastes should also be considered. For this, we carried out the Safety Analysis on HLW Disposal Project with 5 sub-projects: Development of HLW Disposal System, Radwaste Disposal Safety Analysis, Feasibility Study on the Deep Repository Condition, A Study on Nuclide Migration and Retardation Using the Natural Barrier, and In-situ Study on the Performance of Engineered Barriers

  11. The long-term nuclear explosives predicament

    International Nuclear Information System (INIS)

    Swahn, J.

    1992-01-01

    A scenario is described, where the production of new military fissile materials is halted and where civil nuclear power is phased out in a 'no-new orders' case. It is found that approximately 1100 tonnes of weapons-grade uranium, 233 tonnes of weapons-grade plutonium and 3795 tonnes of reactor-grade plutonium have to be finally disposed of as nuclear waste. This material could be used for the construction of over 1 million nuclear explosives. Reactor-grade plutonium is found to be easier to extract from spent nuclear fuel with time and some physical characteristics important for the construction of nuclear explosives are improved. Alternative methods for disposal of the fissile material that will avoid the long-term nuclear explosives predicament are examined. Among these methods are dilution, denaturing or transmutation of the fissile material and options for practicably irrecoverable disposal in deep boreholes, on the sea-bed, and in space. It is found that the deep boreholes method for disposal should be the primary alternative to be examined further. This method can be combined with an effort to 'forget' where the material was put. Included in the thesis is also an evaluation of the possibilities of controlling the limited civil nuclear activities in a post-nuclear world. Some surveillance technologies for a post-nuclear world are described, including satellite surveillance. In a review part of the thesis, methods for the production of fissile material for nuclear explosives are described, the technological basis for the construction of nuclear weapons is examined, including use of reactor-grade plutonium for such purposes; also plans for the disposal of spent fuel from civil nuclear power reactors and for the handling of the fissile material from dismantled warheads is described. The Swedish plan for the handling and disposal of spent nuclear fuel is described in detail. (490 refs., 66 figs., 27 tabs.)

  12. Long term results of pyeloplasty in adults

    International Nuclear Information System (INIS)

    Tayib, Abdul Malik

    2004-01-01

    To determine the presenting symptoms, complications, and stone coincidence in adult patients with primary ureteropelvic junction (UPJ) obstruction seen at King Abdul-Aziz University Hospital, Jeddah, Kingdom of Saudi Arabia. We are also reporting the success rate and long term results of adult pyeloplasty. We reviewed the records of 34 patients who underwent 37 pyeloplasty operations during the period January 1992 through to June 2002. The preoperative radiological diagnosis was made by intravenous urogram or renal isotope scan. We excluded from our study patients with a previous history of passage of stones, renal or ureteral surgeries, a large renal pelvis stone that may cause UPJ obstruction, or abnormalities that may lead to secondary UPJ obstruction such as vesicoureteral reflux. There were 28 male patients and 8 females, their ages varied between 16 and 51 years, the mean age was 36.1 years, and 18 (52.9%) patients had concomitant renal stones. Ipsilateral split renal function improved by 3-7% post pyeloplasty in 23 patients, while in one patient the function stayed the same, and in another patient the split function was reduced by 4%. The T1/2 renal isotope washout time became less than 15 minutes in 19 patients and less than 20 minutes in 6 patients. Intravenous urogram revealed disappearance of the obstruction at the UPJ in 7 patients, while in 2 patients the kidney became poorly functioning. Anderson-Hynes pyeloplasty is an excellent procedure for treating UPJ obstruction in adults. Our success rate is comparable to the internationally reported rates, while our study revealed a higher incidence of concomitant renal stones than the international studies. (author)

  13. Nutritional deficit and Long Term Potentiation alterations

    Directory of Open Access Journals (Sweden)

    M. Petrosino

    2009-01-01

    Full Text Available In the present work we examined the ability of prenatally malnourished offspring to produce and maintain long-term potentiation (LTP) of the perforant path/dentate granule cell synapse in freely moving rats at 15, 30, and 90 days of age. Population spike amplitude (PSA) was calculated from dentate field potential recordings prior to and at 15, 30 and 60 min and 3, 5, 18 and 24 h following tetanization of the perforant pathway. All animals of both malnourished and well-nourished diet groups at 15 days of age showed potentiation of PSA measures, but the measures obtained from 15-day-old prenatally malnourished animals were significantly lower than those of age-matched, well-nourished controls. At 30 days of age, a marked effect of tetanization was likewise observed, and the PSA measures for this age group followed much the same pattern. At 90 days of age, PSA measures obtained from malnourished animals decreased from pretetanization levels immediately following tetanization; at this age, however, by the three-hour recordings this measure had grown to a level that did not differ significantly from that of the control group. These results indicate that the extent of tetanization-induced enhancement of the dentate granule cell response in preweanling rats (15-day-old animals) is significantly affected by gestational protein malnutrition and that this trend is maintained in animals tested at 30 and 90 days of age. The considerable limitation in LTP generation still observed in prenatally malnourished animals at 90 days of age implies that dietary rehabilitation starting at birth is an intervention strategy not capable of overcoming the effects of the gestational stress.

  14. Perinatal respiratory infections and long term consequences

    Directory of Open Access Journals (Sweden)

    Luciana Indinnimeo

    2015-10-01

    Full Text Available Respiratory syncytial virus (RSV) is the most important pathogen in the etiology of respiratory infections in early life. 50% of children are affected by RSV within the first year of age, and almost all children become infected within two years. Numerous retrospective and prospective studies linking RSV and chronic respiratory morbidity show that RSV bronchiolitis in infancy is followed by recurrent wheezing after the acute episode. According to some authors the greater risk of wheezing in children with a history of RSV bronchiolitis would be limited to childhood, while according to others this risk would extend into adolescence and adulthood. To explain the relationship between RSV infection and the development of bronchial asthma or the clinical pathogenetic patterns related to a state of bronchial hyperreactivity, it has been suggested that RSV may cause alterations in the response of the immune system (immunogenic hypothesis), directly activating mast cells and basophils and changing the pattern of differentiation of immune cells present in the bronchial tree as receptors and inflammatory cytokines. It has also been suggested that RSV infection can cause bronchial hyperreactivity by altering nervous airway modulation, acting on nerve fibers present in the airways (neurogenic hypothesis). The benefits of passive immunoprophylaxis with palivizumab, which seems to represent an effective approach in reducing the sequelae of RSV infection in the short and long term, strengthen the implementation of prevention programs with this drug, as recommended by the national guidelines of the Italian Society of Neonatology. Proceedings of the 11th International Workshop on Neonatology and Satellite Meetings · Cagliari (Italy) · October 26th-31st, 2015 · From the womb to the adult. Guest Editors: Vassilios Fanos (Cagliari, Italy), Michele Mussap (Genoa, Italy), Antonio Del Vecchio (Bari, Italy), Bo Sun (Shanghai, China), Dorret I. Boomsma (Amsterdam, the

  15. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.
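
    As a loose illustration of the scheme described above, the sketch below advances two overlapping 1-D blocks of a diffusion problem in separate worker processes and lets a coordinator exchange the overlap values after every step, standing in for the PVM message passing between workstations (the decomposition, block sizes and physics are invented for the example and are not taken from the MEDUSA code):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def advance_block(block: np.ndarray, alpha: float = 0.25) -> np.ndarray:
    """One explicit diffusion step on a single block (stand-in for a flow-solver sweep)."""
    new = block.copy()
    new[1:-1] += alpha * (block[2:] - 2 * block[1:-1] + block[:-2])
    return new

def run(n_steps: int = 200):
    x = np.linspace(0.0, 1.0, 101)
    u = np.exp(-100 * (x - 0.5) ** 2)            # initial temperature bump
    left, right = u[:60].copy(), u[40:].copy()   # two blocks overlapping on 20 points
    with ProcessPoolExecutor(max_workers=2) as pool:
        for _ in range(n_steps):
            # Each worker process advances its own block (one workstation each).
            left, right = pool.map(advance_block, (left, right))
            # Coordinator exchanges overlap ('donor') values between the blocks.
            left[-10:] = right[10:20]
            right[:10] = left[40:50]
    return left, right

if __name__ == "__main__":
    l, r = run()
    print("peak temperature after diffusion:", max(l.max(), r.max()))
```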

  16. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  17. Long-term change of activity of very low-frequency earthquakes in southwest Japan

    Science.gov (United States)

    Baba, S.; Takeo, A.; Obara, K.; Kato, A.; Maeda, T.; Matsuzawa, T.

    2017-12-01

    On the plate interface near the seismogenic zone of megathrust earthquakes, various types of slow earthquakes have been detected, including non-volcanic tremors, slow slip events (SSEs) and very low-frequency earthquakes (VLFEs). VLFEs are classified into deep VLFEs, which occur on the downdip side of the seismogenic zone, and shallow VLFEs, which occur on the updip side, i.e. at several kilometers depth in southwest Japan. As members of the slow earthquake family, VLFEs are expected to serve as a proxy for inter-plate slip, because VLFEs have the same mechanisms as inter-plate slip and are detected during episodic tremor and slip (ETS). However, the long-term change of VLFE seismicity has not been well constrained compared to deep low-frequency tremor. We therefore studied long-term changes in the activity of VLFEs in southwest Japan, where ETS and long-term SSEs have been most intensive. We used continuous seismograms of F-net broadband seismometers operated by NIED from April 2004 to March 2017. After applying a band-pass filter with a frequency range of 0.02-0.05 Hz, we adopted the matched-filter technique to detect VLFEs. We prepared templates by calculating synthetic waveforms for each hypocenter grid point, assuming typical focal mechanisms of VLFEs. The correlation coefficients between the templates and the continuous F-net seismograms were calculated at each grid point every 1 s in all components; the grid interval is 0.1 degree in both longitude and latitude. A VLFE was declared detected when the average of the correlation coefficients exceeded the threshold, which we defined as eight times the median absolute deviation of the correlation distribution. At grid points in the Bungo channel, where long-term SSEs occur frequently, the cumulative number of detected VLFEs increased rapidly in 2010 and 2014, modulated by stress loading from the long-term SSEs. At inland grid points near the Bungo channel, the cumulative number increases steeply every half a year. This stepwise
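
    A single-trace toy version of the detection step described above: the template is slid along the continuous record, a normalized correlation coefficient is computed at every sample, and detections are declared where it exceeds eight times the median absolute deviation (MAD) of the correlation series. The real analysis averages correlation coefficients over many stations and components; the waveforms below are synthetic:

```python
import numpy as np

def matched_filter_detect(trace: np.ndarray, template: np.ndarray, mad_factor: float = 8.0):
    """Return sample indices where the sliding normalized cross-correlation
    between the template and the continuous trace exceeds mad_factor * MAD."""
    m = template.size
    tpl = (template - template.mean()) / template.std()
    cc = np.empty(trace.size - m + 1)
    for i in range(cc.size):                        # plain loop for clarity, not speed
        win = trace[i:i + m]
        std = win.std()
        cc[i] = 0.0 if std == 0 else np.dot(tpl, (win - win.mean()) / std) / m
    mad = np.median(np.abs(cc - np.median(cc)))
    threshold = mad_factor * mad
    return np.flatnonzero(cc > threshold), cc, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = np.sin(2 * np.pi * 0.03 * np.arange(200))   # synthetic low-frequency waveform
    trace = 0.3 * rng.standard_normal(10_000)
    for onset in (2_500, 7_200):                           # bury two 'events' in the noise
        trace[onset:onset + 200] += template
    detections, cc, thr = matched_filter_detect(trace, template)
    print("threshold:", round(thr, 3), "first/last detections:", detections[:3], detections[-3:])
```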

  18. Distributed and grid computing projects with research focus in human health.

    Science.gov (United States)

    Diomidous, Marianna; Zikos, Dimitrios

    2012-01-01

    Distributed systems and grid computing systems are used to connect several computers in order to obtain a higher level of performance and solve a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus on human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not in all cases explicitly stated, common targets include research to find cures for HIV, dengue, Duchenne dystrophy, Parkinson's disease, various types of cancer, and influenza. Other diseases include malaria, anthrax, and Alzheimer's disease. The need for national initiatives and European collaboration for larger-scale projects is stressed, to raise citizens' awareness and encourage participation in order to create a culture of Internet volunteering and altruism.

  19. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data Analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  20. Variability in Hurricane Boundary Layer Characteristics Observed in a Long-Term NOAA Dropsonde Archive

    Science.gov (United States)

    2014-06-01

    Recoverable figure captions from the source document: Figure 6, a schematic of the NCAR GPS dropsonde identifying vital capabilities, components, and subsystems (from EOL 2014a); Figure 9, the relative frequency of tropical cyclone categories sampled in the Long-Term Archive; and a figure in which each inscribed concentric circle corresponds to a radial grid increment of 200 km and each blue point indicates the EOL estimate for dropsonde position.

  1. Editorial for special section of grid computing journal on “Cloud Computing and Services Science”

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ivanov, Ivan I.

    This editorial briefly discusses characteristics, technology developments and challenges of cloud computing. It then introduces the papers included in the special issue on "Cloud Computing and Services Science" and positions the work reported in these papers with respect to the previously mentioned challenges.

  2. Towards a global service registry for the world-wide LHC computing grid

    International Nuclear Information System (INIS)

    Field, Laurence; Pradillo, Maria Alandes; Girolamo, Alessandro Di

    2014-01-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages
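    The registry's central idea, comparing what should be there with what is there and flagging inconsistencies automatically, can be illustrated by a small consistency check. The sketch below works on hypothetical dictionaries and is not based on the actual Global Service Registry code or the GLUE 2.0 schema.

```python
# Toy consistency check between registered resources ("should be there")
# and published resources ("is there"); hypothetical data, not GLUE 2.0.

registered = {            # e.g. from a VO configuration database
    "CE-alpha": {"type": "compute", "endpoint": "ce.alpha.example.org"},
    "SE-beta":  {"type": "storage", "endpoint": "se.beta.example.org"},
}
published = {             # e.g. aggregated from the information systems
    "CE-alpha": {"type": "compute", "endpoint": "ce.alpha.example.org"},
    "SE-gamma": {"type": "storage", "endpoint": "se.gamma.example.org"},
}

def consistency_report(registered, published):
    missing = sorted(set(registered) - set(published))      # registered but not visible
    unexpected = sorted(set(published) - set(registered))   # visible but not registered
    mismatched = [k for k in set(registered) & set(published)
                  if registered[k] != published[k]]
    return {"missing": missing, "unexpected": unexpected, "mismatched": mismatched}

print(consistency_report(registered, published))
```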

  3. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    Science.gov (United States)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the

  4. Long term results of trabeculectomy surgery

    Directory of Open Access Journals (Sweden)

    Ferhat Evliyaoğlu

    2015-09-01

    Full Text Available Objective: To evaluate the long-term results of primary trabeculectomy. Methods: Cases followed up with a diagnosis of glaucoma at the Okmeydanı Training and Research Hospital Eye Clinic between January 2000 and December 2001 were evaluated retrospectively. Cases that had high intraocular pressure (IOP) despite maximum therapy, underwent primary trabeculectomy, and were followed for at least 6 months and regularly through 10 years were included in this study. An IOP of 18 mmHg or less, with or without medical treatment, was accepted as success. IOP was measured with applanation tonometry. Results: 89 eyes of 70 cases were included in this study; 42 patients (60%) were male and 28 (40%) were female. The mean age was 63.65±12.18 years. Preoperative IOP was 30.36 ± 3.2 mmHg. At follow-up, mean IOP was 15.31 ± 1.2 mmHg at the 1st month, 15.47 ± 1.1 mmHg at the 3rd month, 15.02 ± 1.8 mmHg at the 6th month, 15.34 ± 2.1 mmHg at the 1st year, 15.82 ± 2.1 mmHg at the 2nd year, 17.06 ± 2.3 mmHg at the 5th year, and 18.02 ± 2.2 mmHg at the 10th year. Statistical analysis showed a significant decrease of IOP in the postoperative period compared with the preoperative period at the 1st month, 3rd month, 6th month, 1st year, 2nd year, 5th year, and 10th year (p < 0.01). The follow-up period in the study was 91.10 ± 40.15 months (6-120 months). Conclusion: Primary trabeculectomy can be considered an alternative treatment procedure, especially in patients who do not use drugs regularly and are unable to attend regular medical examinations. J Clin Exp Invest 2015; 6(3): 263-268

  5. Northern European long term climate archives

    Energy Technology Data Exchange (ETDEWEB)

    Hohl, Veronica [Stockholm Univ. (Sweden)

    2005-01-01

    The Swedish Nuclear Fuel and Waste Management Company is responsible for the management and disposal of Sweden's radioactive waste. It is intended to deposit the spent nuclear fuel in a deep geological repository. This repository shall keep the radiotoxic material separated from humans and the environment for extended periods, from decades to millennia and possibly to geological timescales. Over this time perspective, climate-induced changes such as shore-level displacement and the evolution of permafrost and ice sheets are expected to occur, which may affect the repository. The possible occurrence, extent and duration of these long-term changes are therefore of interest when considering the assessment of repository performance and safety. The main climate parameters determining both surface and subsurface conditions are temperature and precipitation. As a result of the last advance of the Weichselian ice sheet, only a few geological archives exist which contain information on past climatic conditions in Sweden before c. 16,000 years BP. The purpose of this literature review is to compile and evaluate available information from Scandinavian, Northern and Central European geological archives which record climatic conditions during the Weichselian time period. The compilation provides paleotemperature data sets, which may be used to explore the possible evolution of periglacial permafrost in Sweden. This report is a synopsis of 22 publications detailing climatic and environmental changes during the Weichselian time period in Northwestern Europe based on quantified paleotemperature records. Some of the data are presented as temperature curves which were digitised specifically for this report. The time range covered by the different publications varies considerably. Only a few authors dealt with the whole Weichselian period and the majority cover only a few thousand years. This, however, is not considered to influence the reliability of the archives. The reason for the

  6. Northern European long term climate archives

    International Nuclear Information System (INIS)

    Hohl, Veronica

    2005-01-01

    The Swedish Nuclear Fuel and Waste Management Company is responsible for the management and disposal of Sweden's radioactive waste. It is intended to deposit the spent nuclear fuel in a deep geological repository. This repository shall keep the radiotoxic material separated from humans and the environment for extended periods, from decades to millennia and possibly to geological timescales. Over this time perspective, climate-induced changes such as shore-level displacement and the evolution of permafrost and ice sheets are expected to occur, which may affect the repository. The possible occurrence, extent and duration of these long-term changes are therefore of interest when considering the assessment of repository performance and safety. The main climate parameters determining both surface and subsurface conditions are temperature and precipitation. As a result of the last advance of the Weichselian ice sheet, only a few geological archives exist which contain information on past climatic conditions in Sweden before c. 16,000 years BP. The purpose of this literature review is to compile and evaluate available information from Scandinavian, Northern and Central European geological archives which record climatic conditions during the Weichselian time period. The compilation provides paleotemperature data sets, which may be used to explore the possible evolution of periglacial permafrost in Sweden. This report is a synopsis of 22 publications detailing climatic and environmental changes during the Weichselian time period in Northwestern Europe based on quantified paleotemperature records. Some of the data are presented as temperature curves which were digitised specifically for this report. The time range covered by the different publications varies considerably. Only a few authors dealt with the whole Weichselian period and the majority cover only a few thousand years. This, however, is not considered to influence the reliability of the archives. The reason for the varying

  7. Hot functional test chemistry - long term experience

    International Nuclear Information System (INIS)

    Vonkova, K.; Kysela, J.; Marcinsky, M.; Martykan, M.

    2010-01-01

    Primary circuit materials undergo general corrosion in high-temperature, deoxygenated, neutral or mildly alkaline solutions to form thin oxide films. These oxide layers (films) serve as a protective film and mitigate further corrosion of the primary materials. The inner chromium-rich oxide layer has low cation diffusion coefficients and thus controls iron and nickel transport from the metal surface to the outer layer and their dissolution into the coolant. Far fewer corrosion products are generated by a compact, integral and stable oxide (passivation) layer. For the latest Czech and Slovak stations commissioned (Temelin and Mochovce), a modified Hot Functional Test (HFT) chemistry was developed at NRI Rez. The chromium-rich surface layer formed under the modified HFT chemistry ensures lower corrosion rates and reduced radiation field formation, and thus also mitigates crud formation during operation. This procedure was also designed to prepare the commissioned unit for proper water chemistry practice. Mochovce 1 (SK) was the first station commissioned using these recommendations in 1998. Mochovce 2 (1999) and Temelin 1 and 2 (CZ - 2000 and 2002) were subsequently commissioned using these guidelines too. The main principles of the controlled primary water chemistry applied during the hot functional tests are reviewed, and the importance of the water chemistry, technological and other relevant parameters is stressed with regard to the quality of the passive layer formed on the primary system surfaces. Samples from Mochovce indicated that duplex oxide layers up to 20 μm thick were produced, which were mainly magnetite substituted with nickel and chromium (e.g. 60-65% Fe, 18-28% Cr, 9-12% Ni, <1% Mn and 1-2% Si on a stainless steel primary circuit sample). Long-term operation experience from both nuclear power plants is discussed in this paper. The radiation field, occupational radiation exposure and corrosion layer evolution during the first c. ten years of operation are

  8. EDF - Activity and sustainable development 2011 - electricity, choices on the long term

    International Nuclear Information System (INIS)

    2012-05-01

    This publication notably contains a set of articles about long-term choices related to electricity production and distribution. Different aspects are addressed: arbitration (the diversity of the French energy mix), grids (investments and the evolution towards smart grids), electricity cost (for households and for industry), nuclear energy (actions and results regarding safety and availability, the EPR project), renewable energies, the design and construction of a dam (Nam Theun 2) in Laos with an important human development dimension to the project, thermal energy (the future of flame-based power stations using gas or biomass, for example), and EDF's commercial policy.

  9. Long-term preservation of analysis software environment

    International Nuclear Information System (INIS)

    Toppe Larsen, Dag; Blomer, Jakob; Buncic, Predrag; Charalampidis, Ioannis; Haratyunyan, Artem

    2012-01-01

    Long-term preservation of scientific data represents a challenge to experiments, especially regarding the analysis software. Preserving data is not enough; the full software and hardware environment is needed. Virtual machines (VMs) make it possible to preserve hardware “in software”. A complete infrastructure package has been developed for easy deployment and management of VMs, based on CERN virtual machine (CernVM). Further, an HTTP-based file system, the CernVM file system (CVMFS), is used for the distribution of the software. It is possible to process data with any given software version, and a matching, regenerated VM version. A point-and-click web user interface is being developed for setting up the complete processing chain, including VM and software versions, number and type of processing nodes, and the particular type of analysis and data. This paradigm also allows for distributed cloud computing on private and public clouds, for both legacy and contemporary experiments.
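    Pairing a given software release with a matching, regenerated VM image is essentially a lookup over a preserved-environment catalogue. The sketch below shows one hypothetical way such a processing-chain specification could be expressed; the image names and CVMFS paths are invented and do not correspond to real CernVM releases.

```python
# Hypothetical mapping from an analysis software release to a matching VM image
# and CVMFS software path (names are invented for illustration).

PRESERVED_ENVIRONMENTS = {
    "analysis-v1.2": {"vm_image": "cernvm-2.6.iso",
                      "software": "/cvmfs/experiment.example/sw/v1.2"},
    "analysis-v2.0": {"vm_image": "cernvm-3.4.iso",
                      "software": "/cvmfs/experiment.example/sw/v2.0"},
}

def processing_chain(release, n_nodes, dataset):
    """Assemble a processing-chain description for a legacy dataset and release."""
    env = PRESERVED_ENVIRONMENTS[release]
    return {"release": release, "dataset": dataset, "nodes": n_nodes, **env}

print(processing_chain("analysis-v1.2", n_nodes=4, dataset="legacy-run-2004"))
```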

  10. Consolidation of long-term memory: evidence and alternatives.

    Science.gov (United States)

    Meeter, Martijn; Murre, Jaap M J

    2004-11-01

    Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of amnesia, but they have not elucidated how consolidation must be envisaged. Here findings are reviewed that shed light on how consolidation may be implemented in the brain. Moreover, consolidation is contrasted with alternative theories of the Ribot gradient. Consolidation theory, multiple trace theory, and semantization can all handle some findings well but not others. Conclusive evidence for or against consolidation thus remains to be found.

  11. Long-term Preservation of Data Analysis Capabilities

    Science.gov (United States)

    Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.

    2015-09-01

    While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of data analysis software preservation has hardly been addressed. Efforts by large data centres have contributed so far to maintain some instrument or mission-specific data reduction packages on top of high-level general purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over. This is especially difficult in the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA using new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service for making possible full availability of data analysis and calibration software for decades at minimal cost.

  12. Long-term consequences of postoperative cognitive dysfunction

    DEFF Research Database (Denmark)

    Steinmetz, Jacob; Christensen, Karl Bang; Lund, Thomas

    2009-01-01

    BACKGROUND: Postoperative cognitive dysfunction (POCD) is common in elderly patients after noncardiac surgery, but the consequences are unknown. The authors' aim was to determine the effects of POCD on long-term prognosis. METHODS: This was an observational study of Danish patients enrolled in two...... on survival, labor market attachment, and social transfer payments were obtained from administrative databases. The Cox proportional hazards regression model was used to compute relative risk estimates for mortality and disability, and the relative prevalence of time on social transfer payments was assessed......, and cancer). The risk of leaving the labor market prematurely because of disability or voluntary early retirement was higher among patients with 1-week POCD (hazard ratio, 2.26 [1.24-4.12]; P = 0.01). Patients with POCD at 1 week received social transfer payments for a longer proportion of observation time...

  13. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    Science.gov (United States)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. This in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called

  14. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Herner, K. [Fermilab; Alba Hernandez, A. F. [Fermilab; Bhat, S. [Fermilab; Box, D. [Fermilab; Boyd, J. [Fermilab; Di Benedetto, V. [Fermilab; Ding, P. [Fermilab; Dykstra, D. [Fermilab; Fattoruso, M. [Fermilab; Garzoglio, G. [Fermilab; Kirby, M. [Fermilab; Kreymer, A. [Fermilab; Levshina, T. [Fermilab; Mazzacane, A. [Fermilab; Mengel, M. [Fermilab; Mhashilkar, P. [Fermilab; Podstavkov, V. [Fermilab; Retzke, K. [Fermilab; Sharma, N. [Fermilab; Teheran, J. [Fermilab

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. This in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed

  15. Design of microdevices for long-term live cell imaging

    International Nuclear Information System (INIS)

    Chen, Huaying; Nordon, Robert E; Rosengarten, Gary; Li, Musen

    2012-01-01

    Advances in fluorescent live cell imaging provide high-content information that relates a cell's life events to its ancestors. An important requirement to track clonal growth and development is the retention of motile cells derived from an ancestor within the same microscopic field of view for days to weeks, while recording fluorescence images and controlling the mechanical and biochemical microenvironments that regulate cell growth and differentiation. The aim of this study was to design a microwell device for long-term, time-lapse imaging of motile cells with the specific requirements of (a) inoculating devices with an average of one cell per well and (b) retaining progeny of cells within a single microscopic field of view for extended growth periods. A two-layer PDMS microwell culture device consisting of a parallel-plate flow cell bonded on top of a microwell array was developed for cell capture and clonal culture. Cell deposition statistics were related to microwell geometry (plate separation and well depth) and the Reynolds number. Computational fluid dynamics was used to simulate flow in the microdevices as well as cell–fluid interactions. Analysis of the forces acting upon a cell was used to predict cell docking zones, which were confirmed by experimental observations. Cell–fluid dynamic interactions are important considerations for design of microdevices for long-term, live cell imaging. The analysis of force and torque balance provides a reasonable approximation for cell displacement forces. It is computationally less intensive compared to simulation of cell trajectories, and can be applied to a wide range of microdevice geometries to predict the cell docking behavior. (paper)
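    A first-order version of the force analysis mentioned above is an order-of-magnitude estimate of the hydrodynamic drag on a docked cell. The sketch below uses the Stokes drag formula with invented parameter values; it is far simpler than the CFD-based cell-fluid interaction analysis described in the record.

```python
# Order-of-magnitude drag estimate on a docked cell (Stokes drag, illustrative values).
import math

mu = 1.0e-3        # dynamic viscosity of the medium, Pa*s (water-like, assumed)
radius = 5e-6      # cell radius, m (assumed)
shear_rate = 10.0  # wall shear rate, 1/s (assumed)

# Approximate fluid velocity at the cell-centre height above the wall.
velocity = shear_rate * radius
drag = 6 * math.pi * mu * radius * velocity   # Stokes drag, F = 6*pi*mu*R*v
print(f"approximate drag on the cell: {drag:.2e} N")
```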

  16. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  17. The self-adaptation to dynamic failures for efficient virtual organization formations in grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. However, due to the nature of heterogeneous and dynamic resources, dynamic failures in the distributed grid environment usually occur more often than in traditional computation platforms, causing failed VO formations. In this paper, we develop a novel self-adaptive mechanism for handling dynamic failures during VO formation. Such a self-adaptive scheme allows an individual member of a VO to automatically find another available replacement once a failure happens, and therefore makes systems recover automatically from dynamic failures. We define dynamic failure situations of a system by using two standard indicators: mean time between failures (MTBF) and mean time to recover (MTTR). We model both MTBF and MTTR as Poisson distributions. We investigate and analyze the efficiency of the proposed self-adaptation mechanism by comparing the success probability of VO formations before and after adopting it in three different cases: (1) different failure situations; (2) different organizational structures and scales; (3) different task complexities. The experimental results show that the proposed scheme can automatically adapt to dynamic failures and effectively improve dynamic VO formation performance in the event of node failures, which provides a valuable addition to the field.
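    The role of the MTBF and MTTR indicators can be illustrated with a small Monte Carlo sketch: the steady-state probability that a node is available is roughly MTBF / (MTBF + MTTR), and a VO formation succeeds if every failed member can be replaced from a spare pool (the self-adaptive case) or if no member fails (the baseline). The parameter values and success criterion below are assumptions for illustration, not the paper's model.

```python
# Monte Carlo sketch: probability of a successful VO formation with and without
# replacement of failed members (illustrative parameters, not the paper's model).
import random

MTBF, MTTR = 100.0, 5.0              # assumed mean time between failures / to recover
p_up = MTBF / (MTBF + MTTR)          # steady-state availability of a single node

def vo_formation_success(members, spares, trials=100_000):
    ok = 0
    for _ in range(trials):
        failed = sum(random.random() > p_up for _ in range(members))
        available_spares = sum(random.random() <= p_up for _ in range(spares))
        if failed <= available_spares:   # every failed member can be replaced
            ok += 1
    return ok / trials

print("without self-adaptation:", vo_formation_success(members=10, spares=0))
print("with a pool of 3 spares :", vo_formation_success(members=10, spares=3))
```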

  18. Helicopter Rotor Blade Computation in Unsteady Flows Using Moving Overset Grids

    Science.gov (United States)

    Ahmad, Jasim; Duque, Earl P. N.

    1996-01-01

    An overset grid thin-layer Navier-Stokes code has been extended to include dynamic motion of helicopter rotor blades through relative grid motion. The unsteady flowfield and airloads on an AH-1G rotor in forward flight were computed to verify the methodology and to demonstrate the method's potential usefulness towards comprehensive helicopter codes. In addition, the method uses the blade's first harmonics measured in the flight test to prescribe the blade motion. The solution was impulsively started and became periodic in less than three rotor revolutions. Detailed unsteady numerical flow visualization techniques were applied to the entire unsteady data set of five rotor revolutions and exhibited flowfield features such as blade vortex interaction and wake roll-up. The unsteady blade loads and surface pressures compare well against those from flight measurements. Details of the method, a discussion of the resulting predicted flowfield, and requirements for future work are presented. Overall, given the proper blade dynamics, this method can compute the unsteady flowfield of a general helicopter rotor in forward flight.

  19. Service task partition and distribution in star topology computer grid subject to data security constraints

    Energy Technology Data Exchange (ETDEWEB)

    Xiang Yanping [Collaborative Autonomic Computing Laboratory, School of Computer Science, University of Electronic Science and Technology of China (China); Levitin, Gregory, E-mail: levitin@iec.co.il [Collaborative Autonomic Computing Laboratory, School of Computer Science, University of Electronic Science and Technology of China (China); Israel electric corporation, P. O. Box 10, Haifa 31000 (Israel)

    2011-11-15

    The paper considers grid computing systems in which the resource management systems (RMS) can divide service tasks into execution blocks (EBs) and send these blocks to different resources. In order to provide a desired level of service reliability the RMS can assign the same blocks to several independent resources for parallel execution. The data security is a crucial issue in distributed computing that affects the execution policy. By the optimal service task partition into the EBs and their distribution among resources, one can achieve the greatest possible service reliability and/or expected performance subject to data security constraints. The paper suggests an algorithm for solving this optimization problem. The algorithm is based on the universal generating function technique and on the evolutionary optimization approach. Illustrative examples are presented. - Highlights: > Grid service with star topology is considered. > An algorithm for evaluating service reliability and data security is presented. > A tradeoff between the service reliability and data security is analyzed. > A procedure for optimal service task partition and distribution is suggested.
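    The trade-off analysed in the paper, assigning execution blocks redundantly to raise service reliability while limiting how many resources see the data, can be shown with a brute-force toy example in place of the universal generating function and evolutionary optimisation. The resource reliabilities, the two-block split, and the per-block assignment limit below are invented for illustration.

```python
# Toy version of redundant block assignment: service reliability vs. data exposure.
# Brute force over assignments; not the universal generating function approach.
from itertools import combinations

resource_reliability = [0.90, 0.85, 0.80, 0.75]   # assumed per-resource success probabilities
max_resources_per_block = 2                        # data-security constraint (assumed)

def block_success(resources):
    """A block succeeds if at least one assigned resource completes it."""
    fail = 1.0
    for r in resources:
        fail *= 1.0 - resource_reliability[r]
    return 1.0 - fail

best = None
for eb1 in combinations(range(4), max_resources_per_block):      # assignment of block 1
    for eb2 in combinations(range(4), max_resources_per_block):  # assignment of block 2
        service_reliability = block_success(eb1) * block_success(eb2)
        exposure = len(set(eb1) | set(eb2))        # number of resources that see any data
        if best is None or service_reliability > best[0]:
            best = (service_reliability, eb1, eb2, exposure)

rel, a1, a2, exp = best
print(f"best reliability {rel:.3f} with EB1->{a1}, EB2->{a2}, {exp} resources exposed")
```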

  20. Service task partition and distribution in star topology computer grid subject to data security constraints

    International Nuclear Information System (INIS)

    Xiang Yanping; Levitin, Gregory

    2011-01-01

    The paper considers grid computing systems in which the resource management systems (RMS) can divide service tasks into execution blocks (EBs) and send these blocks to different resources. In order to provide a desired level of service reliability the RMS can assign the same blocks to several independent resources for parallel execution. The data security is a crucial issue in distributed computing that affects the execution policy. By the optimal service task partition into the EBs and their distribution among resources, one can achieve the greatest possible service reliability and/or expected performance subject to data security constraints. The paper suggests an algorithm for solving this optimization problem. The algorithm is based on the universal generating function technique and on the evolutionary optimization approach. Illustrative examples are presented. - Highlights: → Grid service with star topology is considered. → An algorithm for evaluating service reliability and data security is presented. → A tradeoff between the service reliability and data security is analyzed. → A procedure for optimal service task partition and distribution is suggested.

  1. Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids

    Science.gov (United States)

    Ma, Xinrong; Duan, Zhijian

    2018-04-01

    High-order, high-resolution discontinuous Galerkin finite element methods (DGFEM) have been known as good methods for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand considerable computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used in order to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. To keep the load balanced across processors, a domain decomposition method was employed. Numerical experiments were performed for inviscid transonic flow around the NACA0012 airfoil and the M6 wing. The results indicate that our parallel algorithm improves speedup and efficiency significantly and is suitable for computing complex flows.
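    The three-stage, third-order TVD Runge-Kutta scheme mentioned above is commonly written in the Shu-Osher form. The sketch below applies it to a scalar upwind advection right-hand side purely as an illustration of the time-stepping; it is not the parallel DGFEM solver of the paper.

```python
# Three-stage, third-order TVD Runge-Kutta (Shu-Osher) time stepping,
# illustrated on periodic linear advection with a simple upwind RHS.
import numpy as np

def rhs(u, dx, a=1.0):
    # First-order upwind discretisation of -a * du/dx (a > 0), periodic domain.
    return -a * (u - np.roll(u, 1)) / dx

def tvd_rk3_step(u, dt, dx):
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial Gaussian pulse
dt = 0.4 * dx                            # CFL-limited time step
for _ in range(int(0.5 / dt)):           # advect for half a period
    u = tvd_rk3_step(u, dt, dx)
print("max value after advection:", float(u.max()))
```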

  2. Long-Term Patency of Lymphovenous Anastomoses: A Systematic Review.

    Science.gov (United States)

    Tourani, Saam S; Taylor, G Ian; Ashton, Mark W

    2016-08-01

    With advancements in technology and microsurgical techniques, lymphovenous anastomosis has become a popular reconstructive procedure in the treatment of chronic lymphedema. However, the long-term patency of these anastomoses is not clear in the literature. A systematic review of the MEDLINE and EMBASE databases was performed to assess the reported long-term patency of lymphovenous anastomoses. A total of eight studies satisfied the inclusion criteria. Pooled data from four similar experiments in normal dogs showed an average long-term (≥5 months) patency of 52 percent. The only experiment in dogs with chronic lymphedema failed to show any long-term patency. The creation of peripheral lymphovenous anastomoses with a moderate long-term patency rate has become technically possible. However, the long-term results in chronic lymphedema are limited.

  3. Reforming Long-Term Care Funding in Alberta.

    Science.gov (United States)

    Crump, R Trafford; Repin, Nadya; Sutherland, Jason M

    2015-01-01

    Like many provinces across Canada, Alberta is facing growing demand for long-term care. Issues with the mixed funding model used to pay long-term care providers had Alberta Health Services concerned that it was not efficiently meeting the demand for long-term care. Consequently, in 2010, Alberta Health Services introduced the patient/care-based funding (PCBF) model. PCBF is similar to activity-based funding in that it directly ties the complexity and care needs of long-term care residents to the payment received by long-term care providers. This review describes PCBF and discusses some of its strengths and weaknesses. In doing so, this review is intended to inform other provinces faced with similar long-term care challenges and contemplating their own funding reforms.

  4. Advances in Grid and Pervasive Computing: 5th International Conference, GPC 2010, Hualien, Taiwan, May 10-13, 2010: Proceedings

    NARCIS (Netherlands)

    Bellavista, P.; Chang, R.-S.; Chao, H.-C.; Lin, S.-F.; Sloot, P.M.A.

    2010-01-01

    This book constitutes the proceedings of the 5th international conference, GPC 2010, held in Hualien, Taiwan in May 2010. The 67 full papers were selected from 184 submissions and focus on topics such as cloud and Grid computing, peer-to-peer and pervasive computing, sensor and mobile networks,

  5. CERN readies world's biggest science grid: The computing network now encompasses more than 100 sites in 31 countries

    CERN Multimedia

    Niccolai, James

    2005-01-01

    If the Large Hadron Collider (LHC) at CERN is to yield miraculous discoveries in particle physics, it may also require a small miracle in grid computing. Owing to a lack of suitable tools from commercial vendors, engineers at the famed Geneva laboratory are hard at work building a giant grid to store and process the vast amount of data the collider is expected to produce when it begins operations in mid-2007 (2 pages)

  6. A cost of long-term memory in Drosophila

    OpenAIRE

    Mery, Frederic; Kawecki, Tadeusz J.

    2005-01-01

    Two distinct forms of consolidated associative memory are known in Drosophila: long-term memory and so-called anesthesia-resistant memory. Long-term memory is more stable, but unlike anesthesia-resistant memory, its formation requires protein synthesis. We show that flies induced to form long-term memory become more susceptible to extreme stress (such as desiccation). In contrast, induction of anesthesia-resistant memory had no detectable effect on desiccation resistance. This finding may hel...

  7. Safety Aspects of Long Term Spent Fuel Dry Storage

    International Nuclear Information System (INIS)

    Botsch, Wolfgang; Smalian, S.; Hinterding, P.; Drotleff, H.; Voelzke, H.; Wolff, D.; Kasparek, E.

    2014-01-01

    As a consequence of the lack of a final repository for spent nuclear fuel (SF) and high-level waste (HLW), long-term interim storage of SF and HLW will be necessary. As with the storage of all radioactive materials, the long-term storage of SF and HLW must conform to safety requirements. Safety aspects such as safe enclosure of radioactive materials, safe removal of decay heat, sub-criticality and avoidance of unnecessary radiation exposure must be achieved throughout the complete storage period. The implementation of these safety requirements can be achieved by dry storage of SF and HLW in casks as well as in other systems such as dry vault storage systems or spent fuel pools, where the latter is neither a dry nor a passive system. After the events of Fukushima, the advantages of passively and inherently safe dry storage systems have become more obvious. In Germany, dry storage of SF in casks fulfils both transport and storage requirements. Mostly, storage facilities are designed as concrete buildings above the ground; one storage facility has also been built as a rock tunnel. In all these facilities the safe enclosure of radioactive materials in dry storage casks is achieved by a double-lid sealing system with surveillance of the sealing system. The safe removal of decay heat is ensured by the design of the storage containers and the storage facility, which also ensures that radiation exposure is reduced to acceptable levels. TUV and BAM, who work as independent experts for the competent authorities, provide information about spent fuel management and issues concerning dry storage of spent nuclear fuel, based on their long experience in these fields. All relevant safety issues such as safe enclosure, shielding, removal of decay heat and sub-criticality are checked and validated with state-of-the-art methods and computer codes before license approval. In our presentation we discuss which of these aspects need to be examined more closely for long-term interim storage. It is shown

  8. Radiotherapy for pituitary adenomas: long-term outcome and complications

    Energy Technology Data Exchange (ETDEWEB)

    Rim, Chai Hong; Yang, Dae Sik; Park, Young Je; Yoon, Won Sup; Lee, Jung AE; Kim, Chul Yong [Korea University Medical Center, Seoul (Korea, Republic of)

    2011-09-15

    To evaluate long-term local control rate and toxicity in patients treated with external beam radiotherapy (EBRT) for pituitary adenomas. We retrospectively reviewed the medical records of 60 patients treated with EBRT for pituitary adenoma at Korea University Medical Center between 1996 and 2006. Thirty-five patients had hormone-secreting tumors, 25 patients had non-secreting tumors. Fifty-seven patients had received postoperative radiotherapy (RT), and 3 had received RT alone. Median total dose was 54 Gy (range, 36 to 61.2 Gy). The definition of tumor progression was as follows: evidence of tumor progression on computed tomography or magnetic resonance imaging, worsening of clinical signs requiring additional operation or other intervention, rising serum hormone level against a previously stable or falling value, and failure to control the serum hormone level such that it remained far from the optimal range until last follow-up. Age, sex, hormone secretion, tumor extension, tumor size, and radiation dose were analyzed for prognostic significance in tumor control. Median follow-up was 5.7 years (range, 2 to 14.4 years). The 10-year actuarial local control rates for non-secreting and secreting adenomas were 96% and 66%, respectively. In univariate analysis, hormone secretion was a significant prognostic factor (p = 0.042) and cavernous sinus extension was a marginally significant factor (p = 0.054) for adverse local control. All other factors were not significant. In multivariate analysis, hormone secretion and gender were significant. Fifty-three patients had mass-effect symptoms (headache, dizziness, visual disturbance, hypopituitarism, loss of consciousness, and cranial nerve palsy). A total of 17 of 23 patients with headache and 27 of 34 patients with visual impairment were improved. Twenty-seven patients experienced symptoms of endocrine hypersecretion (galactorrhea, amenorrhea, irregular menstruation, decreased libido, gynecomastia, acromegaly, and Cushing

  9. Radiotherapy for pituitary adenomas: long-term outcome and complications

    International Nuclear Information System (INIS)

    Rim, Chai Hong; Yang, Dae Sik; Park, Young Je; Yoon, Won Sup; Lee, Jung AE; Kim, Chul Yong

    2011-01-01

    To evaluate long-term local control rate and toxicity in patients treated with external beam radiotherapy (EBRT) for pituitary adenomas. We retrospectively reviewed the medical records of 60 patients treated with EBRT for pituitary adenoma at Korea University Medical Center between 1996 and 2006. Thirty-five patients had hormone-secreting tumors, 25 patients had non-secreting tumors. Fifty-seven patients had received postoperative radiotherapy (RT), and 3 had received RT alone. Median total dose was 54 Gy (range, 36 to 61.2 Gy). The definition of tumor progression was as follows: evidence of tumor progression on computed tomography or magnetic resonance imaging, worsening of clinical signs requiring additional operation or other intervention, rising serum hormone level against a previously stable or falling value, and failure to control the serum hormone level such that it remained far from the optimal range until last follow-up. Age, sex, hormone secretion, tumor extension, tumor size, and radiation dose were analyzed for prognostic significance in tumor control. Median follow-up was 5.7 years (range, 2 to 14.4 years). The 10-year actuarial local control rates for non-secreting and secreting adenomas were 96% and 66%, respectively. In univariate analysis, hormone secretion was a significant prognostic factor (p = 0.042) and cavernous sinus extension was a marginally significant factor (p = 0.054) for adverse local control. All other factors were not significant. In multivariate analysis, hormone secretion and gender were significant. Fifty-three patients had mass-effect symptoms (headache, dizziness, visual disturbance, hypopituitarism, loss of consciousness, and cranial nerve palsy). A total of 17 of 23 patients with headache and 27 of 34 patients with visual impairment were improved. Twenty-seven patients experienced symptoms of endocrine hypersecretion (galactorrhea, amenorrhea, irregular menstruation, decreased libido, gynecomastia, acromegaly, and Cushing's disease

  10. Private long-term care insurance and state tax incentives.

    Science.gov (United States)

    Stevenson, David G; Frank, Richard G; Tau, Jocelyn

    2009-01-01

    To increase the role of private insurance in financing long-term care, tax incentives for long-term care insurance have been implemented at both the federal and state levels. To date, there has been surprisingly little study of these initiatives. Using a panel of national data, we find that market take-up for long-term care insurance increased over the last decade, but state tax incentives were responsible for only a small portion of this growth. Ultimately, the modest ability of state tax incentives to lower premiums implies that they should be viewed as a small piece of the long-term care financing puzzle.

  11. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  12. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    Marten, H; Koenig, T

    2011-01-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de facto standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault-tolerant and error-correcting design features reduce the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
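    Elementary availability calculations for a service made of several components usually combine per-component availabilities in series (all must work) or in parallel (one redundant instance suffices). The sketch below illustrates these textbook formulas; the component values are invented and are not taken from the paper.

```python
# Textbook availability of composed services (illustrative component values).
from functools import reduce

def series(availabilities):
    # The composed service fails if any component fails.
    return reduce(lambda acc, a: acc * a, availabilities, 1.0)

def parallel(availabilities):
    # The composed service fails only if all redundant instances fail.
    return 1.0 - reduce(lambda acc, a: acc * (1.0 - a), availabilities, 1.0)

ce, storage, info = 0.995, 0.990, 0.98      # assumed component availabilities
print("compute element + storage in series :", series([ce, storage]))
print("two redundant information services  :", parallel([info, info]))
print("full chain with redundant info      :", series([ce, storage, parallel([info, info])]))
```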

  13. Long Term Solar Radiation Forecast Using Computational Intelligence Methods

    Directory of Open Access Journals (Sweden)

    João Paulo Coelho

    2014-01-01

    Full Text Available The point prediction quality is closely related to the model that explains the dynamics of the observed process. Sometimes the model can be obtained from simple algebraic equations but, in the majority of physical systems, the relevant reality is too hard to model with simple ordinary differential or difference equations. This is the case for systems with nonlinear or nonstationary behaviour, which require more complex models. The discrete time-series problem, obtained by sampling the solar radiation, can be framed in this type of situation. By observing the collected data it is possible to distinguish multiple regimes. Additionally, due to atmospheric disturbances such as clouds, the temporal structure between samples is complex and is best described by nonlinear models. This paper reports solar radiation prediction using a hybrid model that combines the support vector regression paradigm and Markov chains. The hybrid model performance is compared with that obtained by other methods such as autoregressive (AR) filters, Markov AR models, and artificial neural networks. The results obtained suggest improved prediction performance of the hybrid model with regard to both the prediction error and the dynamic behaviour.
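    The support-vector-regression part of the hybrid model can be sketched as fitting an SVR on lagged samples of the series. The code below is a minimal illustration on synthetic data with scikit-learn defaults; it omits the Markov-chain regime component and is not the authors' model.

```python
# Minimal SVR-on-lagged-samples sketch for a sampled radiation-like series
# (synthetic data; the Markov-chain regime component of the paper is omitted).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(2000)
series = np.clip(np.sin(2 * np.pi * t / 24) + 0.2 * rng.standard_normal(len(t)), 0, None)

lags = 24                                  # assumed embedding length (one day of hourly samples)
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

split = int(0.8 * len(X))                  # simple chronological train/test split
model = SVR(C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.3f}")
```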

  14. Long term pavement performance computed parameter: frost penetration

    Science.gov (United States)

    2008-11-01

    As the pavement design process moves toward mechanistic-empirical techniques, knowledge of seasonal changes in pavement structural characteristics becomes critical. Specifically, frost penetration information is necessary for determining the effect o...

  15. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  16. A new image for long-term care.

    Science.gov (United States)

    Wager, Richard; Creelman, William

    2004-04-01

    To counter widely held negative images of long-term care, managers in the industry should implement quality-improvement initiatives that include six key strategies: Manage the expectations of residents and their families. Address customers' concerns early. Build long-term customer satisfaction. Allocate resources to achieve exceptional outcomes in key areas. Respond to adverse events with compassion. Reinforce the facility's credibility.

  17. Setting the stage for long-term reproductive health.

    Science.gov (United States)

    Payne, Craig A; Vander Ley, Brian; Poock, Scott E

    2013-11-01

    This article discusses some of the aspects of heifer development that contribute to long-term health and productivity, such as disease prevention and control. Nutrition is also an important component of long-term health, and body condition score is discussed as a way to determine whether the nutrient demands of heifers are being met. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Long-term effects of a preoperative smoking cessation programme

    DEFF Research Database (Denmark)

    Villebro, Nete Munk; Pedersen, Tom; Møller, Ann M

    2008-01-01

    Preoperative smoking intervention programmes reduce post-operative complications in smokers. Little is known about the long-term effect upon smoking cessation.

  19. Pediatric polytrauma : Short-term and long-term outcomes

    NARCIS (Netherlands)

    vanderSluis, CK; Kingma, J; Eisma, WH; tenDuis, HJ

    Objective: To assess the short-term and long-term outcomes of pediatric polytrauma patients and to analyze the extent to which short-term outcomes can predict long-term outcomes. Materials and Methods: All pediatric polytrauma patients (Injury Severity Score of greater than or equal to 16, less than

  20. Inflammatory markers in relation to long-term air pollution

    NARCIS (Netherlands)

    Mostafavi Montazeri, Nahid; Vlaanderen, Jelle; Chadeau-Hyam, Marc; Beelen, Rob; Modig, Lars; Palli, Domenico; Bergdahl, Ingvar A; Vineis, Paolo; Hoek, Gerard; Kyrtopoulos, Soterios Α; Vermeulen, Roel

    Long-term exposure to ambient air pollution can lead to chronic health effects such as cancer, cardiovascular and respiratory disease. Systemic inflammation has been hypothesized as a putative biological mechanism contributing to these adverse health effects. We evaluated the effect of long-term

  1. Factors associated with long-term mortality in acute pancreatitis

    DEFF Research Database (Denmark)

    Nøjgaard, Camilla; Matzen, Peter; Bendtsen, Flemming

    2011-01-01

    Knowledge of the long-term prognosis of acute pancreatitis (AP) is limited. The aims were to investigate: (1) prognostic factors associated with long-term mortality in patients with AP; (2) whether or not the level of serum (S-)amylase at admission had an impact on the prognosis; (3) causes...

  2. Sacrococcygeal teratoma: Clinical characteristics and long-term ...

    African Journals Online (AJOL)

    Background/Purpose : The excision of sacrococcygeal teratoma (SCT) may be associated with significant long-term morbidity for the child. We reviewed our experience with SCT in a tertiary health care facility in a developing country with particular interest on the long-term sequelae. Methods : Between January 1990 and ...

  3. Albumin: Creatinine Ratio during long term Diabetes Mellitus in the ...

    African Journals Online (AJOL)

    Albumin: Creatinine Ratio during long term Diabetes Mellitus in the Assessment of early Nephropathy in Sudanese Population. ... Further studies with 24 hour urine sample are recommended for assessment of Microalbuminuria in long term Diabetic patients, provided that the patients are on a normal diet with regular ...

  4. Developmental Dyslexia and Explicit Long-Term Memory

    Science.gov (United States)

    Menghini, Deny; Carlesimo, Giovanni Augusto; Marotta, Luigi; Finzi, Alessandra; Vicari, Stefano

    2010-01-01

    The reduced verbal long-term memory capacities often reported in dyslexics are generally interpreted as a consequence of their deficit in phonological coding. The present study was aimed at evaluating whether the learning deficit exhibited by dyslexics was restricted only to the verbal component of the long-term memory abilities or also involved…

  5. Long term physical and chemical stability of polyelectrolyte multilayer membranes

    NARCIS (Netherlands)

    de Grooth, Joris; Haakmeester, Brian; Wever, Carlos; Potreck, Jens; de Vos, Wiebe Matthijs; Nijmeijer, Dorothea C.

    2015-01-01

    This work presents a detailed investigation into the long term stability of polyelectrolyte multilayer (PEM) modified membranes, a key factor for the application of these membranes in water purification processes. Although PEM modified membranes have been frequently investigated, their long term

  6. Long-term hearing preservation in vestibular schwannoma

    DEFF Research Database (Denmark)

    Stangerup, Sven-Eric; Thomsen, Jens; Tos, Mirko

    2010-01-01

    The aim of the present study was to evaluate the long-term hearing during "wait and scan" management of vestibular schwannomas.

  7. Quantification of long term emission potential from landfills

    NARCIS (Netherlands)

    Heimovaara, T.J.

    2011-01-01

    Novel approaches for the after-care of Municipal Solid Waste (MSW) landfills are based on technological measures to reduce the long term emission potential in a short time period. Biological degradation in landfills is a means to significantly reduce the long term emission potential. Leachate

  8. Long-term effects of childbirth in MS

    NARCIS (Netherlands)

    D'hooghe, M.B.; Nagels, G.; Uitdehaag, B.M.J.

    2010-01-01

    Background: The uncertainty about long-term effects of childbirth presents MS patients with dilemmas. Methods: Based on clinical data of 330 female MS patients, the long-term effects of childbirth were analysed, using a cross-sectional study design. Four groups of patients were distinguished: (1)

  9. Long-Term Orientation and Educational Performance. Working Paper 174

    Science.gov (United States)

    Figlio, David; Giuliano, Paola; Özek, Umut; Sapienza, Paola

    2017-01-01

    We use remarkable population-level administrative education and birth records from Florida to study the role of Long-Term Orientation on the educational attainment of immigrant students living in the US. Controlling for the quality of schools and individual characteristics, students from countries with long-term oriented attitudes perform better…

  10. Automated agents for management and control of the ALICE Computing Grid

    CERN Document Server

    Grigoras, C; Carminati, F; Legrand, I; Voicu, R

    2010-01-01

    A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we present how we used the MonALISA framework to gather, store and display the relevant metrics in the entire system from central and remote site services. We also show the automatic local and global procedures that are triggered by the monitored values. Decision-taking agents are used to restart remote services, alert the operators in case of problems that cannot be automatically solved, submit production jobs, replicate and analyze raw data, load-balance resources, and apply other control mechanisms that optimize the overall workflow and simplify day-to-day operations. Synthetic graphical views for all operational parameters, correlations, state of services and applications as we...
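
    The record above describes how monitored metrics feed decision-taking agents. The sketch below is only an illustration of that pattern; the metric source, thresholds, service names and actions are hypothetical, and the actual ALICE setup relies on the MonALISA framework rather than this code.

      # Illustrative threshold-based control agent in the spirit of the record's
      # "decision-taking agents". All names and thresholds are made up.
      import random
      import time

      def read_metric(service: str) -> float:
          """Stand-in for a monitoring query; returns a fake availability figure."""
          return random.uniform(0.0, 1.0)

      def restart_service(service: str) -> None:
          print(f"[agent] restarting {service}")

      def alert_operator(service: str, value: float) -> None:
          print(f"[agent] ALERT: {service} unhealthy (availability={value:.2f}), paging operator")

      def run_agent(services, restart_threshold=0.5, alert_threshold=0.2, cycles=3):
          """Poll each service; restart when degraded, escalate when critically low."""
          for _ in range(cycles):
              for svc in services:
                  value = read_metric(svc)
                  if value < alert_threshold:
                      alert_operator(svc, value)   # automation cannot fix this case
                  elif value < restart_threshold:
                      restart_service(svc)         # self-healing action
              time.sleep(0.1)                      # polling interval, shortened for the demo

      if __name__ == "__main__":
          run_agent(["storage-proxy", "cluster-monitor", "job-agent"])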

  11. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
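
    To make the knapsack formulation above concrete, here is a minimal sketch under simplifying assumptions: each task offers several candidate options (a resource, a utility and a multidimensional demand), at most one option per task may be selected, and capacities must not be exceeded. The greedy utility-density heuristic and all names below are illustrative; they are not the allocation policies evaluated in the record.

      # Toy 0-1 multichoice multidimensional knapsack allocation (greedy heuristic).
      from dataclasses import dataclass

      @dataclass
      class Option:
          task: str
          resource: str
          utility: float        # task-option utility under some chosen policy
          demand: dict          # e.g. {"cpu_hours": 10, "storage_gb": 5}

      def greedy_allocate(options, capacity):
          """Pick at most one option per task, highest utility density first."""
          def density(opt):
              used = sum(opt.demand.get(k, 0) / capacity[k] for k in capacity)
              return opt.utility / used if used > 0 else float("inf")

          chosen, assigned = [], set()
          remaining = dict(capacity)
          for opt in sorted(options, key=density, reverse=True):
              if opt.task in assigned:
                  continue                       # multichoice: one option per task
              if all(remaining[k] >= opt.demand.get(k, 0) for k in remaining):
                  chosen.append(opt)
                  assigned.add(opt.task)
                  for k in remaining:            # consume multidimensional capacity
                      remaining[k] -= opt.demand.get(k, 0)
          return chosen

      if __name__ == "__main__":
          opts = [
              Option("t1", "siteA", 8.0, {"cpu_hours": 10, "storage_gb": 2}),
              Option("t1", "siteB", 6.0, {"cpu_hours": 4, "storage_gb": 1}),
              Option("t2", "siteA", 5.0, {"cpu_hours": 6, "storage_gb": 8}),
          ]
          for o in greedy_allocate(opts, {"cpu_hours": 12, "storage_gb": 10}):
              print(o.task, "->", o.resource)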

  12. Long-Term Dynamics of Autonomous Fractional Differential Equations

    Science.gov (United States)

    Liu, Tao; Xu, Wei; Xu, Yong; Han, Qun

    This paper investigates the long-term dynamic behaviors of autonomous fractional differential equations with an effective numerical method. The long-term dynamic behaviors predict where systems are heading after long-term evolution. We make some modifications and transplant cell mapping methods to autonomous fractional differential equations; the mapping time duration of cell mapping is enlarged to deal with the long memory effect. Three illustrative examples, i.e. the fractional Lotka-Volterra equation, the fractional van der Pol oscillator and the fractional Duffing equation, are studied with our revised generalized cell mapping method. We obtain long-term dynamics, such as attractors, basins of attraction, and saddles. Compared with some existing stability and numerical results, the validity of our method is verified. Furthermore, we find that the fractional order affects the long-term dynamics of autonomous fractional differential equations.
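
    The record's generalized cell mapping method is not reproduced here; the sketch below only illustrates the "long memory effect" that motivates the enlarged mapping duration. It integrates a simple fractional relaxation equation with the explicit Grünwald-Letnikov scheme, in which every time step must sum over the entire trajectory history. The equation choice and parameters are illustrative assumptions.

      # Explicit Grunwald-Letnikov scheme for D^alpha x(t) = -lam * x(t), 0 < alpha <= 1.
      def gl_coefficients(alpha, n):
          """c_j = (-1)^j * binom(alpha, j), via the standard recurrence."""
          c = [1.0]
          for j in range(1, n + 1):
              c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
          return c

      def fractional_relaxation(alpha=0.8, lam=1.0, x0=1.0, h=0.01, steps=2000):
          """Each step sums over the full history: the long memory of fractional systems."""
          c = gl_coefficients(alpha, steps)
          x = [x0]
          for n in range(1, steps + 1):
              history = sum(c[j] * x[n - j] for j in range(1, n + 1))
              x.append(-lam * x[-1] * h ** alpha - history)
          return x

      if __name__ == "__main__":
          traj = fractional_relaxation()
          print("x at t = 0, 10, 20:", traj[0], traj[1000], traj[2000])
          # The slow, power-law-like tail (instead of exp(-t)) is the long-memory
          # behaviour that the record's enlarged mapping duration is meant to capture.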

  13. Experimental Researches on Long-Term Strength of Granite Gneiss

    Directory of Open Access Journals (Sweden)

    Lin Liu

    2015-01-01

    It is important to determine the long-term strength of rock materials in order to evaluate the long-term stability of rock engineering structures. In this study, a series of triaxial creep tests was conducted on granite gneiss under different pore pressures. Based on the test data, we proposed two new quantitative methods, the tangent method and the intersection method, to determine the long-term strength of rock. Meanwhile, the isochronous stress-strain curve method was adopted to verify the accuracy and operability of the two new methods. It is concluded that the new methods are suitable for the study of the long-term strength of rock. The effect of pore pressure on the long-term strength of rock in triaxial creep tests is also discussed.
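
    As a small illustration of the isochronous stress-strain curve method mentioned above (the tangent and intersection methods themselves are not reproduced), the sketch below extracts stress-strain pairs at fixed isochrones from creep test series. The input data are synthetic and every number is an assumption.

      # Building isochronous stress-strain curves from (synthetic) creep test data.
      import numpy as np

      def isochronous_curves(times, creep_records, isochrones):
          """creep_records maps a stress level to its strain time series."""
          curves = {}
          for t_iso in isochrones:
              pairs = []
              for stress, strain in sorted(creep_records.items()):
                  pairs.append((stress, float(np.interp(t_iso, times, strain))))
              curves[t_iso] = pairs
          return curves

      if __name__ == "__main__":
          t = np.linspace(0.0, 100.0, 201)                        # hours
          # Synthetic creep curves: instantaneous strain plus a slowly growing part
          # that accelerates at higher deviatoric stress.
          records = {s: 1e-4 * s + 5e-6 * s * (1 + s / 100.0) * np.log1p(t)
                     for s in (20.0, 40.0, 60.0, 80.0)}            # MPa
          for t_iso, pairs in isochronous_curves(t, records, (1.0, 10.0, 100.0)).items():
              print(f"t = {t_iso:6.1f} h:", ["%.0f MPa -> %.2e" % p for p in pairs])
          # In practice the long-term strength is read off where successive
          # isochronous curves begin to diverge, as discussed in the record.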

  14. Long-Term Memory Performance in Adult ADHD.

    Science.gov (United States)

    Skodzik, Timo; Holling, Heinz; Pedersen, Anya

    2017-02-01

    Memory problems are a frequently reported symptom in adult ADHD, and it is well-documented that adults with ADHD perform poorly on long-term memory tests. However, the cause of this effect is still controversial. The present meta-analysis examined underlying mechanisms that may lead to long-term memory impairments in adult ADHD. We performed separate meta-analyses of measures of memory acquisition and long-term memory using both verbal and visual memory tests. In addition, the influence of potential moderator variables was examined. Adults with ADHD performed significantly worse than controls on verbal but not on visual long-term memory and memory acquisition subtests. The long-term memory deficit was strongly statistically related to the memory acquisition deficit. In contrast, no retrieval problems were observable. Our results suggest that memory deficits in adult ADHD reflect a learning deficit induced at the stage of encoding. Implications for clinical and research settings are presented.

  15. Desktop Grid Computing with BOINC and its Use for Solving the RND telecommunication Problem

    International Nuclear Information System (INIS)

    Vega-Rodriguez, M. A.; Vega-Perez, D.; Gomez-Pulido, J. A.; Sanchez-Perez, J. M.

    2007-01-01

    An important problem in mobile/cellular technology is covering a certain geographical area with the smallest number of radio antennas while achieving the largest coverage rate. This is the well-known telecommunication problem known as Radio Network Design (RND). This optimization problem can be solved by bio-inspired algorithms, among other options. In this work we use the PBIL (Population-Based Incremental Learning) algorithm, which has been little studied in this field but with which we have obtained very good results. PBIL is based on genetic algorithms and competitive learning (typical of neural networks), and is a population evolution model built on probabilistic models. Due to the high number of configuration parameters of PBIL, and because we want to test the RND problem with numerous variants, we have used grid computing with BOINC (Berkeley Open Infrastructure for Network Computing). In this way, we have been able to execute thousands of experiments in a few days using around 100 computers at the same time. In this paper we present the most interesting results from our work. (Author)
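
    Since PBIL is central to this record, a minimal generic implementation may help. The fitness function below is only a toy stand-in for the RND objective (maximise covered cells while penalising the number of antennas); the candidate sites, coverage sets and parameter values are invented, and the BOINC distribution layer is not shown.

      # Minimal PBIL (Population-Based Incremental Learning) with a toy RND-like fitness.
      import random

      def pbil(fitness, n_bits, pop_size=50, generations=200, lr=0.1,
               mut_p=0.02, mut_shift=0.05):
          p = [0.5] * n_bits                                  # probability vector
          best, best_fit = None, float("-inf")
          for _ in range(generations):
              population = [[1 if random.random() < pi else 0 for pi in p]
                            for _ in range(pop_size)]
              gen_best = max(population, key=fitness)
              gen_fit = fitness(gen_best)
              if gen_fit > best_fit:
                  best, best_fit = gen_best, gen_fit
              # Shift the probability vector toward the generation's best individual.
              p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, gen_best)]
              # Occasional small drift keeps exploration alive (PBIL's mutation).
              p = [min(1.0, max(0.0, pi + random.uniform(-mut_shift, mut_shift)))
                   if random.random() < mut_p else pi for pi in p]
          return best, best_fit

      if __name__ == "__main__":
          random.seed(1)
          # 20 candidate antenna sites on a 10x10 area; each covers a fixed cell set.
          coverage = [set(random.sample(range(100), 15)) for _ in range(20)]
          def fitness(bits):
              covered = set().union(*(coverage[i] for i, b in enumerate(bits) if b)) \
                        if any(bits) else set()
              return len(covered) - 2 * sum(bits)             # reward cover, penalise antennas
          solution, value = pbil(fitness, n_bits=20)
          print("antennas used:", sum(solution), "fitness:", value)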

  16. Numerical simulation of gender differences in a long-term microgravity exposure

    Science.gov (United States)

    Perez-Poch, Antoni

    The objective of this work is to analyse and simulate gender differences when individuals are exposed to long-term microgravity. The risk of a health impairment that may jeopardize a long-term mission is also evaluated. Computer simulation is becoming a promising line of research, as physiological models become more and more sophisticated and reliable. Technological advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce; therefore the predictions of computer simulations are of major importance in this field. Our approach is based on a previous model developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows perturbations such as physical exercise or countermeasures to be introduced. The implementation is based on a complex electrical-like model of this control system, built with inexpensive software development frameworks, and has been tested and validated with the available experimental data. Gender differences have been implemented for this specific work as an adjustment of a number of parameters included in the model. Female versus male physiological differences have therefore been taken into account, based on estimates from the physiology literature. A number of simulations have been carried out for long-term exposure to microgravity. Gravity, varying from Earth level to zero, and exposure time are the two main variables involved in the construction of the results, which include responses to patterns of aerobic physical exercise and to thermal stress simulating an extra-vehicular activity. Results show
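
    NELME itself is not described in detail in this record, so the sketch below only shows the kind of electrical-like building block such cardiovascular models use: a two-element Windkessel (resistor-capacitor) analogue of arterial pressure. The gravity scaling of stroke volume is a purely illustrative assumption, not a NELME parameter.

      # Two-element Windkessel analogue: C * dP/dt = Q_in - P / R.
      def windkessel(g_factor=1.0, R=1.0, C=1.2, heart_rate=70, beats=60, dt=0.001):
          """
          R: peripheral resistance (mmHg*s/mL), C: arterial compliance (mL/mmHg).
          g_factor: 1.0 = Earth gravity, 0.0 = microgravity; here it is assumed to
          raise stroke volume slightly in microgravity (illustrative only).
          Returns the mean arterial pressure over the simulated window.
          """
          period = 60.0 / heart_rate
          stroke_volume = 70.0 * (1.0 + 0.1 * (1.0 - g_factor))   # mL, assumed relation
          p, t, p_sum, n = 80.0, 0.0, 0.0, 0
          while t < beats * period:
              phase = (t % period) / period
              # Rectangular systolic inflow during the first third of each beat.
              q_in = stroke_volume / (period / 3.0) if phase < 1.0 / 3.0 else 0.0
              p += (q_in - p / R) / C * dt
              p_sum += p
              n += 1
              t += dt
          return p_sum / n

      if __name__ == "__main__":
          for g in (1.0, 0.5, 0.0):
              print(f"g = {g:.1f} g0 -> mean arterial pressure ~ {windkessel(g):.1f} mmHg")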

  17. More efficient optimization of long-term water supply portfolios

    Science.gov (United States)

    Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.

    2009-03-01

    The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
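
    The control variate idea used in the record can be shown with a generic example: estimate an expected cost that depends on a random inflow, and reduce the estimator's variance by exploiting the known mean of the inflow itself. The toy cost function, distributions and numbers below are assumptions; this is not the coupled hydrologic-economic model.

      # Plain Monte Carlo vs. control-variate Monte Carlo on a toy water-cost model.
      import random
      import statistics

      def cost(inflow):
          """Toy utility cost: expensive transfers are needed when inflow is low."""
          shortfall = max(0.0, 120.0 - inflow)
          return 10.0 + 0.8 * shortfall + 0.01 * shortfall ** 2

      def expected_cost_plain(n, seed=0):
          rng = random.Random(seed)
          samples = [cost(rng.gauss(100.0, 20.0)) for _ in range(n)]
          return statistics.mean(samples), statistics.stdev(samples) / n ** 0.5

      def expected_cost_control_variate(n, inflow_mean=100.0, seed=0):
          rng = random.Random(seed)
          inflows = [rng.gauss(inflow_mean, 20.0) for _ in range(n)]
          costs = [cost(x) for x in inflows]
          # Coefficient b = Cov(cost, inflow) / Var(inflow), estimated from the sample.
          mc, mx = statistics.mean(costs), statistics.mean(inflows)
          cov = sum((c - mc) * (x - mx) for c, x in zip(costs, inflows)) / (n - 1)
          b = cov / statistics.variance(inflows)
          adjusted = [c - b * (x - inflow_mean) for c, x in zip(costs, inflows)]
          return statistics.mean(adjusted), statistics.stdev(adjusted) / n ** 0.5

      if __name__ == "__main__":
          for label, (mean, se) in (("plain MC", expected_cost_plain(2000)),
                                    ("control variate", expected_cost_control_variate(2000))):
              print(f"{label:16s}: mean cost = {mean:6.2f}  std. err. = {se:.3f}")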

  18. Debris filtering efficiency and its effect on long term cooling capability

    International Nuclear Information System (INIS)

    Jung, Min-Su; Kim, Kyu-Tae

    2013-01-01

    The effect of debris passing from the containment sump into the reactor core on the long term cooling (LTC) capability after a loss of coolant accident (LOCA) was evaluated. The evaluation indicates that the debris-filtering capability of the P-grid and G-grid designs may not have a detrimental effect on the LTC capability after a LOCA, provided that the sump mesh size is smaller than 2.54 mm in diameter.

  19. The Womanly World of Long Term Care: The Plight of the Long Term Care Worker. Gray Paper.

    Science.gov (United States)

    Older Women's League, Washington, DC.

    Long-term care workers (those who are paid to provide custodial care for long-term patients in nursing homes or at home) must care for a growing number of increasingly disabled or dependent persons. They are working for agencies and institutions under growing pressure to increase productivity. They face new training and competency requirements,…

  20. Input reduction for long-term morphodynamic simulations in wave-dominated coastal settings

    NARCIS (Netherlands)

    Walstra, D.J.R.; Hoekstra, R.; Tonnon, P.K.; Ruessink, B.G.

    2013-01-01

    Input reduction is imperative to long-term (> years) morphodynamic simulations to avoid excessive computation times. Here, we introduce an input-reduction framework for wave-dominated coastal settings. Our framework comprises 4 steps, viz. (1) the selection of the duration of the original (full)
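
    The framework's four steps are truncated in this record, so the sketch below shows only one generic flavour of wave-input reduction: binning a wave-height time series into a few representative conditions, each weighted by its frequency of occurrence. The synthetic series, bin edges and numbers are assumptions, not the authors' method.

      # Reduce a wave climate to representative conditions with occurrence weights.
      import random

      def reduce_wave_climate(wave_heights, bin_edges):
          """Return [(representative_height, weight)], weights summing to ~1."""
          conditions, total = [], len(wave_heights)
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              members = [h for h in wave_heights if lo <= h < hi]
              if members:
                  conditions.append((sum(members) / len(members), len(members) / total))
          return conditions

      if __name__ == "__main__":
          random.seed(42)
          # Synthetic one-year hourly significant wave height record (metres).
          series = [abs(random.gauss(1.2, 0.6)) for _ in range(365 * 24)]
          for hs, w in reduce_wave_climate(series, [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 10.0]):
              print(f"Hs ~ {hs:4.2f} m  weight = {w:5.3f}")
          # The morphodynamic model is then forced with these few conditions instead
          # of the full series, keeping multi-year simulations tractable.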