WorldWideScience

Sample records for computing grid dosimetrie

  1. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Background: Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling exceeding the computing capacity of a single institution. Results: This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation that supports sharing tacit knowledge within a community. Conclusion: By extending the concept of a grid from computing grid to knowledge grid, a grid can be used not only as shareable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  2. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  3. Recent trends in grid computing

    International Nuclear Information System (INIS)

    Miura, Kenichi

    2004-01-01

    Grid computing is a technology which allows uniform and transparent access to geographically dispersed computational resources, such as computers, databases, experimental and observational equipment, etc., via high-speed, high-bandwidth networking. The commonly used analogy is that of the electrical power grid, whereby household electricity is made available from outlets on the wall, and little thought needs to be given to where the electricity is generated and how it is transmitted. The usage of grids also includes distributed parallel computing, high-throughput computing, data-intensive computing (data grid) and collaborative computing. This paper reviews the historical background, software structure, current status and on-going grid projects, including applications of grid technology to nuclear fusion research. (author)

  4. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid

    International Nuclear Information System (INIS)

    Thiam, Ch.O.

    2007-10-01

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.) which needs to be precise and fast. The G.A.T.E. platform for Monte-Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel the validation of the G.A.T.E. platform for the modelling of electron and low-energy photon sources and the optimized use of grid infrastructures to reduce simulation computing time. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposited during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine-125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show a good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary for computing a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G
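
    The roughly 30-fold speed-up reported above rests on a generic pattern: one long Monte-Carlo run is split into many independent grid jobs with distinct random seeds, and the partial results are merged afterwards. The sketch below illustrates only that pattern; it is not the thesis' actual tooling, and the history count, seed values, file names and placeholder submission step are all hypothetical.

      # Hedged illustration: split one long Monte-Carlo simulation into N_JOBS
      # independent grid jobs with distinct seeds, then merge the partial results.
      TOTAL_HISTORIES = 30_000_000
      N_JOBS = 30                              # wall time drops roughly N_JOBS-fold

      jobs = []
      for job_id in range(N_JOBS):
          jobs.append({
              "id": job_id,
              "histories": TOTAL_HISTORIES // N_JOBS,
              "seed": 12345 + job_id,          # independent random stream per job
              "output": f"dose_map_{job_id:03d}.npy",
          })

      for job in jobs:
          # On a real grid each entry would be wrapped in a job description and
          # submitted through the middleware; here we only print the plan.
          print(f"submit job {job['id']}: {job['histories']} histories, seed {job['seed']}")

      # After completion, the per-job dose maps are summed voxel by voxel, e.g.
      # dose_total = sum(np.load(job["output"]) for job in jobs)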

  5. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-energy physics has played a very important role in the development of grid computing in the world, and Latin America has been no different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and their use in the private sector. (author)

  6. Computational methods in several fields of radiation dosimetry

    International Nuclear Information System (INIS)

    Paretzke, Herwig G.

    2010-01-01

    Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks, ranging from radioecology, retrospective and predictive dosimetry and personal dosimetry up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture some examples of our own computational models will be given from the various dosimetric fields, including a) radioecology (e.g. with the code systems based on ECOSYS, which was developed well before the Chernobyl reactor accident and tested thoroughly afterwards), b) internal dosimetry (improved metabolism models based on our own data), c) external dosimetry (with the new ICRU-ICRP voxel phantom developed by our lab), d) radiation therapy (with Geant4 as applied to mixed reactor radiation incident on individualized voxel phantoms), e) some aspects of nanodosimetric track structure computations (not dealt with in the other presentation of this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on good scientific practices which should generally be followed when developing and applying such computational models.

  7. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  8. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  9. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new-generation computing environment for high energy physics experiments is introduced briefly in this paper. The development of high energy physics experiments and the new computing requirements of the experiments are presented. The blueprint of the new-generation computing environment of the LHC experiments, the history of Grid computing, the R and D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. Grid computing research in the Chinese high energy physics community is introduced last. (authors)

  10. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  11. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only eight worldwide Tier-1 computing centres, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  12. Dosimetry computer module of the gamma irradiator of ININ

    International Nuclear Information System (INIS)

    Ledezma F, L. E.; Baldomero J, R.; Agis E, K. A.

    2012-10-01

    This work presents the technical specifications for the upgrade of the dosimetry module of the computer system of the gamma irradiator of the Instituto Nacional de Investigaciones Nucleares (ININ), the result of which allows the integration and consultation of industrial dosimetry information under a client-server scheme. (Author)

  13. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is applying the resources of many computers in a network to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources; (2) as a way to solve problems that can't be approached without an enormous amount of computing power; and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  14. European questionnaire on the use of computer programmes in radiation dosimetry

    International Nuclear Information System (INIS)

    Gualdrini, G.; Tanner, R.; Terrisol, M.

    1999-01-01

    Because of a potential reduction of the necessary experimental effort, the combination of measurements with supplementary calculations, also in the field of radiation dosimetry, may allow time and money to be saved, provided that the computational methods used are well suited to reproduce experimental data with satisfactory quality. The dramatic increase in computing power in recent years now permits the use of computational tools for dosimetry also in routine applications. Many institutions dealing with radiation protection, however, have small groups which, in addition to their routine work, often cannot afford to specialise in the field of computational dosimetry. This means that not only experts but increasingly also casual users employ complicated computational tools such as general-purpose transport codes. This massive use of computer programmes in radiation protection and dosimetry applications motivated the Concerted Action 'Investigation and Quality Assurance of Numerical Methods in Radiation Protection Dosimetry' of the 4th framework programme of the European Commission to prepare, distribute and evaluate a questionnaire on the use of such codes. A significant number of scientists from nearly all the countries of the European Community (and some countries outside Europe) contributed to the questionnaire, which allowed a satisfactory overview of the state of the art in this field to be obtained. The results obtained from the questionnaire and summarised in the present report are felt to be indicative of the situation regarding the use of sophisticated computer codes within the European Community, although the group of participating scientists may not be a representative sample in a strict statistical sense [it

  15. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    Full-text: The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to run the application and speed up the computing process. (author)

  16. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. Instrument control at Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occur during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Despite the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  17. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousand) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for the computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT centre (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, and which is especially used to compare different systems (local resource managers, other grid software e.g. from the NorduGrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  18. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  19. Grid computing infrastructure, service, and applications

    CERN Document Server

    Jie, Wei; Chen, Jinjun

    2009-01-01

    Offering a comprehensive discussion of advances in grid computing, this book summarizes the concepts, methods, technologies, and applications. It covers topics such as philosophy, middleware, architecture, services, and applications. It also includes technical details to demonstrate how grid computing works in the real world

  20. Grid computing faces IT industry test

    CERN Multimedia

    Magno, L

    2003-01-01

    Software company Oracle Corp. unveiled its Oracle 10g grid computing platform at the annual OracleWorld user convention in San Francisco. It gave concrete examples of how grid computing can be a viable option outside the scientific community where the concept was born (1 page).

  1. Southampton uni's computer whizzes develop "mini" grid

    CERN Multimedia

    Sherriff, Lucy

    2006-01-01

    "In a bid to help its students explore the potential of grid computing, the University of Southampton's Computer Science department has developed what it calls a "lightweight grid". The system has been designed to allow students to experiment with grid technology without the complexity of inherent security concerns of the real thing. (1 page)

  2. Dosimetry; La dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Le Couteulx, I.; Apretna, D.; Beaugerie, M.F. [Electricite de France (EDF), 75 - Paris (France)] [and others]

    2003-07-01

    Eight articles deal with dosimetry. Two articles evaluate radiation doses in specific cases, such as the dosimetry of patients in radiodiagnosis; three articles are devoted to detectors (for neutrons and for X and gamma radiation) and to a computer code for reconstructing the dosimetry of an accident due to external exposure. (N.C.)

  3. Computer dosimetry of 192Ir wire

    International Nuclear Information System (INIS)

    Kline, R.W.; Gillin, M.T.; Grimm, D.F.; Niroomand-Rad, A.

    1985-01-01

    The dosimetry of 192Ir linear sources with a commercial treatment planning computer system has been evaluated. Reference dose rate data were selected from the literature and normalized in a manner consistent with our clinical and dosimetric terminology. The results of the computer calculations are compared to the reference data, and good agreement is shown at distances within about 7 cm from a linear source. The methodology of translating the source calibration in terms of exposure rate for use in the treatment planning computer is developed. This may be useful as a practical guideline for users of similar computer calculation programs for iridium as well as other sources
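
    As a hedged aside (not taken from the record itself): the simplest computational model behind such linear-source dosimetry, ignoring filtration, attenuation and scatter, treats the wire as a line of point sources and integrates the inverse-square contributions. For a point at perpendicular distance h from the midpoint of a wire of active length L, total activity A and exposure rate constant Gamma, this gives

      \dot{X}(h) = \int_{-L/2}^{L/2} \frac{\Gamma\,(A/L)\,\mathrm{d}x}{x^{2}+h^{2}} = \frac{2\,\Gamma A}{L\,h}\,\arctan\!\left(\frac{L}{2h}\right)

    Real treatment-planning calculations, including those evaluated in the record above, additionally account for source filtration, oblique attenuation and scatter.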

  4. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  5. Discovery Mondays: 'The Grid: a universal computer'

    CERN Multimedia

    2006-01-01

    How can one store and analyse the 15 million billion pieces of data that the LHC will produce each year with a computer that isn't the size of a skyscraper? The IT experts have found the answer: the Grid, which will harness the power of tens of thousands of computers in the world by putting them together on one network and making them work like a single computer, achieving a power that has not yet been matched. The Grid, inspired by the Web, already exists - in fact, several of them exist in the field of science. The European EGEE project, led by CERN, contributes not only to the study of particle physics but to medical research as well, notably in the study of malaria and avian flu. The next Discovery Monday invites you to explore this futuristic computing technology. The 'Grid Masters' of CERN have prepared lively animations to help you understand how the Grid works. Children can practice saving the planet on the Grid video game. You will also discover other applications such as UNOSAT, a United Nations...

  6. The MicroGrid: A Scientific Tool for Modeling Computational Grids

    Directory of Open Access Journals (Sweden)

    H.J. Song

    2000-01-01

    The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on micro-benchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2% to 4%).

  7. Fault tolerance in computational grids: perspectives, challenges, and issues.

    Science.gov (United States)

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  8. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  9. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    Soil erosion estimation is an important part of a land consolidation process. The Universal Soil Loss Equation (USLE) was presented by Wischmeier and Smith. The USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. The L and S factors are usually combined into one LS factor, the topographic factor. The individual factors are determined from several sources, such as a DTM (Digital Terrain Model), BPEJ – soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all the above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable degradation of precision. Too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the precision of the sources, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying has been done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. In such cases, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify spots of higher erosion. The second step then uses a smaller cell size but performs the computation only for the area identified in the previous step. This decomposition allows a
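
    For readers unfamiliar with the equation referenced above, the grid-based variant simply evaluates the USLE product cell by cell. The sketch below is illustrative only; the factor values are invented and no particular GIS package is assumed.

      import numpy as np

      # Hypothetical factor grids (same shape and cell size); values are illustrative.
      R  = np.full((4, 4), 40.0)   # rainfall factor
      K  = np.full((4, 4), 0.30)   # soil erodibility factor
      LS = np.full((4, 4), 1.20)   # combined topographic (slope length/gradient) factor
      C  = np.full((4, 4), 0.15)   # cropping management factor
      P  = np.full((4, 4), 1.00)   # erosion control management factor

      # Grid-based USLE: annual soil loss G is evaluated per cell instead of
      # once per characteristic profile.
      G = R * K * LS * C * P
      print(G.mean())              # e.g. mean annual soil loss over the area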

  10. Financial Derivatives Market for Grid Computing

    CERN Document Server

    Aubert, David; Lindset, Snorre; Huuse, Henning

    2007-01-01

    This Master's thesis studies the feasibility and properties of a financial derivatives market for Grid computing, a service for sharing computing resources over a network such as the Internet. For the European Organization for Nuclear Research (CERN) to perform research with the world's largest and most complex machine, the Large Hadron Collider (LHC), Grid computing was developed to handle the information created. In accordance with the mandate of the CERN Technology Transfer (TT) group, this thesis is a part of CERN's dissemination of the Grid technology. The thesis gives a brief overview of the use of the Grid technology and where it is heading. IT trend analysts and large-scale IT vendors see this technology as key in transforming the world of IT. They predict that in a matter of years, IT will be bought as a service, instead of as a good. Commoditization of IT, delivered as a service, is a paradigm shift that will have a broad impact on all parts of the IT market, as well as on society as a whole. Political, e...

  11. ATLAS grid compute cluster with virtualized service nodes

    International Nuclear Information System (INIS)

    Mejia, J; Stonjek, S; Kluth, S

    2010-01-01

    The ATLAS Computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly for maintenance reasons, computer centres install the same operating system and version on all computers. This might lead to problems with the Grid middleware if the local version is different from the one for which it has been developed. At RZG we partly solved this conflict by using virtualization technology for the service nodes. We will present the setup used at RZG and show how it helped to solve the problems described above. In addition we will illustrate the additional advantages gained by the above setup.

  12. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid; Dosimetrie en radiotherapie et curietherapie par simulation Monte-Carlo GATE sur grille informatique

    Energy Technology Data Exchange (ETDEWEB)

    Thiam, Ch O

    2007-10-15

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.) which needs to be precise and fast. The G.A.T.E. platform for Monte-Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel the validation of the G.A.T.E. platform for the modelling of electron and low-energy photon sources and the optimized use of grid infrastructures to reduce simulation computing time. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposited during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine-125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show a good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary for computing a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G

  13. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  14. Comparison of Real-Time Intraoperative Ultrasound-Based Dosimetry With Postoperative Computed Tomography-Based Dosimetry for Prostate Brachytherapy

    International Nuclear Information System (INIS)

    Nag, Subir; Shi Peipei; Liu Bingren; Gupta, Nilendu; Bahnson, Robert R.; Wang, Jian Z.

    2008-01-01

    Purpose: To evaluate whether real-time intraoperative ultrasound (US)-based dosimetry can replace conventional postoperative computed tomography (CT)-based dosimetry in prostate brachytherapy. Methods and Materials: Between December 2001 and November 2002, 82 patients underwent 103Pd prostate brachytherapy. An Interplant treatment planning system was used for real-time intraoperative transrectal US-guided treatment planning. The dose distribution was updated according to the estimated seed positions to obtain the dose-volume histograms. Postoperative CT-based dosimetry was performed a few hours later using the Theraplan-Plus treatment planning system. The dosimetric parameters obtained from the two imaging modalities were compared. Results: The results of this study revealed correlations between the US- and CT-based dosimetry. However, large variations were found in the implant-quality parameters of the two modalities, including the doses covering 100%, 90%, and 80% of the prostate volume and the prostate volumes covered by 100%, 150%, and 200% of the prescription dose. The mean relative difference was 38% and 16% for doses covering 100% and 90% of the prostate volume and 10% and 21% for prostate volumes covered by 100% and 150% of the prescription dose, respectively. The CT-based volume covered by 200% of the prescription dose was about 30% greater than the US-based one. Compared with CT-based dosimetry, US-based dosimetry significantly underestimated the dose to normal organs, especially the rectum. The average US-based maximal dose and volume covered by 100% of the prescription dose for the rectum were 72 Gy and 0.01 cm3, respectively, much lower than the 159 Gy and 0.65 cm3 obtained using CT-based dosimetry. Conclusion: Although dosimetry using intraoperative US-based planning provides preliminary real-time information, it does not accurately reflect the postoperative CT-based dosimetry. Until studies have determined whether US-based dosimetry or

  15. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide seemingly unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic in computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the development of grids in Europe, the status of the so-called national grid initiatives, as well as the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  16. Experimental and computational development of a natural breast phantom for dosimetry studies

    International Nuclear Information System (INIS)

    Nogueira, Luciana B.; Campos, Tarcisio P.R.

    2013-01-01

    This paper describes the experimental and computational development of a natural breast phantom, anthropomorphic and anthropometric, for studies in the dosimetry of breast brachytherapy and teletherapy. The natural breast phantom developed corresponds to the fibroadipose breasts of women aged 30 to 50 years, presenting medium radiographic density. The experimental breast phantom consists of three tissue equivalents (TEs): glandular TE, adipose TE and skin TE. These TEs were developed according to the chemical composition of the human breast and present a radiological response to exposure. Once the construction of the experimental breast phantom was completed, it was mounted on a thorax phantom previously developed by the research group NRI/UFMG. The computational breast phantom was then constructed by performing computed tomography (CT) with axial slices of the thorax phantom. From the images generated by CT, a voxel model of the thorax phantom was developed with the SISCODES computational program, the computational breast phantom being represented by the same TEs as the experimental breast phantom. The images generated by CT allowed the radiological equivalence of the tissues to be evaluated. The breast phantom is being used in experimental dosimetry studies in both brachytherapy and teletherapy of the breast. Dosimetry studies with the MCNP-5 code using the computational model of the breast phantom are in progress. (author)

  17. Security Implications of Typical Grid Computing Usage Scenarios

    International Nuclear Information System (INIS)

    Humphrey, Marty; Thompson, Mary R.

    2001-01-01

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing

  18. Security Implications of Typical Grid Computing Usage Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.

    2001-06-05

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  19. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, and it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  20. Bringing Federated Identity to Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Teheran, Jeny [Fermilab]

    2016-03-04

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  1. Improved visibility computation on massive grid terrains

    NARCIS (Netherlands)

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are
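
    As an illustrative aside (this is not the paper's I/O-efficient algorithm, which targets terrains too large for main memory), the viewshed definition can be sketched with a naive in-memory line-of-sight test; the function name, parameters and the example terrain below are all hypothetical.

      import numpy as np

      def viewshed(elev, vi, vj, observer_height=1.8):
          """Naive visibility map: a grid cell is visible from the viewpoint (vi, vj)
          if no terrain sample along the straight sight line rises above that line."""
          rows, cols = elev.shape
          eye = elev[vi, vj] + observer_height
          visible = np.zeros((rows, cols), dtype=bool)
          for i in range(rows):
              for j in range(cols):
                  di, dj = i - vi, j - vj
                  steps = max(abs(di), abs(dj))
                  blocked = False
                  for s in range(1, steps):                 # sample along the sight line
                      t = s / steps
                      ground = elev[round(vi + t * di), round(vj + t * dj)]
                      sight = eye + t * (elev[i, j] - eye)  # height of the sight line here
                      if ground > sight:
                          blocked = True
                          break
                  visible[i, j] = not blocked
          return visible

      # Example on a tiny random terrain; real inputs are massive grid DEMs.
      terrain = np.random.default_rng(0).uniform(0.0, 100.0, size=(50, 50))
      print(viewshed(terrain, 25, 25).sum(), "cells visible")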

  2. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a PDF view of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  3. Grid Computing Das wahre Web 2.0?

    CERN Document Server

    2008-01-01

    'Grid computing is a further development of the World Wide Web, the next generation so to speak,' said (1) Franz-Josef Pfreundt (Fraunhofer-Institut für Techno- und Wirtschaftsmathematik) as early as CeBIT 2003, pointing to NASA as the grid avant-garde.

  4. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  5. Computational anthropomorphic phantoms for radiation protection dosimetry: evolution and prospects

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lee, Jaiki

    2006-01-01

    Computational anthropomorphic phantoms are computer models of human anatomy used in the calculation of radiation dose distributions in the human body upon exposure to a radiation source. Depending on the manner in which they represent human anatomy, they are categorized into two classes: stylized and tomographic phantoms. Stylized phantoms, which have mainly been developed at the Oak Ridge National Laboratory (ORNL), describe human anatomy by using simple mathematical equations of analytical geometry. Several improved stylized phantoms, such as male and female adults, a pediatric series, and enhanced organ models, have been developed following the first hermaphrodite adult stylized phantom, the Medical Internal Radiation Dose (MIRD)-5 phantom. Although stylized phantoms have significantly contributed to dosimetry calculations, they provide only approximations of the true anatomical features of the human body and the resulting organ dose distribution. An alternative class of computational phantom, the tomographic phantom, is based upon three-dimensional imaging techniques such as Magnetic Resonance (MR) imaging and Computed Tomography (CT). Tomographic phantoms represent the human anatomy with a large number of voxels that are assigned a tissue type and organ identity. To date, a total of around 30 tomographic phantoms, including male and female adults, pediatric phantoms, and even a pregnant female, have been developed and utilized for realistic radiation dosimetry calculations. They are based on MRI/CT images or sectional color photos from patients, volunteers or cadavers. Several investigators have compared tomographic phantoms with stylized phantoms and demonstrated the superiority of tomographic phantoms in terms of realistic anatomy and dosimetry calculation. This paper summarizes the history and current status of both stylized and tomographic phantoms, including Korean computational phantoms. Advantages, limitations, and future prospects are also discussed.

  6. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experimental method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. We thereby obtain a prototype Manufacturing Grid application system running on a single personal computer, and experiments can be carried out on this foundation. Compared with the known experimental methods for Manufacturing Grid application system development, the proposed method has their advantages, such as low cost, simple operation, and easily obtaining reliable experimental results. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  7. Computation of Asteroid Proper Elements on the Grid

    Science.gov (United States)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for the gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next-generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the beginning of the use of the Grid infrastructure for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  8. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has widely evolved over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing application is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  9. Radiation dosimetry of computed tomography x-ray scanners

    International Nuclear Information System (INIS)

    Poletti, J.L.; Williamson, B.D.P.; Le Heron, J.C.

    1983-01-01

    This report describes the development and application of the methods employed in National Radiation Laboratory (NRL) surveys of computed tomography x-ray scanners (CT scanners). It includes descriptions of the phantoms and equipment used, discussion of the various dose parameters measured, the principles of the various dosimetry systems employed and some indication of the doses to occupationally exposed personnel

  10. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, given the need for more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid computing by creating a virtual private computing pool. We would like to present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  11. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays nuclear power is becoming the main source of energy. To make its usage more efficient, scientists have created complicated simulation models, which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  12. IBM announces global Grid computing solutions for banking, financial markets

    CERN Multimedia

    2003-01-01

    "IBM has announced a series of Grid projects around the world as part of its Grid computing program. They include IBM new Grid-based product offerings with business intelligence software provider SAS and other partners that address the computer-intensive needs of the banking and financial markets industry (1 page)."

  13. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.
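
    A minimal sketch of the kind of MPI-parallel random-walker simulation mentioned above (this is not the GaMPI code itself; the walker counts, seeds, observable and file name are illustrative). It would typically be launched with something like: mpirun -n 4 python walker.py.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_walkers, n_steps = 10_000 // size, 1_000   # walkers handled by this MPI rank
      rng = np.random.default_rng(seed=rank)       # independent random stream per rank

      # Each rank advances its own walkers: +1/-1 steps on a 1D lattice.
      positions = rng.choice([-1, 1], size=(n_walkers, n_steps)).sum(axis=1)
      local_msd = np.mean(positions.astype(float) ** 2)

      # Combine the per-rank mean squared displacements on rank 0.
      global_msd = comm.reduce(local_msd, op=MPI.SUM, root=0)
      if rank == 0:
          print("mean squared displacement:", global_msd / size)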

  14. A way forward for the development of an exposure computational model to computed tomography dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, C.C., E-mail: cassio.c.ferreira@gmail.co [Nucleo de Fisica, Universidade Federal de Sergipe, Itabaiana-SE, CEP 49500-000 (Brazil); Galvao, L.A., E-mail: lailagalmeida@gmail.co [Departamento de Fisica, Universidade Federal de Sergipe, Sao Cristovao-SE, CEP 49100-000 (Brazil); Vieira, J.W., E-mail: jose.wilson59@uol.com.b [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco, Recife-PE, CEP 50740-540 (Brazil); Escola Politecnica de Pernambuco, Universidade de Pernambuco, Recife-PE, CEP 50720-001 (Brazil); Maia, A.F., E-mail: afmaia@ufs.b [Departamento de Fisica, Universidade Federal de Sergipe, Sao Cristovao-SE, CEP 49100-000 (Brazil)

    2011-04-15

    A way forward for the development of an exposure computational model for computed tomography dosimetry is presented. An exposure computational model (ECM) for computed tomography (CT) dosimetry has been developed and validated through comparison with experimental results. For the development of the ECM, X-ray spectra generator codes were evaluated and the head bow tie filter was modelled through a mathematical equation. EGS4 and EGSnrc were used for simulating the radiation transport in the ECM. Geometrical phantoms commonly used in CT dosimetry were modelled with the IDN software. MAX06 was also used to simulate an adult male patient submitted to CT examinations. The evaluation of the X-ray spectra generator codes in CT dosimetry showed a dependence on tube filtration (or HVL value). In general, as the total filtration (or HVL value) increases, X-raytbc becomes the best X-ray spectra generator code for CT dosimetry. The EGSnrc/X-raytbc combination calculated C{sub 100,c} values in better agreement with the C{sub 100,c} values measured in two different CT scanners. For a Toshiba CT scanner, the average percentage difference between calculated and measured C{sub 100,c} values was 8.2%, whilst for a GE CT scanner the average percentage difference was 10.4%. From measurements of air kerma through a prototype head bow tie filter, a third-order exponential decay equation was obtained. C{sub 100,c} and C{sub 100,p} values calculated by the ECM are in good agreement with values measured on a specific CT scanner. A maximum percentage difference of 2% was found in the PMMA CT head phantoms, demonstrating effective modelling of the head bow tie filter by the equation. The absorbed and effective doses calculated by the ECM developed in this work have been compared to those calculated by the ECM of Jones and Shrimpton for an adult male patient. For a head examination the absorbed dose values calculated by the
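
    A third-order exponential decay of the kind mentioned above can be written as K(x) = K0 + A1*exp(-x/t1) + A2*exp(-x/t2) + A3*exp(-x/t3), where x is the off-axis distance. The sketch below shows how such an equation could be fitted to air-kerma readings taken across a bow tie filter; the functional form follows the abstract, but the data are synthetic and all coefficients are invented for illustration only.

      # Hedged sketch: least-squares fit of a third-order exponential decay to
      # (synthetic) air-kerma readings across a bow tie filter.
      import numpy as np
      from scipy.optimize import curve_fit

      def bowtie_kerma(x, k0, a1, t1, a2, t2, a3, t3):
          """Third-order exponential decay of air kerma vs. off-axis distance x."""
          return k0 + a1 * np.exp(-x / t1) + a2 * np.exp(-x / t2) + a3 * np.exp(-x / t3)

      rng = np.random.default_rng(0)
      x_cm = np.linspace(0.0, 20.0, 41)                    # off-axis positions (invented)
      true_coeffs = (3.0, 4.0, 1.5, 2.0, 4.0, 1.0, 12.0)   # invented "true" coefficients
      kerma = bowtie_kerma(x_cm, *true_coeffs) + rng.normal(0.0, 0.02, x_cm.size)

      popt, _ = curve_fit(bowtie_kerma, x_cm, kerma,
                          p0=(3, 3, 2, 2, 5, 2, 10), maxfev=20000)
      print("fitted coefficients:", np.round(popt, 2))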

  15. In vivo thermoluminescent dosimetry in studies of helicoid computed tomography and excretory urogram

    International Nuclear Information System (INIS)

    Cruz C, D.; Azorin N, J.; Saucedo A, V.M.; Barajas O, J.L.

    2005-01-01

    Dosimetry is the field of measurement of ionizing radiation. Its final objective is to determine the absorbed dose received by people. Dosimetry is vital in radiotherapy, radiological protection and irradiation treatment technologies. In the present work we performed in vivo dosimetry on patients exposed to helical computed tomography and excretory urogram studies. The in vivo dosimetry was carried out on 20 patients selected at random for each medical study. The absorbed dose was measured at points of interest located over the eye lens, thyroid, chest and abdomen of each patient by means of thermoluminescent dosemeters (TLD) of LiF:Mg,Cu,P + PTFE of national fabrication. The dose in the working area was also quantified. (Author)

  16. Computation of asteroid proper elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković B.

    2009-01-01

    Full Text Available A procedure for the gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for updating the catalogues is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  18. Measurement of the three-dimensional distribution of radiation dose in grid therapy

    International Nuclear Information System (INIS)

    Trapp, J V; Warrington, A P; Partridge, M; Philps, A; Glees, J; Tait, D; Ahmed, R; Leach, M O; Webb, S

    2004-01-01

    A single large dose of megavoltage x-rays delivered through a grid is currently being utilized by some centres for palliative radiotherapy treatments of large tumours. In this note, we investigate the dosimetry of grid therapy using two-dimensional film dosimetry and three-dimensional gel dosimetry. It is shown that the radiation dose is attenuated more rapidly with depth in a grid field than in an open field, and that even shielded regions receive approximately 25% of the dose delivered to the unshielded areas. (note)

  19. Computing Flows Using Chimera and Unstructured Grids

    Science.gov (United States)

    Liou, Meng-Sing; Zheng, Yao

    2006-01-01

    DRAGONFLOW is a computer program that solves the Navier-Stokes equations of flows in complexly shaped three-dimensional regions discretized by use of a direct replacement of arbitrary grid overlapping by nonstructured (DRAGON) grid. A DRAGON grid is a combination of a chimera grid (a composite of structured subgrids) and a collection of unstructured subgrids. DRAGONFLOW incorporates modified versions of two prior Navier-Stokes-equation-solving programs: OVERFLOW, which is designed to solve on chimera grids; and USM3D, which is used to solve on unstructured grids. A master module controls the invocation of individual modules in the libraries. At each time step of a simulated flow, the OVERFLOW module is invoked on the chimera portion of the DRAGON grid in alternation with USM3D, which is invoked on the unstructured subgrids of the DRAGON grid. The USM3D and OVERFLOW modules then immediately exchange their solutions and other data. As a result, USM3D and OVERFLOW are coupled seamlessly.

  20. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communication. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution-time speed-up ratios are given.
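
    The homotopic formulation used in the record above is not reproduced here, so the sketch below illustrates the simpler but related idea behind algebraic grid generation: interpolating an interior mesh from prescribed boundary curves (transfinite interpolation). The domain shape and resolution are arbitrary choices and this is not the scheme of the cited work.

      # Sketch: 2D algebraic grid generation by transfinite interpolation (TFI).
      import numpy as np

      ni, nj = 21, 11
      u = np.linspace(0.0, 1.0, ni)     # parametric coordinate along "xi"
      v = np.linspace(0.0, 1.0, nj)     # parametric coordinate along "eta"

      # Boundary curves of a simple channel with a bumped lower wall (arbitrary shapes).
      bottom = np.stack([u, 0.1 * np.sin(np.pi * u)], axis=1)   # j = 0
      top    = np.stack([u, np.ones_like(u)], axis=1)           # j = nj-1
      left   = np.stack([np.zeros_like(v), v], axis=1)          # i = 0
      right  = np.stack([np.ones_like(v), v], axis=1)           # i = ni-1

      grid = np.zeros((ni, nj, 2))
      for i, ui in enumerate(u):
          for j, vj in enumerate(v):
              # Blend the four boundaries, then subtract the doubly counted corners.
              grid[i, j] = ((1 - vj) * bottom[i] + vj * top[i]
                            + (1 - ui) * left[j] + ui * right[j]
                            - ((1 - ui) * (1 - vj) * bottom[0] + ui * vj * top[-1]
                               + ui * (1 - vj) * bottom[-1] + (1 - ui) * vj * top[0]))

      print("interior grid point (10, 5):", grid[10, 5])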

  1. Status of computed tomography dosimetry for wide cone beam scanners

    International Nuclear Information System (INIS)

    2011-01-01

    International standardization in dosimetry is essential for the successful exploitation of radiation technology. To provide such standardization in diagnostic radiology, the IAEA published a code of practice entitled Dosimetry in Diagnostic Radiology: An International Code of Practice (IAEA Technical Reports Series No. 457; 2007), which recommends procedures for calibration and dosimetric measurement both in standards dosimetry laboratories, especially Secondary Standards Dosimetry Laboratories (SSDLs), and in clinical centres for radiology, as found in most hospitals. These standards address the main dosimetric methodologies needed in clinical diagnostic radiology, together with the calibration of associated dosimetric equipment, including measurement methodologies for computed tomography (CT). For some time now there has been a growing awareness that radiation dose originating from medical diagnostic procedures in radiology is contributing an increasing proportion of the total population dose, with a large component coming from CT examinations. This is accompanied by rapid developments in CT technology, including the use of increasingly wide X ray scanning beams, which are presenting problems in dosimetry that currently cannot be adequately addressed by existing standards. This situation has received attention from a number of professional bodies, and institutions have proposed and are investigating new and adapted dosimetric models in order to find robust solutions to these problems that are critically affecting the clinical application of CT dosimetry. In view of these concerns, and as a response to a recommendation from a coordinated research project that reviewed the implementation of IAEA Technical Reports Series No. 457, a meeting was held to review current dosimetric methodologies and to determine if a practical solution for dosimetry for wide X ray beam CT scanners was currently available. The meeting rapidly formed the view that there was an interim solution that

  2. Grid computing in Pakistan and its opening to Large Hadron Collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The Grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed employing mainly local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), one of the leading physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  3. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    DEFF Research Database (Denmark)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-01-01

    for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric...

  4. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving fault tolerance. Basically, the Mobile Grid Computing Model focuses on service composition and resource sharing processes. In order to increase the performance of the system, fault recovery plays a vital role. In our proposed system, a Surrogate Object based Checkpoint Recovery Model is introduced for the recovery point. This Checkpoint Recovery model depends on the Surrogate Object and the Fau...

  5. Multiobjective Variable Neighborhood Search algorithm for scheduling independent jobs on computational grid

    Directory of Open Access Journals (Sweden)

    S. Selvi

    2015-07-01

    Full Text Available Grid computing solves high-performance and high-throughput computing problems by sharing resources ranging from personal computers to supercomputers distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an investigation into implementing the Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is carried out. The performance of the proposed algorithm has been evaluated against the Min–Min algorithm, Simulated Annealing (SA) and the Greedy Randomized Adaptive Search Procedure (GRASP). Simulation results show that the MVNS algorithm generally performs better than the other metaheuristic methods.
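
    As a point of reference for the comparison above, the Min–Min heuristic can be stated in a few lines: among all unscheduled jobs, repeatedly pick the one whose earliest possible completion time is smallest and assign it to the corresponding resource. The sketch below is a generic illustration with invented execution-time data, not the simulation setup of the cited study.

      # Sketch of the Min-Min heuristic for mapping independent jobs to grid resources.
      # etc[j][r] is the (invented) expected time to compute job j on resource r.
      import numpy as np

      rng = np.random.default_rng(1)
      n_jobs, n_resources = 8, 3
      etc = rng.uniform(5.0, 50.0, size=(n_jobs, n_resources))

      ready = np.zeros(n_resources)      # time at which each resource becomes free
      assignment = {}
      unscheduled = set(range(n_jobs))

      while unscheduled:
          # For every unscheduled job, find its minimum completion time over resources,
          # then schedule the job whose minimum completion time is smallest (Min-Min).
          best_job, best_res, best_ct = None, None, float("inf")
          for j in unscheduled:
              completion = ready + etc[j]
              r = int(np.argmin(completion))
              if completion[r] < best_ct:
                  best_job, best_res, best_ct = j, r, completion[r]
          assignment[best_job] = best_res
          ready[best_res] = best_ct
          unscheduled.remove(best_job)

      print("assignment:", assignment)
      print("makespan:", ready.max())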

  6. Grid computing and e-science: a view from inside

    Directory of Open Access Journals (Sweden)

    Stefano Cozzini

    2008-06-01

    Full Text Available My intention is to analyze how, where and if grid computing technology is truly enabling a new way of doing science (so-called ‘e-science’). I will base my views on the experiences accumulated thus far in a number of scientific communities, which we have provided with the opportunity of using grid computing. I shall first define some basic terms and concepts and then discuss a number of specific cases in which the use of grid computing has actually made possible a new method for doing science. I will then present a case in which this did not result in a change in research methods. I will try to identify the reasons for these failures and analyze the future evolution of grid computing. I will conclude by introducing and commenting on the concept of ‘cloud computing’, the approach offered and provided by major industrial actors (Google/IBM and Amazon being among the most important), and what impact this technology might have on the world of research.

  7. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  8. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  9. Dosimetry system 1986

    International Nuclear Information System (INIS)

    Woolson, William A.; Egbert, Stephen D.; Gritzner, Michael L.

    1987-01-01

    In May 1983, the authors proposed a dosimetry system for use by the Radiation Effects Research Foundation (RERF) that would incorporate the new findings and calculations of the joint United States - Japan working groups on the reassessment of A-bomb dosimetry. The proposed dosimetry system evolved from extensive discussions with RERF personnel, numerous meetings of the scientists from Japan and the United States involved in the dosimetry reassessment research, and requirements expressed by epidemiologists and radiobiologists on the various review panels. The dosimetry system proposed was based on considerations of the dosimetry requirements for the normal work of RERF and for future research in radiobiology, the computerized input data on A-bomb survivors available in the RERF data base, the level of detail, precision, and accuracy of various components of the dosimetric estimates, and the computer resources available at RERF in Hiroshima. These discussions and our own experience indicated that, in light of the expansion of computer and radiation technologies and the desire for more detail in the dosimetry, an entirely new approach to the dosimetry system was appropriate. This resulted in a complete replacement of the T65D system as distinguished from a simpler approach involving a renormalization of T65D parameters to reflect the new dosimetry. The proposed dosimetry system for RERF and the plan for implementation were accepted by the Department of Energy (DOE) Working Group on A-bomb Dosimetry chaired by Dr. R.F. Christy. The dosimetry system plan was also presented to the binational A-bomb dosimetry review groups for critical comment and was discussed at a joint US-Japan workshop. A prototype dosimetry system incorporating preliminary dosimetry estimates and applicable to only a limited set of A-bomb survivors was installed on the RERF computer system in the fall of 1984. This system was successfully operated at RERF and provided an initial look at the impact of

  10. European questionnaire on the use of computer programmes in radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Gualdrini, G. [ENEA, Centro Ricerche Ezio Clementel, Bologna (Italy). Dipt. Ambiente; Grosswendt, B.; Siebert, B.R.L. [Braunschweig (Germany); Tanner, R. [NRPB, Dosimetry Development Group, Chilton, Didcot, Oxon (United Kingdom); Terrisol, M. [CPAT, Univ. Paul Sabatier, Toulouse (France)

    1999-07-01

    Because of a potential reduction of the necessary experimental effort, the combination of measurements and supplementary calculations, also in the field of radiation dosimetry, may allow time and money to be saved if computational methods are used that are well suited to reproducing experimental data with satisfactory quality. The dramatic increase in computing power in recent years now permits the use of computational tools for dosimetry also in routine applications. Many institutions dealing with radiation protection, however, have small groups which, in addition to their routine work, often cannot afford to specialise in the field of computational dosimetry. This means that not only experts but increasingly also casual users employ complicated computational tools such as general-purpose transport codes. This massive use of computer programmes in radiation protection and dosimetry applications motivated the Concerted Action Investigation and Quality Assurance of Numerical Methods in Radiation Protection Dosimetry of the 4th framework programme of the European Commission to prepare, distribute and evaluate a questionnaire on the use of such codes. A significant number of scientists from nearly all the countries of the European Community (and some countries outside Europe) contributed to the questionnaire, which allowed a satisfactory overview of the state of the art in this field to be obtained. The results obtained from the questionnaire and summarised in the present report are felt to be indicative of the situation regarding the use of sophisticated computer codes within the European Community, although the group of participating scientists may not be a representative sample in a strict statistical sense.

  11. Grid computing : enabling a vision for collaborative research

    International Nuclear Information System (INIS)

    von Laszewski, G.

    2002-01-01

    In this paper the authors provide a motivation for Grid computing based on a vision to enable a collaborative research environment. The authors' vision goes beyond the connection of hardware resources. They argue that with an infrastructure such as the Grid, new modalities for collaborative research are enabled. They provide an overview showing why Grid research is difficult, and they present a number of management-related issues that must be addressed to make Grids a reality. They list projects that provide solutions to subsets of these issues.

  12. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  13. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. Processing all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the dispatching and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computing centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching them to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for keeping a copy of the raw data and for processing it in order to recover relevant data with a physical meaning and to transfer the results to the 150 Tier 2 centres. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centres are at the level of the laboratories; they provide a complementary and local resource to the Tier 2s in terms of data analysis. (A.C.)

  14. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

    The smart grid is the emerging energy system wherein information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid. (paper)

  15. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.
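
    Population-based schedulers of this kind typically encode a candidate solution as a vector mapping each task to a resource and use the resulting makespan as the fitness to be minimized. The following sketch shows only that evaluation step, with invented execution times; it is not the GLOA algorithm itself.

      # Sketch: makespan of a candidate task-to-resource mapping, i.e. the quantity a
      # metaheuristic scheduler (GLOA, GA, SA, ...) tries to minimize. Data invented.
      import numpy as np

      etc = np.array([[12.0,  7.0, 20.0],   # expected time of task 0 on resources 0..2
                      [ 5.0, 15.0,  9.0],   # task 1
                      [ 8.0,  6.0, 11.0],   # task 2
                      [10.0,  4.0, 18.0]])  # task 3

      def makespan(mapping, etc):
          """mapping[i] = resource chosen for task i; returns the resulting makespan."""
          loads = np.zeros(etc.shape[1])
          for task, resource in enumerate(mapping):
              loads[resource] += etc[task, resource]
          return loads.max()

      candidate = [1, 0, 1, 2]              # one candidate solution from the population
      print("makespan of candidate:", makespan(candidate, etc))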

  16. Improvement of JCDS, a computational dosimetry system in JAEA for neutron capture therapy

    International Nuclear Information System (INIS)

    Kumada, Hiroaki; Yamamoto, Kazuyoshi; Matsumura, Akira; Yamamoto, Tetsuya; Nakagawa, Yoshinobu; Kageji, Teruyoshi

    2006-01-01

    JCDS, a computational dosimetry system for neutron capture therapy, was developed by the Japan Atomic Energy Agency. The system has been progressively refined to facilitate dose planning. In dosimetry with JCDS for BNCT clinical trials at JRR-4, several absorbed doses and the dose distributions are determined using a voxel model consisting of 2x2x2 mm3 voxel cells. By using the detailed voxel model, the accuracy of the dosimetry can be improved. Clinical trials for melanoma and head-and-neck cancer as well as brain tumours were started using the hot version of JCDS in 2005. JCDS is also being improved to enable dosimetry with PHITS as well as with MCNP. By using PHITS, the total doses received by a patient in a combined modality therapy, for example a combination of BNCT and proton therapy, can be estimated consistently. Moreover, PET images can be adopted in combination with CT and MRI images as a farsighted approach. JCDS became able to identify target regions by using the PET values. (author)

  17. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    Science.gov (United States)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.

  18. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    textabstractTo meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  19. Two-parametric model of electron beam in computational dosimetry for radiation processing

    International Nuclear Information System (INIS)

    Lazurik, V.M.; Lazurik, V.T.; Popov, G.; Zimek, Z.

    2016-01-01

    Computer simulation of the irradiation of various materials with an electron beam (EB) can be applied to correct and control the performance of radiation processing installations. Electron beam energy measurement methods are described in the international standards. The results of such measurements can be extended by implementing computational dosimetry. The authors have developed a computational method for determining the EB energy on the basis of a two-parametric fit of a semi-empirical model for the depth dose distribution initiated by a mono-energetic electron beam. The analysis of a number of experiments shows that the described method can effectively account for the random displacements arising from the use of an aluminum wedge with a continuous strip of dosimetric film, and minimize the uncertainty of the electron energy evaluated from the experimental data. The two-parametric fitting method is proposed for determining the electron beam model parameters. These model parameters are as follows: E0 - the energy of the mono-energetic and mono-directional electron source, and X0 - the thickness of an aluminum layer located in front of the irradiated object. This allows baseline data related to the characteristics of the electron beam to be obtained, which can later be applied to computer modelling of the irradiation process. Model parameters defined in the international standards (such as Ep - the most probable energy, and Rp - the practical range) can be linked with the characteristics of the two-parametric model (E0, X0), which makes it possible to simulate the electron irradiation process. The data obtained from the semi-empirical model were checked against a set of experimental results. The proposed two-parametric model for electron beam energy evaluation and the estimation of the accuracy of computational dosimetry methods based on the developed model are discussed. - Highlights: • Experimental and computational methods of electron energy evaluation. • Development
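
    The authors' semi-empirical depth-dose function is not reproduced in the abstract, so the sketch below only illustrates the idea of a two-parameter fit: a placeholder depth-dose model parameterized by a source energy E0 and an upstream aluminium thickness X0 is fitted to film readings by least squares. The model shape, the range-energy proxy, the data and the units are assumptions made purely for illustration.

      # Sketch: two-parametric fit (E0, X0) of a PLACEHOLDER depth-dose model to
      # synthetic dosimetric-film readings. This is not the authors' model.
      import numpy as np
      from scipy.optimize import curve_fit

      def depth_dose(z, e0, x0):
          """Placeholder relative dose vs. depth z (mm Al) for source energy e0 (MeV)
          behind an aluminium layer of thickness x0 (mm)."""
          practical_range = 2.0 * e0          # crude range-energy proxy (assumption)
          zeff = z + x0
          return np.exp(-((zeff - 0.4 * practical_range) ** 2)
                        / (0.18 * practical_range ** 2))

      # Synthetic "measured" profile generated from known parameters plus noise.
      rng = np.random.default_rng(7)
      z = np.linspace(0.0, 12.0, 40)
      true_e0, true_x0 = 9.0, 1.5
      reading = depth_dose(z, true_e0, true_x0) + rng.normal(0.0, 0.01, z.size)

      (e0_fit, x0_fit), _ = curve_fit(depth_dose, z, reading, p0=(8.0, 1.0))
      print(f"fitted E0 = {e0_fit:.2f} MeV, X0 = {x0_fit:.2f} mm Al")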

  20. The 20 Tera flop Erasmus Computing Grid (ECG).

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    textabstractThe Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  1. The 20 Tera flop Erasmus Computing Grid (ECG)

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2009-01-01

    textabstractThe Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  2. A Chinese Visible Human-based computational female pelvic phantom for radiation dosimetry simulation

    International Nuclear Information System (INIS)

    Nan, H.; Jinlu, S.; Shaoxiang, Z.; Qing, H.; Li-wen, T.; Chengjun, G.; Tang, X.; Jiang, S. B.; Xiano-lin, Z.

    2010-01-01

    An accurate voxel phantom is needed for dosimetric simulation in radiation therapy for malignant tumours in the female pelvic region. However, most existing voxel phantoms are constructed on the basis of Caucasian or other non-Chinese populations. Materials and Methods: A computational framework for constructing a female pelvic voxel phantom for radiation dosimetry was developed based on the Chinese Visible Human datasets. First, several organs within the pelvic region were segmented from the Chinese Visible Human datasets. Then, polygonization and voxelization were performed on the segmented organs and a 3D computational phantom was built in the form of a set of voxel arrays. Results: The generated phantom can be converted and loaded into a treatment planning system for radiation dosimetry calculation. From the dosimetric results observed for those organs and structures, their absorbed dose can be evaluated and simulation studies can be implemented. Conclusion: A voxel female pelvic phantom was developed from the Chinese Visible Human datasets. It can be utilized for dosimetry evaluation and planning simulation, which would be very helpful to improve clinical performance and reduce radiation toxicity to organs at risk.

  3. Computer Simulation of the UMER Gridded Gun

    CERN Document Server

    Haber, Irving; Friedman, Alex; Grote, D P; Kishek, Rami A; Reiser, Martin; Vay, Jean-Luc; Zou, Yun

    2005-01-01

    The electron source in the University of Maryland Electron Ring (UMER) injector employs a grid 0.15 mm from the cathode to control the current waveform. Under nominal operating conditions, the grid voltage during the current pulse is sufficiently positive relative to the cathode potential to form a virtual cathode downstream of the grid. Three-dimensional computer simulations have been performed that use the mesh refinement capability of the WARP particle-in-cell code to examine a small region near the beam center in order to illustrate some of the complexity that can result from such a gridded structure. These simulations have been found to reproduce the hollowed velocity space that is observed experimentally. The simulations also predict a complicated time-dependent response to the waveform applied to the grid during the current turn-on. This complex temporal behavior appears to result directly from the dynamics of the virtual cathode formation and may therefore be representative of the expected behavior in...

  4. From testbed to reality grid computing steps up a gear

    CERN Multimedia

    2004-01-01

    "UK plans for Grid computing changed gear this week. The pioneering European DataGrid (EDG) project came to a successful conclusion at the end of March, and on 1 April a new project, known as Enabling Grids for E-Science in Europe (EGEE), begins" (1 page)

  5. Workflow Support for Advanced Grid-Enabled Computing

    OpenAIRE

    Xu, Fenglian; Eres, M.H.; Tao, Feng; Cox, Simon J.

    2004-01-01

    The Geodise project brings the skills of computer scientists and engineers together to build a service-oriented computing environment in which engineers can perform complicated computations in a distributed system. The workflow tool is a front-end GUI providing a full life cycle of workflow functions for Grid-enabled computing. The full life cycle of workflow functions has been enhanced based on our initial research and development. The life cycle starts with the composition of a workflow, followed by an ins...

  6. Grid computing and collaboration technology in support of fusion energy sciences

    International Nuclear Information System (INIS)

    Schissel, D.P.

    2005-01-01

    Science research in general and magnetic fusion research in particular continue to grow in size and complexity resulting in a concurrent growth in collaborations between experimental sites and laboratories worldwide. The simultaneous increase in wide area network speeds has made it practical to envision distributed working environments that are as productive as traditionally collocated work. In computing power, it has become reasonable to decouple production and consumption resulting in the ability to construct computing grids in a similar manner as the electrical power grid. Grid computing, the secure integration of computer systems over high speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. For human interaction, advanced collaborative environments are being researched and deployed to have distributed group work that is as productive as traditional meetings. The DOE Scientific Discovery through Advanced Computing Program initiative has sponsored several collaboratory projects, including the National Fusion Collaboratory Project, to utilize recent advances in grid computing and advanced collaborative environments to further research in several specific scientific domains. For fusion, the collaborative technology being deployed is being used in present day research and is also scalable to future research, in particular, to the International Thermonuclear Experimental Reactor experiment that will require extensive collaboration capability worldwide. This paper briefly reviews the concepts of grid computing and advanced collaborative environments and gives specific examples of how these technologies are being used in fusion research today

  7. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  8. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  9. Mesoscale Climate Evaluation Using Grid Computing

    Science.gov (United States)

    Campos Velho, H. F.; Freitas, S. R.; Souto, R. P.; Charao, A. S.; Ferraz, S.; Roberti, D. R.; Streck, N.; Navaux, P. O.; Maillard, N.; Collischonn, W.; Diniz, G.; Radin, B.

    2012-04-01

    The CLIMARS project aims to establish an operational environment for seasonal climate prediction for the Rio Grande do Sul state, Brazil. The dynamical downscaling will be performed with several software platforms and hardware infrastructures to investigate the mesoscale impact of global change. Grid computing takes advantage of geographically spread-out computer systems, connected by the internet, to enhance the available computing power. Ensemble climate prediction is an appropriate application for grid processing, because the integration of each ensemble member does not depend on information from the other ensemble members. Grid processing is employed to compute the 20-year climatology and the long-range simulations under the ensemble methodology. BRAMS (Brazilian Regional Atmospheric Model) is a mesoscale model developed from a version of RAMS (from the Colorado State University - CSU, USA). The BRAMS model is the tool for carrying out the dynamical downscaling from the IPCC scenarios. Long-range BRAMS simulations will provide data for climate analysis and supply data for the numerical integration of different models: (a) regime of extreme events for the temperature and precipitation fields: statistical analysis will be applied to the BRAMS data; (b) CCATT-BRAMS (Coupled Chemistry Aerosol Tracer Transport - BRAMS) is an environmental prediction system that will be used to evaluate whether the new patterns of temperature, rain regime and wind field have a significant impact on pollutant dispersion in the analyzed regions; (c) MGB-IPH (Portuguese acronym for the Large Basin Model (MGB), developed by the Hydraulic Research Institute (IPH) of the Federal University of Rio Grande do Sul (UFRGS), Brazil) will be employed to simulate the alteration of river fluxes under the new climate patterns. Important meteorological input variables for the MGB-IPH are the precipitation (most relevant

  10. Characterization of electronics devices for computed tomography dosimetry

    International Nuclear Information System (INIS)

    Paschoal, Cinthia Marques Magalhaes

    2012-01-01

    Computed tomography (CT) is an examination of high diagnostic capability that delivers high doses of radiation compared with other diagnostic radiological examinations. Current CT dosimetry is mainly performed using a 100 mm long ionization chamber. However, it has been verified that this length, which is intended to collect all the scattered radiation of the single-slice dose profile in CT, is not sufficient. An alternative dosimetry based on translating smaller detectors has been suggested. In this work, commercial electronic devices of small dimensions were characterized for CT dosimetry. The project can be divided into five parts: a) pre-selection of devices; b) electrical characterization of the selected devices; c) dosimetric characterization in the laboratory, using radiation qualities specific to CT, and in a tomograph; d) evaluation of the dose profile in a CT scanner (free in air and in head and body dosimetric phantoms); e) evaluation of the new MSAD detector in a tomograph. The selected devices were the OP520 and OP521 phototransistors and the BPW34FS photodiode. Before the dosimetric characterization, three detector configurations, with 4, 2 and 1 OP520 phototransistors working as a single detector, were evaluated, and the configuration with only one device was the most adequate. Hence, the following tests, for all devices, were made using the configuration with only one device. The tests of dosimetric characterization in the laboratory and in a tomograph were: energy dependence, response as a function of air kerma (laboratory) and CTDI100 (scanner), sensitivity variation and angular dependence. In both characterizations, the devices showed some energy dependence, indicating the need for correction factors depending on the beam energy; their response was linear with air kerma and CTDI100; the OP520 phototransistor showed the largest variation in sensitivity with irradiation and the photodiode was the most stable; the angular dependence was significant in the laboratory and
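
    When a small detector is translated through the beam as described above, the single-slice dose profile D(z) is sampled point by point, and CTDI100 then follows from the standard definition CTDI100 = (1/(N*T)) * integral of D(z) from -50 mm to +50 mm, where N*T is the total nominal beam collimation. The sketch below performs that integration for an invented profile; it does not use the data of this thesis.

      # Sketch: CTDI100 from a point-by-point dose profile D(z) measured by translating
      # a small detector along the rotation axis. Profile values below are invented.
      import numpy as np

      collimation_mm = 10.0                      # N x T, nominal beam width (assumption)
      z = np.linspace(-50.0, 50.0, 201)          # mm, the 100 mm integration range

      # Invented single-slice profile: a ~10 mm core plus long scatter tails (mGy).
      profile = 20.0 * np.exp(-0.5 * (z / 6.0) ** 2) + 1.5 * np.exp(-np.abs(z) / 30.0)

      # Trapezoidal integration of D(z) over the 100 mm range, divided by N*T.
      integral = np.sum(0.5 * (profile[1:] + profile[:-1]) * np.diff(z))
      ctdi100 = integral / collimation_mm
      print(f"CTDI100 = {ctdi100:.1f} mGy")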

  11. VIP visit of LHC Computing Grid Project

    CERN Multimedia

    Krajewski, Yann Tadeusz

    2015-01-01

    VIP visit of the LHC Computing Grid Project with Dr.-Ing. Tarek Kamel [Senior Advisor to the President for Government Engagement, ICANN Geneva Office] and Dr Nigel Hickson [VP, IGO Engagement, ICANN Geneva Office].

  12. Grid Computing at GSI for ALICE and FAIR - present and future

    International Nuclear Information System (INIS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-01-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall into a category that cannot currently be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a Tier 2 centre for the ALICE experiment at CERN. The central component of the GSI computing facility, and hence the core of the ALICE Tier 2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a Tier 0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE Tier 2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  13. Updating the INDAC computer application of internal dosimetry

    International Nuclear Information System (INIS)

    Bravo Perez-Tinao, B.; Marchena Gonzalez, P.; Sollet Sanudo, E.; Serrano Calvo, E.

    2013-01-01

    The initial objective of this project is to expand the INDAC application, currently used by the internal dosimetry services of the Spanish nuclear power plants and Tecnatom, for estimating the effective doses from the internal dosimetry of workers by direct, or in-vivo, measurement. (Author)

  14. Proceedings of the second workshop of LHC Computing Grid, LCG-France

    International Nuclear Information System (INIS)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin

    2007-03-01

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of the LHC computing task within the framework of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computation in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The sites of Tier-2s and Tier-3s; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the tasks of tightening the links between the sites and the experiments were definitely achieved. The IN2P3 leadership expressed

  15. Task-and-role-based access-control model for computational grid

    Institute of Scientific and Technical Information of China (English)

    LONG Tao; HONG Fan; WU Chi; SUN Ling-li

    2007-01-01

    Access control in a grid environment is a challenging issue because the heterogeneous nature and independent administration of geographically dispersed resources in a grid require access control to use fine-grained policies. We established a task-and-role-based access-control model for the computational grid (CG-TRBAC model), integrating the concepts of role-based access control (RBAC) and task-based access control (TBAC). In this model, condition restrictions are defined, and concepts specifically tailored to workflow management systems are simplified or omitted, so that role assignment and security administration fit the computational grid better than in traditional models; permissions are mutable with the task status and system variables, and can be dynamically controlled. The CG-TRBAC model is shown to be flexible and extendible. It can implement different control policies. It embodies the security principle of least privilege and performs active dynamic authorization. A task attribute can be extended to satisfy different requirements in a real grid system.
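
    The key point of the model above is that a permission depends not only on the user's role but also on the current status of the task and on system variables. A very small sketch of such a check is given below; the role names, task states and policy thresholds are invented for illustration and are not the CG-TRBAC specification.

      # Sketch of a task-and-role-based permission check: the decision combines the
      # user's role, the task status and a system variable. All names are invented.
      from dataclasses import dataclass

      ROLE_PERMISSIONS = {
          "job_submitter": {"submit_job", "cancel_own_job"},
          "site_admin":    {"submit_job", "cancel_any_job", "drain_node"},
      }

      @dataclass
      class Task:
          owner: str
          status: str          # e.g. "queued", "running", "finished"

      def is_permitted(user, role, action, task, site_load):
          """Grant an action only if the role allows it, the task is still active,
          and the system variable (site load) is below a policy threshold."""
          if action not in ROLE_PERMISSIONS.get(role, set()):
              return False
          if task.status == "finished":                    # task-state condition
              return False
          if action == "submit_job" and site_load > 0.9:   # system-variable condition
              return False
          if action == "cancel_own_job" and task.owner != user:
              return False
          return True

      print(is_permitted("alice", "job_submitter", "cancel_own_job",
                         Task(owner="alice", status="running"), site_load=0.5))  # True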

  16. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the previous CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day.

  17. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, and have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach to integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy.
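
    In broker designs of this kind, each job carries requirements and a ranking expression, and the broker matches them against possibly stale resource descriptions. The sketch below is a toy matchmaker in that spirit, loosely modelled on ClassAd-style matching; the attribute names, host names and values are invented.

      # Toy resource-broker matchmaking: filter resources by the job's requirements,
      # then rank the survivors. Attribute names and values are invented.
      resources = [
          {"name": "ce01.example.org", "free_cpus": 12, "mem_mb": 2048, "queue_length": 3},
          {"name": "ce02.example.org", "free_cpus": 0,  "mem_mb": 4096, "queue_length": 0},
          {"name": "ce03.example.org", "free_cpus": 40, "mem_mb": 1024, "queue_length": 9},
      ]

      job = {
          "requirements": lambda r: r["free_cpus"] >= 1 and r["mem_mb"] >= 1024,
          "rank":         lambda r: r["free_cpus"] - r["queue_length"],  # higher is better
      }

      candidates = [r for r in resources if job["requirements"](r)]
      if candidates:
          best = max(candidates, key=job["rank"])
          print("job dispatched to", best["name"])
      else:
          print("no matching resource; job kept in the broker queue")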

  18. EU grid computing effort takes on malaria

    CERN Multimedia

    Lawrence, Stacy

    2006-01-01

    Malaria is the world's most common parasitic infection, affecting more than 500 million people annually and killing more than 1 million. In order to help combat malaria, CERN has launched a grid computing effort (1 page)

  19. Computer-assisted planning and dosimetry for radiation treatment of head and neck cancer in Cameroon

    International Nuclear Information System (INIS)

    Yomi, J.; Ngniah, A.; Kingue, S.; Muna, W.F.T.; Durosinmi-Etti, F.A.

    1995-01-01

    This evaluation was part of a multicenter, multinational study sponsored by the International Atomic Energy Agency (Vienna) to investigate a simple, reliable computer-assisted planning and dosimetry system for radiation treatment of head and neck cancers in developing countries. Over a 13-month period (April 1992-April 1993), 120 patients with histologically proven head or neck cancer were included in the evaluation. In each patient, planning and dosimetry were done both manually and using the computer-assisted system. The manual and computerized systems were compared on the basis of accuracy of determination of the outer contour, target volume, and critical organs; volume inequality resolution; structure heterogeneity correction; selection of the number, angle, and size of beams; treatment time calculation; availability of dosimetry predictions; and duration and cost of the procedure. Results demonstrated that the computer-assisted procedure was superior to the manual procedure, despite less than optimal software. The accuracy provided by the completely computerized procedure is indispensable for Level II radiation therapy, which is particularly useful in tumors of the sensitive, complex structures in the head and neck. (authors). 7 refs., 3 tabs

  20. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  1. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing, a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two.... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values....
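
    As an illustration of the flavour of such heuristics (not the specific methods evaluated in the paper), a minimal greedy rule that assigns each job to the resource with the earliest estimated completion time could look like this:

```python
def greedy_schedule(jobs, resources):
    """Assign each job (name, CPU-time demand) to the resource (name, relative
    speed) that would finish it earliest. Illustrative only; not the paper's
    heuristics."""
    speed = dict(resources)
    finish_time = {name: 0.0 for name in speed}              # resource -> busy-until
    schedule = {}
    for job, demand in sorted(jobs, key=lambda j: -j[1]):    # largest job first
        best = min(speed, key=lambda r: finish_time[r] + demand / speed[r])
        finish_time[best] += demand / speed[best]
        schedule[job] = best
    return schedule, max(finish_time.values())               # assignment and makespan

jobs = [("j1", 10.0), ("j2", 4.0), ("j3", 7.0)]
resources = [("r1", 1.0), ("r2", 2.0)]
print(greedy_schedule(jobs, resources))
```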

  2. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang; Germain, Cé cile; Sebag, Michè le

    2010-01-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting

  3. Colgate one of first to build global computing grid

    CERN Multimedia

    Magno, L

    2003-01-01

    "Colgate-Palmolive Co. has become one of the first organizations in the world to build an enterprise network based on the grid computing concept. Since mid-August, the consumer products firm has been working to connect approximately 50 geographically dispersed Unix servers and storage devices in an enterprise grid network" (1 page).

  4. CT dosimetry computer codes: Their influence on radiation dose estimates and the necessity for their revision under new ICRP radiation protection standards

    International Nuclear Information System (INIS)

    Kim, K. P.; Lee, J.; Bolch, W. E.

    2011-01-01

    Computed tomography (CT) dosimetry computer codes have been most commonly used due to their user friendliness, but with little consideration for potential uncertainty in estimated organ dose and their underlying limitations. Generally, radiation doses calculated with different CT dosimetry computer codes were comparable, although relatively large differences were observed for some specific organs or tissues. The largest difference in radiation doses calculated using different computer codes was observed for Siemens Sensation CT scanners. Radiation doses varied with patient age and sex. Younger patients and adult females receive a higher radiation dose in general than adult males for the same CT technique factors. There are a number of limitations of current CT dosimetry computer codes. These include unrealistic modelling of the human anatomy, a limited number of organs and tissues for dose calculation, inability to alter patient height and weight, and non-applicability to new CT technologies. Therefore, further studies are needed to overcome these limitations and to improve CT dosimetry. (authors)

  5. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further test and deployment of a production grid are also described

  6. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  7. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Science.gov (United States)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by means of a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  8. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    International Nuclear Information System (INIS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by means of a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)

  9. SU-E-T-454: Impact of Calculation Grid Size On Dosimetry and Radiobiological Parameters for Head and Neck IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Srivastava, S; Das, I [Purdue University, West Lafayette, IN (United States); Indiana University Health Methodist Hospital, Indianapolis, IN (United States); Indiana University- School of Medicine, Indianapolis, IN (United States); Cheng, C [Purdue University, West Lafayette, IN (United States); Indiana University Health Methodist Hospital, Indianapolis, IN (United States)

    2014-06-01

    Purpose: IMRT has become the standard of care for complex treatments to optimize dose to the target and spare normal tissues. However, the impact of the calculation grid size on the dose distribution, tumor control probability (TCP) and normal tissue complication probability (NTCP) is not widely known, and is investigated in this study. Methods: Ten head and neck IMRT patients treated with 6 MV photons were chosen for this study. Using the Eclipse TPS, treatment plans were generated for grid sizes in the range 1–5 mm with the same optimization criterion and specific dose-volume constraints. The dose volume histogram (DVH) was calculated for all IMRT plans and dosimetric data were compared. ICRU-83 dose points such as D2%, D50% and D98%, as well as the homogeneity and conformity indices (HI, CI), were calculated. In addition, TCP and NTCP were calculated from the DVH data. Results: The PTV mean dose and TCP decrease with increasing grid size, with an average decrease in mean dose of 2% and in TCP of 3%. Increasing the grid size from 1 to 5 mm increased the average mean dose and NTCP for the left parotid by 6.0% and 8.0%, respectively. Similar patterns were observed for other OARs such as the cochlea, parotids and spinal cord. The HI increases by up to 60% and the CI decreases on average by 3.5% between 1 and 5 mm grids, which resulted in decreased TCP and increased NTCP values. The number of points meeting the gamma criteria of ±3% dose difference and ±3 mm DTA was higher on average with a 1 mm grid (97.2%) than with a 5 mm grid (91.3%). Conclusion: A smaller calculation grid provides superior dosimetry with improved TCP and reduced NTCP values. The effect is more pronounced for smaller OARs. Thus, the smallest possible grid size should be used for accurate dose calculation, especially in head and neck planning.
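
    For reference, the ICRU-83 homogeneity index quoted above is commonly defined from the DVH dose points as HI = (D2% - D98%)/D50%; a minimal sketch of extracting these points from a cumulative DVH (the dose and volume arrays below are hypothetical) is:

```python
import numpy as np

def dose_at_volume(dose, cum_volume, v_percent):
    """Interpolate Dv%: the dose covering at least v_percent of the volume.
    `dose` is ascending in Gy; `cum_volume` is the percentage of the volume
    receiving at least that dose (a decreasing curve)."""
    return float(np.interp(v_percent, cum_volume[::-1], dose[::-1]))

# Hypothetical cumulative DVH for a PTV (toy sigmoid curve, not patient data)
dose = np.linspace(0.0, 75.0, 301)                        # Gy
cum_volume = 100.0 / (1.0 + np.exp((dose - 68.0) / 1.5))  # % of volume >= dose

d2, d50, d98 = (dose_at_volume(dose, cum_volume, v) for v in (2, 50, 98))
hi = (d2 - d98) / d50   # ICRU-83 homogeneity index
print(f"D2%={d2:.1f} Gy  D50%={d50:.1f} Gy  D98%={d98:.1f} Gy  HI={hi:.3f}")
```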

  10. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computing and personnel resources.

  11. How to build a high-performance compute cluster for the Grid

    CERN Document Server

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Currently, much effort is spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  12. Dosimetry computer module of the gamma irradiator of ININ; Modulo informatico de dosimetria del irradiador gamma del ININ

    Energy Technology Data Exchange (ETDEWEB)

    Ledezma F, L. E.; Baldomero J, R. [ININ, Gerencia de Sistemas Informaticos, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Agis E, K. A., E-mail: luis.ledezma@inin.gob.mx [Universidad Autonoma del Estado de Mexico, Facultad de Ingenieria, Cerro de Coatepec s/n, Ciudad Universitaria, 50100 Toluca, Estado de Mexico (Mexico)

    2012-10-15

    This work presents the technical specifications for the upgrade of the dosimetry module of the computer system of the gamma irradiator of the Instituto Nacional de Investigaciones Nucleares (ININ), which allows the integration and consultation of industrial dosimetry information under a client-server scheme. (Author)

  13. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu materials easily. Experimental approaches to revealing the properties of Pu materials are often hampered by difficulties such as the radiotoxicity of actinides. Since a computational approach reveals new aspects to researchers without such radioactive facilities, we address an MD computation. In order to obtain more realistic results for, e.g., the melting point or thermal conductivity, large-scale parallel computations are needed. Most application users who do not have supercomputers at their own institutes must use a remote supercomputer. For such users, we have developed a portable and secure grid-enabled computing system that utilizes the grid computing infrastructure provided by the Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). In particular, it enables seamless file access from the GUI. Furthermore, standard output and standard error can be monitored to follow the progress of an executed program. Since the system provides functionality useful for parallel computing on a remote supercomputer, application users can concentrate on their research. (author)

  14. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by a Machin...

  15. Computed tomographic practice and dosimetry: implications for nuclear medicine: editorial

    International Nuclear Information System (INIS)

    Mountford, P.J.; Harding, L.K.

    1992-01-01

    This editorial briefly discusses the results of an NRPB survey of x-ray computed tomography practice and dosimetry in the UK. A wide variation in practice and patient doses was revealed. The implications for nuclear medicine are considered. The NRPB is to issue formal guidance on protection of the patient undergoing a CT investigation with the aim of achieving a more systematic approach to the justification and optimization of such exposures. (UK)

  16. Dosimetry and health effects self-teaching curriculum: illustrative problems to supplement the user's manual for the Dosimetry and Health Effects Computer Code

    International Nuclear Information System (INIS)

    Runkle, G.E.; Finley, N.C.

    1983-03-01

    This document contains a series of sample problems for the Dosimetry and Health Effects Computer Code to be used in conjunction with the user's manual (Runkle and Cranwell, 1982) for the code. This code was developed at Sandia National Laboratories for the Risk Methodology for Geologic Disposal of Radioactive Waste program (NRC FIN A-1192). The purpose of this document is to familiarize the user with the code, its capabilities, and its limitations. When the user has finished reading this document, he or she should be able to prepare data input for the Dosimetry and Health Effects code and have some insights into interpretation of the model output

  17. Kids at CERN Grids for Kids programme leads to advanced computing knowledge.

    CERN Multimedia

    2008-01-01

    Children as young as 10 learned computing skills, such as middleware, parallel processing and supercomputing, at CERN, the European Organisation for Nuclear Research, last week. The initiative for 10 to 12 year olds is part of the Grids for Kids programme, which aims to introduce Grid computing as a tool for research.

  18. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    Dec 1, 2015 ... Abstract. This work developed and simulated a mathematical model for a mobile wireless computational Grid ... which mobile modes will process the tasks .... evaluation are analytical modelling, simulation ... MATLAB 7.10.0.

  19. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    Science.gov (United States)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses and recent lessons learned need to be integrated into the next generation of Grid middleware.

  20. Pediatric personalized CT-dosimetry Monte Carlo simulations, using computational phantoms

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, G C; Ploussi, A; Kordolaimi, S; Papamichail, D; Karavasilis, E; Syrgiamiotis, V; Loudos, G

    2015-01-01

    For the last 40 years Monte Carlo (MC) simulations have served as a “gold standard” tool for a wide range of applications in the field of medical physics and tend to be essential in daily clinical practice. Regarding diagnostic imaging applications, such as computed tomography (CT), the assessment of deposited energy is of high interest, so as to better analyze the risks and the benefits of the procedure. In the last few years a major effort has been made towards personalized dosimetry, especially in pediatric applications. In the present study the GATE toolkit was used and computational pediatric phantoms were modeled for the dosimetric assessment of CT examinations. The pediatric models used come from the XCAT and IT'IS series. The X-ray spectrum of a Brightspeed CT scanner was simulated and validated with experimental data. Specifically, a DCT-10 ionization chamber was irradiated twice using 120 kVp with 100 mAs and 200 mAs, for 1 sec in 1 central axial slice (thickness = 10 mm). The absorbed dose was measured in air, resulting in differences lower than 4% between the experimental and simulated data. The simulations used ∼10^10 primaries in order to achieve low statistical uncertainties. Dose maps were also saved for quantification of the absorbed dose in several critical organs of children during CT acquisition. (paper)

  1. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm and model of the software running on computer hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  2. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B; Baranovski, A; Diesburg, M; Garzoglio, G; Mhashilkar, P; Kurca, T

    2008-01-01

    High energy physics experiments periodically reprocess data, in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of Gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid, such as the OSG. This paper describes the implications of using the OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project

  3. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B.; Baranovski, A.; Diesburg, M.; Garzoglio, G.; Kurca, T.; Mhashilkar, P.

    2007-01-01

    High energy physics experiments periodically reprocess data, in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of Gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid, such as the OSG. This paper describes the implications of using the OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project

  4. The extended RBAC model based on grid computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Jian-gang; WANG Ru-chuan; WANG Hai-yan

    2006-01-01

    This article proposes an extended role-based access control (RBAC) model for solving dynamic and multidomain problems in grid computing, and provides a formal description of the model. The introduction of context and of the context-to-role and context-to-permission mapping relations helps the model adapt to the dynamic properties of the grid environment. A multidomain role inheritance relation established by the authorization agent service realizes multidomain authorization among autonomous domains. A function is proposed for resolving role inheritance conflicts during the establishment of the multidomain role inheritance relation.
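
    A minimal sketch of the idea of context-dependent role assignment with role inheritance (illustrative only; the contexts, roles and tables below are hypothetical and the context-to-permission mapping is collapsed into a simple role-to-permission table, unlike the authors' formal model) could be:

```python
# Illustrative context-aware RBAC check (not the paper's formal model).
ROLE_BY_CONTEXT = {                       # context-to-role mapping (hypothetical)
    ("domain-a", "office-hours"): {"alice": {"analyst"}},
    ("domain-a", "night"):        {"alice": {"viewer"}},
}
PERMISSION_BY_ROLE = {                    # simplified role-to-permission table
    "analyst": {"submit_job", "read_data"},
    "viewer":  {"read_data"},
}
ROLE_INHERITANCE = {"analyst": {"viewer"}}   # analyst inherits viewer's permissions

def roles_of(user, context):
    direct = ROLE_BY_CONTEXT.get(context, {}).get(user, set())
    inherited = set().union(*(ROLE_INHERITANCE.get(r, set()) for r in direct)) if direct else set()
    return direct | inherited

def allowed(user, context, action):
    return any(action in PERMISSION_BY_ROLE.get(r, set()) for r in roles_of(user, context))

print(allowed("alice", ("domain-a", "office-hours"), "submit_job"))  # True
print(allowed("alice", ("domain-a", "night"), "submit_job"))         # False
```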

  5. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have quickly become established as breakthrough paradigms for solving complex problems by enabling aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  6. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  7. Optical computed tomography in PRESAGE® three-dimensional dosimetry: Challenges and prospective.

    Science.gov (United States)

    Khezerloo, Davood; Nedaie, Hassan Ali; Farhood, Bagher; Zirak, Alireza; Takavar, Abbas; Banaee, Nooshin; Ahmadalidokht, Isa; Kron, Tomas

    2017-01-01

    With the advent of new complex but precise radiotherapy techniques, the demand for an accurate, feasible three-dimensional (3D) dosimetry system has increased. A 3D dosimetry system should generally not only give accurate and precise results but should also be feasible, inexpensive, and not too time consuming. Recently, one of the new candidates for 3D dosimetry is optical computed tomography (CT) with a radiochromic dosimeter such as PRESAGE®. Several generations of optical CT have been developed since the 90s. At the same time, a large effort has also been made to introduce robust dosimeters that are compatible with optical CT scanners. In 2004, the PRESAGE® dosimeter was introduced as a new radiochromic solid plastic dosimeter. In the past decade, a large number of efforts have been made to enhance optical scanning methods. This article attempts to review and reflect on the results of these investigations.

  8. Optical computed tomography in PRESAGE® three-dimensional dosimetry: Challenges and prospective

    Directory of Open Access Journals (Sweden)

    Davood Khezerloo

    2017-01-01

    Full Text Available With the advent of new complex but precise radiotherapy techniques, the demand for an accurate, feasible three-dimensional (3D) dosimetry system has increased. A 3D dosimetry system should generally not only give accurate and precise results but should also be feasible, inexpensive, and not too time consuming. Recently, one of the new candidates for 3D dosimetry is optical computed tomography (CT) with a radiochromic dosimeter such as PRESAGE®. Several generations of optical CT have been developed since the 90s. At the same time, a large effort has also been made to introduce robust dosimeters that are compatible with optical CT scanners. In 2004, the PRESAGE® dosimeter was introduced as a new radiochromic solid plastic dosimeter. In the past decade, a large number of efforts have been made to enhance optical scanning methods. This article attempts to review and reflect on the results of these investigations.

  9. WEKA-G: Parallel data mining on computational grids

    Directory of Open Access Journals (Sweden)

    PIMENTA, A.

    2009-12-01

    Full Text Available Data mining is a technology that can extract useful information from large amounts of data. However, mining a database often requires high computational power. To address this problem, this paper presents a tool (Weka-G) that runs in parallel the algorithms used in the data mining process. As the environment for doing so, we use a computational grid, adding several features within a WAN.

  10. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    Directory of Open Access Journals (Sweden)

    Watthanai Pinthong

    2016-07-01

    Full Text Available Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.
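
    The essential step in running BLAST on such a grid is splitting the input reads into independent work units that individual desktop nodes can process; a minimal sketch of chunking a FASTA file into work units (the file names are hypothetical and this is not the BOINC API itself) is:

```python
def split_fasta(path, reads_per_chunk=100_000, prefix="workunit"):
    """Split a FASTA file into fixed-size chunks so that each chunk can be
    aligned independently on a different grid node. Illustrative only."""
    written, chunk, count, index = [], [], 0, 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if count == reads_per_chunk:               # current chunk is full
                    written.append(_write_chunk(chunk, prefix, index))
                    chunk, count, index = [], 0, index + 1
                count += 1
            chunk.append(line)
    if chunk:
        written.append(_write_chunk(chunk, prefix, index))
    return written                                          # one file per work unit

def _write_chunk(lines, prefix, index):
    name = f"{prefix}_{index:05d}.fasta"
    with open(name, "w") as out:
        out.writelines(lines)
    return name

# Each returned file would then be submitted as a separate work unit,
# e.g. split_fasta("reads.fasta") -> ["workunit_00000.fasta", ...]
```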

  11. Development of a guidance guide for dosimetry in computed tomography

    International Nuclear Information System (INIS)

    Fontes, Ladyjane Pereira

    2016-01-01

    Due to frequent questions from users of pencil-type ionization chambers calibrated at the Instrument Calibration Laboratory of the Institute of Energy and Nuclear Research (LCI - IPEN) on how to properly apply the factors indicated in their calibration certificates, a guidance document for dosimetry in computed tomography was prepared. The guide requires prior knowledge of the half-value layer (HVL), since the effective beam energy must be known in order to apply the beam-quality correction factor (kq). Evaluating the HVL in CT scanners is a difficult task because of the system geometry, so a survey of existing methodologies for determining the HVL in clinical computed tomography beams was conducted, taking into account technical, practical and economic factors. Based on preliminary studies, and because of its low cost and good response, it was decided in this work to test a Tandem system consisting of absorbing caps made in the IPEN workshop. The Tandem system consists of five cylindrical aluminium absorbing caps of 1 mm, 3 mm, 5 mm, 7 mm and 10 mm and three cylindrical acrylic (PMMA) absorbing caps of 15 mm, 25 mm and 35 mm, coupled to a commercial pencil-type ionization chamber widely used in quality control tests and dosimetry of clinical computed tomography beams. Through the Tandem curves it was possible to assess HVL values and, from the calibration curve of the pencil-type ionization chamber, to find the appropriate kq for the beam. The guide provides information on how to build the calibration curve as a function of the HVL in order to find kq, and on how to construct the Tandem curve in order to find values close to the HVL. (author)

  12. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  13. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  14. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D efforts, first to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called the Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called the Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable the synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements obtained so far in this R and D of grid computing technology. (T. Tanaka)

  15. Dosimetry for computed tomography using Fricke gel dosimetry and magnetic resonance imaging; Dosimetria em tomografia computadorizada empregando dosimetro Fricke gel e a tecnica de imageamento por ressonancia magnetica

    Energy Technology Data Exchange (ETDEWEB)

    Capeleti, Felipe Favaro; Campos, Leticia L., E-mail: felipe@gmpbrasil.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2014-04-15

    In this work a new method was established for the determination of absorbed doses in Computed Tomography (CT) examinations using the Fricke gel dosimeter developed at IPEN. Absorbed doses were determined by different methods of analysis, such as optical absorption spectrometry, ionization chambers and magnetic resonance imaging. Tests were performed on the lower limit of sensitivity of the Fricke gel solution, the repeatability of the Fricke gel signal and of the CT equipment, and the detection sensitivity, among others. Different multidetector computed tomography scanners were used. The Fricke gel solution showed repeatability better than ±5.5% using the technique of optical absorption spectrophotometry, and the computed tomography equipment showed repeatability better than ±0.2%. The Fricke gel solution features an easy and relatively quick preparation, but care is necessary not to contaminate or lose the solution. The results confirmed the applicability of this type of dosimetry to computed tomography equipment. (author)

  16. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work has been achieved thanks to a generic object-oriented toolbox called DistMe which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random number generator statuses. (authors)
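
    The key precaution mentioned above is that each replicated simulation must consume an independent, non-overlapping pseudo-random number stream; a minimal sketch of spawning independent streams for parallel replications (using NumPy's SeedSequence as a stand-in, not the DistMe toolbox or GATE itself) is:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def one_replication(seed_seq, n_samples=1_000_000):
    """One Monte Carlo replication driven by its own independent generator.
    The toy estimator (pi by rejection sampling) stands in for a GATE run."""
    rng = np.random.default_rng(seed_seq)
    xy = rng.random((n_samples, 2))
    return 4.0 * np.mean(np.sum(xy**2, axis=1) <= 1.0)

if __name__ == "__main__":
    root = np.random.SeedSequence(20090301)
    streams = root.spawn(8)                    # 8 statistically independent streams
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(one_replication, streams))
    print(np.mean(estimates), np.std(estimates))
```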

  17. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    Full Text Available This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving good speedup values.

  18. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using networks of queuing theory. This was in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...

  19. Reliable multicast for the Grid: a case study in experimental computer science.

    Science.gov (United States)

    Nekovee, Maziar; Barcellos, Marinho P; Daw, Michael

    2005-08-15

    In its simplest form, multicast communication is the process of sending data packets from a source to multiple destinations in the same logical multicast group. IP multicast allows the efficient transport of data through wide-area networks, and its potentially great value for the Grid has been highlighted recently by a number of research groups. In this paper, we focus on the use of IP multicast in Grid applications, which require high-throughput reliable multicast. These include Grid-enabled computational steering and collaborative visualization applications, and wide-area distributed computing. We describe the results of our extensive evaluation studies of state-of-the-art reliable-multicast protocols, which were performed on the UK's high-speed academic networks. Based on these studies, we examine the ability of current reliable multicast technology to meet the Grid's requirements and discuss future directions.

  20. CheckDen, a program to compute quantum molecular properties on spatial grids.

    Science.gov (United States)

    Pacios, Luis F; Fernandez, Alberto

    2009-09-01

    CheckDen, a program to compute quantum molecular properties on a variety of spatial grids, is presented. The program reads as its only input wavefunction files written by standard quantum packages and calculates the electron density rho(r), promolecule and density difference function, gradient of rho(r), Laplacian of rho(r), information entropy, electrostatic potential, kinetic energy densities G(r) and K(r), electron localization function (ELF), and localized orbital locator (LOL) function. These properties can be calculated on a wide range of one-, two-, and three-dimensional grids that can be processed by widely used graphics programs to render high-resolution images. CheckDen also offers other options such as extracting separate atom contributions to the computed property, converting grid output data into CUBE and OpenDX volumetric data formats, and performing arithmetic combinations with grid files in all the recognized formats.
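
    As a simple illustration of evaluating a scalar molecular property on a regular 3D grid (a toy Gaussian density rather than CheckDen's actual wavefunction-based quantities; all parameters below are hypothetical), one might write:

```python
import numpy as np

def density_on_grid(centers, exponents, origin, spacing, shape):
    """Evaluate a toy electron density (sum of normalized s-type Gaussians)
    on a regular 3D grid. Illustrative only; CheckDen instead reads real
    wavefunction files from quantum chemistry packages."""
    axes = [origin[i] + spacing * np.arange(shape[i]) for i in range(3)]
    x, y, z = np.meshgrid(*axes, indexing="ij")
    rho = np.zeros(shape)
    for (cx, cy, cz), alpha in zip(centers, exponents):
        r2 = (x - cx)**2 + (y - cy)**2 + (z - cz)**2
        rho += (alpha / np.pi)**1.5 * np.exp(-alpha * r2)   # integrates to 1
    return axes, rho

axes, rho = density_on_grid([(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)], [1.0, 1.0],
                            origin=(-3.0, -3.0, -3.0), spacing=0.2,
                            shape=(31, 31, 31))
print(rho.sum() * 0.2**3)   # close to 2.0 (two normalized Gaussians)
```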

  1. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  2. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  3. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    Science.gov (United States)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  4. Computer codes in nuclear safety, radiation transport and dosimetry

    International Nuclear Information System (INIS)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M.

    2006-01-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into 2 sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that 2 different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation, and secondly, a neural approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations

  5. Dosimetry and Shielding of X and Gamma Radiation

    International Nuclear Information System (INIS)

    Oncescu, M.; Panaitescu, I.

    1992-01-01

    This book covers the following topics: 1. X and Gamma radiations, 2. Interaction of X-ray and gamma radiations with matter, 3. Interaction of electrons with matter, 4. Principles and basic concepts of dosimetry, 5. Ionization dosimetry, 6. Calorimetric, chemical and photographic dosimetry, 7. Solid state dosimetry, 8. Computation of dosimetric quantities, 9. Dosimetry in radiation protection, 10. Shielding of X and gamma radiations. The authors, well-known Romanian experts in radiation physics and engineering, give an updated, complete and readable account of this subject matter. The analyses of physical principles and concepts, of materials and instruments, and of computational methods and applications are all well balanced to meet the needs of a broad readership

  6. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  7. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  8. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    Science.gov (United States)

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2LYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
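
    The underlying numerical scheme is central differencing of analytic gradients along each nuclear displacement, and the 6N displaced-gradient jobs are mutually independent, which is what makes the procedure embarrassingly parallel on a grid; a minimal sketch (with a toy analytic gradient standing in for a full quantum chemistry calculation) is:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gradient(coords):
    """Stand-in analytic gradient of a toy quadratic energy surface.
    In NUMFREQ@Grid each call would instead be a full SCF + gradient job."""
    k = np.arange(1.0, coords.size + 1.0)
    return k * coords.ravel()

def displaced_gradient(args):
    coords, index, sign, step = args
    disp = coords.ravel().copy()
    disp[index] += sign * step
    return gradient(disp.reshape(coords.shape))

def numerical_hessian(coords, step=0.005, workers=4):
    """Central-difference Hessian: H[:, i] = (g(x + h e_i) - g(x - h e_i)) / (2h).
    Each displaced gradient is independent, so the jobs can be farmed out to a grid."""
    n = coords.size
    tasks = [(coords, i, s, step) for i in range(n) for s in (+1, -1)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        grads = list(pool.map(displaced_gradient, tasks))
    hessian = np.empty((n, n))
    for i in range(n):
        hessian[:, i] = (grads[2 * i] - grads[2 * i + 1]) / (2.0 * step)
    return 0.5 * (hessian + hessian.T)    # symmetrize

if __name__ == "__main__":
    print(numerical_hessian(np.zeros((2, 3))))   # diagonal 1..6 for the toy surface
```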

  9. NURBS-based 3-d anthropomorphic computational phantoms for radiation dosimetry applications

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lodwick, Daniel; Lee, Choonik; Bolch, Wesley E.

    2007-01-01

    Computational anthropomorphic phantoms are computer models used in the evaluation of absorbed dose distributions within the human body. Currently, two classes of computational phantoms have been developed and widely utilised for dosimetry calculations: (1) stylized (equation-based) and (2) voxel (image-based) phantoms, describing human anatomy through the use of mathematical surface equations and 3-D voxel matrices, respectively. However, stylized phantoms have limitations in defining realistic organ contours and positioning as compared to voxel phantoms, which are themselves based on medical images of human subjects. In turn, voxel phantoms that have been developed through medical image segmentation have limitations in describing organs that present with low contrast in either magnetic resonance or computed tomography images. The present paper reviews the advantages and disadvantages of these existing classes of computational phantoms and introduces a hybrid approach to computational phantom construction based on non-uniform rational B-spline (NURBS) surface animation technology that takes advantage of the most desirable features of the former two phantom types. (authors)

  10. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  11. Dynamic stability calculations for power grids employing a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K

    1982-06-01

    The aim of dynamic contingency calculations in power systems is to estimate the effects of assumed disturbances, such as loss of generation. Due to the large dimensions of the problem these simulations require considerable computing time and costs, to the effect that they are at present only used in the planning stage but not for routine checks in power control stations. In view of the homogeneity of the problem, where a multitude of equal generator models, having different parameters, are to be integrated simultaneously, the use of a parallel computer looks very attractive. The results of this study employing a prototype parallel computer (SMS 201) are presented. It consists of up to 128 equal microcomputers bus-connected to a control computer. Each of the modules is programmed to simulate a node of the power grid. Generators with their associated control are represented by models of 13 states each. Passive nodes are complemented by 'phantom' generators, so that the whole power grid is homogeneous, thus removing the need for load-flow iterations. Programming of the microcomputers is essentially performed in FORTRAN.

  12. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
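    The deadline- and cost-based brokering mentioned above is only summarized in the abstract. The following minimal sketch (resource names, prices and rates are invented, and this is not the Nimrod/G algorithm itself) illustrates one greedy cost-optimization strategy: fill the cheapest resources first while respecting both the deadline and the budget.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cost_per_job: int   # cost units per job (integer to keep the arithmetic exact)
    rate: float         # jobs completed per hour

def schedule(jobs, deadline_h, resources, budget):
    """Greedy deadline/budget broker sketch: fill cheapest resources first,
    never exceeding what each can finish before the deadline or the budget."""
    plan, spent, remaining = {}, 0, jobs
    for r in sorted(resources, key=lambda r: r.cost_per_job):
        capacity = int(r.rate * deadline_h)                        # jobs finishable in time
        affordable = (budget - spent) // r.cost_per_job if r.cost_per_job else remaining
        n = min(remaining, capacity, affordable)
        if n > 0:
            plan[r.name] = n
            spent += n * r.cost_per_job
            remaining -= n
        if remaining == 0:
            break
    return plan, spent, remaining   # remaining > 0 means the constraints cannot be met

# Hypothetical testbed resources
resources = [Resource("siteA", 50, 20), Resource("siteB", 20, 5), Resource("siteC", 100, 50)]
print(schedule(jobs=100, deadline_h=2, resources=resources, budget=6000))
```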

  13. Dynamic grid refinement for partial differential equations on parallel computers

    International Nuclear Information System (INIS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs

  14. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    Science.gov (United States)

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  15. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang

    2010-10-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting changes in a grid system can help to signal anomalies, clean noise, and report new patterns. In this paper, we propose a self-adaptive change-detection approach based on the Page-Hinkley statistical test. It handles non-stationary distributions without assumptions about the data distribution or empirical parameter settings. We validate the approach on the EGEE streaming jobs, and report its higher accuracy compared to other change-detection methods. This change-detection process also helped to discover a device fault that was not reported in the system logs. © 2010 IEEE.
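    For readers unfamiliar with the Page-Hinkley statistic mentioned above, a minimal sketch follows; the threshold and tolerance values are illustrative and not those used in the paper.

```python
class PageHinkley:
    """Minimal Page-Hinkley change detector for an increase in the mean.

    Alarms when the cumulative deviation of observations from their running
    mean drifts more than `threshold` above its historical minimum.
    """
    def __init__(self, delta=0.005, threshold=10.0):
        self.delta = delta          # tolerance for small fluctuations
        self.threshold = threshold  # alarm level (lambda)
        self.mean = 0.0
        self.cum = 0.0
        self.cum_min = 0.0
        self.n = 0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold   # True => change detected

# Toy stream: the mean jumps from 0 to 2 at t = 100
import random
detector = PageHinkley()
for t in range(200):
    x = random.gauss(2.0 if t >= 100 else 0.0, 1.0)
    if detector.update(x):
        print("change detected at t =", t)
        break
```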

  16. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing the simulation of multi-scale structural analysis.

  17. User's Manual for FOMOCO Utilities-Force and Moment Computation Tools for Overset Grids

    Science.gov (United States)

    Chan, William M.; Buning, Pieter G.

    1996-01-01

    In the numerical computations of flows around complex configurations, accurate calculations of force and moment coefficients for aerodynamic surfaces are required. When overset grid methods are used, the surfaces on which force and moment coefficients are sought typically consist of a collection of overlapping surface grids. Direct integration of flow quantities on the overlapping grids would result in the overlapped regions being counted more than once. The FOMOCO Utilities is a software package for computing flow coefficients (force, moment, and mass flow rate) on a collection of overset surfaces with accurate accounting of the overlapped zones. FOMOCO Utilities can be used in stand-alone mode or in conjunction with the Chimera overset grid compressible Navier-Stokes flow solver OVERFLOW. The software package consists of two modules corresponding to a two-step procedure: (1) hybrid surface grid generation (MIXSUR module), and (2) flow quantities integration (OVERINT module). Instructions on how to use this software package are described in this user's manual. Equations used in the flow coefficients calculation are given in Appendix A.

  18. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  19. Qualities of Grid Computing that can last for Ages | Asagba | Journal ...

    African Journals Online (AJOL)

    Grid computing has emerged as an important new field, distinguished from conventional distributed computing by its abilities in large-scale resource sharing and services. It will become even more popular because of the benefits it can offer over traditional supercomputers, and other forms of distributed ...

  20. Porting of Bio-Informatics Tools for Plant Virology on a Computational Grid

    International Nuclear Information System (INIS)

    Lanzalone, G.; Lombardo, A.; Muoio, A.; Iacono-Manno, M.

    2007-01-01

    The goal of the TriGrid Project and PI2S2 is the creation of the first Sicilian regional computational Grid. In particular, it aims to build various software-hardware interfaces between the infrastructure and some scientific and industrial applications. In this context, we have integrated some of the most innovative computing applications in virology research into this Grid infrastructure. Specifically, we have implemented, in a complete workflow, various tools for pairwise or multiple sequence alignment and phylogenetic tree construction (ClustalW-MPI), phylogenetic networks (SplitsTree), detection of recombination by phylogenetic methods (TOPALi) and prediction of DNA or RNA secondary consensus structures (KnetFold). This work will show how the ported applications decrease the execution time of the analysis programs, improve the accessibility of the data storage system and allow the use of metadata for data processing. (Author)

  1. Non-conventional personal dosimetry techniques

    International Nuclear Information System (INIS)

    Regulla, D.F.

    1984-01-01

    Established dosimetry has achieved a high standard in personnel monitoring. This applies particularly to photon dosimetry. Nevertheless, even in photon dosimetry, improvements and changes are being made. The reason may be technological progress, or the introduction of new tasks on the basis of the recommendations of international bodies (e.g. the new ICRU measurement unit) or of national legislation. Since we are restricting ourselves here to technical trends, the author would like to draw attention to various activities of current interest, e.g. the computation of receptor-related conversion coefficients from personal dose to organ or body doses, taking into account the conditions of exposure with respect to the differential energy and angular distribution of the radiation field. Realistic data on exposure geometry are taken from workplace analyses. Furthermore, the data banks of central personal dosimetry services are subject to statistical evaluation and radiation protection trend analysis. Technological progress and developments are considered from the point of view of personal dosimetry, partial-body or extremity dosimetry and accidental dosimetry.

  2. Taiwan links up to world's first LHC computing grid project

    CERN Multimedia

    2003-01-01

    "Taiwan's Academia Sinica was linked up to the Large Hadron Collider (LHC) Computing Grid Project last week to work jointly with 12 other countries to construct the world's largest and most powerful particle accelerator" (1/2 page).

  3. Erasmus Computing Grid: Building a 20 Tera-FLOPS Virtual Supercomputer.

    NARCIS (Netherlands)

    L.V. de Zeeuw (Luc); T.A. Knoch (Tobias); J.H. van den Berg (Jan); F.G. Grosveld (Frank)

    2007-01-01

    The Erasmus Medical Center and Rotterdam University of Applied Sciences (Hogeschool Rotterdam) began a collaboration in 2005 to make the roughly 95% unused computing capacity of their computers available for research and education. This collaboration led to the Erasmus Computing GRID (ECG).

  4. Grid: From EGEE to EGI and from INFN-Grid to IGI

    International Nuclear Information System (INIS)

    Giselli, A.; Mazzuccato, M.

    2009-01-01

    In the last fifteen years, the computational Grid approach has changed the way computing resources are used. Grid computing has raised interest worldwide in academia, industry, and government, with fast development cycles. Great efforts, huge funding and resources have been made available through national, regional and international initiatives aiming at providing Grid infrastructures, Grid core technologies, Grid middleware and Grid applications. The Grid software layers reflect the architecture of the services developed so far by the most important European and international projects. In this paper, the story of the Grid e-Infrastructure is told, detailing European, Italian and international projects such as EGEE, INFN-Grid and NAREGI. In addition, the issue of long-term sustainability is addressed, presenting the plans of the European and Italian communities for EGI and IGI.

  5. Chimera Grid Tools

    Science.gov (United States)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  6. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub that supports real-time computing for handling large volumes of data. A stochastic programming model with a cloud computing scheme is developed for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
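    The paper's own stochastic dynamic program is not reproduced in the abstract. As a deterministic, simplified stand-in for that kind of DSM optimization, the sketch below uses backward dynamic programming to schedule battery charging and discharging against an hourly price signal (all numbers invented; conversion losses ignored).

```python
def battery_dsm(prices, demand, capacity=3, max_rate=1):
    """Backward dynamic program over (hour, battery level).
    cost[t][s] = minimal cost of meeting demand from hour t onward, given
    battery level s; the action a is the change in battery level that hour."""
    T = len(prices)
    INF = float("inf")
    cost = [[INF] * (capacity + 1) for _ in range(T + 1)]
    best = [[0] * (capacity + 1) for _ in range(T)]
    for s in range(capacity + 1):
        cost[T][s] = 0.0                                  # nothing left to pay after horizon
    for t in reversed(range(T)):
        for s in range(capacity + 1):
            for a in range(-max_rate, max_rate + 1):      # discharge ... charge
                s_next = s + a
                if not 0 <= s_next <= capacity:
                    continue
                bought = demand[t] + a                    # grid purchase this hour (kWh)
                if bought < 0:
                    continue
                c = prices[t] * bought + cost[t + 1][s_next]
                if c < cost[t][s]:
                    cost[t][s], best[t][s] = c, a
    return cost[0][0], best

prices = [0.10, 0.30, 0.30, 0.05, 0.40, 0.15]   # invented hourly tariff
demand = [1, 1, 1, 1, 1, 1]                      # kWh required per hour
total, policy = battery_dsm(prices, demand)
print("minimum cost:", round(total, 2))
```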

  7. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high-speed wide-area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high-speed wide-area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the need to tune the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into this package. Finally, a few applications using this package are discussed

  8. Computer aided dosimetry and verification of exposure to radiation. Technical report

    International Nuclear Information System (INIS)

    Waller, D.; Stodilka, R.Z.; Leach, K.E.; Prud'homme-Lalonde, L.

    2002-06-01

    In the timeframe following the September 11th attacks on the United States, increased emphasis has been placed on Chemical, Biological, Radiological and Nuclear (CBRN) preparedness. Of prime importance is rapid field assessment of potential radiation exposure to Canadian Forces field personnel. This work set up a framework for generating an 'expert' computer system for aiding and assisting field personnel in determining the extent of radiation insult to military personnel. Data was gathered by review of the available literature, discussions with medical and health physics personnel having hands-on experience dealing with radiation accident victims, and from experience of the principal investigator. Flow charts and generic data fusion algorithms were developed. Relationships between known exposure parameters, patient interview and history, clinical symptoms, clinical work-ups, physical dosimetry, biological dosimetry, and dose reconstruction as critical data indicators were investigated. The data obtained was examined in terms of information theory. A main goal was to determine how best to generate an adaptive model (i.e. when more data becomes available, how is the prediction improved). Consideration was given to determination of predictive algorithms for health outcome. In addition, the concept of coding an expert medical treatment advisor system was developed. (author)

  9. Cloud Computing and Smart Grids

    Directory of Open Access Journals (Sweden)

    Janina POPEANGĂ

    2012-10-01

    Full Text Available Increasing concern about energy consumption is leading to infrastructure that supports real-time, two-way communication between utilities and consumers, and allows software systems at both ends to control and manage power use. To manage communications to millions of endpoints in a secure, scalable and highly-available environment and to achieve these twin goals of ‘energy conservation’ and ‘demand response’, utilities must extend the same communication network management processes and tools used in the data center to the field. This paper proposes that cloud computing technology, because of its low cost, flexible and redundant architecture and fast response time, has the functionality needed to provide the security, interoperability and performance required for large-scale smart grid applications.

  10. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J M; Kodeli, I; Menard, St; Bouchet, J L; Renard, F; Martin, E; Blazy, L; Voros, S; Bochud, F; Laedermann, J P; Beaugelin, K; Makovicka, L; Quiot, A; Vermeersch, F; Roche, H; Perrin, M C; Laye, F; Bardies, M; Struelens, L; Vanhavere, F; Gschwind, R; Fernandez, F; Quesne, B; Fritsch, P; Lamart, St; Crovisier, Ph; Leservot, A; Antoni, R; Huet, Ch; Thiam, Ch; Donadille, L; Monfort, M; Diop, Ch; Ricard, M

    2006-07-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and secondly, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.

  11. A microcomputer controlled thermoluminescence dosimetry system

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Kicken, P.J.H.

    1980-01-01

    Using a microcomputer, an automatic thermoluminescence dosimetry system for personal dosimetry and thermoluminescence detector (TLD) research was developed. Process automation, statistical computation and dose calculation are provided by this microcomputer. Recording of measurement data, as well as dose record keeping for radiological workers is carried out with floppy disk. The microcomputer also provides a human/system interface by means of a video display and a printer. The main features of this dosimetry system are its low cost, high degree of flexibility, high degree of automation and the feasibility for use in routine dosimetry as well as in TLD research. The system is in use for personal dosimetry, environmental dosimetry and for TL-research work. Because of its modular set-up several components of the system are in use for other applications, too. The system seems suited for medium sized health physics groups. (author)

  12. Computational dosimetry and risk assessment of radioinduced cancer: studies in mammary glands radiotherapy, radiopharmaceuticals and internal contamination

    International Nuclear Information System (INIS)

    Mendes, Bruno Melo

    2017-01-01

    The use of ionizing radiation (IR) in medicine has increased considerably. The benefits generated by diagnostic and therapy techniques with IR are proven. Nevertheless, the risks arising from these uses should not be underestimated. Justification, a basic radiation protection principle, states that the benefits from exposures must outweigh the detriment. Cancer induction is one of the detriment components. Thus, the study of the benefit/detriment ratio should take into account cancer incidence and mortality estimations resulting from a given diagnostic or therapeutic radiological technique. The risk of cancer induction depends on the absorbed doses in the irradiated organs and tissues. Thus, IR dosimetry is essential to evaluate the benefit/detriment ratio. The present work aims to perform computational dosimetric evaluations and estimations of cancer induction risk after ionizing radiation exposure. The investigated situations cover the nuclear medicine, radiological contamination and radiotherapy fields. Computational dosimetry, with the MCNPx Monte Carlo code, was used as a tool to calculate the absorbed dose in the organs of interest of the voxelized human models. The simulations were also used to obtain calibration factors and optimization of in vivo monitoring systems for internal contamination dosimetry. A standard breast radiotherapy (RT) protocol was simulated using the MCNPx code. The calculation of the radiation-induced cancer risk was adapted from the BEIR VII methodology for the Brazilian population. The absorbed doses used in the risk calculations were obtained through computational simulations of different exposure scenarios. During this work, two new computational phantoms, DM B RA and VW, were generated from tomographic images. An additional twelve voxelized phantoms, including the reference phantoms RCP-AM and RCP-AF, and the child, baby, and fetus models, were adapted to run on MCNP. Internal Dosimetry Protocols (IDP) for radiopharmaceuticals and for internal contamination

  13. GENII: The Hanford Environmental Radiation Dosimetry Software System: Volume 2, Users' manual: Hanford Environmental Dosimetry Upgrade Project

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-11-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). The purpose of this coupled system of computer codes is to analyze environmental contamination of air, water, or soil. This is accomplished by calculating radiation doses to individuals or populations. GENII is described in three volumes of documentation. This second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. The first volume describes the theoretical considerations of the system. The third volume is a Code Maintenance Manual for the user who requires knowledge of code detail. It includes logic diagrams, a global dictionary, worksheets, example hand calculations, and listings of the code and its associated data libraries. 27 refs., 17 figs., 23 tabs

  14. Investigation of the Spatial Resolution of MR-Based Polymer Gel Dosimetry versus Film Densitometry using Dose Modulation Transfer Function

    Directory of Open Access Journals (Sweden)

    Reza Moghadam-Drodkhani

    2011-03-01

    Full Text Available Introduction: Conventional dosimetry methods are not capable of measuring dose in volumes smaller than one cubic millimeter. Although the polymer gel dosimetry method based on magnetic resonance imaging (MRI) can achieve three-dimensional dosimetry with high resolution, a spatial resolution evaluation based on the gel dose modulation transfer function has not been investigated yet. Therefore, in this study, the spatial resolution of two systems, film densitometry and MRI-based polymer gel dosimetry, has been evaluated using the dose modulation transfer function (DMTF).   Material and Methods: Kodak therapy verification films and MAGICA polymer gel samples were positioned below a brass absorption grid with different periodic slices (a/2 = 280, 525, 1125 μm), which was placed in a water bath container to avoid regions of dose build-up just below the absorption grid, and then irradiated with Cobalt-60 photons on a Theratron external-beam treatment unit. Dose variation under the brass grid was determined using a calibration curve, while the transverse relaxation time (T2), as the selective parameter in a dose image based on multiple-echo MRI with a 1.5 Tesla GE Signa Echo Speed system (FOV = 10 cm, matrix size = 512×512, pixel size = 0.199×0.199 mm2, TE = 20, 40, 60, 80 ms, TR = 4200 ms, NEX = 4, slice thickness = 2 mm, gap = 1 mm), was calculated. The DMTF was obtained from the modulation depths of T2 and of the variation in film optical density after calibration. The polymer gel results were compared with those of film. Results: After deriving the dose distribution profile under the absorption grid, minima and maxima at the smallest period of a = 560 μm could scarcely be resolved, but the modulations due to the a = 2250 μm and a = 1050 μm grids could be discerned. The modulation depth for the a = 2250 μm grid was set to 100% and the other modulations were subsequently referred to this maximum modulation. For film densitometry at a = 1050 μm, the modulation depth was

  15. Comparative Analysis of Stability to Induced Deadlocks for Computing Grids with Various Node Architectures

    Directory of Open Access Journals (Sweden)

    Tatiana R. Shmeleva

    2018-01-01

    Full Text Available In this paper, we consider the classification and applications of switching methods, and their advantages and disadvantages; a back-of-the-envelope latency comparison is sketched after this abstract. A model of a computing grid was constructed in the form of a colored Petri net with a node which implements cut-through packet switching. The model consists of packet-switching nodes, traffic generators and guns that form malicious traffic disguised as usual user traffic. The characteristics of the grid model were investigated under working loads of different intensities. The influence of malicious traffic, such as a traffic duel, on the quality-of-service parameters of the grid was estimated. A comparative analysis of the stability of computing grids was carried out for nodes which implement the store-and-forward and cut-through switching technologies. It is shown that grid performance is approximately the same under working-load conditions, and that under peak-load conditions the grid with the node implementing the store-and-forward technology is more stable. The grid with nodes implementing SAF technology comes to a complete deadlock through an additional load of less than 10 percent. After a detailed study, it is shown that the traffic-duel configuration does not affect the grid with cut-through nodes if the workload increases to the peak load, at which the grid comes to a complete deadlock. The execution intensity of the guns which generate malicious traffic is determined by a random function with a Poisson distribution. The modeling system CPN Tools is used for constructing the models and measuring parameters. Grid performance and average packet delivery time are estimated for various load options.
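    One textbook difference between the two switching modes compared above is per-hop latency. The idealized sketch below (ignoring queueing and the deadlock effects studied in the paper; packet and header sizes are illustrative) contrasts store-and-forward and cut-through delivery times.

```python
def store_and_forward_latency(packet_bits, hops, bandwidth_bps):
    """Each node receives the full packet before forwarding it."""
    return hops * packet_bits / bandwidth_bps

def cut_through_latency(packet_bits, header_bits, hops, bandwidth_bps):
    """Forwarding starts as soon as the header has been examined at each node."""
    return packet_bits / bandwidth_bps + (hops - 1) * header_bits / bandwidth_bps

P, H, HOPS, BW = 12_000, 160, 5, 1e9   # 1500-byte packet, 20-byte header, 5 hops, 1 Gbps
print(store_and_forward_latency(P, HOPS, BW))       # about 60 microseconds
print(cut_through_latency(P, H, HOPS, BW))          # about 12.6 microseconds
```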

  16. Remote data access in computational jobs on the ATLAS data grid

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration; Lassnig, Mario

    2018-01-01

    This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data movement and stage-in approaches, it is well suited for data transfers which are asynchronous with respect to the job execution. Hence, it can be used for optimization of data access patterns based on various policies. In this study, remote data access is realized with the HTTP and WebDAV protocols, and is investigated in the context of intra- and inter-computing-site data transfers. In both cases, the typical scenarios for application of remote data access are identified. The paper also presents an analysis of parameters influencing the data goodput between heterogeneous storage element - worker node pairs on the grid.
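    As a concrete, hedged illustration of the remote-access idea over HTTP (the study uses HTTP and WebDAV on the ATLAS data grid; the URL and byte range below are placeholders), a job can read only the byte range it needs instead of staging in the whole file:

```python
import urllib.request

def read_remote_range(url, start, length):
    """Fetch `length` bytes starting at `start` without downloading the file.
    Relies on the server honouring HTTP Range requests (status 206)."""
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={start}-{start + length - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        if resp.status not in (200, 206):
            raise IOError(f"unexpected status {resp.status}")
        return resp.read()

# Placeholder URL; a real job would point at a storage-element endpoint.
# chunk = read_remote_range("https://storage.example.org/data/events.root", 0, 4096)
```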

  17. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of the data of the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment from the user, organizing all the resources in a coherent virtual computer center. A general description of the elements of the Grid, their interconnections and their use by the experiments will be given in this talk. The computational and storage capability of the Grid is attracting other research communities beyond high energy physics. Examples of these applications will also be presented.

  18. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    International Nuclear Information System (INIS)

    Mereghetti, Paolo; Martinez, Michael; Wade, Rebecca C

    2014-01-01

    Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials
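    The screened-Coulomb form used in Debye-Hückel theory is simple enough to state explicitly. The sketch below is a minimal illustration rather than the SDA implementation: it switches from a (hypothetical) precomputed grid lookup inside the grid boundary to the analytic Debye-Hückel potential outside it; the charge, inverse Debye length and grid extent are invented example values.

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C

def debye_huckel_potential(q, r, kappa, eps_r=78.5):
    """Screened Coulomb (Debye-Hueckel) potential of a point charge q (C)
    at distance r (m), with inverse Debye length kappa (1/m)."""
    return q * math.exp(-kappa * r) / (4.0 * math.pi * EPS0 * eps_r * r)

def potential(r, grid_lookup, grid_extent, q, kappa):
    """Use the precomputed grid inside its extent; fall back to the analytic
    long-range Debye-Hueckel form outside it (the correction idea)."""
    if r <= grid_extent:
        return grid_lookup(r)            # hypothetical interpolation on the grid
    return debye_huckel_potential(q, r, kappa)

# Example: net charge +5e, ~150 mM ionic strength (Debye length ~ 0.78 nm)
print(potential(5e-9, grid_lookup=lambda r: 0.0, grid_extent=3e-9,
                q=5 * E_CHARGE, kappa=1.0 / 0.78e-9))
```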

  19. Taiwan links up to world's 1st LHC Computing Grid Project

    CERN Multimedia

    2003-01-01

    Taiwan's Academia Sinica was linked up to the Large Hadron Collider (LHC) Computing Grid Project to work jointly with 12 other countries to construct the world's largest and most powerful particle accelerator

  20. Operating the worldwide LHC computing grid: current and future challenges

    International Nuclear Information System (INIS)

    Molina, J Flix; Forti, A; Girone, M; Sciaba, A

    2014-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long-term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team as a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  1. From the CERN web: grid computing, night shift, ridge effect and more

    CERN Multimedia

    2015-01-01

    This section highlights articles, blog posts and press releases published in the CERN web environment over the past weeks. This way, you won’t miss a thing...   Schoolboy uses grid computing to analyse satellite data 9 December - by David Lugmayer  At just 16, Cal Hewitt, a student at Simon Langton Grammar School for Boys in the United Kingdom became the youngest person to receive grid certification – giving him access to huge grid-computing resources. Hewitt uses these resources to help analyse data from the LUCID satellite detector, which a team of students from the school launched into space last year. Continue to read…    Night shift in the CMS Control Room (Photo: Andrés Delannoy). On Seagull Soup and Coffee Deficiency: Night Shift at CMS 8 December – CMS Collaboration More than half a year, a school trip to CERN, and a round of 13 TeV collisions later, the week-long internship we completed at CMS over E...

  2. Asia Federation Report on International Symposium on Grid Computing (ISGC) 2010

    Science.gov (United States)

    Grey, Francois; Lin, Simon C.

    This report provides an overview of developments in the Asia-Pacific region, based on presentations made at the International Symposium on Grid Computing 2010 (ISGC 2010), held 5-12 March at Academia Sinica, Taipei. The document includes a brief overview of the EUAsiaGrid project as well as progress reports by representatives of 13 Asian countries presented at ISGC 2010. In alphabetical order, these are: Australia, China, India, Indonesia, Japan, Malaysia, Pakistan, Philippines, Singapore, South Korea, Taiwan, Thailand and Vietnam.

  3. An Offload NIC for NASA, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James

    2013-01-01

    This work addresses distributed data management and access: dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales at nX10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers for the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet are 32 to 40 Gbps full-duplex at best. The best of ultra-high-speed offload engines use expensive ASICs (application specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs that are also in the planning stages for future portability to ASICs and software to accommodate data rates at 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on requirements of the host, motherboard, or carrier-grade equipment. The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems

  4. A priori modeling of chemical reactions on computational grid platforms: Workflows and data models

    International Nuclear Information System (INIS)

    Rampino, S.; Monari, A.; Rossi, E.; Evangelisti, S.; Laganà, A.

    2012-01-01

    Graphical abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS assembled on the European Grid allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Highlights: the grid-based GEMS simulator accurately models small chemical systems; Q5Cost and D5Cost file formats provide interoperability in the workflow; benchmark runs on H + H2 highlight the Grid empowering; the O + O2 and N + N2 calculated k(T)'s fall within the error bars of the experiment. Abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS has been assembled on the segment of the European Grid devoted to the Computational Chemistry Virtual Organization. The related grid-based workflow allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Interoperability between computational codes across the different stages of the workflow was made possible by the use of the common data formats Q5Cost and D5Cost. Illustrative benchmark runs have been performed on the prototype H + H2, N + N2 and O + O2 gas-phase exchange reactions, and thermal rate coefficients have been calculated for the last two. Results are discussed in terms of the modeling of the interaction, and the advantages of using the Grid are highlighted.

  5. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  6. gLExec: gluing grid computing to the Unix world

    Science.gov (United States)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
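    gLExec's actual plug-in chain (LCAS/LCMAPS/GUMS) is not reproduced here. Purely as a conceptual illustration, the translation step from a global grid identity to a local Unix account can be thought of as a lookup in a grid-mapfile-style table; the entries and account names below are invented.

```python
# Invented grid-mapfile-style entries: certificate DN -> local Unix account
GRID_MAPFILE = {
    "/DC=org/DC=example/CN=Alice Analyst": "vo_atlas001",
    "/DC=org/DC=example/CN=Bob Builder":   "vo_atlas002",
}

def map_grid_identity(dn, mapfile=GRID_MAPFILE):
    """Translate a global grid identity (certificate DN) into the local
    numeric-UID world by looking up the target Unix account name.
    Conceptual sketch only; real systems consult LCMAPS/GUMS policies."""
    try:
        return mapfile[dn]
    except KeyError:
        raise PermissionError(f"no local mapping for {dn!r}") from None

print(map_grid_identity("/DC=org/DC=example/CN=Alice Analyst"))
```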

  7. gLExec: gluing grid computing to the Unix world

    International Nuclear Information System (INIS)

    Groep, D; Koeroo, O; Venekamp, G

    2008-01-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system

  8. Precision of dosimetry-related measurements obtained on current multidetector computed tomography scanners

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; McNitt-Gray, Michael F.; Zhang, Di; Kim, Hyun J.; Cody, Dianna D.

    2010-01-01

    Purpose: Computed tomography (CT) intrascanner and interscanner variability has not been well characterized. Thus, the purpose of this study was to examine the within-run, between-run, and between-scanner precision of physical dosimetry-related measurements collected over the course of 1 yr on three different makes and models of multidetector row CT (MDCT) scanners. Methods: Physical measurements were collected using nine CT scanners (three scanners each of GE VCT, GE LightSpeed 16, and Siemens Sensation 64 CT). Measurements were made using various combinations of technical factors, including kVp, type of bowtie filter, and x-ray beam collimation, for several dosimetry-related quantities, including (a) free-in-air CT dose index (CTDI100,air); (b) calculated half-value layers and quarter-value layers; and (c) weighted CT dose index (CTDIw) calculated from exposure measurements collected in both a 16 and 32 cm diameter CTDI phantom. Data collection was repeated at several different time intervals, ranging from seconds (for CTDI100,air values) to weekly for 3 weeks and then quarterly or triannually for 1 yr. Precision of the data was quantified by the percent coefficient of variation (%CV). Results: The maximum relative precision error (maximum %CV value) across all dosimetry metrics, time periods, and scanners included in this study was 4.33%. The median observed %CV values for CTDI100,air ranged from 0.05% to 0.19% over several seconds, 0.12%-0.52% over 1 week, and 0.58%-2.31% over 3-4 months. For CTDIw for a 16 and 32 cm CTDI phantom, respectively, the range of median %CVs was 0.38%-1.14% and 0.62%-1.23% in data gathered weekly for 3 weeks and 1.32%-2.79% and 0.84%-2.47% in data gathered quarterly or triannually for 1 yr. Conclusions: From a dosimetry perspective, the MDCT scanners tested in this study demonstrated a high degree of within-run, between-run, and between-scanner precision (with relative precision errors typically well under 5%).
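    The precision metric used throughout this study, the percent coefficient of variation, is straightforward to compute; a short sketch with made-up repeated CTDI readings follows.

```python
import statistics

def percent_cv(values):
    """Percent coefficient of variation: sample standard deviation over the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated CTDIw readings (mGy) from one scanner over a week
readings = [21.4, 21.6, 21.5, 21.3, 21.7]
print(f"%CV = {percent_cv(readings):.2f}%")
```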

  9. Current Grid operation and future role of the Grid

    Science.gov (United States)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  10. Current Grid operation and future role of the Grid

    International Nuclear Information System (INIS)

    Smirnova, O

    2012-01-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  11. GRID and FMPhI-UNIBA

    International Nuclear Information System (INIS)

    Babik, M.; Daranyi, T.; Fekete, V.; Stavina, P.; Zagiba, M.; Zenis, T.

    2008-01-01

    The word GRID has several meanings, so it is not an abbreviation. All of them describe GRID as a hardware and software solution for distributed computing. Additionally, the word GRID is also used for distributed computing across many computers, rather than on one supercomputer with several processors. This, of course, does not mean that such a supercomputer cannot be part of the GRID. Typical tasks for the GRID are the execution of computer programs and data storage. (Authors)

  12. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  13. Computer aided dosimetry and verification of exposure to radiation. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Waller, D. [SAIC Canada (Canada); Stodilka, R.Z.; Leach, K.E.; Prud'homme-Lalonde, L. [Defence R and D Canada (DRDC), Radiation Effects Group, Space Systems and Technology, Ottawa, Ontario (Canada)

    2002-06-15

    In the timeframe following the September 11th attacks on the United States, increased emphasis has been placed on Chemical, Biological, Radiological and Nuclear (CBRN) preparedness. Of prime importance is rapid field assessment of potential radiation exposure to Canadian Forces field personnel. This work set up a framework for generating an 'expert' computer system for aiding and assisting field personnel in determining the extent of radiation insult to military personnel. Data was gathered by review of the available literature, discussions with medical and health physics personnel having hands-on experience dealing with radiation accident victims, and from experience of the principal investigator. Flow charts and generic data fusion algorithms were developed. Relationships between known exposure parameters, patient interview and history, clinical symptoms, clinical work-ups, physical dosimetry, biological dosimetry, and dose reconstruction as critical data indicators were investigated. The data obtained was examined in terms of information theory. A main goal was to determine how best to generate an adaptive model (i.e. when more data becomes available, how is the prediction improved). Consideration was given to determination of predictive algorithms for health outcome. In addition, the concept of coding an expert medical treatment advisor system was developed. (author)

  14. Minimizing the negative effects of device mobility in cell-based ad-hoc wireless computational grids

    CSIR Research Space (South Africa)

    Mudali, P

    2006-09-01

    Full Text Available This paper provides an outline of research being conducted to minimize the disruptive effects of device mobility in wireless computational grid networks. The proposed wireless grid framework uses the existing GSM cellular architecture, with emphasis...

  15. Software for evaluation of EPR-dosimetry performance

    International Nuclear Information System (INIS)

    Shishkina, E.A.; Timofeev, Yu.S.; Ivanov, D.V.

    2014-01-01

    Electron paramagnetic resonance (EPR) with tooth enamel is a method extensively used for retrospective external dosimetry. Different research groups apply different equipment, sample preparation procedures and spectrum processing algorithms for EPR dosimetry. A uniform algorithm for the description and comparison of performances was designed and implemented in a new computer code. The aim of the paper is to introduce the new software 'EPR-dosimetry performance'. The computer code is a user-friendly tool for providing a full description of the method-specific capabilities of EPR tooth dosimetry, from metrological characteristics to practical limitations in applications. The software, designed for scientists and engineers, has several applications, including support of method calibration by evaluation of calibration parameters, evaluation of the critical value and detection limit for registration of the radiation-induced signal amplitude, estimation of the critical value and detection limit for dose evaluation, estimation of the minimal detectable value for anthropogenic dose assessment, and description of the method uncertainty. (authors)
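    The critical value and detection limit mentioned above are commonly defined following Currie's classic formulation; the sketch below applies those standard expressions to invented blank-sample EPR signal amplitudes, which may differ from the exact procedure implemented in the software described here.

```python
import statistics

def currie_limits(blank_signals, k=1.645):
    """Currie-style critical value and detection limit from blank measurements.
    L_C = k * sigma_blank (decision threshold, alpha = 0.05 for k = 1.645);
    L_D = 2 * L_C under the common assumptions alpha = beta and a
    well-characterized blank."""
    sigma0 = statistics.stdev(blank_signals)
    l_c = k * sigma0
    l_d = 2.0 * l_c
    return l_c, l_d

# Hypothetical radiation-induced-signal amplitudes measured on unirradiated enamel
blanks = [0.12, 0.08, 0.15, 0.10, 0.09, 0.13]
l_c, l_d = currie_limits(blanks)
print(f"critical value = {l_c:.3f}, detection limit = {l_d:.3f} (arbitrary units)")
```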

  16. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  17. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  18. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the least dangerous one for the operator. It is therefore mandatory to have an efficient dosimetry simulation code that produces accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. In the case of 3D industrial scenes composed of meshes, the computational cost lies in the fast computation of all the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800,000 rays, 800,000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst-case scenario, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive-precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with very small loss of performance (less than 5% overhead), and without any need for scene-dependent tuning. (author)
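
    For context, the core geometric test referred to above is the standard ray-triangle intersection. The sketch below is a minimal double-precision Möller-Trumbore implementation (function and variable names are illustrative); it is not the paper's adaptive-precision algorithm, but the near-zero-determinant branch it contains is exactly where floating-point rounding can flip the outcome and where an exact-arithmetic fallback would be triggered.

      import numpy as np

      def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-12):
          """Moller-Trumbore test: return distance t along the ray, or None if no hit."""
          e1 = v1 - v0
          e2 = v2 - v0
          p = np.cross(direction, e2)
          det = np.dot(e1, p)
          if abs(det) < eps:            # ray (nearly) parallel to the triangle plane
              return None
          inv_det = 1.0 / det
          s = origin - v0
          u = np.dot(s, p) * inv_det
          if u < 0.0 or u > 1.0:        # outside first barycentric bound
              return None
          q = np.cross(s, e1)
          v = np.dot(direction, q) * inv_det
          if v < 0.0 or u + v > 1.0:    # outside second barycentric bound
              return None
          t = np.dot(e2, q) * inv_det
          return t if t > eps else None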

  19. Proceedings of the second workshop of LHC Computing Grid, LCG-France; ACTES, 2e colloque LCG-France

    Energy Technology Data Exchange (ETDEWEB)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin (eds.) [Laboratoire de Physique Corpusculaire Clermont-Ferrand, Campus des Cezeaux, 24, avenue des Landais, Clermont-Ferrand (France)

    2007-03-15

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of the LHC computing task within the framework of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier-1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the tasks for tightening the links between the sites and the experiments were definitely achieved. The IN2P3

  20. Distributed and grid computing projects with research focus in human health.

    Science.gov (United States)

    Diomidous, Marianna; Zikos, Dimitrios

    2012-01-01

    Distributed systems and grid computing systems are used to connect several computers to obtain a higher level of performance in order to solve a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus in human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not in all cases explicitly stated, common target diseases include HIV, dengue, Duchenne dystrophy, Parkinson's disease, various types of cancer and influenza. Other diseases include malaria, anthrax and Alzheimer's disease. The need for national initiatives and European collaboration for larger-scale projects is stressed, to raise the awareness of citizens to participate in order to create a culture of internet volunteering altruism.

  1. Forecasting Model for Network Throughput of Remote Data Access in Computing Grids

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration

    2018-01-01

    Computing grids are one of the key enablers of eScience. Researchers from many fields (e.g. High Energy Physics, Bioinformatics, Climatology, etc.) employ grids to run computational jobs in a highly distributed manner. The current state-of-the-art approach for data access in the grid is data placement: a job is scheduled to run at a specific data center, and its execution starts only when the complete input data has been transferred there. This approach has two major disadvantages: (1) the jobs stay idle while waiting for the input data; (2) due to limited infrastructure resources, the distributed data management system handling the data placement may queue the transfers for up to several days. An alternative approach is remote data access: a job may stream the input data directly from storage elements, which may be located at local or remote data centers. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined...

  2. WISDOM-II: Screening against multiple targets implicated in malaria using computational grid infrastructures

    Directory of Open Access Journals (Sweden)

    Kenyon Colin

    2009-05-01

    Full Text Available Abstract Background Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Motivation Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended up in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focussing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. Methods In silico drug design, especially vHTS, is a widely and well-accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry to achieve more accurate in silico docking and in information technology to design and operate large-scale grid infrastructures. Results On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing the results on the fly in MySQL databases, and applying molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro experiments are underway for all the targets against which screening was performed. Conclusion The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software

  3. Computational model for turbulent flow around a grid spacer with mixing vane

    International Nuclear Information System (INIS)

    Tsutomu Ikeno; Takeo Kajishima

    2005-01-01

    Turbulent mixing coefficient and pressure drop are important factors in subchannel analysis for predicting the onset of DNB. However, universal correlations are difficult to establish since these factors are significantly affected by the geometry of the subchannel and of a grid spacer with mixing vane. Therefore, we propose a computational model to estimate these factors. Computational model: To represent the effect of the grid spacer geometry in the computational model, we applied a large eddy simulation (LES) technique coupled with an improved immersed-boundary method. In our previous work (Ikeno, et al., NURETH-10), detailed properties of turbulence in a subchannel were successfully investigated by developing the immersed-boundary method in LES. In this study, additional improvements are given: a new one-equation dynamic sub-grid scale (SGS) model is introduced to account for the complex geometry without any artificial modification; the higher-order accuracy is maintained by consistent treatment of the boundary conditions for velocity and pressure. NUMERICAL TEST AND DISCUSSION: Turbulent mixing coefficient and pressure drop are affected strongly by the arrangement and inclination of the mixing vane. Therefore, computations are carried out for convolute and periodic arrangements, and for 30-degree and 20-degree inclinations. The difference in turbulent mixing coefficient due to these factors is reasonably predicted by our method. (An example of this numerical test is shown in Fig. 1.) The turbulent flow of the problem includes unsteady separation behind the mixing vane and vortex shedding downstream. An anisotropic distribution of turbulent stress also appears in the rod gap. Therefore, our computational model has an advantage for assessing the influence of the arrangement and inclination of the mixing vane. With a coarser computational mesh, one can screen several candidates for spacer design. Then, with a finer mesh, more quantitative analysis is possible. By such a scheme, we believe this method is useful

  4. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
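
    A minimal sketch of the general idea, grouping the points into depth layers and propagating each gridded layer to the hologram plane with an FFT-based angular spectrum transfer function, is given below; the sampling pitch, wavelength, layer spacing and function names are illustrative assumptions, not the parameters or exact formulation used by the authors.

      import numpy as np

      def cgh_from_point_cloud(points, amp, shape=(512, 512), pitch=8e-6,
                               wavelength=633e-9, layer_step=1e-3):
          """Group points (Nx3 array of x, y, z in metres) into depth layers and
          propagate each layer to the hologram plane with the angular spectrum method."""
          ny, nx = shape
          fx = np.fft.fftfreq(nx, d=pitch)
          fy = np.fft.fftfreq(ny, d=pitch)
          FX, FY = np.meshgrid(fx, fy)
          k_sq = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
          kz = 2 * np.pi * np.sqrt(np.maximum(k_sq, 0.0))

          hologram = np.zeros(shape, dtype=complex)
          layer_idx = np.round(points[:, 2] / layer_step).astype(int)
          for layer in np.unique(layer_idx):
              sel = layer_idx == layer
              field = np.zeros(shape, dtype=complex)
              ix = np.clip((points[sel, 0] / pitch + nx // 2).astype(int), 0, nx - 1)
              iy = np.clip((points[sel, 1] / pitch + ny // 2).astype(int), 0, ny - 1)
              np.add.at(field, (iy, ix), amp[sel])        # gridded layer of point sources
              H = np.exp(1j * kz * layer * layer_step)    # angular spectrum transfer function
              hologram += np.fft.ifft2(np.fft.fft2(field) * H)
          return hologram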

  5. The GLOBE-Consortium: The Erasmus Computing Grid and The Next Generation Genome Viewer

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  6. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on the intensification of heat and mass exchange performed by the National Research Centre ''Kurchatov Institute'' over the past years. Recently, many designs of heat and mass transfer intensifier grids have been proposed. NRC ''Kurchatov Institute'' has carried out a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of these experimental investigations can be used in the verification of computational models and codes. On the basis of the experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated into the subchannel code SC-INT.

  7. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    Science.gov (United States)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/approaches/applications, and software systems.

  8. MO-B-BRB-04: 3D Dosimetry in End-To-End Dosimetry QA

    Energy Technology Data Exchange (ETDEWEB)

    Ibbott, G. [UT MD Anderson Cancer Center (United States)

    2016-06-15

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner: Background and Motivation Understand recent developments enabling clinically practical 3D dosimetry, Appreciate 3D dosimetry workflow and dosimetry procedures, and Observe select examples from the clinic. Sofie Ceberg: Application to dynamic radiotherapy Observe full dosimetry under dynamic radiotherapy during respiratory motion, and Understand how the measurement of high resolution dose data in an

  9. Computational Fluid Dynamic (CFD) Analysis of a Generic Missile With Grid Fins

    National Research Council Canada - National Science Library

    DeSpirito, James

    2000-01-01

    This report presents the results of a study demonstrating an approach for using viscous computational fluid dynamic simulations to calculate the flow field and aerodynamic coefficients for a missile with grid fin...

  10. The DataGrid Project

    CERN Document Server

    Ruggieri, F

    2001-01-01

    An overview of the objectives and status of the DataGrid Project is presented, together with a brief introduction to the Grid metaphor and some references to the Grid activities and initiatives related to DataGrid. High energy physics experiments have always required state-of-the-art computing facilities to efficiently perform several computing activities related to the handling of large amounts of data and fairly large computing resources. Some of the ideas born inside the community to enhance the user-friendliness of all the steps in the computing chain have sometimes been successfully applied in other contexts as well: one bright example is the World Wide Web. The LHC computing challenge has triggered, inside the high energy physics community, the start of the DataGrid Project. The objective of the project is to enable next-generation scientific exploration requiring intensive computation and analysis of shared large-scale databases. (12 refs).

  11. Additional Security Considerations for Grid Management

    Science.gov (United States)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.

  12. Multigrid on unstructured grids using an auxiliary set of structured grids

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, C.C.; Malhotra, S.; Schultz, M.H. [Yale Univ., New Haven, CT (United States)

    1996-12-31

    Unstructured grids do not have a convenient and natural multigrid framework for actually computing and maintaining a high floating point rate on standard computers. In fact, just the coarsening process is expensive for many applications. Since unstructured grids play a vital role in many scientific computing applications, many modifications have been proposed to solve this problem. One suggested solution is to map the original unstructured grid onto a structured grid. This can be used as a fine grid in a standard multigrid algorithm to precondition the original problem on the unstructured grid. We show that unless extreme care is taken, this mapping can lead to a system with a high condition number which eliminates the usefulness of the multigrid method. Theorems with lower and upper bounds are provided. Simple examples show that the upper bounds are sharp.
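
    For reference, the multigrid component that such an auxiliary structured grid enables is the standard two-grid correction. The sketch below (dense matrices, a weighted-Jacobi smoother and a Galerkin coarse operator, all illustrative choices) shows one such cycle; it does not reproduce the unstructured-to-structured mapping or the condition-number analysis of the paper.

      import numpy as np

      def jacobi(A, x, b, n_iter=3, omega=0.8):
          """Weighted Jacobi smoother."""
          D = np.diag(A)
          for _ in range(n_iter):
              x = x + omega * (b - A @ x) / D
          return x

      def two_grid_cycle(A_fine, b, x, P):
          """One two-grid correction: pre-smooth, coarse-grid solve, prolong, post-smooth.
          P is the prolongation (coarse -> fine) operator; restriction is P^T."""
          x = jacobi(A_fine, x, b)
          r = b - A_fine @ x
          A_coarse = P.T @ A_fine @ P            # Galerkin coarse-grid operator
          e_coarse = np.linalg.solve(A_coarse, P.T @ r)
          x = x + P @ e_coarse                   # coarse-grid correction
          return jacobi(A_fine, x, b)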

  13. The QUANTGRID Project (RO)—Quantum Security in GRID Computing Applications

    Science.gov (United States)

    Dima, M.; Dulea, M.; Petre, M.; Petre, C.; Mitrica, B.; Stoica, M.; Udrea, M.; Sterian, R.; Sterian, P.

    2010-01-01

    The QUANTGRID Project, financed through the National Center for Programme Management (CNMP-Romania), is the first attempt at using Quantum Crypted Communications (QCC) in large-scale operations, such as GRID computing, and conceivably in the years ahead in the banking sector and other security-tight communications. In relation with the GRID activities of the Center for Computing & Communications (Nat.'l Inst. Nucl. Phys.—IFIN-HH), the Quantum Optics Lab. (Nat.'l Inst. Plasma and Lasers—INFLPR) and the Physics Dept. (University Polytechnica—UPB), the project will build a demonstrator infrastructure for this technology. The status of the project in its incipient phase is reported, featuring tests of communications in classical security mode: socket-level communications under AES (Advanced Encryption Std.), with proprietary code in C++ technology. An outline of the planned undertaking of the project is communicated, highlighting its impact on quantum physics, coherent optics and information technology.

  14. OGC and Grid Interoperability in enviroGRIDS Project

    Science.gov (United States)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and

  15. Asia Federation Report on International Symposium on Grid Computing 2009 (ISGC 2009)

    Science.gov (United States)

    Grey, Francois

    This report provides an overview of developments in the Asia-Pacific region, based on presentations made at the International Symposium on Grid Computing 2009 (ISGC 09), held 21-23 April. This document contains 14 sections, including a progress report on general Asia-EU Grid activities as well as progress reports by representatives of 13 Asian countries presented at ISGC 09. In alphabetical order, these are: Australia, China, India, Indonesia, Japan, Malaysia, Pakistan, Philippines, Singapore, South Korea, Taiwan, Thailand and Vietnam.

  16. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    Full Text Available The expansion and adoption of Grid technologies is hindered by the lack of a standard programming paradigm to port existing applications among different environments. The Distributed Resource Management Application API (DRMAA) has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability for expressing typical scientific applications, like High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay2 framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As cases of study, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
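
    As a hedged illustration of what DRMAA-style job management looks like in practice, the sketch below submits a set of independent jobs and waits for their completion using the Python DRMAA bindings; it assumes a configured DRMAA-enabled resource manager, and the executable path and arguments are placeholders rather than anything taken from the paper.

      import drmaa  # Python bindings for the DRMAA standard; require a configured DRM system

      def submit_and_wait(executable, arg_lists):
          """Submit one job per argument list (Master-Worker style) and wait for all of them."""
          session = drmaa.Session()
          session.initialize()
          try:
              job_ids = []
              for args in arg_lists:
                  jt = session.createJobTemplate()
                  jt.remoteCommand = executable          # e.g. a worker or analysis script
                  jt.args = [str(a) for a in args]
                  job_ids.append(session.runJob(jt))
                  session.deleteJobTemplate(jt)
              session.synchronize(job_ids, drmaa.Session.TIMEOUT_WAIT_FOREVER, False)
              return [session.wait(jid, drmaa.Session.TIMEOUT_WAIT_FOREVER).exitStatus
                      for jid in job_ids]
          finally:
              session.exit()

      # Hypothetical usage: a parameter sweep over 100 worker invocations
      # statuses = submit_and_wait("/path/to/worker.sh", [[i] for i in range(100)])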

  17. Modelling noise propagation using Grid Resources. Progress within GDI-Grid

    Science.gov (United States)

    Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut

    2010-05-01

    GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures and identifying the potential of utilizing the superior storage capacities and computational power of grid infrastructures for geospatial applications while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces for Web Mapping (WMS), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing and a noise propagation simulation. The latter scenario is addressed by Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, adapting their LimA software to utilize grid resources. Noise mapping of, e.g., traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Input data comprise the road network and traffic, terrain, buildings and noise protection screens as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on local geometry. For each of the segments, propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation
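
    As a highly simplified illustration of the per-receiver computation described above, the sketch below sums free-field contributions from road segments energetically, with spherical spreading only; it ignores the diffraction, screening and ground effects handled by the LimA software, and all names and constants are illustrative.

      import math

      def receiver_level(segments, receiver):
          """Energetic sum of free-field contributions from road segments, each given as
          (x, y, Lw) with Lw the segment's sound power level in dB; ignores screening,
          ground effects and air absorption."""
          total_energy = 0.0
          for x, y, lw in segments:
              d = max(math.hypot(x - receiver[0], y - receiver[1]), 1.0)
              if d > 2000.0:                               # typical cut-off range
                  continue
              level = lw - 20.0 * math.log10(d) - 11.0     # spherical spreading only
              total_energy += 10.0 ** (level / 10.0)
          return 10.0 * math.log10(total_energy) if total_energy > 0 else float("-inf")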

  18. Research and innovation in radiation dosimetry

    International Nuclear Information System (INIS)

    Delgado, A.

    1999-01-01

    In this article some relevant lines of research in radiation dosimetry are presented. In some of them, innovative approaches have been proposed in recent years. In others, innovation is still to come, as it is necessary in view of the insufficiency of the present methods and techniques. Mention is made of thermoluminescence dosimetry and of the improvement produced by new computational methods for the analysis of the usually complex TL signals. A solid-state dosimetric technique recently proposed, Optically Stimulated Luminescence (OSL), is briefly presented. This technique promises advantages over TLD for personal and environmental dosimetry. The necessity of improving the measurement characteristics of neutron personal dosemeters is discussed, with reference to some very recent developments. The situation of dosimetry in connection with radiobiology research is reviewed, commenting on the controversy over the adequacy and utility of the quantity absorbed dose for these activities. Finally, the special problems of internal dosimetry are discussed. (Author) 25 refs

  19. The performance model of dynamic virtual organization (VO) formations within grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. Within the grid computing context, a successful dynamic VO formation means that a number of individuals and institutions associated with certain resources join together and form a new VO in order to effectively execute tasks within given time steps. To date, while the concept of VOs has been accepted, little research has been done on the impact of effective dynamic virtual organization formation. In this paper, we develop a performance model of dynamic VO formation and analyze the effect of different complex organizational structures and their various statistical parameter properties on dynamic VO formation from three aspects: (1) the probability of a successful VO formation under different organizational structures and changes of statistical parameters, e.g. average degree; (2) the effect of task complexity on dynamic VO formation; (3) the impact of network scale on dynamic VO formation. The experimental results show that the proposed model can be used to understand the dynamic VO formation performance of the simulated organizations. The work provides a good path to understanding how to effectively schedule and utilize resources based on the complex grid network and therefore improve the overall performance within a grid environment.
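
    A toy Monte-Carlo sketch of the first question, the probability that a task can be covered by a VO formed around an initiating node in a random network with a given average degree, is given below; it is purely illustrative and does not reproduce the authors' performance model or the organizational structures they analyse.

      import random

      def vo_formation_probability(n_nodes=200, avg_degree=6, n_resource_types=5,
                                   task_size=4, trials=2000, seed=1):
          """Monte-Carlo estimate of the chance that an initiator plus its direct
          neighbours jointly cover all resource types required by a task."""
          rng = random.Random(seed)
          p_edge = avg_degree / (n_nodes - 1)      # Erdos-Renyi edge probability
          successes = 0
          for _ in range(trials):
              resources = [rng.randrange(n_resource_types) for _ in range(n_nodes)]
              initiator = rng.randrange(n_nodes)
              neighbours = [j for j in range(n_nodes)
                            if j != initiator and rng.random() < p_edge]
              required = set(rng.sample(range(n_resource_types), task_size))
              available = {resources[initiator]} | {resources[j] for j in neighbours}
              successes += required <= available   # task covered by the candidate VO
          return successes / trials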

  20. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    Science.gov (United States)

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, the service oriented architecture, and the model driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities.

  1. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    Science.gov (United States)

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments are available at a fraction of the time and effort when compared to traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.

  2. Status of the Grid Computing for the ALICE Experiment in the Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Hampl, J; Chudoba, J; Kouba, T; Svec, J; Mendez, Lorenzo P; Saiz, P

    2010-01-01

    The Czech Republic (CR) has been participating in the LHC Computing Grid project (LCG) since 2003 and, gradually, a medium-sized Tier-2 centre has been built in Prague, delivering computing services for national HEP experiment groups including the ALICE project at the LHC. We present a brief overview of the computing activities and services being performed in the CR for the ALICE experiment.

  3. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, for the most part, adopt computing models consisting of different Tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  4. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  5. Fast protocol for radiochromic film dosimetry using a cloud computing web application.

    Science.gov (United States)

    Calvo-Ortega, Juan-Francisco; Pozo, Miquel; Moragues, Sandra; Casals, Joan

    2017-07-01

    To investigate the feasibility of a fast protocol for radiochromic film dosimetry to verify intensity-modulated radiotherapy (IMRT) plans. EBT3 film dosimetry was conducted in this study using the triple-channel method implemented in the cloud computing application (Radiochromic.com). We describe a fast protocol for radiochromic film dosimetry that provides measurement results within 1 h. Ten IMRT plans were delivered to evaluate the feasibility of the fast protocol. The dose distribution of the verification film was derived at 15, 30 and 45 min using the fast protocol, and also at 24 h after completing the irradiation. The four dose maps obtained per plan were compared with the one calculated by the treatment planning system using the global and local gamma index (5%/3 mm). Gamma passing rates obtained at 15, 30 and 45 min post-exposure were compared with those obtained after 24 h. Small differences with respect to the 24 h protocol were found in the gamma passing rates obtained for films digitized at 15 min (global: 99.6%±0.9% vs. 99.7%±0.5%; local: 96.3%±3.4% vs. 96.3%±3.8%), at 30 min (global: 99.5%±0.9% vs. 99.7%±0.5%; local: 96.5%±3.2% vs. 96.3%±3.8%) and at 45 min (global: 99.2%±1.5% vs. 99.7%±0.5%; local: 96.1%±3.8% vs. 96.3%±3.8%). The fast protocol permits dosimetric results within 1 h when IMRT plans are verified, with results similar to those reported by the standard 24 h protocol. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
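
    For context, a brute-force sketch of the global gamma evaluation used for such comparisons (dose-difference criterion as a fraction of the maximum reference dose, distance-to-agreement in mm, with a low-dose threshold) is given below; it is not the Radiochromic.com implementation, and the parameter names, the 10% low-dose cut and other defaults are illustrative assumptions.

      import numpy as np

      def gamma_pass_rate(ref, eval_, pixel_mm=1.0, dose_crit=0.05, dta_mm=3.0,
                          low_dose_cut=0.10):
          """Global gamma (e.g. 5%/3 mm) passing rate for two 2D dose maps of equal shape."""
          dose_tol = dose_crit * ref.max()            # global normalisation
          win = int(np.ceil(dta_mm / pixel_mm))       # search window in pixels
          ny, nx = ref.shape
          passed, evaluated = 0, 0
          for iy in range(ny):
              for ix in range(nx):
                  if ref[iy, ix] < low_dose_cut * ref.max():
                      continue                         # skip low-dose region
                  evaluated += 1
                  best = np.inf
                  for dy in range(-win, win + 1):
                      for dx in range(-win, win + 1):
                          jy, jx = iy + dy, ix + dx
                          if not (0 <= jy < ny and 0 <= jx < nx):
                              continue
                          dist2 = (dy ** 2 + dx ** 2) * pixel_mm ** 2
                          ddose2 = (eval_[jy, jx] - ref[iy, ix]) ** 2
                          best = min(best, ddose2 / dose_tol ** 2 + dist2 / dta_mm ** 2)
                  passed += best <= 1.0
          return 100.0 * passed / max(evaluated, 1)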

  6. Topics in radiation dosimetry radiation dosimetry

    CERN Document Server

    1972-01-01

    Radiation Dosimetry, Supplement 1: Topics in Radiation Dosimetry covers instruments and techniques in dealing with special dosimetry problems. The book discusses thermoluminescence dosimetry in archeological dating; dosimetric applications of track etching; vacuum chambers of radiation measurement. The text also describes wall-less detectors in microdosimetry; dosimetry of low-energy X-rays; and the theory and general applicability of the gamma-ray theory of track effects to various systems. Dose equivalent determinations in neutron fields by means of moderator techniques; as well as developm

  7. NetJobs: A new approach to network monitoring for the Grid using Grid jobs

    OpenAIRE

    Pagano, Alfredo

    2011-01-01

    With grid computing, the far-flung and disparate IT resources act as a single "virtual datacenter". Grid computing interfaces heterogeneous IT resources so they are available when and where we need them. Grid allows us to provision applications and allocate capacity among research and business groups that are geographically and organizationally dispersed. Building a high-availability Grid is held as the next goal to achieve: protecting against computer failures and site failures to avoid dow...

  8. Solution of Poisson equations for 3-dimensional grid generations. [computations of a flow field over a thin delta wing

    Science.gov (United States)

    Fujii, K.

    1983-01-01

    A method for generating three-dimensional, finite-difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
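
    A minimal two-dimensional analogue of elliptic grid generation, smoothing interior grid-point coordinates by iterating Laplace equations in computational space with fixed boundary points, is sketched below; the actual method solves Poisson equations with automatically chosen inhomogeneous terms and an ADI scheme, so this is only an illustration of the principle, with illustrative names and iteration count.

      import numpy as np

      def laplace_grid(x, y, n_iter=500):
          """Smooth interior grid coordinates by point-Jacobi iteration of the discrete
          Laplace equations in computational space. x and y are 2D arrays whose boundary
          rows/columns hold the prescribed body-surface and outer-boundary points."""
          for _ in range(n_iter):
              x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                      x[1:-1, 2:] + x[1:-1, :-2])
              y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                      y[1:-1, 2:] + y[1:-1, :-2])
          return x, y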

  9. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis workload management system - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
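
    The splitting-and-merging idea can be illustrated with the sketch below, which cuts a FASTQ file into fixed-size chunks (each of which would then be submitted as an independent grid job) and concatenates the per-chunk outputs afterwards; the file naming, chunk size and merge step are illustrative assumptions, not the actual PALEOMIX or PanDA interfaces.

      from pathlib import Path

      def split_fastq(path, chunk_reads=1_000_000):
          """Split a FASTQ file into chunks of chunk_reads reads (4 lines per read)."""
          chunks, buf, idx = [], [], 0
          with open(path) as fh:
              for line in fh:
                  buf.append(line)
                  if len(buf) == 4 * chunk_reads:
                      out = Path(f"{path}.chunk{idx:04d}")
                      out.write_text("".join(buf))
                      chunks.append(out)
                      buf, idx = [], idx + 1
              if buf:                                   # last, possibly short, chunk
                  out = Path(f"{path}.chunk{idx:04d}")
                  out.write_text("".join(buf))
                  chunks.append(out)
          return chunks

      def merge_outputs(parts, merged):
          """Concatenate per-chunk result files back into a single output file."""
          with open(merged, "w") as out:
              for part in parts:
                  out.write(Path(part).read_text())

      # Each chunk would then be submitted as an independent grid job, e.g. through PanDA,
      # running the same pipeline command on its own input file before the final merge.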

  10. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2010-01-01

    This book is an introduction to structured and unstructured grid methods in scientific computing, addressing graduate students, scientists as well as practitioners. Basic local and integral grid quality measures are formulated and new approaches to mesh generation are reviewed. In addition to the content of the successful first edition, a more detailed and practice oriented description of monitor metrics in Beltrami and diffusion equations is given for generating adaptive numerical grids. Also, new techniques developed by the author are presented, in particular a technique based on the inverted form of Beltrami’s partial differential equations with respect to control metrics. This technique allows the generation of adaptive grids for a wide variety of computational physics problems, including grid clustering to given function values and gradients, grid alignment with given vector fields, and combinations thereof. Applications of geometric methods to the analysis of numerical grid behavior as well as grid ge...

  11. GENII [Generation II]: The Hanford Environmental Radiation Dosimetry Software System: Volume 3, Code maintenance manual: Hanford Environmental Dosimetry Upgrade Project

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-09-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). This coupled system of computer codes is intended for analysis of environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil, on through the calculation of radiation doses to individuals or populations. GENII is described in three volumes of documentation. This volume is a Code Maintenance Manual for the serious user, including code logic diagrams, global dictionary, worksheets to assist with hand calculations, and listings of the code and its associated data libraries. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. 7 figs., 5 tabs

  12. GENII (Generation II): The Hanford Environmental Radiation Dosimetry Software System: Volume 3, Code maintenance manual: Hanford Environmental Dosimetry Upgrade Project

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-09-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). This coupled system of computer codes is intended for analysis of environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil, on through the calculation of radiation doses to individuals or populations. GENII is described in three volumes of documentation. This volume is a Code Maintenance Manual for the serious user, including code logic diagrams, global dictionary, worksheets to assist with hand calculations, and listings of the code and its associated data libraries. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. 7 figs., 5 tabs.

  13. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For a few years now, the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems allowed the design of the future computing models, involving new technologies like Cloud Computing, virtualization and high-performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  14. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the customer level. The large number of users and high frequency of job requests in the consumer market make it challenging. Clearly, the current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications due to their poor scalability and poor fault-tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  15. gCube Grid services

    CERN Document Server

    Andrade, Pedro

    2008-01-01

    gCube is a service-based framework for eScience applications requiring collaboratory, on-demand, and intensive information processing. It provides these communities with Virtual Research Environments (VREs) to support their activities. gCube is built on top of standard technologies for computational Grids, namely the gLite middleware. The software was produced by the DILIGENT project and will continue to be supported and further developed by the D4Science project. gCube reflects within its name a three-sided interpretation of the Grid vision of resource sharing: sharing of computational resources, sharing of structured data, and sharing of application services. As such, gCube embodies the defining characteristics of computational Grids, data Grids, and virtual data Grids. Precisely, it builds on gLite middleware for managing distributed computations and unstructured data, includes dedicated services for managing data and metadata, provides services for distributed information retrieval, allows the orchestration...

  16. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Clatchey, Richard Mc; Buncic, Predrag; Manset, David; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and to investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid, clinicians will be able to harness massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and, ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  17. MrGrid: a portable grid based molecular replacement pipeline.

    Directory of Open Access Journals (Sweden)

    Jason W Schmidberger

    Full Text Available BACKGROUND: The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. METHODOLOGY/PRINCIPAL FINDINGS: MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. CONCLUSIONS: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success.

  18. 3D measurement of absolute radiation dose in grid therapy

    International Nuclear Information System (INIS)

    Trapp, J V; Warrington, A P; Partridge, M; Philps, A; Leach, M O; Webb, S

    2004-01-01

    Spatially fractionated radiotherapy through a grid is a concept which has a long history and was routinely used in orthovoltage radiation therapy in the middle of last century to minimize damage to the skin and subcutaneous tissue. With the advent of megavoltage radiotherapy and its skin sparing effects the use of grids in radiotherapy declined in the 1970s. However there has recently been a revival of the technique for use in palliative treatments with a single fraction of 10 to 20 Gy. In this work the absolute 3D dose distribution in a grid irradiation is measured for photons using a combination of film and gel dosimetry

  19. Helicopter Rotor Blade Computation in Unsteady Flows Using Moving Overset Grids

    Science.gov (United States)

    Ahmad, Jasim; Duque, Earl P. N.

    1996-01-01

    An overset grid thin-layer Navier-Stokes code has been extended to include dynamic motion of helicopter rotor blades through relative grid motion. The unsteady flowfield and airloads on an AH-1G rotor in forward flight were computed to verify the methodology and to demonstrate the method's potential usefulness for comprehensive helicopter codes. In addition, the method uses the blade's first harmonics measured in the flight test to prescribe the blade motion. The solution was impulsively started and became periodic in less than three rotor revolutions. Detailed unsteady numerical flow visualization techniques were applied to the entire unsteady data set of five rotor revolutions and exhibited flowfield features such as blade-vortex interaction and wake roll-up. The unsteady blade loads and surface pressures compare well against those from flight measurements. Details of the method, a discussion of the resulting predicted flowfield, and requirements for future work are presented. Overall, given the proper blade dynamics, this method can compute the unsteady flowfield of a general helicopter rotor in forward flight.

  20. In vivo thermoluminescent dosimetry in studies of helicoid computed tomography and excretory urogram; Dosimetria termoluminiscente In vivo en estudios de tomografia computada helicoidal y urograma excretor

    Energy Technology Data Exchange (ETDEWEB)

    Cruz C, D.; Azorin N, J. [UAM-I, 09340 Mexico D.F. (Mexico); Saucedo A, V.M.; Barajas O, J.L. [Unidad de Especialidades Medicas, Secretaria de la Defensa Nacional, 11500 Mexico D.F. (Mexico)

    2005-07-01

    Dosimetry is the field concerned with the measurement of ionizing radiation; its final objective is to determine the absorbed dose received by people. Dosimetry is vital in radiotherapy, radiological protection and irradiation treatment technologies. In the present work, we performed in vivo dosimetry in patients undergoing helical computed tomography and excretory urogram studies. The in vivo dosimetry was carried out in 20 randomly selected patients for each medical study. The absorbed dose was measured at points of interest located over the eye lens, thyroid, chest and abdomen of each patient, by means of thermoluminescent dosemeters (TLD) of LiF:Mg,Cu,P + PTFE of national fabrication. The dose in the working area was also quantified. (Author)

  1. Technical basis document for internal dosimetry

    International Nuclear Information System (INIS)

    Hickman, D.P.

    1991-01-01

    This document provides the technical basis for the Chem-Nuclear Geotech (Geotech) internal dosimetry program. Geotech policy describes the intentions of the company in complying with radiation protection standards and the as low as reasonably achievable (ALARA) program. It uses this policy and applicable protection standards to derive acceptable methods and levels of bioassay to assure compliance. The models and computational methods used are described in detail within this document. From these models, dose-conversion factors and derived limits are computed. These computations are then verified using existing documentation and verification information or by demonstration of the calculations used to obtain the dose-conversion factors and derived limits. Recommendations for methods of optimizing the internal dosimetry program to provide effective monitoring and dose assessment for workers are provided in the last section of this document. This document is intended to be used in establishing an accredited dosimetry program in accordance with expected Department of Energy Laboratory Accreditation Program (DOELAP) requirements for the selected radionuclides provided in this document, including uranium mill tailing mixtures. Additions and modifications to this document and procedures derived from this document are expected in the future according to changes in standards and changes in programmatic mission.

  2. First Tuesday@CERN - THE GRID GETS REAL !

    CERN Document Server

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups. Panel: - Les Robertson, Head of the LHC Computing Grid Project, IT ...

  3. The work programme of EURADOS on internal and external dosimetry.

    Science.gov (United States)

    Rühm, W; Bottollier-Depois, J F; Gilvin, P; Harrison, R; Knežević, Ž; Lopez, M A; Tanner, R; Vargas, A; Woda, C

    2018-01-01

    Since the early 1980s, the European Radiation Dosimetry Group (EURADOS) has been maintaining a network of institutions interested in the dosimetry of ionising radiation. As of 2017, this network includes more than 70 institutions (research centres, dosimetry services, university institutes, etc.), and the EURADOS database lists more than 500 scientists who contribute to the EURADOS mission, which is to promote research and technical development in dosimetry and its implementation into practice, and to contribute to harmonisation of dosimetry in Europe and its conformance with international practices. The EURADOS working programme is organised into eight working groups dealing with environmental, computational, internal, and retrospective dosimetry; dosimetry in medical imaging; dosimetry in radiotherapy; dosimetry in high-energy radiation fields; and harmonisation of individual monitoring. Results are published as freely available EURADOS reports and in the peer-reviewed scientific literature. Moreover, EURADOS organises winter schools and training courses on various aspects relevant for radiation dosimetry, and formulates the strategic research needs in dosimetry important for Europe. This paper gives an overview on the most important EURADOS activities. More details can be found at www.eurados.org .

  4. The LHC Computing Grid Project

    CERN Multimedia

    Åkesson, T

    In the last ATLAS eNews I reported on the preparations for the LHC Computing Grid Project (LCGP). Significant LCGP resources were mobilized during the summer, and there have been numerous iterations on the formal paper to put forward to the CERN Council to establish the LCGP. ATLAS, and also the other LHC-experiments, has been very active in this process to maximally influence the outcome. Our main priorities were to ensure that the global aspects are properly taken into account, that the CERN non-member states are also included in the structure, that the experiments are properly involved in the LCGP execution and that the LCGP takes operative responsibility during the data challenges. A Project Launch Board (PLB) was active from the end of July until the 10th of September. It was chaired by Hans Hoffmann and had the IT division leader as secretary. Each experiment had a representative (me for ATLAS), and the large CERN member states were each represented while the smaller were represented as clusters ac...

  5. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    International Nuclear Information System (INIS)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley; Aris, John P; Shifrin, Roger Y

    2011-01-01

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in

  6. Data grids a new computational infrastructure for data-intensive science

    CERN Document Server

    Avery, P

    2002-01-01

    Twenty-first-century scientific and engineering enterprises are increasingly characterized by their geographic dispersion and their reliance on large data archives. These characteristics bring with them unique challenges. First, the increasing size and complexity of modern data collections require significant investments in information technologies to store, retrieve and analyse them. Second, the increased distribution of people and resources in these projects has made resource sharing and collaboration across significant geographic and organizational boundaries critical to their success. In this paper I explore how computing infrastructures based on data grids offer data-intensive enterprises a comprehensive, scalable framework for collaboration and resource sharing. A detailed example of a data grid framework is presented for a Large Hadron Collider experiment, where a hierarchical set of laboratory and university resources comprising petaflops of processing power and a multi-petabyte data archive must be ...

  7. Computational dosimetry for grounded and ungrounded human models due to contact current

    International Nuclear Information System (INIS)

    Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao

    2013-01-01

    This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body due to the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method, and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in the contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th percentile value of the electric fields appear not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and the upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm². (paper)
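
    The final sentence refers to a simple relation between the injected current, tissue conductivity and induced field. Below is a minimal sketch of that relation, assuming a uniform current density over the stated 1 cm² cross-section (J = I/A, E = J/sigma); the numerical values are illustrative and not taken from the paper.

        # Minimal sketch of the quasi-static relation mentioned in the abstract,
        # assuming uniform current density over the finger cross-section:
        # J = I / A, E = J / sigma.  All numbers below are illustrative only.

        def induced_field(current_a, conductivity_s_per_m, area_m2=1e-4):
            """Return current density (A/m^2) and in situ electric field (V/m)."""
            current_density = current_a / area_m2                      # J = I / A
            electric_field = current_density / conductivity_s_per_m   # E = J / sigma
            return current_density, electric_field

        # Example: 0.5 mA contact current through a 1 cm^2 tissue cross-section
        # with an assumed conductivity of 0.2 S/m.
        J, E = induced_field(0.5e-3, 0.2)
        print(f"J = {J:.1f} A/m^2, E = {E:.1f} V/m")   # J = 5.0 A/m^2, E = 25.0 V/m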

  8. Verification of the computational dosimetry system in JAERI (JCDS) for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Kumada, H; Yamamoto, K; Matsumura, A; Yamamoto, T; Nakagawa, Y; Nakai, K; Kageji, T

    2004-01-01

    Clinical trials for boron neutron capture therapy (BNCT) by using the medical irradiation facility installed in Japan Research Reactor No. 4 (JRR-4) at the Japan Atomic Energy Research Institute (JAERI) have been performed since 1999. To carry out the BNCT procedure based on proper treatment planning and its precise implementation, the JAERI computational dosimetry system (JCDS), which is applicable to dose planning, has been developed at JAERI. The aim of this study was to verify the performance of JCDS. The experimental data with a cylindrical water phantom were compared with the calculation results using JCDS. Data of measurements obtained from IOBNCT cases at JRR-4 were also compared with retrospective evaluation data with JCDS. In comparison with the phantom experiments, the calculations and the measurements for thermal neutron flux and gamma-ray dose were in good agreement, except at the surface of the phantom. Against the measurements of clinical cases, the discrepancy of JCDS's calculations was approximately 10%. These basic and clinical verifications demonstrated that JCDS has enough performance for BNCT dosimetry. Further investigations are recommended for precise dose distribution and a faster calculation environment.

  9. Development of the JAERI computational dosimetry system (JCDS) for boron neutron capture therapy. Cooperative research

    CERN Document Server

    Kumada, H; Matsumura, A; Nakagawa, Y; Nose, T; Torii, Y; Uchiyama, J; Yamamoto, K; Yamamoto, T

    2003-01-01

    The Neutron Beam Facility at JRR-4 enables us to carry out boron neutron capture therapy with an epithermal neutron beam. In order to make treatment plans for performing epithermal neutron beam BNCT, it is necessary to estimate radiation doses in a patient's head in advance. The JAERI Computational Dosimetry System (JCDS), which can estimate distributions of radiation doses in a patient's head by simulation in order to support the treatment planning for epithermal neutron beam BNCT, was developed. JCDS is software that creates a 3-dimensional head model of a patient by using CT and MRI images, that automatically generates an input data file for calculation of neutron flux and gamma-ray dose distributions in the brain with the Monte Carlo code MCNP, and that displays these dose distributions on the head model for dosimetry by using the MCNP calculation results. JCDS has several advantages, as follows: by using CT data and MRI data, which are medical images, a detailed three-dimensional model of the patient's head is...
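
    To illustrate the kind of automation described above, the sketch below shows, in highly simplified form, how a segmented 3D label array could be flattened into a voxel-lattice input block for a Monte Carlo code. The output format and material table are generic placeholders and are not actual MCNP (or JCDS) syntax.

        # Simplified illustration of turning a segmented 3D label array into a voxel
        # lattice description for a Monte Carlo code.  The file layout here is a
        # generic placeholder and is NOT real MCNP input syntax.
        import numpy as np

        MATERIALS = {0: "air", 1: "soft_tissue", 2: "bone", 3: "brain"}  # example labels

        def write_voxel_lattice(labels: np.ndarray, voxel_mm, path):
            nz, ny, nx = labels.shape
            with open(path, "w") as f:
                f.write(f"# lattice {nx} {ny} {nz}, voxel size {voxel_mm} mm\n")
                for mat_id, name in MATERIALS.items():
                    f.write(f"# material {mat_id} = {name}\n")
                # One z-slice per line, x varying fastest within each slice.
                for z in range(nz):
                    f.write(" ".join(str(v) for v in labels[z].ravel()) + "\n")

        if __name__ == "__main__":
            # Toy 4x4x4 "head" with a bone shell around soft tissue.
            head = np.ones((4, 4, 4), dtype=int)
            head[0, :, :] = head[-1, :, :] = 2
            write_voxel_lattice(head, voxel_mm=2.0, path="head_lattice.txt")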

  10. The open science grid

    International Nuclear Information System (INIS)

    Pordes, R.

    2004-01-01

    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and active experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The Open Science Grid (OSG) model builds upon the successful approach of last year's joint Grid2003 project. The Grid3 shared infrastructure has for over eight months provided significant computational resources and throughput to a range of applications, including ATLAS and CMS data challenges, SDSS, LIGO, and biology analyses, and computer science demonstrators and experiments. To move towards LHC-scale data management, access and analysis capabilities, we must increase the scale, services, and sustainability of the current infrastructure by an order of magnitude or more. Thus, we must achieve a significant upgrade in its functionalities and technologies. The initial OSG partners will build upon a fully usable, sustainable and robust grid. Initial partners include the US LHC collaborations, DOE and NSF Laboratories and Universities and Trillium Grid projects. The approach is to federate with other application communities in the U.S. to build a shared infrastructure open to other sciences and capable of being modified and improved to respond to needs of other applications, including CDF, D0, BaBar, and RHIC experiments. We describe the application-driven, engineered services of the OSG, short term plans and status, and the roadmap for a consortium, its partnerships and national focus

  11. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  12. Grid Computing Education Support

    Energy Technology Data Exchange (ETDEWEB)

    Steven Crumb

    2008-01-15

    The GGF Student Scholar program gave GGF the opportunity to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.

  13. Technical basis document for internal dosimetry

    CERN Document Server

    Hickman, D P

    1991-01-01

    This document provides the technical basis for the Chem-Nuclear Geotech (Geotech) internal dosimetry program. Geotech policy describes the intentions of the company in complying with radiation protection standards and the as low as reasonably achievable (ALARA) program. It uses this policy and applicable protection standards to derive acceptable methods and levels of bioassay to assure compliance. The models and computational methods used are described in detail within this document. From these models, dose-conversion factors and derived limits are computed. These computations are then verified using existing documentation and verification information or by demonstration of the calculations used to obtain the dose-conversion factors and derived limits. Recommendations for methods of optimizing the internal dosimetry program to provide effective monitoring and dose assessment for workers are provided in the last section of this document. This document is intended to be used in establishing an accredited dosi...

  14. Secure grid-based computing with social-network based trust management in the semantic web

    Czech Academy of Sciences Publication Activity Database

    Špánek, Roman; Tůma, Miroslav

    2006-01-01

    Roč. 16, č. 6 (2006), s. 475-488 ISSN 1210-0552 R&D Projects: GA AV ČR 1ET100300419; GA MŠk 1M0554 Institutional research plan: CEZ:AV0Z10300504 Keywords : semantic web * grid computing * trust management * reconfigurable networks * security * hypergraph model * hypergraph algorithms Subject RIV: IN - Informatics, Computer Science

  15. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  16. A Global Computing Grid for LHC; Una red global de computacion para LHC

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Calama, J. M.; Colino Arriero, N.

    2013-06-01

    An innovative computing infrastructure has played an instrumental role in the recent discovery of the Higgs boson in the LHC and has enabled scientists all over the world to store, process and analyze enormous amounts of data in record time. The Grid computing technology has made it possible to integrate computing center resources spread around the planet, including the CIEMAT, into a distributed system where these resources can be shared and accessed via Internet on a transparent, uniform basis. A global supercomputer for the LHC experiments. (Author)

  17. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implement the CMS application view on top of Grid services. An overview of CMS services will be covered. Emphasis is on CMS data management and workload management. (authors)

  18. First Tuesday - CERN, The Grid gets real

    CERN Multimedia

    Robertson, Leslie

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups.

  19. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    Science.gov (United States)

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  20. ISDD: A computational model of particle sedimentation, diffusion and target cell dosimetry for in vitro toxicity studies

    Science.gov (United States)

    2010-01-01

    Background The difficulty of directly measuring cellular dose is a significant obstacle to application of target tissue dosimetry for nanoparticle and microparticle toxicity assessment, particularly for in vitro systems. As a consequence, the target tissue paradigm for dosimetry and hazard assessment of nanoparticles has largely been ignored in favor of using metrics of exposure (e.g. μg particle/mL culture medium, particle surface area/mL, particle number/mL). We have developed a computational model of solution particokinetics (sedimentation, diffusion) and dosimetry for non-interacting spherical particles and their agglomerates in monolayer cell culture systems. Particle transport to cells is calculated by simultaneous solution of Stokes Law (sedimentation) and the Stokes-Einstein equation (diffusion). Results The In vitro Sedimentation, Diffusion and Dosimetry model (ISDD) was tested against measured transport rates or cellular doses for multiple sizes of polystyrene spheres (20-1100 nm), 35 nm amorphous silica, and large agglomerates of 30 nm iron oxide particles. Overall, without adjusting any parameters, model predicted cellular doses were in close agreement with the experimental data, differing from as little as 5% to as much as three-fold, but in most cases approximately two-fold, within the limits of the accuracy of the measurement systems. Applying the model, we generalize the effects of particle size, particle density, agglomeration state and agglomerate characteristics on target cell dosimetry in vitro. Conclusions Our results confirm our hypothesis that for liquid-based in vitro systems, the dose-rates and target cell doses for all particles are not equal; they can vary significantly, in direct contrast to the assumption of dose-equivalency implicit in the use of mass-based media concentrations as metrics of exposure for dose-response assessment. The difference between equivalent nominal media concentration exposures on a μg/mL basis and target cell
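
    The two transport relations named above are standard results; the sketch below evaluates them for a single spherical particle (Stokes-law sedimentation velocity and Stokes-Einstein diffusion coefficient). It is not the ISDD code itself, and the medium properties used are generic assumptions rather than values from the paper.

        # Minimal sketch of the two relations named in the abstract (not the ISDD
        # code): Stokes-law sedimentation velocity and the Stokes-Einstein diffusion
        # coefficient for a spherical particle in culture medium.  Medium properties
        # below are generic assumptions.
        import math

        K_B = 1.380649e-23      # Boltzmann constant, J/K

        def stokes_velocity(diam_m, rho_particle, rho_medium=1000.0,
                            viscosity=0.00101, g=9.81):
            """Sedimentation velocity (m/s): v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
            return (rho_particle - rho_medium) * g * diam_m**2 / (18.0 * viscosity)

        def stokes_einstein_D(diam_m, temp_k=310.0, viscosity=0.00101):
            """Diffusion coefficient (m^2/s): D = k_B * T / (3 * pi * mu * d)."""
            return K_B * temp_k / (3.0 * math.pi * viscosity * diam_m)

        # Example: 100 nm polystyrene sphere (assumed density 1050 kg/m^3) at 37 C.
        d = 100e-9
        print(f"v_sed = {stokes_velocity(d, 1050.0):.3e} m/s")
        print(f"D     = {stokes_einstein_D(d):.3e} m^2/s")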

  1. ISDD: A computational model of particle sedimentation, diffusion and target cell dosimetry for in vitro toxicity studies

    Directory of Open Access Journals (Sweden)

    Chrisler William B

    2010-11-01

    Full Text Available Abstract Background The difficulty of directly measuring cellular dose is a significant obstacle to application of target tissue dosimetry for nanoparticle and microparticle toxicity assessment, particularly for in vitro systems. As a consequence, the target tissue paradigm for dosimetry and hazard assessment of nanoparticles has largely been ignored in favor of using metrics of exposure (e.g. μg particle/mL culture medium, particle surface area/mL, particle number/mL). We have developed a computational model of solution particokinetics (sedimentation, diffusion) and dosimetry for non-interacting spherical particles and their agglomerates in monolayer cell culture systems. Particle transport to cells is calculated by simultaneous solution of Stokes Law (sedimentation) and the Stokes-Einstein equation (diffusion). Results The In vitro Sedimentation, Diffusion and Dosimetry model (ISDD) was tested against measured transport rates or cellular doses for multiple sizes of polystyrene spheres (20-1100 nm), 35 nm amorphous silica, and large agglomerates of 30 nm iron oxide particles. Overall, without adjusting any parameters, model predicted cellular doses were in close agreement with the experimental data, differing from as little as 5% to as much as three-fold, but in most cases approximately two-fold, within the limits of the accuracy of the measurement systems. Applying the model, we generalize the effects of particle size, particle density, agglomeration state and agglomerate characteristics on target cell dosimetry in vitro. Conclusions Our results confirm our hypothesis that for liquid-based in vitro systems, the dose-rates and target cell doses for all particles are not equal; they can vary significantly, in direct contrast to the assumption of dose-equivalency implicit in the use of mass-based media concentrations as metrics of exposure for dose-response assessment. The difference between equivalent nominal media concentration exposures on a

  2. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Senchenko, A

    2012-01-01

    The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  3. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    Directory of Open Access Journals (Sweden)

    Almeida Jonas S

    2006-03-01

    Full Text Available Abstract Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web
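
    The key idea above, shipping the user-defined code together with its run-time variables to a worker, can be sketched in a few lines. The sketch below is in Python rather than mGrid's Matlab/PHP and is purely conceptual; it does not reproduce the mGrid implementation, and all names are illustrative.

        # Conceptual sketch (Python, not mGrid) of the idea in the abstract: ship
        # user-defined code with its run-time variables to a worker, execute it
        # there, and return the result.  All names are illustrative.
        import json

        def pack_task(source_code: str, variables: dict) -> str:
            """Client side: bundle the user code and its input variables."""
            return json.dumps({"code": source_code, "vars": variables})

        def run_task(payload: str):
            """Worker side: rebuild the namespace, run the code, return 'result'."""
            task = json.loads(payload)
            namespace = dict(task["vars"])
            exec(task["code"], namespace)   # user code is expected to set 'result'
            return namespace.get("result")

        if __name__ == "__main__":
            user_code = "result = sum(x**2 for x in data)"
            payload = pack_task(user_code, {"data": [1, 2, 3, 4]})
            print(run_task(payload))        # 30, as if computed on a remote node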

  4. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    Science.gov (United States)

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over

  5. Chemical dosimetry principles in high dose dosimetry

    International Nuclear Information System (INIS)

    Mhatre, Sachin G.V.

    2016-01-01

    In radiation processing, activities of principal concern are process validation and process control. The objective of such formalized procedures is to establish documentary evidence that the irradiation process has achieved the desired results. The key element of such activities is inevitably a well-characterized, reliable dosimetry system that is traceable to recognized national and international dosimetry standards. Only such dosimetry systems can help establish the required documentary evidence. In addition, industrial radiation processing such as irradiation of foodstuffs and sterilization of health care products are both highly regulated, in particular with regard to dose. Besides, dosimetry is necessary for scaling up processes from the research level to the industrial level. Thus, accurate dosimetry is indispensable.

  6. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures and of ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz PCs linked by a high-speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All 3 PCs are identically configured to have the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0 marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system - (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or of any computer-intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and
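
    The reported gains of roughly 3 to 6 on six processors are consistent with a simple model in which the per-task 'bookkeeping' is effectively serial while the Monte Carlo run itself parallelizes. The sketch below is an Amdahl-style approximation of that trade-off under this assumption; it is not a calculation from the paper.

        # Rough illustrative model (an assumption, not from the paper) of why the
        # observed gain on 6 processors ranged from about 3 to about 6: if per-task
        # bookkeeping is serial while the Monte Carlo run parallelizes, the
        # expected speedup follows an Amdahl-like expression.

        def expected_speedup(n_proc, t_bookkeeping, t_compute):
            """Speedup = (t_b + t_c) / (t_b + t_c / N), serial-bookkeeping assumption."""
            return (t_bookkeeping + t_compute) / (t_bookkeeping + t_compute / n_proc)

        # Long runs with negligible setup approach the ideal factor of 6;
        # runs dominated by setup fall towards a factor of about 3.
        print(expected_speedup(6, t_bookkeeping=1.0,  t_compute=600.0))   # about 5.95
        print(expected_speedup(6, t_bookkeeping=60.0, t_compute=300.0))   # about 3.3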

  7. Integrating the DLD dosimetry system into the Almaraz NPP Corporative Database

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1996-01-01

    The article discusses the experience acquired during the integration of a new MGP Instruments DLD Dosimetry System into the Almaraz NPP corporative database and general communications network, following a client-server philosophy and taking into account the computer standards of the Plant. The most important results obtained are: integration of DLD dosimetry information into corporative databases, permitting the use of new applications; sharing of existing personnel information with the DLD dosimetry application, thereby avoiding the redundant work of introducing data and improving the quality of the information; facilitation of maintenance, both software and hardware, of the DLD system; maximum exploitation, from the computer point of view, of the initial investment; and adaptation of the application to the applicable legislation. (Author)

  8. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

    Highlights: • The intelligent battery energy management substantially reduces the interactions of PEV with parking lots. • The intelligent battery energy management improves the energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles. - Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emission. PEVs need to draw and store energy from an electrical grid to supply propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control is imperative for PEVs as the vehicle operation and even the safety of passengers depend on the battery system. Thus, scheduling the grid power electricity with parking lots would be needed for efficient charging and discharging of PEV batteries. This paper aims to propose a new intelligent battery energy management and control scheduling service that utilizes Cloud computing networks. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary to allow PEV battery energy management systems to operate efficiently when the number of PEVs and charging devices is large. Experimental analyses of the proposed scheduling service as compared to a traditional scheduling service are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, as well as predicting the load demand in advance with regard to their limitations. It also shows that the intelligent scheduling service using a Cloud computing network is more efficient than the traditional scheduling service network for battery energy management and control.

  9. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  10. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  11. Development of an international matrix-solver prediction system on a French-Japanese international grid computing environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kushida, Noriyuki; Tatekawa, Takayuki; Teshima, Naoya; Caniou, Yves; Guivarch, Ronan; Dayde, Michel; Ramet, Pierre

    2010-01-01

    The 'Research and Development of International Matrix-Solver Prediction System (REDIMPS)' project aimed at improving the TLSE sparse linear algebra expert website by establishing an international grid computing environment between Japan and France. To help users in identifying the best solver or sparse linear algebra tool for their problems, we have developed an interoperable environment between French and Japanese grid infrastructures (respectively managed by DIET and AEGIS). Two main issues were considered. The first issue is how to submit a job from DIET to AEGIS. The second issue is how to bridge the difference of security between DIET and AEGIS. To overcome these issues, we developed APIs to communicate between different grid infrastructures by improving the client API of AEGIS. By developing a server daemon program (SeD) of DIET which behaves like an AEGIS user, DIET can call functions in AEGIS: authentication, file transfer, job submission, and so on. To intensify the security, we also developed functionalities to authenticate DIET sites and DIET users in order to access AEGIS computing resources. By this study, the set of software and computers available within TLSE to find an appropriate solver is enlarged over France (DIET) and Japan (AEGIS). (author)
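
    The bridging idea above, a DIET server daemon that behaves like an AEGIS user, can be pictured as an adapter. The sketch below is schematic only: none of the classes or methods correspond to the real DIET or AEGIS APIs; they are hypothetical stand-ins showing how one middleware can act as a client of another.

        # Schematic adapter sketch of the bridging idea in the abstract.  None of
        # these classes or methods are the real DIET or AEGIS APIs; they are
        # hypothetical stand-ins.

        class AegisClientStub:
            """Hypothetical stand-in for an AEGIS client library."""
            def authenticate(self, certificate): print(f"authenticated with {certificate}")
            def upload(self, local, remote):     print(f"uploaded {local} -> {remote}")
            def submit(self, job_script):        print(f"submitted {job_script}"); return "job-42"
            def status(self, job_id):            return "DONE"

        class DietSolverDaemon:
            """Plays the role of a DIET SeD that forwards requests to AEGIS."""
            def __init__(self, aegis, certificate):
                self.aegis = aegis
                self.aegis.authenticate(certificate)

            def solve(self, matrix_file, job_script):
                self.aegis.upload(matrix_file, f"/work/{matrix_file}")
                job_id = self.aegis.submit(job_script)
                return self.aegis.status(job_id)

        if __name__ == "__main__":
            sed = DietSolverDaemon(AegisClientStub(), certificate="diet-site.pem")
            print(sed.solve("sparse_matrix.mtx", "run_solver.sh"))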

  12. Computer-assisted segmentation of CT images by statistical region merging for the production of voxel models of anatomy for CT dosimetry

    Czech Academy of Sciences Publication Activity Database

    Caon, M.; Sedlář, Jiří; Bajger, M.; Lee, G.

    2014-01-01

    Roč. 37, č. 2 (2014), s. 393-403 ISSN 0158-9938 Institutional support: RVO:67985556 Keywords : Voxel model * Image segmentation * Statistical region merging * CT dosimetry Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.882, year: 2014 http://library.utia.cas.cz/separaty/2014/ZOI/sedlar-0428537.pdf

  13. 100 years of solid state dosimetry and radiation protection dosimetry

    International Nuclear Information System (INIS)

    Bartlett, David T.

    2008-01-01

    The use of solid state detectors in radiation dosimetry has passed its 100th anniversary. The major applications of these detectors in radiation dosimetry have been in personal dosimetry, retrospective dosimetry, dating, medical dosimetry, the characterization of radiation fields, and also in microdosimetry and radiobiology research. In this introductory paper for the 15th International Conference, I shall speak of the history of solid state dosimetry and of the radiation measurement quantities that developed at the same time, mention some landmark developments in detectors and applications, speak a bit more about dosimetry and measurement quantities, and briefly look at the past and future

  14. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    Science.gov (United States)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception, the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, in helping experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called

  15. Probabilistic Learning by Rodent Grid Cells.

    Science.gov (United States)

    Cheung, Allen

    2016-10-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments are reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population

  16. Concept and computation of radiation dose at high energies

    International Nuclear Information System (INIS)

    Sarkar, P.K.

    2010-01-01

    Computational dosimetry, a subdiscipline of computational physics devoted to radiation metrology, is the determination of absorbed dose and other dose-related quantities by numerical means. Computations are done separately for external and internal dosimetry. The methodology used in external beam dosimetry is necessarily a combination of experimental radiation dosimetry and theoretical dose computation, since it is not feasible to plan any physical dose measurements from inside a living human body.

  17. Characterization of internal dosimetry practices

    International Nuclear Information System (INIS)

    Traub, R.J.; Heid, K.R.; Mann, J.C.

    1983-01-01

    Current practices in internal dosimetry at DOE facilities were evaluated with respect to consistency among DOE Contractors. All aspects of an internal dosimetry program were addressed. Items considered include, but are not necessarily limited to, record systems and ease of information retrieval; ease of integrating internal dose and external dose; modeling systems employed, including ability to modify models depending on excretion data, and verification of computer codes utilized; bioassay procedures, including quality control; and ability to relate air concentration data to individual workers and bioassay data. Feasibility of uranium analysis in solution by laser fluorescence excitation at uranium concentrations of one part per billion was demonstrated

  18. A Theorem on Grid Access Control

    Institute of Scientific and Technical Information of China (English)

    XU ZhiWei(徐志伟); BU GuanYing(卜冠英)

    2003-01-01

    The current grid security research is mainly focused on the authentication of grid systems. A problem to be solved by grid systems is to ensure consistent access control. This problem is complicated because the hosts in a grid computing environment usually span multiple autonomous administrative domains. This paper presents a grid access control model, based on asynchronous automata theory and the classic Bell-LaPadula model. This model is useful to formally study the confidentiality and integrity problems in a grid computing environment. A theorem is proved, which gives the necessary and sufficient conditions for a grid to maintain confidentiality. These conditions are the formalized descriptions of local (node) relations or relationships between grid subjects and node subjects.
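
    For readers unfamiliar with the Bell-LaPadula model referenced above, the sketch below shows its two classic checks (simple security: no read up; *-property: no write down) over subjects and objects labelled with security levels. The paper's automata-based grid-subject/node-subject relations are not reproduced here.

        # Minimal sketch of the classic Bell-LaPadula checks referenced in the
        # abstract (simple security: no read up; *-property: no write down).
        # The paper's grid/node subject relations and automata are not shown.

        LEVELS = {"public": 0, "internal": 1, "secret": 2}   # illustrative labels

        def can_read(subject_level, object_level):
            """Simple-security property: a subject may read only at or below its level."""
            return LEVELS[subject_level] >= LEVELS[object_level]

        def can_write(subject_level, object_level):
            """*-property: a subject may write only at or above its level."""
            return LEVELS[subject_level] <= LEVELS[object_level]

        if __name__ == "__main__":
            print(can_read("internal", "secret"))   # False: no read up
            print(can_write("secret", "internal"))  # False: no write down
            print(can_read("secret", "public"))     # True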

  19. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)

    2011-08-07

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations

  20. Intercomparison on the usage of computational codes in radiation dosimetry

    International Nuclear Information System (INIS)

    Ilic, R.; Pesic, M.; Pavlovic, R.

    2003-01-01

    The SRNA-2KG software package was modified for this work to include the necessary input and output data and the predicted voxelized geometry and dosimetry. SRNA is a Monte Carlo code developed for applications in proton transport, radiotherapy and dosimetry. Protons within the energy range from 100 keV to 250 MeV with predefined spectra are transported in 3D geometry through material zones confined by planes and second-order surfaces, or in 3D voxelized geometry. The code can treat proton transport in a few hundred different materials, including elements from Z=1 to Z=98. Simulation of proton transport is based on the multiple scattering theory of charged particles and on the model for compound nucleus decay.
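
    As a purely pedagogical illustration of the kind of calculation such a code performs, the toy sketch below samples proton energies from a predefined discrete spectrum and tallies energy deposition along a one-dimensional voxel column with a constant stopping power. It omits all of the real physics (multiple scattering, compound-nucleus decay, realistic stopping powers) used by SRNA.

        # Toy sketch only: sample proton energies from a predefined discrete
        # spectrum and deposit energy along a 1D voxel column with a constant
        # stopping power.  None of SRNA's physics is reproduced here.
        import random

        SPECTRUM = [(100.0, 0.2), (150.0, 0.5), (200.0, 0.3)]  # (energy MeV, probability)
        STOPPING_POWER = 5.0    # MeV per cm, crude constant assumption
        VOXEL_CM = 0.5
        N_VOXELS = 80

        def sample_energy():
            r, cum = random.random(), 0.0
            for energy, prob in SPECTRUM:
                cum += prob
                if r <= cum:
                    return energy
            return SPECTRUM[-1][0]

        def transport(n_protons=10000):
            dose = [0.0] * N_VOXELS
            for _ in range(n_protons):
                e = sample_energy()
                for i in range(N_VOXELS):
                    dep = min(e, STOPPING_POWER * VOXEL_CM)
                    dose[i] += dep
                    e -= dep
                    if e <= 0.0:
                        break
            return dose

        if __name__ == "__main__":
            d = transport()
            print(f"entrance voxel: {d[0]:.0f} MeV, at 20 cm depth: {d[40]:.0f} MeV")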

  1. I-124 Imaging and Dosimetry

    Directory of Open Access Journals (Sweden)

    Russ Kuker

    2017-02-01

    Full Text Available Although radioactive iodine imaging and therapy are among the earliest applications of theranostics, a number of clinical questions remain unresolved concerning the optimization of diagnostic techniques and dosimetry protocols. I-124 as a positron emission tomography (PET) radiotracer has the potential to improve current clinical practice in the diagnosis and treatment of differentiated thyroid cancer. The higher sensitivity and spatial resolution of PET/computed tomography (CT) compared to standard gamma scintigraphy can aid in the detection of recurrent or metastatic disease and provide more accurate measurements of metabolic tumor volumes. However, the complex decay scheme of I-124 poses challenges to quantitative PET imaging. More prospective studies are needed to define optimal dosimetry protocols and to improve patient-specific treatment planning strategies, taking into account not only the absorbed dose to tumors but also methods to avoid toxicity to normal organs. A historical perspective of I-124 imaging and dosimetry as well as future concepts are discussed.

  2. A Development of Lightweight Grid Interface

    International Nuclear Information System (INIS)

    Iwai, G; Kawai, Y; Sasaki, T; Watase, Y

    2011-01-01

    In order to support the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized by the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures such as Grids, Clouds and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The CLIs of UGAPI offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources across the different infrastructures, with technical details and practical experiences.
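
    For readers unfamiliar with the SAGA programming model that UGAPI wraps, the following minimal Python sketch shows the typical describe/submit/wait pattern using the open-source radical.saga bindings; the endpoint URL and executable are placeholders, and the exact call names should be checked against the SAGA implementation actually in use:

        import radical.saga as rs  # open-source SAGA implementation for Python

        # Connect to a job service; "fork://localhost" runs jobs on the local machine,
        # while a Grid endpoint URL would target a remote computing element.
        js = rs.job.Service("fork://localhost")

        # Describe the job: executable, arguments and output file are placeholders.
        jd = rs.job.Description()
        jd.executable = "/bin/echo"
        jd.arguments = ["hello from a SAGA-style job"]
        jd.output = "job.out"

        # Create, submit and wait for the job, then report its final state.
        job = js.create_job(jd)
        job.run()
        job.wait()
        print("final state:", job.state)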

  3. Security on the US Fusion Grid

    Energy Technology Data Exchange (ETDEWEB)

    Burruss, Justin R.; Fredian, Tom W.; Thompson, Mary R.

    2005-06-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER.

  4. Security on the US Fusion Grid

    International Nuclear Information System (INIS)

    Burruss, Justin R.; Fredian, Tom W.; Thompson, Mary R.

    2005-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  5. Security on the US fusion grid

    International Nuclear Information System (INIS)

    Burruss, J.R.; Fredian, T.W.; Thompson, M.R.

    2006-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This has led to the development of the U.S. fusion grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large U.S. fusion research facilities and with users both in the U.S. and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  6. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10{sup 10} collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10{sup 9} bytes per second and a recording capacity of a few tens of 10{sup 15} bytes each year. In order to meet this challenge, a computing network that distributes and shares the processing tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of four tiers. Tier 0 is the computing centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for keeping a copy of the raw data and for processing it in order to extract physically meaningful results, which are transferred to the roughly 150 Tier 2 centres. A Tier 2 operates at the level of an institute or laboratory and is in charge of the final analysis of the data and of the production of simulations. Tier 3 sites, at the level of individual laboratories, provide a complementary, local resource to the Tier 2s for data analysis. (A.C.)

  7. Grids, virtualization, and clouds at Fermilab

    International Nuclear Information System (INIS)

    Timm, S; Chadwick, K; Garzoglio, G; Noh, S

    2014-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  8. Grids, virtualization, and clouds at Fermilab

    Science.gov (United States)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  9. MO-B-BRB-00: Three Dimensional Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner (Background and Motivation): understand recent developments enabling clinically practical 3D dosimetry; appreciate the 3D dosimetry workflow and dosimetry procedures; and observe select examples from the clinic. Sofie Ceberg (Application to dynamic radiotherapy): observe full dosimetry under dynamic radiotherapy during respiratory motion, and understand how the measurement of high resolution dose data in an

  10. FermiGrid - experience and future plans

    International Nuclear Information System (INIS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.

    2007-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems

  11. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  12. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  13. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    Science.gov (United States)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for the newborn and for 1, 2, 5, 10 and 15 year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  14. Recent developments in polymer gel dosimetry

    International Nuclear Information System (INIS)

    John Schreiner, L.; Olding, Tim; Holmes, Oliver; McAuley, Kim

    2008-01-01

    Modern radiation therapy particularly with intensity modulation techniques (IMRT) offers the potential to improve patient outcomes by better limiting high doses to the tumour alone. In this presentation we report our progress in developing gel dosimetry with new less toxic dosimeters using a fast commercial optical computed tomography (OCT) scanner. We will demonstrate that these adjustments in the approach to gel dosimetry help facilitate its introduction into clinical use. We will review practical advances in system quality assurance and scatter correction to improve optical CT quantification, and show an example of a clinical implementation of an IGRT treatment validation

  15. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of the computational grid on the accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using delayed detached-eddy simulation (DDES) with two eddy-viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter shear stress transport (SST) k-ω. A high-order spatial discretization, the WENO (weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results when employing the SST k-ω turbulence model, but the computational cost is higher because the grid contains a wake block that increases the number of cells.

  16. FermiGrid-experience and future plans

    International Nuclear Information System (INIS)

    Chadwick, K; Berman, E; Canal, P; Hesselroth, T; Garzoglio, G; Levshina, T; Sergeev, V; Sfiligoi, I; Sharma, N; Timm, S; Yocum, D R

    2008-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems

  17. Dosimetry

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The purpose of ionizing radiation dosimetry is the measurement of the physical and biological consequences of exposure to radiation. As these consequences are proportional to the local absorption of energy, the dosimetry of ionizing radiation is based on the measurement of this quantity. Owing to the size of the effects of ionizing radiation on materials in all of these areas, dosimetry plays an essential role in the prevention and the control of radiation exposure. Its use is of great importance in two areas in particular where the employment of ionizing radiation relates to human health: radiation protection and medical applications. Dosimetry is difficult for various reasons: the diversity of the physical characteristics of different kinds of radiation according to their nature (X- and γ-photons, electrons, neutrons, ...), their energy (from several keV to several MeV), the orders of magnitude of the doses being estimated (a factor of about 10{sup 5} between diagnostic and therapeutic applications), and the temporal and spatial variation of the biological parameters entering into the calculations. On the practical level, dosimetry poses two distinct yet closely related problems: the determination of the absorbed dose received by a subject exposed to radiation from a source external to his body (external dosimetry); and the determination of the absorbed dose received by a subject owing to the presence within his body of some radioactive substance (internal dosimetry)

  18. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
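
    As a rough illustration of the extrapolation idea (not the authors' algorithm), the Python sketch below fits a cubic polynomial to a few computed points of a P-V curve and estimates the loading at which the nose of the curve is reached, i.e. where dP/dV = 0; the sample operating points are invented:

        import numpy as np

        # Computed operating points approaching the nose of a P-V curve (invented data):
        # bus voltage V (p.u.) and total load P (MW) from successive power-flow solutions.
        V = np.array([1.00, 0.95, 0.90, 0.85, 0.80])
        P = np.array([800., 950., 1060., 1130., 1160.])

        # Fit P as a cubic polynomial in V and locate the maximum loadability,
        # i.e. the voltage at which dP/dV = 0 (the nose of the curve).
        coeffs = np.polyfit(V, P, 3)
        dP_dV = np.polyder(coeffs)
        roots = np.roots(dP_dV)

        # Keep the real root inside a physically meaningful voltage range,
        # closest to the last computed point.
        candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.5 < r.real < 1.1]
        v_nose = min(candidates, key=lambda v: abs(v - V[-1]))
        p_max = np.polyval(coeffs, v_nose)

        # Voltage stability margin relative to the current operating point P[0].
        print(f"estimated maximum loadability: {p_max:.0f} MW at V = {v_nose:.3f} p.u.")
        print(f"estimated VSM: {p_max - P[0]:.0f} MW")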

  19. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gu Songxiang; Kyprianou, Iacovos [Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD (United States); Gupta, Rajiv, E-mail: songxiang.gu@fda.hhs.gov, E-mail: rgupta1@partners.org, E-mail: iacovos.kyprianou@fda.hhs.gov [Massachusetts General Hospital, Boston, MA (United States)

    2011-09-21

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  20. Computational high-resolution heart phantoms for medical imaging and dosimetry simulations

    International Nuclear Information System (INIS)

    Gu Songxiang; Kyprianou, Iacovos; Gupta, Rajiv

    2011-01-01

    Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user

  1. Beyond grid security

    International Nuclear Information System (INIS)

    Hoeft, B; Epting, U; Koenig, T

    2008-01-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will present briefly the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls

  2. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  3. Dosimetry Service

    CERN Multimedia

    2006-01-01

    CERN staff and users can now consult their dose records for an individual or an organizational unit with HRT. Please see more information on our web page: http://cern.ch/rp-dosimetry The Dosimetry Service is open every morning from 8.30 - 12.00; it is closed in the afternoons. We would like to remind you that dosimeters cannot be sent to customers by internal mail. Short-term dosimeters (VCTs) must always be returned to the Service after use and must not be left on the racks in the experimental areas or in the secretariats. Dosimetry Service Tel. 7 2155 Dosimetry.service@cern.ch http://cern.ch/rp-dosimetry

  4. Reaching for the cloud: on the lessons learned from grid computing technology transfer process to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Dickmann, Frank; Sax, Ulrich; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which led to the creation of the Grid. The inter-domain transfer process of this technology has hitherto been an intuitive process without in-depth analysis. Some difficulties facing the life science community in this transfer can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies which have achieved a certain stability. Grid and Cloud solutions are technologies which are still in flux. We show how Grid computing creates new difficulties in the transfer process that are not considered in Bozeman's model. We show why the success of healthgrids should be measured by the qualified scientific human capital and the opportunities created, and not primarily by the market impact. We conclude with recommendations that can help improve the adoption of Grid and Cloud solutions in the biomedical community. These results give a more concise explanation of the difficulties many life science IT projects are facing in the late funding periods, and show leveraging steps that can help overcome the "vale of tears".

  5. Parallel Sn Sweeps on Unstructured Grids: Algorithms for Prioritization, Grid Partitioning, and Cycle Detection

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence

    2005-01-01

    The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster
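
    Algorithm (d) above concerns detecting dependency cycles that could prevent a sweep from completing. A minimal, self-contained illustration of that idea (not the paper's parallel algorithm) is an iterative depth-first search over a directed graph of cell-to-cell sweep dependencies; the cell identifiers and edges below are invented:

        def find_cycle(dependencies):
            """Return one cycle (as a list of nodes) in a directed graph, or None.

            dependencies maps each cell id to the cells that must be swept after it
            (its downwind neighbours for a given ordinate direction).
            """
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {node: WHITE for node in dependencies}
            for start in dependencies:
                if colour[start] != WHITE:
                    continue
                stack = [(start, iter(dependencies[start]))]
                path = [start]
                colour[start] = GREY
                while stack:
                    node, children = stack[-1]
                    for child in children:
                        if colour.get(child, WHITE) == GREY:
                            # Back edge found: extract the cycle from the current path.
                            return path[path.index(child):] + [child]
                        if colour.get(child, WHITE) == WHITE:
                            colour[child] = GREY
                            path.append(child)
                            stack.append((child, iter(dependencies.get(child, []))))
                            break
                    else:
                        # All downwind neighbours explored: retire this cell.
                        colour[node] = BLACK
                        path.pop()
                        stack.pop()
            return None

        # Invented 4-cell example with one cycle (A -> B -> C -> A) caused by a
        # re-entrant face in an unstructured mesh; D is cycle-free.
        deps = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
        print(find_cycle(deps))  # ['A', 'B', 'C', 'A']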

  6. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently under utilized while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grids framework enables jobs that are submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper will highlight 18 months of operational experiences creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.

  7. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3

  8. Proceedings of the Spanish Conference on e-Science Grid Computing. March 1-2, 2007. Madrid (Spain)

    International Nuclear Information System (INIS)

    Casado, J.; Mayo, R.; Munoz, R.

    2007-01-01

    The Spanish Conference on e-Science Grid Computing and the EGEE-EELA Industrial Day (http://webrt.ciemat.es:8000/e-science/index.html) are the first edition of this open forum for the integration of Grid technologies and their applications in the Spanish community. It has been organised by CIEMAT and CETA-CIEMAT, sponsored by IBM and HP, and supported by the European Community through its funded projects EELA, EUChinaGrid and EUMedGrid, to all of which the conference is very grateful. e-Science is the concept that defines those activities carried out using geographically distributed resources, which scientists (or anyone else) can access through the Internet. However, the commercial Internet does not provide the resources most frequently in demand in e-Science, such as computing power and mass storage, since these require high-speed networks dedicated to research. These networks, alongside the collaborative work applications developed on top of them, are creating an ideal scenario for interaction among researchers. Thus, this technology, which interconnects a huge variety of computers, information repositories, application software and scientific tools, will change society in the next few years. Science, industry and service systems will benefit from its immense computational capacity, which will improve the quality of life and the well-being of citizens. The next generation of technologies, which will reach all of these areas of society, such as research, medicine, engineering, economy and entertainment, will be based on integrated computers and networks, providing a very high quality of services and applications through a friendly interface. The conference aims at becoming a liaison framework between Spanish and international developers and users of e-Science applications and at implementing these technologies in Spain. It intends to be a forum where the state of the art of different European projects on e-Science is shown, as well as developments in the research

  9. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Herner, K. [Fermilab; Alba Hernandex, A. F. [Fermilab; Bhat, S. [Fermilab; Box, D. [Fermilab; Boyd, J. [Fermilab; Di Benedetto, V. [Fermilab; Ding, P. [Fermilab; Dykstra, D. [Fermilab; Fattoruso, M. [Fermilab; Garzoglio, G. [Fermilab; Kirby, M. [Fermilab; Kreymer, A. [Fermilab; Levshina, T. [Fermilab; Mazzacane, A. [Fermilab; Mengel, M. [Fermilab; Mhashilkar, P. [Fermilab; Podstavkov, V. [Fermilab; Retzke, K. [Fermilab; Sharma, N. [Fermilab; Teheran, J. [Fermilab

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed

  10. Automated personal dosimetry monitoring system for NPP

    International Nuclear Information System (INIS)

    Chanyshev, E.; Chechyotkin, N.; Kondratev, A.; Plyshevskaya, D.

    2006-01-01

    personal dosimeters (albedo dosimeters). Operational and emergency monitoring of external radiation exposure: - Gamma radiation dose and dose rate measurement using direct reading personal dosimeters. - Gamma radiation dose measurement using radio-photoluminescent personal dosimeters. Monitoring of internal radiation exposure: - Measurement of activity of incorporated radionuclides using whole body counters. The hardware of the A.P.D.M.S. comprises a set of automated workstations based on industrial computers and measuring equipment; all workstations are connected to a single local computer network. Client software installed on the automated workstations processes the results of dosimetry monitoring (spectrum processing, computation of personal doses, report generation, etc.) and provides data exchange with the A.P.D.M.S. database on the remote server. Communication with the A.P.D.M.S. server is organized via the local Ethernet network. (authors)

  11. EDISTR: a computer program to obtain a nuclear decay data base for radiation dosimetry

    International Nuclear Information System (INIS)

    Dillman, L.T.

    1980-01-01

    This report provides documentation for the computer program EDISTR. EDISTR uses basic radioactive decay data from the Evaluated Nuclear Structure Data File developed and maintained by the Nuclear Data Project at the Oak Ridge National Laboratory as input, and calculates the mean energies and absolute intensities of all principal radiations associated with the radioactive decay of a nuclide. The program is intended to provide a physical data base for internal dosimetry calculations. The principal calculations performed by EDISTR are the determination of (1) the average energy of beta particles in a beta transition, (2) the beta spectrum as function of energy, (3) the energies and intensities of x-rays and Auger electrons generated by radioactive decay processes, (4) the bremsstrahlung spectra accompanying beta decay and monoenergetic Auger and internal conversion electrons, and (5) the radiations accompanying spontaneous fission. This report discusses the theoretical and empirical methods used in EDISTR and also practical aspects of the computer implementation of the theory. Detailed instructions for preparing input data for the computer program are included, along with examples and discussion of the output data generated by EDISTR
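
    As a simple illustration of item (1), the average beta-particle energy can be obtained by numerically integrating an energy spectrum. The sketch below does this for an invented, roughly beta-shaped spectrum; it is not EDISTR's actual algorithm, which evaluates the full Fermi theory with appropriate corrections:

        import numpy as np

        # Invented illustrative beta spectrum: N(E) ~ sqrt(E) * (Emax - E)^2 is a crude
        # stand-in for the Fermi shape; EDISTR evaluates the proper Fermi function.
        E_max = 0.546  # MeV, endpoint energy (value chosen arbitrarily)
        E = np.linspace(1e-4, E_max, 2000)   # energy grid in MeV
        N = np.sqrt(E) * (E_max - E) ** 2     # relative intensity, arbitrary units

        # Mean energy per beta transition = integral(E*N dE) / integral(N dE);
        # on a uniform grid the spacing cancels in the ratio of sums.
        E_mean = np.sum(E * N) / np.sum(N)
        print(f"mean beta energy: {E_mean*1000:.1f} keV "
              f"(about {E_mean/E_max:.2f} of the endpoint energy)")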

  12. Grid today, clouds on the horizon

    Science.gov (United States)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 degrees Kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy ( 7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing" are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.

  13. Ian Bird, head of Grid development at CERN

    CERN Multimedia

    Patrice Loïez

    2003-01-01

    "The Grid enables us to harness the power of scientific computing centres wherever they may be to provide the most powerful computing resource the world has to offer," said Ian Bird, head of Grid development at CERN. The Grid is a new method of sharing processing power between computers in centres around the world.

  14. Data security on the national fusion grid

    Energy Technology Data Exchange (ETDEWEB)

    Burruss, Justine R.; Fredian, Tom W.; Thompson, Mary R.

    2005-06-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER.

  15. Data security on the national fusion grid

    International Nuclear Information System (INIS)

    Burruss, Justine R.; Fredian, Tom W.; Thompson, Mary R.

    2005-01-01

    The National Fusion Collaboratory project is developing and deploying new distributed computing and remote collaboration technologies with the goal of advancing magnetic fusion energy research. This work has led to the development of the US Fusion Grid (FusionGrid), a computational grid composed of collaborative, compute, and data resources from the three large US fusion research facilities and with users both in the US and in Europe. Critical to the development of FusionGrid was the creation and deployment of technologies to ensure security in a heterogeneous environment. These solutions to the problems of authentication, authorization, data transfer, and secure data storage, as well as the lessons learned during the development of these solutions, may be applied outside of FusionGrid and scale to future computing infrastructures such as those for next-generation devices like ITER

  16. Development of the two Korean adult tomographic computational phantoms for organ dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lee, Choonik; Park, Sang-Hyun; Lee, Jai-Ki

    2006-01-01

    stylized ORNL phantom. The armless KTMAN-1 can be applied to dosimetry for computed tomography or lateral x-ray examination, while the whole body KTMAN-2 can be used for radiation protection dosimetry

  17. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  18. Experiences and performance of the Harshaw dosimetry system at two major processing centres

    International Nuclear Information System (INIS)

    Tawil, R.A.; Olhalber, T.; Rathbone, B.

    1996-01-01

    The installations, operating practice, dose algorithms and results and maintenance experience at two major dosimetry processing centres are described. System selection considerations and a comprehensive quality programme are described in the light of the publication of testing requirements by various dosimetry regulatory organisations. Reported information from Siemens Dosimetry Services comprises their selection of dosemeters and processing equipment including service history, a description of their dose computation algorithm, and detailed results of their testing against DOELAP standards. Battelle Pacific Northwest Laboratories (PNL) provides a description of their dosemeters and equipment with service history; in addition, a discussion of their new neural network approach to a dose computation algorithm and test results from that algorithm are presented. (Author)

  19. The Benefits of Grid Networks

    Science.gov (United States)

    Tennant, Roy

    2005-01-01

    In the article, the author talks about the benefits of grid networks. In speaking of grid networks the author is referring to both networks of computers and networks of humans connected together in a grid topology. Examples are provided of how grid networks are beneficial today and the ways in which they have been used.

  20. Lincoln Laboratory Grid

    Data.gov (United States)

    Federal Laboratory Consortium — The Lincoln Laboratory Grid (LLGrid) is an interactive, on-demand parallel computing system that uses a large computing cluster to enable Laboratory researchers to...

  1. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called DIRAC. To evaluate the DIRAC scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we do the comparison with a real batch system and we obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and well describes the behaviour of a large-scale system. Thus we can study the scheduling of our system called DIRAC in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
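
    For reference, the analytical M/M/4 results used in such a validation can be computed from the standard Erlang C queueing formulas. The sketch below is a generic illustration with invented arrival and service rates, not the paper's parameters:

        import math

        def mmc_mean_response_time(lam, mu, c):
            """Mean response time (waiting + service) of an M/M/c queue."""
            rho = lam / (c * mu)                 # server utilisation, must be < 1
            a = lam / mu                         # offered load in Erlangs
            # Erlang C: probability that an arriving job has to wait.
            tail = a**c / (math.factorial(c) * (1 - rho))
            summation = sum(a**k / math.factorial(k) for k in range(c))
            p_wait = tail / (summation + tail)
            mean_wait = p_wait / (c * mu - lam)  # expected time in the queue
            return mean_wait + 1.0 / mu          # plus expected service time

        # Invented example: 3 jobs/minute arriving at 4 workers, each serving 1 job/minute.
        print(f"mean response time: {mmc_mean_response_time(3.0, 1.0, 4):.2f} minutes")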

  2. On dosimetry of radiodiagnosis facilities, mainly focused on computed tomography units

    International Nuclear Information System (INIS)

    Ghitulescu, Zoe

    2008-01-01

    The talk addresses the dosimetry of computed tomography units and is structured in three parts: 1) basics of image acquisition using the computed tomography technique; 2) effective dose calculation for a patient and its assessment using the BERT concept; 3) recommended actions for reaching a good compromise between the delivered dose and the image quality. The aim of the first part is to acquaint the reader with the CT technique so that the worked example of an effective dose calculation, and its conversion into time units using the BERT concept, can be followed. The conclusion drawn is that the effective dose, calculated by the medical physicist (using dedicated software for the CT scanner and the examination type) and converted into time units through the BERT concept, can then be communicated by the radiologist together with the diagnostic notes. A minimum of information for patients regarding the nature and type of radiation is therefore clearly necessary, for instance by means of leaflets. The third part discusses the factors that lead to good image quality while respecting the ALARA principle of radiation protection, which states that the dose should be 'as low as reasonably achievable'. (author)

  3. Optimal dose reduction in computed tomography methodologies predicted from real-time dosimetry

    Science.gov (United States)

    Tien, Christopher Jason

    Over the past two decades, computed tomography (CT) has become an increasingly common and useful medical imaging technique. CT is a noninvasive imaging modality with three-dimensional volumetric viewing abilities, all in sub-millimeter resolution. Recent national scrutiny on radiation dose from medical exams has spearheaded an initiative to reduce dose in CT. This work concentrates on dose reduction of individual exams through two recently-innovated dose reduction techniques: organ dose modulation (ODM) and tube current modulation (TCM). ODM and TCM tailor the phase and amplitude of x-ray current, respectively, used by the CT scanner during the scan. These techniques are unique because they can be used to achieve patient dose reduction without any appreciable loss in image quality. This work details the development of the tools and methods featuring real-time dosimetry which were used to provide pioneering measurements of ODM or TCM in dose reduction for CT.

  4. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  5. Scheduling Network Traffic for Grid Purposes

    DEFF Research Database (Denmark)

    Gamst, Mette

    This thesis concerns scheduling of network traffic in a grid context. Grid computing consists of a number of geographically distributed computers, which work together to solve large problems. The computers are connected through a network. When scheduling job execution in grid computing, data transmission has so far not been taken into account. This causes stability problems, because data transmission takes time and thus causes delays to the execution plan. This thesis proposes the integration of job scheduling and network routing. The scientific contribution is based on methods from operations research and consists of six papers. The first four consider data transmission in a grid context. The last two solve the data transmission problem, where the number of paths per data connection is bounded from above. The thesis shows that it is possible to solve the integrated job scheduling and network...

  6. The Future of Distributed Computing Systems in ATLAS: Boldly Venturing Beyond Grids

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    The Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes over the past couple of years to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous, at diverse scales and with diverse interfaces. Up to a fifth of the resources available to ATLAS are of such new types and require special techniques for integration into PanDA. In this talk, we present the nature and scale of these resources. We provide an overview of the various challenges faced, spanning infrastructure, software distribution, workload requirements, scaling requirements, workflow management, data management, network provisioning, and associated software and computing facilities. We describe the strategies for integrating these heterogeneous resources into ...

  7. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    Science.gov (United States)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  8. Tissue equivalence in neutron dosimetry

    International Nuclear Information System (INIS)

    Nutton, D.H.; Harris, S.J.

    1980-01-01

    A brief review is presented of the essential features of neutron tissue equivalence for radiotherapy, together with the results of a computation of relative absorbed dose for 14 MeV neutrons using various tissue models. It is concluded that, for the Bragg-Gray equation for ionometric dosimetry, it is not sufficient to define the value of W to high accuracy; for dosimetric measurements to be applicable to real body tissue to an accuracy of better than several per cent, a correction to the total absorbed dose must be made according to the test and tissue atomic composition, although variations in patient anatomy and other radiotherapy parameters will often limit the benefits of such detailed dosimetry. (U.K.)
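
    For reference, the Bragg-Gray relation underlying ionometric dosimetry can be written in its standard textbook form (not reproduced from the paper itself): the dose to the medium follows from the charge liberated in the gas cavity,

    $$D_{\mathrm{med}} = \frac{Q}{m}\,\frac{\overline{W}}{e}\,\left(\frac{\overline{S}}{\rho}\right)^{\mathrm{med}}_{\mathrm{gas}},$$

    where Q/m is the charge collected per unit mass of gas, W/e is the mean energy expended in the gas per unit charge, and the last factor is the mass collision stopping-power ratio of medium to gas. The abstract's point is that both W and the composition-dependent stopping-power correction must be handled for the result to apply to real tissue to better than several per cent.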

  9. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  10. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate the configuration and status information about resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  11. Exploring virtualisation tools with a new virtualisation provisioning method to test dynamic grid environments for ALICE grid jobs over ARC grid middleware

    International Nuclear Information System (INIS)

    Wagner, B; Kileng, B

    2014-01-01

    The Nordic Tier-1 centre for LHC is distributed over several computing centres. It uses ARC as the internal computing grid middleware. ALICE uses its own grid middleware AliEn to distribute jobs and the necessary software application stack. To make use of most of the AliEn infrastructure and software deployment methods for running ALICE grid jobs on ARC, we are investigating different possible virtualisation technologies. For this a testbed and possible framework for bridging different middleware systems is under development. It allows us to test a variety of virtualisation methods and software deployment technologies in the form of different virtual machines.

  12. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  13. Monitoring the EGEE/WLCG grid services

    International Nuclear Information System (INIS)

    Duarte, A; Nyczyk, P; Retico, A; Vicinanza, D

    2008-01-01

    Grids have the potential to revolutionise computing by providing ubiquitous, on-demand access to computational services and resources. They promise to allow on-demand access to, and composition of, computational services provided by multiple independent sources. Grids can also provide unprecedented levels of parallelism for high-performance applications. On the other hand, grid characteristics such as high heterogeneity, complexity and distribution create many new technical challenges. Among these technical challenges, failure management is a key area that demands much progress. A recent survey revealed that fault diagnosis is still a major problem for grid users. When a failure appears on the user's screen, it becomes very difficult for the user to identify whether the problem is in the application, somewhere in the grid middleware, or even lower in the fabric that comprises the grid. In this paper we present a tool able to check whether a given grid service works as expected for a given set of users (Virtual Organisation) on the different resources available on a grid. Our solution deals with grid services as single components that should produce an expected output for a pre-defined input, which is quite similar to unit testing. The tool, called Service Availability Monitoring or SAM, is currently being used by several different Virtual Organizations to monitor more than 300 grid sites belonging to the largest grids available today. We also discuss how this tool is being used by some of those VOs and how it is helping in the operation of the EGEE/WLCG grid.
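
    A minimal sketch of the "grid service check as unit test" idea described above, assuming hypothetical site names and a placeholder probe (this is not the actual SAM code): each (site, service) pair is exercised with a pre-defined input and the output is compared against the expected result.

```python
import unittest

# Hypothetical stand-in for a SAM-style probe: in the real system a probe would
# submit a small test job or query to the service and capture its output.
def probe_service(site: str, service: str) -> dict:
    # Placeholder result; a real probe would contact the site's service endpoint.
    return {"site": site, "service": service, "status": "OK", "output": "expected-reply"}

class ServiceAvailabilityTest(unittest.TestCase):
    """Treat each (site, service) pair as a unit test: pre-defined input,
    expected output, pass/fail verdict per site."""

    SITES = ["example-site-a", "example-site-b"]      # assumed names

    def test_compute_element_replies_as_expected(self):
        for site in self.SITES:
            with self.subTest(site=site):
                result = probe_service(site, "compute-element")
                self.assertEqual(result["status"], "OK")
                self.assertEqual(result["output"], "expected-reply")

if __name__ == "__main__":
    unittest.main()
```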

  14. Grid and Entrepreneurship Workshop

    CERN Multimedia

    2006-01-01

    The CERN openlab is organising a special workshop about Grid opportunities for entrepreneurship. This one-day event will provide an overview of what is involved in spin-off technology, with a special reference to the context of computing and data Grids. Lectures by experienced entrepreneurs will introduce the key concepts of entrepreneurship and review, in particular, the industrial potential of EGEE (the EU co-funded Enabling Grids for E-sciencE project, led by CERN). Case studies will be given by CEOs of European start-ups already active in the Grid and computing cluster area, and regional experts will provide an overview of efforts in several European regions to stimulate entrepreneurship. This workshop is designed to encourage students and researchers involved or interested in Grid technology to consider the entrepreneurial opportunities that this technology may create in the coming years. This workshop is organized as part of the CERN openlab student programme, which is co-sponsored by CERN, HP, ...

  15. User's manual of a supporting system for treatment planning in boron neutron capture therapy. JAERI computational dosimetry system

    International Nuclear Information System (INIS)

    Kumada, Hiroaki; Torii, Yoshiya

    2002-09-01

    Boron neutron capture therapy (BNCT) with an epithermal neutron beam is expected to be effective in treating malignant tumors located deep in the brain. To perform epithermal neutron beam BNCT it is indispensable to estimate the irradiation dose in the patient's brain beforehand. Thus, the JAERI Computational Dosimetry System (JCDS), which can calculate the dose distributions in the brain, has been developed. JCDS is software that creates a 3-dimensional head model of a patient from CT and MRI images, automatically generates an input data file for calculating the neutron flux and gamma-ray dose distribution in the brain with the Monte Carlo code MCNP, and displays the dose distribution on the head model for dosimetry using the MCNP calculation results. JCDS has several advantages, as follows. By treating CT and MRI medical image data, a detailed three-dimensional model of the patient's head can be made easily. The three-dimensional head image is editable to simulate the state of the head after surgical processes such as skin flap opening and bone removal, for the BNCT with craniotomy being performed in Japan. JCDS can provide information for the Patient Setting System to set the patient in the actual irradiation position swiftly and accurately. This report describes the basic design and dosimetry procedure, the operation manual, and the data and library structure of JCDS (ver. 1.0). (author)

  16. Grid technologies and applications: architecture and achievements

    International Nuclear Information System (INIS)

    Ian Foster

    2001-01-01

    The 18 months since CHEP'2000 have seen significant advances in Grid computing, both within and outside high energy physics. While in early 2000, Grid computing was a novel concept that most CHEP attendees were being exposed to for the first time, now considerable consensus is seen on Grid architecture, a solid and widely adopted technology base, major funding initiatives, a wide variety of projects developing applications and technologies, and major deployment projects aimed at creating robust Grid infrastructures. The author provides a summary of major developments and trends, focusing on the Globus open source Grid software project and the GriPhyN data grid project

  17. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing.

    Science.gov (United States)

    González, Roberto; Zato, Carolina; Benito, Rocío; Bajo, Javier; Hernández, Jesús M; De Paz, Juan F; Vera, Vicente; Corchado, Juan M

    2012-12-01

    Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated to each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  18. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing

    Directory of Open Access Journals (Sweden)

    González Roberto

    2012-12-01

    Full Text Available Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated to each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  19. Meet the Grid

    CERN Multimedia

    Yurkewicz, Katie

    2005-01-01

    Today's cutting-edge scientific projects are larger, more complex, and more expensive than ever. Grid computing provides the resources that allow researchers to share knowledge, data, and computer processing power across boundaries

  20. Computational voxel phantom, associated to anthropometric and anthropomorphic real phantom for dosimetry in human male pelvis radiotherapy

    International Nuclear Information System (INIS)

    Silva, Cleuza Helena Teixeira; Campos, Tarcisio Passos Ribeiro de

    2005-01-01

    This paper addresses a computational voxel model built with the MCNP5 code and the experimental development of an anthropometric and anthropomorphic phantom for dosimetry in human male pelvis brachytherapy, focusing on prostatic tumors. For the elaboration of the computational model of the human male pelvis, anatomical section images from the Visible Man Project were used. The selected digital images were associated with a numeric representation, one for each section. This computational representation of the anatomical sections was transformed into a two-dimensional mesh of equivalent tissue, and the set of two-dimensional meshes was concatenated to form the three-dimensional voxel model used by the MCNP5 code. In association with the anatomical information, the density and chemical composition of the basic elements representative of the organs and tissues involved were set up in a material database for MCNP5. The model will be applied to dosimetric evaluations for irradiation of the human male pelvis; coupled to the MCNP5 particle transport code, it allows future simulations. The construction of a human male pelvis phantom was also carried out, based on anthropometric and anthropomorphic data and on the use of equivalent tissues representative of the skin, fatty, muscular and glandular tissues, as well as the bony structure. This part of the work was developed in stages: the bony cast was built first, followed by the muscular structures and internal organs; these were then assembled and inserted in the skin cast, the component representative of the fatty tissue was incorporated, and the final retouches were made to the skin. The final result represents the development of two essential tools for computational and experimental dosimetry. Thus, they can be used for calibration of pre-existing radiotherapy protocols, as well as for tests of new protocols, besides

  1. GENII: The Hanford Environmental Radiation Dosimetry Software System: Volume 1, Conceptual representation

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-12-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). The purpose of this coupled system of computer codes is to analyze environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil. This is accomplished by calculating radiation doses to individuals or populations. GENII is described in three volumes of documentation. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. The third volume is a Code Maintenance Manual for the user who requires knowledge of code detail. It includes code logic diagrams, global dictionary, worksheets, example hand calculations, and listings of the code and its associated data libraries. 72 refs., 15 figs., 34 tabs.

  2. GENII: The Hanford Environmental Radiation Dosimetry Software System: Volume 1, Conceptual representation

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.

    1988-12-01

    The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). The purpose of this coupled system of computer codes is to analyze environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil. This is accomplished by calculating radiation doses to individuals or populations. GENII is described in three volumes of documentation. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. The third volume is a Code Maintenance Manual for the user who requires knowledge of code detail. It includes code logic diagrams, global dictionary, worksheets, example hand calculations, and listings of the code and its associated data libraries. 72 refs., 15 figs., 34 tabs

  3. Current internal-dosimetry practices at US Department of Energy facilities

    International Nuclear Information System (INIS)

    Traub, R.J.; Murphy, B.L.; Selby, J.M.; Vallario, E.J.

    1985-04-01

    The internal dosimetry practices at DOE facilities were characterized. The purpose was to determine the size of the facilities' internal dosimetry programs, the uniformity of the programs among the facilities, and the areas of greatest concern to health physicists in providing and reporting accurate estimates of internal radiation dose and in meeting proposed changes in internal dosimetry. The differences among the internal-dosimetry programs are related to the radioelements in use at each facility and, to some extent, the number of workers at each facility. The differences include different frequencies in the use of quality control samples, different minimum detection levels, different methods of recording radionuclides, different amounts of data recorded in the permanent record, and apparent differences in modeling the metabolism of radionuclides within the body. Recommendations for improving internal-dosimetry practices include studying the relationship between air-monitoring/survey readings and bioassay data, establishing uniform methods for recording bioassay results, developing more sensitive direct-bioassay procedures, establishing a mechanism for sharing information on internal-dosimetry procedures among DOE facilities, and developing mathematical models and interactive computer codes that can help quantify the uptake of radioactive materials and predict their distribution in the body. 19 refs., 8 tabs

  4. Neutron personnel dosimetry

    International Nuclear Information System (INIS)

    Griffith, R.V.

    1981-01-01

    The current state-of-the-art in neutron personnel dosimetry is reviewed. Topics covered include dosimetry needs and alternatives, current dosimetry approaches, personnel monitoring devices, calibration strategies, and future developments

  5. Mathematical operations in cytogenetic dosimetry: Dosgen

    International Nuclear Information System (INIS)

    Garcia L, O.; Zequera J, T.

    1996-01-01

    Handling of formulas and mathematical procedures for fitting and using dose-response relationships in cytogenetic dosimetry is often hampered by the absence of collaborators specialized in mathematics and computing. The DOSGEN program contains the main mathematical operations used in cytogenetic dosimetry. It can be run on IBM-compatible PCs by non-specialized personnel. The program's capabilities are: a Poisson distribution fitting test for the number of aberrations per cell, dose assessment for whole-body irradiation, dose assessment for partial irradiation, and determination of the irradiated fraction. The program allows on-screen visualization and printing of results. DOSGEN has been developed in Turbo Pascal and is 33 kB in size. (authors). 4 refs
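
    A minimal sketch of two of the operations named above, the Poisson dispersion test and dose estimation from a fitted dose-response curve; DOSGEN itself is written in Turbo Pascal, so the Python below and its calibration coefficients are purely illustrative.

```python
import math

def dispersion_u(counts):
    """u-test for agreement with a Poisson distribution of aberrations per cell
    (|u| > 1.96 suggests over/under-dispersion, e.g. partial-body exposure)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    d = var / mean                                   # dispersion index
    return (d - 1) * math.sqrt((n - 1) / (2 * (1 - 1 / (n * mean))))

def dose_from_yield(y, c, alpha, beta):
    """Invert the linear-quadratic calibration Y = c + alpha*D + beta*D**2
    to estimate a whole-body dose D from the observed yield Y."""
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * (y - c))) / (2 * beta)

if __name__ == "__main__":
    # Illustrative scoring data and coefficients only; real values come from the
    # laboratory's own scoring and calibration curve.
    cells = [0] * 180 + [1] * 18 + [2] * 2           # dicentrics scored per cell
    y = sum(cells) / len(cells)
    print(f"observed yield: {y:.3f} dicentrics/cell, u = {dispersion_u(cells):.2f}")
    c, alpha, beta = 0.001, 0.03, 0.06               # per cell, per Gy, per Gy^2
    print(f"estimated whole-body dose: {dose_from_yield(y, c, alpha, beta):.2f} Gy")
```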

  6. The Grid2003 Production Grid Principles and Practice

    CERN Document Server

    Foster, I; Gose, S; Maltsev, N; May, E; Rodríguez, A; Sulakhe, D; Vaniachine, A; Shank, J; Youssef, S; Adams, D; Baker, R; Deng, W; Smith, J; Yu, D; Legrand, I; Singh, S; Steenberg, C; Xia, Y; Afaq, A; Berman, E; Annis, J; Bauerdick, L A T; Ernst, M; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatkin, N; Pordes, R; Sekhri, V; Weigand, J; Wu, Y; Baker, K; Sorrillo, L; Huth, J; Allen, M; Grundhoefer, L; Hicks, J; Luehring, F C; Peck, S; Quick, R; Simms, S; Fekete, G; Van den Berg, J; Cho, K; Kwon, K; Son, D; Park, H; Canon, S; Jackson, K; Konerding, D E; Lee, J; Olson, D; Sakrejda, I; Tierney, B; Green, M; Miller, R; Letts, J; Martin, T; Bury, D; Dumitrescu, C; Engh, D; Gardner, R; Mambelli, M; Smirnov, Y; Voeckler, J; Wilde, M; Zhao, Y; Zhao, X; Avery, P; Cavanaugh, R J; Kim, B; Prescott, C; Rodríguez, J; Zahn, A; McKee, S; Jordan, C; Prewett, J; Thomas, T; Severini, H; Clifford, B; Deelman, E; Flon, L; Kesselman, C; Mehta, G; Olomu, N; Vahi, K; De, K; McGuigan, P; Sosebee, M; Bradley, D; Couvares, P; De Smet, A; Kireyev, C; Paulson, E; Roy, A; Koranda, S; Moe, B; Brown, B; Sheldon, P

    2004-01-01

    The Grid2003 Project has deployed a multi-virtual organization, application-driven grid laboratory ("GridS") that has sustained for several months the production-level services required by physics experiments of the Large Hadron Collider at CERN (ATLAS and CMS), the Sloan Digital Sky Survey project, the gravitational wave search experiment LIGO, the BTeV experiment at Fermilab, as well as applications in molecular structure analysis and genome analysis, and computer science research projects in such areas as job and data scheduling. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. We describe the principles that have guided the development of this unique infrastructure and the practical experiences that have resulted from its creation and use. We discuss application requirements for grid services deployment and configur...

  7. Preprocessor that Enables the Use of GridPro™ Grids for Unsteady Reynolds-Averaged Navier-Stokes Code TURBO

    Science.gov (United States)

    Shyam, Vikram

    2010-01-01

    A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.

  8. User's manual of a supporting system for treatment planning in boron neutron capture therapy. JAERI computational dosimetry system

    CERN Document Server

    Kumada, H

    2002-01-01

    Boron neutron capture therapy (BNCT) with an epithermal neutron beam is expected to be effective in treating malignant tumors located deep in the brain. To perform epithermal neutron beam BNCT it is indispensable to estimate the irradiation dose in the patient's brain beforehand. Thus, the JAERI Computational Dosimetry System (JCDS), which can calculate the dose distributions in the brain, has been developed. JCDS is software that creates a 3-dimensional head model of a patient from CT and MRI images, automatically generates an input data file for calculating the neutron flux and gamma-ray dose distribution in the brain with the Monte Carlo code MCNP, and displays the dose distribution on the head model for dosimetry using the MCNP calculation results. JCDS has several advantages, as follows. By treating CT and MRI medical image data, a detailed three-dimensional model of the patient's head can be made easily. The three-dimensional head image is editable to ...

  9. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    Science.gov (United States)

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.

  10. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems.

    Directory of Open Access Journals (Sweden)

    Hajara Idris

    Full Text Available The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in a Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, together with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time.
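
    A minimal sketch of the checkpoint-based rollback idea described above, with purely illustrative parameters (this is not the proposed ACO scheduler; a real scheduler would also use each resource's observed failure rate when placing jobs).

```python
import random

def run_with_checkpoints(total_steps, checkpoint_every, failure_rate, seed=0):
    """Execute a job on a failure-prone resource, saving state periodically so
    that after a failure, work resumes from the last checkpoint instead of from
    scratch. Returns the number of steps that had to be recomputed."""
    random.seed(seed)
    checkpoint, step, wasted = 0, 0, 0
    while step < total_steps:
        step += 1
        if random.random() < failure_rate:        # resource failure during this step
            wasted += step - checkpoint            # progress since last checkpoint is lost
            step = checkpoint                      # roll back to the saved state
            continue
        if step % checkpoint_every == 0:
            checkpoint = step                      # save state (checkpoint)
    return wasted

if __name__ == "__main__":
    # Illustrative values; failure_rate would come from the resource's history.
    lost = run_with_checkpoints(total_steps=1000, checkpoint_every=50, failure_rate=0.01)
    print(f"steps recomputed after failures: {lost}")
```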

  11. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its use of a flow control algorithm that allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for each and every type of application, for example, bulk data transfer over high speed long distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for numerous reasons it cannot efficiently cope with today's growing technologies (such as wide-area Grid computing and optical-fiber networks). This research work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over the future high speed Grid computing and optical-fiber networks. This work also presents simulations to compare the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high speed networks. These simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP does provide many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve the overall performance of a network and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)

  12. Dosimetry

    International Nuclear Information System (INIS)

    Rezende, D.A.O. de

    1976-01-01

    The fundamental units of dosimetry are defined, such as exposure rate, absorbed dose and equivalent dose. A table is given of relative biological effectiveness values for the different types of radiation. The relation between the roentgen and rad units is calculated and the concepts of physical half-life, biological half-life and effective half-life are discussed. Regarding internal dosimetry, a mathematical treatment is given of β particle and γ radiation dosimetry. The absorbed dose is calculated and a practical example is given of the calculation of the exposure and of the dose rate for a gamma source [pt
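
    In the usual textbook form, the effective half-life mentioned above combines physical decay and biological elimination:

    $$\frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{biol}}} \quad\Longrightarrow\quad T_{\mathrm{eff}} = \frac{T_{\mathrm{phys}}\,T_{\mathrm{biol}}}{T_{\mathrm{phys}} + T_{\mathrm{biol}}}$$

    so, for an illustrative nuclide with a physical half-life of 8 days and a biological half-life of 80 days, the effective half-life is 8 × 80 / 88 ≈ 7.3 days (worked example added here, not taken from the original abstract).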

  13. Huddersfield University Campus Grid: QGG of OSCAR Clusters

    International Nuclear Information System (INIS)

    Holmes, Violeta; Kureshi, Ibad

    2010-01-01

    In the last decade Grid Computing Technology, an innovative extension of distributed computing, is becoming an enabler for computing resource sharing among the participants in 'Virtual Organisations' (VO). Although there exist enormous research efforts on grid-based collaboration technologies, most of them are concentrated on large research and business institutions. In this paper we are considering the adoption of Grid Computing Technology in a VO of small to medium Further Education (FE) and Higher-Education (HE) institutions. We will concentrate on the resource sharing among the campuses of The University of Huddersfield in Yorkshire and colleges in Lancashire, UK, enabled by the Grid. Within this context, it is important to focus on standards that support resource and information sharing, toolkits and middleware solutions that would promote Grid adoption among the FE/HE institutions in the Virtual HE organisation.

  14. Radiological Protection Dosimetry Section report of work done and list of publications during 1981-1982

    International Nuclear Information System (INIS)

    Krishnan, D.; Venkataraman, G.

    1983-01-01

    Radiological Protection Dosimetry Section has as its objective the development of dosimetric techniques, theoretical as well as experimental. To this end, research and development work on chemical and neutron dosimetry systems, computational dosimetry and dosimetry associated with protection problems is being done. Work is also carried out on radiobiological investigations at the cellular level to understand radiation damage and to interpret the basis of radiation exposure limits and attendant safety standards. These topics are covered by five groups in the section, viz. Neutron Dosimetry, Chemical Dosimetry, Radiation Biophysics, Radium Hazards Evaluation and Control, and Theoretical Studies. A brief outline of the activities of each of the above groups is given along with a list of publications for the last two years. (author)

  15. Satin: A high-level and efficient grid programming model

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.; Wrzesinska, G.; Jacobs, C.J.H.; Bal, H.E.

    2010-01-01

    Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing parallel grid applications simply is too difficult. Grids introduce several problems not encountered

  16. Reshaping of computational system for dosimetry in neutron and photons radiotherapy based in stochastic methods - SISCODES

    International Nuclear Information System (INIS)

    Trindade, Bruno Machado

    2011-02-01

    This work presents the remodeling of the Computer System for Dosimetry of Neutrons and Photons in Radiotherapy Based on Stochastic Methods (SISCODES). The initial description and status, the alterations and expansions (proposed and concluded), and the latest system development status are shown. SISCODES is a system that allows the execution of 3D computational planning in radiation therapy, based on the MCNP5 nuclear particle transport code. SISCODES provides tools to build a patient's voxel model, to define a treatment plan, to simulate this plan, and to view the results of the simulation. SISCODES implements a database of tissues, sources and nuclear data and an interface to access them. The graphical SISCODES modules were rewritten or implemented using the C++ language and the GTKmm library. Studies of dose deviations were performed by simulating a homogeneous water phantom, used as an analogue of the human body in radiotherapy planning, and a heterogeneous voxel phantom, pointing out possible dose miscalculations. The Soft-RT and PROPLAN computer codes that interface with SISCODES are described. A set of voxel models created with SISCODES is presented with their respective sizes and resolutions. To demonstrate the use of SISCODES, examples of radiation therapy and dosimetry simulations for the prostate and heart are shown. Three protocols were simulated on the heart voxel model: a Sm-153 filled balloon and a P-32 stent, to prevent angioplasty restenosis; and Tl-201 myocardial perfusion, for imaging. Teletherapy with 6 MV and 15 MV beams and brachytherapy with I-125 seeds were simulated for the prostate. The results of these simulations are shown as isodose curves and dose-volume histograms. SISCODES proves to be a useful tool for research into new radiation therapy treatments and, in the future, may also be useful in medical practice. At the end, future improvements are proposed. I hope this work can contribute to developing more effective radiation therapy

  17. Determination of electrical characteristics of body tissues for computational dosimetry studies

    International Nuclear Information System (INIS)

    Silva, Rafael Monteiro da Cruz; Domingues, Luis Adriano M.C.; Neto, Athanasio Mpalantinos; Barbosa, Carlos Ruy Nunez

    2008-01-01

    Increasing public concern about human exposure to electromagnetic fields has led to the development of International Exposure Standards, which reflect current scientific knowledge on this subject. Existing exposure limits (reference levels) are based on maximum admissible fields or induced current densities inside human bodies, called basic restrictions. Since those physical quantities cannot be readily measured, they must be estimated using techniques of computational dosimetry. These techniques rely on accurate computational modelling of human bodies to establish the relation of the external field (electric/magnetic) to the induced current (internal field). Nowadays the models available for human body simulation (FEM, FDM, ...) are quite accurate, especially when using geometric discretization obtained from medical imaging techniques; however, the determination of tissue characteristics (permittivity and conductivity) is still an issue to be dealt with. In current studies the electrical characteristics (permittivity and conductivity) of body tissues are based on values obtained from measurements done on tissue samples taken from cadavers. However, those values may not adequately represent the behaviour of living tissues. In this paper, research designed to characterize the permittivity of human body tissues is presented, consisting of measurements and simulations designed to determine, using indirect methods, the electrical behaviour of living tissues. A study of exposure assessment on a real high voltage transmission line in Brazil, using measured permittivity values combined with a finite element model of the human body, is presented in the panel. (author)

  18. The dynamic management system for grid resources information of IHEP

    International Nuclear Information System (INIS)

    Gu Ming; Sun Gongxing; Zhang Weiyi

    2003-01-01

    The Grid information system is an essential basis for building a Grid computing environment: it collects the information of every resource in a Grid in a timely manner and provides a complete information view of all resources to the other components of a Grid computing system. Grid technology can strongly support the computing of HEP (High Energy Physics), with its big-science and multi-organization features. In this article, the architecture and implementation of a dynamic management system are described, based on Grid and LDAP (Lightweight Directory Access Protocol) technologies and including a Web-based design for collecting, querying and modifying resource information. (authors)

  19. Development of Probabilistic Internal Dosimetry Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kwon, Tae-Eun [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Lee, Jai-Ki [Korean Association for Radiation Protection, Seoul (Korea, Republic of)

    2017-02-15

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on this system, we developed a probabilistic internal-dose-assessment code using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various
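
    A minimal sketch of the Monte Carlo propagation step described above; the actual code is written in MATLAB, so the Python below, with placeholder biokinetic and dosimetric values, is purely illustrative of propagating a lognormal measurement uncertainty from bioassay data to a committed-dose distribution and its percentiles.

```python
import math
import random

def monte_carlo_dose(measured_bq, gsd, m_of_t, dose_coeff_sv_per_bq,
                     n=100_000, seed=42):
    """Propagate a lognormal measurement uncertainty (geometric standard
    deviation gsd) through intake estimation, intake = measurement / m(t),
    to a distribution of committed effective dose."""
    random.seed(seed)
    doses = []
    for _ in range(n):
        measurement = measured_bq * math.exp(random.gauss(0.0, math.log(gsd)))
        intake = measurement / m_of_t              # m(t): fraction retained at measurement time
        doses.append(intake * dose_coeff_sv_per_bq)
    doses.sort()
    return doses

def percentile(sorted_vals, q):
    """Nearest-rank percentile of an already sorted list."""
    return sorted_vals[round(q / 100 * (len(sorted_vals) - 1))]

if __name__ == "__main__":
    # All numbers below are illustrative placeholders, not ICRP or code-specific values.
    doses = monte_carlo_dose(measured_bq=500.0, gsd=1.3,
                             m_of_t=0.05, dose_coeff_sv_per_bq=2.0e-9)
    for q in (2.5, 5, 50, 95, 97.5):
        print(f"{q:>5}th percentile: {percentile(doses, q) * 1e3:.3f} mSv")
```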

  20. Development of Probabilistic Internal Dosimetry Computer Code

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-01-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on this system, we developed a probabilistic internal-dose-assessment code using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases

  1. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  2. Integrating Grid Services into the Cray XT4 Environment

    OpenAIRE

    Cholia, Shreyas

    2009-01-01

    The 38640 core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped down compute-node operating system without dynamic library support, a share...

  3. CERN readies world's biggest science grid The computing network now encompasses more than 100 sites in 31 countries

    CERN Multimedia

    Niccolai, James

    2005-01-01

    If the Large Hadron Collider (LHC) at CERN is to yield miraculous discoveries in particle physics, it may also require a small miracle in grid computing. Faced with a lack of suitable tools from commercial vendors, engineers at the famed Geneva laboratory are hard at work building a giant grid to store and process the vast amount of data the collider is expected to produce when it begins operations in mid-2007 (2 pages)

  4. A multi VO Grid infrastructure at DESY

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2010-01-01

    As a centre for research with particle accelerators and synchrotron light, DESY operates a Grid infrastructure in the context of the EU-project EGEE and the national Grid initiative D-GRID. All computing and storage resources are located in one Grid infrastructure which supports a number of Virtual Organizations of different disciplines, including non-HEP groups such as the Photon Science community. Resource distribution is based on fair share methods without dedicating hardware to user groups. Production quality of the infrastructure is guaranteed by embedding it into the DESY computer centre.

  5. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    Science.gov (United States)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at depths of several kilometers, it is primarily used by the hydrocarbon industry and hardly at all for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The

  6. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Wurthwrin, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept, as well as a concrete implementation, called glideinWMS, deployed in the Open Science Grid.

  7. Pediatric Phantom Dosimetry of Kodak 9000 Cone-beam Computed Tomography.

    Science.gov (United States)

    Yepes, Juan F; Booe, Megan R; Sanders, Brian J; Jones, James E; Ehrlich, Ygal; Ludlow, John B; Johnson, Brandon

    2017-05-15

    The purpose of the study was to evaluate the radiation dose of the Kodak 9000 cone-beam computed tomography (CBCT) device for different anatomical areas using a pediatric phantom. Absorbed doses resulting from maxillary and mandibular region three-by-five-centimeter CBCT volumes of an anthropomorphic 10-year-old child phantom were acquired using optically stimulated dosimetry. Equivalent doses were calculated for radiosensitive tissues in the head and neck area, and effective doses for maxillary and mandibular examinations were calculated following the 2007 recommendations of the International Commission on Radiological Protection (ICRP). Of the mandibular scans, the salivary glands had the highest equivalent dose (1,598 microsieverts [μSv]), followed by oral mucosa (1,263 μSv), extrathoracic airway (pharynx, larynx, and trachea; 859 μSv), and thyroid gland (578 μSv). For the maxilla, the salivary glands had the highest equivalent dose (1,847 μSv), followed closely by oral mucosa (1,673 μSv), followed by the extrathoracic airway (pharynx, larynx, and trachea; 1,011 μSv) and lens of the eye (202 μSv). Compared with previous research on the Kodak 9000 performed with an adult phantom, a child receives one to three times more radiation for mandibular scans and two to 10 times more radiation for maxillary scans.
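
    The effective doses cited above follow from the ICRP 103 scheme, in which each equivalent organ dose H_T is weighted by a tissue weighting factor w_T and summed:

    $$E = \sum_T w_T\, H_T$$

    For instance, with the ICRP 103 thyroid weighting factor of 0.04, the thyroid contribution to the mandibular scan above is roughly 0.04 × 578 μSv ≈ 23 μSv; this arithmetic is added here only for illustration, since the study's effective doses also include all other exposed tissues and the remainder term.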

  8. Polymer gel dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Baldock, C [Institute of Medical Physics, School of Physics, University of Sydney (Australia); De Deene, Y [Radiotherapy and Nuclear Medicine, Ghent University Hospital (Belgium); Doran, S [CRUK Clinical Magnetic Resonance Research Group, Institute of Cancer Research, Surrey (United Kingdom); Ibbott, G [Radiation Physics, UT M D Anderson Cancer Center, Houston, TX (United States); Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria, BC (Canada); Lepage, M [Centre d' imagerie moleculaire de Sherbrooke, Departement de medecine nucleaire et de radiobiologie, Universite de Sherbrooke, Sherbrooke, QC (Canada); McAuley, K B [Department of Chemical Engineering, Queen' s University, Kingston, ON (Canada); Oldham, M [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States); Schreiner, L J [Cancer Centre of South Eastern Ontario, Kingston, ON (Canada)], E-mail: c.baldock@physics.usyd.edu.au, E-mail: yves.dedeene@ugent.be

    2010-03-07

    Polymer gel dosimeters are fabricated from radiation-sensitive chemicals which, upon irradiation, polymerize as a function of the absorbed radiation dose. These gel dosimeters, with the capacity to uniquely record the radiation dose distribution in three dimensions (3D), have specific advantages when compared to one-dimensional dosimeters, such as ion chambers, and two-dimensional dosimeters, such as film. These advantages are particularly significant in dosimetry situations where steep dose gradients exist such as in intensity-modulated radiation therapy (IMRT) and stereotactic radiosurgery. Polymer gel dosimeters also have specific advantages for brachytherapy dosimetry. Potential dosimetry applications include those for low-energy x-rays, high-linear energy transfer (LET) and proton therapy, radionuclide and boron neutron capture therapy dosimetries. These 3D dosimeters are radiologically soft-tissue equivalent with properties that may be modified depending on the application. The 3D radiation dose distribution in polymer gel dosimeters may be imaged using magnetic resonance imaging (MRI), optical-computerized tomography (optical-CT), x-ray CT or ultrasound. The fundamental science underpinning polymer gel dosimetry is reviewed along with the various evaluation techniques. Clinical dosimetry applications of polymer gel dosimetry are also presented. (topical review)

  9. Establishment of the Dosicard operational dosimetry system in a nuclear studies center

    International Nuclear Information System (INIS)

    Banchetry, C.

    2001-01-01

    Since the decree of March 1999, each employer in the French nuclear industry must implement operational dosimetry in its company. The method is based on electronic dosimeters equipped with alarms and worn by all employees; the dosimeters are linked to a computer network. Operational dosimetry is recommended in order to optimize the protection of workers and limit the doses received, to respect the principle of equity between workers, and to preserve a 'dose margin' in case of any unexpected event. The CEA executives have decided to use the EURISYS MESURES DOSICARD as an operational and complementary dosimetry tool. (author)

  10. The Grid is operational – it’s official!

    CERN Multimedia

    2008-01-01

    On Friday, 3 October, CERN and its many partners around the world officially marked the end of seven years of development and deployment of the Worldwide LHC Computing Grid (WLCG) and the beginning of continuous operations with an all-day Grid Fest. [Photo captions: Wolfgang von Rüden unveils the WLCG sculpture; Les Robertson speaking at the Grid Fest; Bob Jones highlights the far-reaching uses of grid computing.] Over 250 grid enthusiasts gathered in the Globe, including large delegations from the press and from industrial partners, as well as many of the people around the world who manage the distributed operations of the WLCG, which today comprises more than 140 computer centres in 33 countries. As befits a cutting-edge information technology, many participants joined virtually, by video, to mark the occasion. Unlike the start-up of the LHC, there was no single moment of high dram...

  11. Robust Grid-Current-Feedback Resonance Suppression Method for LCL-Type Grid-Connected Inverter Connected to Weak Grid

    DEFF Research Database (Denmark)

    Zhou, Xiaoping; Zhou, Leming; Chen, Yandong

    2018-01-01

    In this paper, a robust grid-current-feedback resonance suppression (GCFRS) method for LCL-type grid-connected inverter is proposed to enhance the system damping without introducing the switching noise and to eliminate the impact of control delay on system robustness against grid-impedance variation. ... It is composed of the GCFRS method, the full duty-ratio and zero-beat-lag PWM method, and the lead-grid-current-feedback resonance suppression (LGCFRS) method. Firstly, the GCFRS is used to suppress the LCL-resonant peak well and avoid introducing the switching noise. Secondly, the proposed full duty-ratio and zero-beat-lag PWM method is used to eliminate the one-beat-lag computation delay without introducing duty cycle limitations. Moreover, it can also realize the smooth switching from positive to negative half-wave of the grid current and improve the waveform quality. Thirdly, the proposed LGCFRS is used to further...

  12. Grid today, clouds on the horizon

    CERN Document Server

    Shiers, Jamie

    2009-01-01

    By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 degrees Kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219–223]. After many years of preparation, 2008 saw a final “Common Computing Readiness Challenge” (CCRC'08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which...

  13. Smart grid communication-enabled intelligence for the electric power grid

    CERN Document Server

    Bush, Stephen F

    2014-01-01

    This book bridges the divide between the fields of power systems engineering and computer communication through the new field of power system information theory. Written by an expert with vast experience in the field, this book explores the smart grid from generation to consumption, both as it is planned today and how it will evolve tomorrow. The book focuses upon what differentiates the smart grid from the "traditional" power grid as it has been known for the last century. Furthermore, the author provides the reader with a fundamental understanding of both power systems and communication ne

  14. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows to use ARC resources in CMS without modifying the CMS application software. The second solution is based on developi...

  15. A new science infrastructure: the grid

    International Nuclear Information System (INIS)

    Sun Gongxing

    2003-01-01

    As the depth and scale of scientific research grow, the requirement for computing power becomes bigger and bigger, and global collaboration is being enhanced. Therefore, integration and sharing of all available resources among the participating organizations is required, including computing, storage, networks, and even human resources and intelligent instruments. Grid technology has been developed for the goal mentioned above, and could become the infrastructure for future science research and engineering. As a global computing technology, there are a lot of key technologies to be addressed. In this paper, the grid architecture, security infrastructure, application domains and tools are described; at the end we give the prospects for the grid in the future. (authors)

  16. Grid Generation Techniques Utilizing the Volume Grid Manipulator

    Science.gov (United States)

    Alter, Stephen J.

    1998-01-01

    This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy to read rich language of commands that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be efficiently performed to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.

  17. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry

    International Nuclear Information System (INIS)

    Brady, S. L.; Kaufman, R. A.

    2012-01-01

    Purpose: The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ∼25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. Methods: The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Results: Calibration precision was measured to be better than 5%–7%, 3%–5%, and 2%–4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy⁻¹ versus the CT scatter phantom 29.2 ± 1.0 mV cGy⁻¹ and FIA with x-ray 29.9 ± 1.1 mV cGy⁻¹ methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ∼3000 mV. Conclusions: The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the
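
    The calibration coefficient quoted above is simply the change in read-out voltage per unit absorbed dose; patient read-outs are then divided by it. One simple way to form it (not necessarily the paper's fitting procedure) is sketched below with hypothetical read-out values.

```python
# Minimal MOSFET calibration sketch: coefficient = mean(delta_V / delivered dose),
# then dose = delta_V / coefficient. All numbers are hypothetical placeholders.
readings_mV = [270.0, 640.0, 935.0]   # voltage changes for the three exposures
doses_cGy   = [1.0, 2.3, 3.5]         # delivered doses (10, 23, 35 mGy)

coeff_mV_per_cGy = sum(v / d for v, d in zip(readings_mV, doses_cGy)) / len(readings_mV)

def dose_cGy(delta_v_mV, coefficient=coeff_mV_per_cGy):
    """Convert a measurement read-out to absorbed dose using the calibration coefficient."""
    return delta_v_mV / coefficient

print(f"coefficient = {coeff_mV_per_cGy:.1f} mV/cGy, 300 mV -> {dose_cGy(300.0):.2f} cGy")
```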

  18. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    Science.gov (United States)

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12

  19. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  20. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  1. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers is to develop variant scheduling algorithms for achieving optimality, and they have shown good performance for task scheduling regarding resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and then these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
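
    A minimal Python sketch of the scheduling loop described above follows. The completion-time model (machine ready time plus task length over machine speed) and the use of the middle value of the sorted completion times as the "sort-mid" key are our interpretation of the published steps, not the authors' implementation.

```python
# Sketch of a Sort-Mid-style scheduling loop (interpretation of the abstract's steps).
# Completion time = machine ready time + task length / machine speed.

def completion(task_len, speed, ready):
    return ready + task_len / speed

def sort_mid_schedule(task_lengths, machine_speeds):
    ready = [0.0] * len(machine_speeds)            # current ready time of each machine
    pending = dict(enumerate(task_lengths))
    schedule = {}                                  # task index -> machine index
    while pending:
        # Middle value of each task's sorted completion times ("sort-mid" key).
        def mid_value(tlen):
            times = sorted(completion(tlen, s, r) for s, r in zip(machine_speeds, ready))
            return times[len(times) // 2]
        task = max(pending, key=lambda t: mid_value(pending[t]))
        # Allocate it to the machine giving the minimum completion time.
        machine = min(range(len(machine_speeds)),
                      key=lambda m: completion(pending[task], machine_speeds[m], ready[m]))
        ready[machine] = completion(pending[task], machine_speeds[machine], ready[machine])
        schedule[task] = machine
        del pending[task]
    return schedule, max(ready)                    # assignment and makespan

print(sort_mid_schedule([4.0, 10.0, 3.0, 7.0], [1.0, 2.0]))
```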

  2. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  3. Advances in Grid and Pervasive Computing: 5th International Conference, GPC 2010, Hualien, Taiwan, May 10-13, 2010: Proceedings

    NARCIS (Netherlands)

    Bellavista, P.; Chang, R.-S.; Chao, H.-C.; Lin, S.-F.; Sloot, P.M.A.

    2010-01-01

    This book constitutes the proceedings of the 5th international conference, GPC 2010, held in Hualien, Taiwan in May 2010. The 67 full papers were selected from 184 submissions and focus on topics such as cloud and Grid computing, peer-to-peer and pervasive computing, sensor and mobile networks,

  4. Internal sources dosimetry

    International Nuclear Information System (INIS)

    Savio, Eduardo

    1994-01-01

    The absorbed dose needs to be estimated for risk evaluation in the application of radiopharmaceuticals in Nuclear Medicine practice: internal dosimetry, internal and external sources. Calculation methodology, the Marinelli model, and the MIRD system for absorbed dose calculation based on the biological parameters of the radiopharmaceutical in the human body or individual, the energy of the radiations emitted by the administered radionuclide, and the fraction of emitted energy that is absorbed by the target region. Limitations of the MIRD calculation model. An explanation of the Marinelli method of dosimetry calculation, β dosimetry, γ dosimetry, effective dose, calculation in organs and tissues, examples. Bibliography.
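
    In its simplest form, the MIRD scheme mentioned above reduces, for each target region, to a sum of the cumulated activities in the source regions multiplied by the corresponding S values. A minimal sketch of that sum, with hypothetical activities and S values, is:

```python
# Minimal MIRD-style sum: D(target) = sum over sources of A_tilde(source) * S(target <- source).
# Cumulated activities (MBq*s) and S values (mGy per MBq*s) are hypothetical placeholders.

cumulated_activity = {"liver": 5.0e4, "kidneys": 1.2e4}     # A_tilde per source organ

S = {   # S[target][source]: absorbed dose per unit cumulated activity
    "liver":   {"liver": 3.0e-5, "kidneys": 1.0e-6},
    "kidneys": {"liver": 9.0e-7, "kidneys": 1.1e-4},
}

def absorbed_dose(target):
    return sum(cumulated_activity[src] * S[target][src] for src in cumulated_activity)

for organ in S:
    print(f"D({organ}) = {absorbed_dose(organ):.3f} mGy")
```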

  5. The International Symposium on Grids and Clouds and the Open Grid Forum

    Science.gov (United States)

    The International Symposium on Grids and Clouds 2011 was held at Academia Sinica in Taipei, Taiwan on 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user community focused with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. Linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently the theme for ISGC2011 was the opportunities that better integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First the title - while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Secondly the programming - ISGC 2011 has always included topical workshops and tutorials. But 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum, which held its 31st meeting with a series of working group sessions. The ISGC plenary session included keynote speakers from OGF that highlighted the relevance of standards for the research community. ISGC with its focus on applications and operational aspects complemented well with OGF's focus on standards development. ISGC brought to OGF real-life use cases and needs to be

  6. Monte Carlo computed machine-specific correction factors for reference dosimetry of TomoTherapy static beam for several ion chambers

    International Nuclear Information System (INIS)

    Sterpin, E.; Mackie, T. R.; Vynckier, S.

    2012-01-01

    Purpose: To determine k_{Q_msr,Q_0}^{f_msr,f_0} correction factors for machine-specific reference (msr) conditions by Monte Carlo (MC) simulations for reference dosimetry of TomoTherapy static beams for the ion chambers Exradin A1SL, A12; PTW 30006, 31010 Semiflex, 31014 PinPoint, 31018 microLion; NE 2571. Methods: For the calibration of TomoTherapy units, reference conditions specified in current codes of practice like IAEA/TRS-398 and AAPM/TG-51 cannot be realized. To cope with this issue, Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)] described a new formalism introducing msr factors k_{Q_msr,Q_0}^{f_msr,f_0} for reference dosimetry, applicable to static TomoTherapy beams. In this study, those factors were computed directly using MC simulations for Q_0 corresponding to a simplified 60Co beam in TRS-398 reference conditions (at 10 cm depth). The msr conditions were a 10 × 5 cm² TomoTherapy beam, source-surface distance of 85 cm and 10 cm depth. The chambers were modeled according to technical drawings using the egs++ package and the MC simulations were run with the egs_chamber user code. Phase-space files used as the source input were produced using PENELOPE after simulation of a simplified 60Co beam and of the TomoTherapy treatment head modeled according to technical drawings. Correlated sampling, intermediate phase-space storage, and photon cross-section enhancement variance reduction techniques were used. The simulations were stopped when the combined standard uncertainty was below 0.2%. Results: Computed k_{Q_msr,Q_0}^{f_msr,f_0} values were all close to one, in a range from 0.991 for the PinPoint chamber to 1.000 for the Exradin A12, with a statistical uncertainty below 0.2%. Considering a beam quality Q defined as the TPR_{20,10} for a 6 MV Elekta photon beam (0.661), the additional correction k_{Q_msr,Q}^{f_msr,f_ref} to k_{Q,Q_0} defined in the Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)] formalism was in a range from 0.997 to 1.004. Conclusion: The MC computed
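
    For reference, the msr factor computed above is defined in the cited Alfonso et al. formalism as the ratio between the calibrated chamber response (absorbed dose to water per chamber reading) in the machine-specific reference field and that in the conventional reference field. A compact restatement of that definition, in the notation used above, is:

```latex
% msr correction factor of the Alfonso et al. (2008) formalism
k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_0}
  = \frac{D_{w,Q_\mathrm{msr}}^{f_\mathrm{msr}} \, / \, M_{Q_\mathrm{msr}}^{f_\mathrm{msr}}}
         {D_{w,Q_0}^{f_0} \, / \, M_{Q_0}^{f_0}}
```

    where M is the chamber reading and D_w the absorbed dose to water in the corresponding field f and beam quality Q.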

  7. The Grid is open, so please come in…

    CERN Multimedia

    Caroline Duc

    2012-01-01

    During the week of 17 to 21 September 2012, the European Grid Infrastructure Technical Forum was held in Prague. At this event, organised by EGI (European Grid Infrastructure), grid computing experts set about tackling the challenge of opening their doors to a still wider community. This provided an excellent opportunity to look back at similar initiatives by EGI in the past.   EGI's aim is to coordinate the computing resources of the European Grid Infrastructure and to encourage exchanges between the collaboration and users. Initially dedicated mainly to high-energy particle physics, the European Grid Infrastructure is now welcoming new disciplines and communities. The EGI Technical Forum is organised once a year and is a key date in the community's calendar. The 2012 edition, organised in Prague, was an opportunity to review the advances made and to look constructively into a future where the use of computing grids becomes more widespread. Since 2010, EGI has supported the ...

  8. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
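
    The one-dimensional redistribution step described above amounts to solving a small tridiagonal system for the new point locations along each coordinate line. The sketch below is a simplified spring-analogy equidistribution (interval weights proportional to the local solution gradient) solved with the Thomas algorithm; it illustrates the idea only and is not the actual SAGE force model.

```python
# Simplified 1-D solution-adaptive redistribution (spring/equidistribution analogy),
# solved with the Thomas algorithm. Illustrative only; not the SAGE force model.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adapt_line(x_old, f_old, amp=5.0):
    """One redistribution pass: cluster points where |df/dx| is large; endpoints fixed."""
    w = 1.0 + amp * np.abs(np.diff(f_old) / np.diff(x_old))   # weight per old interval
    n = len(x_old)
    a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = np.zeros(n)
    d[0], d[-1] = x_old[0], x_old[-1]                          # fixed boundary points
    a[1:-1], c[1:-1] = w[:-1], w[1:]
    b[1:-1] = -(w[:-1] + w[1:])
    return thomas(a, b, c, d)

x = np.linspace(0.0, 1.0, 21)
f = np.tanh(20.0 * (x - 0.5))                                  # steep gradient near x = 0.5
print(adapt_line(x, f))                                        # points cluster around x = 0.5
```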

  9. The LHCb Grid Simulation

    CERN Multimedia

    Baranov, Alexander

    2016-01-01

    The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers in different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other by their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential for the Grid work. Moreover, job scheduling and data distribution strategy have a great impact on the grid performance. However, it is hard to choose an appropriate algorithm and strategies as they need a lot of time to be tested on the real grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites and their number of CPUs, amount of disk storage and bandwidth connection. We demonstrate how well the simulator reproduces the grid work, show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, consider methods ...

  10. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produced petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  11. Service task partition and distribution in star topology computer grid subject to data security constraints

    Energy Technology Data Exchange (ETDEWEB)

    Xiang Yanping [Collaborative Autonomic Computing Laboratory, School of Computer Science, University of Electronic Science and Technology of China (China); Levitin, Gregory, E-mail: levitin@iec.co.il [Collaborative Autonomic Computing Laboratory, School of Computer Science, University of Electronic Science and Technology of China (China); Israel electric corporation, P. O. Box 10, Haifa 31000 (Israel)

    2011-11-15

    The paper considers grid computing systems in which the resource management systems (RMS) can divide service tasks into execution blocks (EBs) and send these blocks to different resources. In order to provide a desired level of service reliability the RMS can assign the same blocks to several independent resources for parallel execution. The data security is a crucial issue in distributed computing that affects the execution policy. By the optimal service task partition into the EBs and their distribution among resources, one can achieve the greatest possible service reliability and/or expected performance subject to data security constraints. The paper suggests an algorithm for solving this optimization problem. The algorithm is based on the universal generating function technique and on the evolutionary optimization approach. Illustrative examples are presented. Highlights: Grid service with star topology is considered. An algorithm for evaluating service reliability and data security is presented. A tradeoff between the service reliability and data security is analyzed. A procedure for optimal service task partition and distribution is suggested.
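
    As a simplified illustration of the redundancy idea only (not the paper's universal generating function model), the snippet below computes the probability that every execution block completes when each redundantly assigned copy succeeds independently; the resource names and probabilities are hypothetical.

```python
# Simplified service-reliability illustration: an execution block succeeds if at least
# one of its redundantly assigned resources succeeds; the service needs every block.
from math import prod

p_success = {"r1": 0.90, "r2": 0.85, "r3": 0.95}                     # hypothetical resources
assignment = {"EB1": ["r1", "r2"], "EB2": ["r2", "r3"], "EB3": ["r3"]}

def block_reliability(resources):
    # Block fails only if every assigned copy fails.
    return 1.0 - prod(1.0 - p_success[r] for r in resources)

service_reliability = prod(block_reliability(rs) for rs in assignment.values())
print(f"service reliability = {service_reliability:.4f}")
```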

  12. Service task partition and distribution in star topology computer grid subject to data security constraints

    International Nuclear Information System (INIS)

    Xiang Yanping; Levitin, Gregory

    2011-01-01

    The paper considers grid computing systems in which the resource management systems (RMS) can divide service tasks into execution blocks (EBs) and send these blocks to different resources. In order to provide a desired level of service reliability the RMS can assign the same blocks to several independent resources for parallel execution. The data security is a crucial issue in distributed computing that affects the execution policy. By the optimal service task partition into the EBs and their distribution among resources, one can achieve the greatest possible service reliability and/or expected performance subject to data security constraints. The paper suggests an algorithm for solving this optimization problem. The algorithm is based on the universal generating function technique and on the evolutionary optimization approach. Illustrative examples are presented. Highlights: Grid service with star topology is considered. An algorithm for evaluating service reliability and data security is presented. A tradeoff between the service reliability and data security is analyzed. A procedure for optimal service task partition and distribution is suggested.

  13. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural network, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  14. DataGrid passes its exams

    CERN Multimedia

    2003-01-01

    DataGrid, the European project to build a computational and data-intensive grid infrastructure, is now entering its third year. Thanks to its achievements in 2002, it has just come out of its latest annual review with flying colours.

  15. Grid interoperability: the interoperations cookbook

    Energy Technology Data Exchange (ETDEWEB)

    Field, L; Schulz, M [CERN (Switzerland)], E-mail: Laurence.Field@cern.ch, E-mail: Markus.Schulz@cern.ch

    2008-07-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards.

  16. Grid interoperability: the interoperations cookbook

    International Nuclear Information System (INIS)

    Field, L; Schulz, M

    2008-01-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards

  17. Grid Technologies: scientific and industrial prospects

    CERN Multimedia

    2002-01-01

    On Friday 27th, 17:00-21:00, CERN will for the first time be host to the popular 'First Tuesday Geneva' events for entrepreneurs, investors and all those interested in how new technologies will impact industry. Organised by the non-profit group Rezonance, these evening events typically attract over 300 persons, and combine a series of short presentations on a hot topic with an informal networking session. The topic for this 'First Tuesday@CERN' is Grid Technologies. Over the last year, the concept of a Grid of geographically distributed computers providing huge computing resources 'on tap' to companies and institutions has led to a great deal of interest and activity from major computer hardware and software companies. The session is hosted by the CERN openlab for DataGrid applications, a recently established industrial partnership on Grid technologies, and will profile both CERN's activities in this emerging field and those of some key industrial players. Speakers include: Hans Hoffmann: CERN, The LHC a...

  18. Grid for Earth Science Applications

    Science.gov (United States)

    Petitdidier, Monique; Schwichtenberg, Horst

    2013-04-01

    The civil society at large has addressed many strong requirements to the Earth Science community, related in particular to natural and industrial risks, climate change, and new energies. The main critical point is that, on the one hand, the civil society and the public at large ask for certainties, i.e. precise values with a small error range, as concerns predictions at short, medium and long term in all domains; on the other hand, Science can mainly answer only in terms of probability of occurrence. To improve the answer and/or decrease the uncertainties, (1) new observational networks have been deployed in order to have a better geographical coverage, and more accurate measurements have been carried out in key locations and aboard satellites. Following the OECD recommendations on the openness of research and public sector data, more and more data are available for academic organisations and SMEs; (2) new algorithms and methodologies have been developed to face the huge data processing and assimilation into simulations using new technologies and compute resources. Finally, our total knowledge about the complex Earth system is contained in models and measurements; how we put them together has to be managed cleverly. The technical challenge is to put together databases and computing resources to answer the ES challenges. However, all these applications are computationally intensive. Different compute solutions are available and depend on the characteristics of the applications. One of them is the Grid, which is especially efficient for independent or embarrassingly parallel jobs related to statistical and parametric studies. Numerous applications in atmospheric chemistry, meteorology, seismology, hydrology, pollution, climate and biodiversity have been deployed successfully on the Grid. In order to fulfill the requirements of risk management, several prototype applications have been deployed using OGC (Open Geospatial Consortium) components with Grid middleware. The Grid has permitted via a huge number of runs to

  19. Towards a global service registry for the world-wide LHC computing grid

    International Nuclear Information System (INIS)

    Field, Laurence; Pradillo, Maria Alandes; Girolamo, Alessandro Di

    2014-01-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisation's own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisation's configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages
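
    The registered-versus-available comparison described above is essentially a set reconciliation between what the central portals declare and what the Grid services themselves publish. A toy sketch of such a consistency check follows; the endpoint names are hypothetical and this is not the actual GLUE 2.0 schema or interface.

```python
# Toy consistency check between registered resources (what should be there)
# and available resources (what is there). Endpoint names are hypothetical.
registered = {"ce01.example.org", "se02.example.org", "ce03.example.org"}
available  = {"ce01.example.org", "ce03.example.org", "se04.example.org"}

missing    = registered - available    # registered but not publishing -> flag for follow-up
unexpected = available - registered    # publishing but never registered -> flag for validation

print("missing:", sorted(missing))
print("unexpected:", sorted(unexpected))
```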

  20. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    Science.gov (United States)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisation's own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisation's configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the

  1. Investigation of optimal scanning protocol for X-ray computed tomography polymer gel dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Sellakumar, P. [Bangalore Institute of Oncology, 44-45/2, II Cross, RRMR Extension, Bangalore 560 027 (India)], E-mail: psellakumar@rediffmail.com; James Jebaseelan Samuel, E. [School of Science and Humanities, VIT University, Vellore 632 014 (India); Supe, Sanjay S. [Department of Radiation Physics, Kidwai Memorial Institute of Oncology, Hosur Road, Bangalore 560 027 (India)

    2007-11-15

    X-ray computed tomography is one of the potential tools used to evaluate polymer gel dosimeters in three dimensions. The purpose of this study is to investigate the factors which affect the image noise in X-ray CT polymer gel dosimetry. A cylindrical water-filled phantom was imaged with a single-slice Siemens Somatom Emotion CT scanner. The imaging parameters like tube voltage, tube current, slice scan time, slice thickness and reconstruction algorithm were varied independently to study the dependence of noise on each parameter. Reduction of noise with the number of images averaged and the spatial uniformity of the image were also investigated. The normoxic polymer gel PAGAT was manufactured and irradiated using a Siemens Primus linear accelerator. The radiation-induced change in CT number was evaluated using the X-ray CT scanner. From this study it is clear that image noise is reduced with increases in tube voltage, tube current, slice scan time and slice thickness, and is also reduced by increasing the number of images averaged. However, to reduce the tube load and total scan time, it was concluded that a tube voltage of 130 kV, a tube current of 200 mA, a scan time of 1.5 s, and a slice thickness of 3 mm for high dose gradients and 5 mm for low dose gradients were the optimal scanning protocol for this scanner. The optimum number of images to be averaged was concluded to be 25 for X-ray CT polymer gel dosimetry. The choice of reconstruction algorithm was also critical. From the study it is also clear that the CT number increases with imaging tube voltage, which shows the energy dependence of the polymer gel dosimeter. Hence, evaluation of polymer gel dosimeters with an X-ray CT scanner needs optimization of the scanning protocols to reduce the image noise.

  2. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  3. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    Science.gov (United States)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. Involved is the coordination of CFD activities between government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  4. Optimizing Grid Patterns on Photovoltaic Cells

    Science.gov (United States)

    Burger, D. R.

    1984-01-01

    The CELCAL computer program helps in optimizing grid patterns for different photovoltaic cell geometries and metallization processes. Five different power-loss phenomena are associated with the front-surface metal grid pattern on photovoltaic cells.

  5. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis, as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  6. Techniques for radiation measurements: Micro-dosimetry and dosimetry

    International Nuclear Information System (INIS)

    Waker, A. J.

    2006-01-01

    Experimental micro-dosimetry is concerned with the determination of radiation quality and how this can be specified in terms of the distribution of energy deposition arising from the interaction of a radiation field with a particular target site. This paper discusses various techniques that have been developed to measure radiation energy deposition over the three orders of magnitude of site size (nanometre, micrometre and millimetre) which radiation biology suggests are required to fully account for radiation quality. Inevitably, much of the discussion concerns the use of tissue-equivalent proportional counters and variants of this device, but other technologies that have been studied, or are under development, for their potential in experimental micro-dosimetry are also covered. Through an examination of some of the quantities used in radiation metrology and dosimetry, the natural link with micro-dosimetric techniques is shown and the particular benefits of using micro-dosimetric methods for dosimetry are illustrated. (authors)

  7. Progress in Grid Generation: From Chimera to DRAGON Grids

    Science.gov (United States)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach has evolved. A chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient for generating grids about complex geometries and has been demonstrated to deliver accurate aerodynamic predictions of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's practice in 3D is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adapts the strengths of the unstructured grid while at the same time keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use non-structured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera thinking is coined the DRAGON grid. The non-structured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) preserving the strengths of the chimera grid; (2) eliminating difficulties sometimes encountered in the chimera scheme, such as orphan points and bad quality of interpolation stencils; and (3) making grid communication in a fully conservative and consistent manner insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  8. Using Grid for the BABAR Experiment

    International Nuclear Information System (INIS)

    Bozzi, C.

    2005-01-01

    The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility to evolve toward a distributed computing model in a grid environment. We built a prototype system, based on the European Data Grid (EDG), to submit full-scale analysis and Monte Carlo simulation jobs. Computing elements, storage elements, and worker nodes have been installed at SLAC and at various European sites. A BaBar virtual organization (VO) and a test replica catalog (RC) are maintained in Manchester, U.K., and the experiment is using three EDG testbed resource brokers in the U.K. and in Italy. First analysis tests were performed under the assumption that a standard BaBar software release was available at the grid target sites, using RC to register information about the executable and the produced n-tuples. Hundreds of analysis jobs accessing either Objectivity or Root data files ran on the grid. We tested the Monte Carlo production using a farm of the INFN-grid testbed customized to install an Objectivity database and run BaBar simulation software. First simulation production tests were performed using standard Job Description Language commands and the output files were written on the closest storage element. A package that can be officially distributed to grid sites not specifically customized for BaBar has been prepared. We are studying the possibility to add a user friendly interface to access grid services for BaBar

  9. User's manual of a supporting system for treatment planning in boron neutron capture therapy. JAERI computational dosimetry system

    Energy Technology Data Exchange (ETDEWEB)

    Kumada, Hiroaki; Torii, Yoshiya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-09-01

    Boron neutron capture therapy (BNCT) with an epithermal neutron beam is expected to treat effectively malignant tumors that are located deep in the brain. It is indispensable to estimate preliminarily the irradiation dose in the brain of a patient in order to perform epithermal neutron beam BNCT. Thus, the JAERI Computational Dosimetry System (JCDS), which can calculate the dose distributions in the brain, has been developed. JCDS is a software that creates a 3-dimensional head model of a patient by using CT and MRI images, that generates an input data file automatically for the calculation of the neutron flux and gamma-ray dose distribution in the brain by the Monte Carlo code MCNP, and that displays the dose distribution on the head model for dosimetry by using the MCNP calculation results. JCDS has several advantages, as follows: by treating CT data and MRI data, which are medical images, a detailed three-dimensional model of the patient's head can be made easily. The three-dimensional head image is editable to simulate the state of the head after surgical processes such as skin flap opening and bone removal for the BNCT with craniotomy that is being performed in Japan. JCDS can provide information to the Patient Setting System to set the patient in the actual irradiation position swiftly and accurately. This report describes the basic design, dosimetry procedure, operation manual, and data and library structure of JCDS (ver.1.0). (author)

  10. Clinical dosimetry in diagnostic and interventional radiology

    International Nuclear Information System (INIS)

    Dimcheva, M.; Sergieva, S.; Jovanovska, A.

    2012-01-01

    Full text: Introduction: Diagnostic and interventional procedures involving x-rays are the most significant contributor to the total population dose from man-made sources of ionizing radiation. Purpose and aim: X-ray imaging generally covers a diverse range of examination types, many of which are increasing in frequency and technical complexity. Materials and methods: The European Directives 96/29 and 97/43 EURATOM stress the importance of accurate dosimetry and require calibration of all measuring equipment related to the application of ionizing radiation in medicine. Results: The paper gives an overview of the current system of dosimetry of ionizing radiation that is relevant for metrology and clinical applications. It also reflects recently achieved international harmonization in the field promoted by the International Atomic Energy Agency (IAEA). Discussion: The objectives of clinical dose measurements in diagnostic and interventional radiology are multiple, such as assessment of equipment performance or assessment of the risk arising from the use of ionizing radiation. Conclusion: Therefore, from the clinical point of view, the requirements for dosimeters and procedures to assess dose to standard dosimetry phantoms and patients in diverse clinical modalities, such as computed tomography, are presented

  11. Thermoluminescence albedo-neutron dosimetry

    International Nuclear Information System (INIS)

    Strand, T.; Storruste, A.

    1986-10-01

    The report discusses neutron detection with respect to dosimetry and compares different thermoluminescent materials for neutron dosimetry. The construction and calibration of a thermoluminescence albedo neutron dosemeter, developed by the authors, are described

  12. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  13. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  14. Applications of gel dosimetry

    International Nuclear Information System (INIS)

    Ibbott, Geoffrey S

    2004-01-01

    Gel dosimetry has been examined as a clinical dosimeter since the 1950s. During the last two decades, however, a rapid increase in the number of investigators has been seen, and the body of knowledge regarding gel dosimetry has expanded considerably. Gel dosimetry is still considered a research project, and the introduction of this tool into clinical use is proceeding slowly. This paper will review the characteristics of gel dosimetry that make it desirable for clinical use, the postulated and demonstrated applications of gel dosimetry, and some complications, set-backs, and failures that have contributed to the slow introduction into routine clinical use

  15. Is intraoperative real-time dosimetry in prostate seed brachytherapy predictive of biochemical outcome?

    Directory of Open Access Journals (Sweden)

    Daniel Taussky

    2017-06-01

    Full Text Available Purpose: To analyze intraoperative (IO) dosimetry using transrectal ultrasound (TRUS), performed before and after prostate low-dose-rate brachytherapy (LDR-BT), and compare it to dosimetry performed 30 days following the LDR-BT implant (Day 30). Material and methods: A total of 236 patients underwent prostate LDR-BT using 125I that was performed with a three-dimensional TRUS-guided interactive inverse preplanning system (preimplant dosimetry). After the implant procedure, the TRUS was repeated in the operating room, and the dosimetry was recalculated (postimplant dosimetry) and compared to dosimetry on Day 30 computed tomography (CT) scans. Area under curve (AUC) statistics were used for models predictive of dosimetric parameters at Day 30. Results: The median follow-up for patients without biochemical recurrence (BR) was 96 months; the 5-year and 8-year BR-free rates were 96% and 90%, respectively. The postimplant median D90 was 3.8 Gy lower (interquartile range [IQR], 12.4-0.9 Gy), and the V100 only 1% less (IQR, 2.9-0.2%), than in the preimplant dosimetry. When comparing the postimplant and the Day 30 dosimetries, the postimplant median D90 was 9.6 Gy higher (IQR, −9.5-30.3 Gy), and the V100 was 3.2% greater (0.2-8.9%), than in the Day 30 postimplant dosimetry. The variable that best predicted the Day 30 D90 was the postimplant D90 (AUC = 0.62, p = 0.038). None of the analyzed values for IO or Day 30 dosimetry showed any predictive value for BR. Conclusions: Although improving the IO preimplant and postimplant dosimetry improved dosimetry on Day 30, the BR-free rate was not dependent on any dosimetric parameter. Unpredictable factors, such as intraprostatic seed migration and IO factors, prevented accurate prediction of the Day 30 dosimetry.

  16. Development of a guidance guide for dosimetry in computed tomography; Desenvolvimento de um guia orientativo para dosimetria em tomografia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Fontes, Ladyjane Pereira

    2016-07-01

    Due to frequent questions from users of pencil-type ionization chambers calibrated at the Instrument Calibration Laboratory of the Institute of Energy and Nuclear Research (LCI - IPEN) on how to properly apply the factors indicated in their calibration certificates, a guidance document for dosimetry in computed tomography was prepared. The guide assumes prior knowledge of the half-value layer (HVL), since the effective beam energy must be known in order to apply the beam quality correction factor (kq). Evaluating the HVL in CT scanners is a difficult task because of the system geometry, so a survey of existing methodologies for determining the HVL in clinical computed tomography beams was conducted, taking technical, practical and economic factors into account. In this work it was decided to test a Tandem system consisting of absorbing covers made in the IPEN workshop, chosen on the basis of preliminary studies because of its low cost and good response. The Tandem system consists of five cylindrical absorbing covers of 1 mm, 3 mm, 5 mm, 7 mm and 10 mm aluminum and three cylindrical absorbing covers of 15 mm, 25 mm and 35 mm acrylic (PMMA), coupled to a commercial pencil-type ionization chamber widely used in quality control tests in clinical computed tomography beams. From the Tandem curves it was possible to assess HVL values and, from the calibration curve of the pencil-type ionization chamber, to find the kq appropriate to the beam. The resulting guide provides information on how to build the calibration curve as a function of the HVL in order to find kq, and on how to construct the Tandem curve in order to find values close to the HVL. (author)
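
    The HVL referred to above is, by definition, the absorber thickness that halves the measured air kerma. The guide's own procedure is built on Tandem curves, which are not reproduced here; purely as an illustration of the definition, the sketch below estimates the HVL from a measured transmission curve by log-linear interpolation. The function name and the example data are hypothetical.

      import numpy as np

      def hvl_from_transmission(thickness_mm, transmission):
          """Estimate the half-value layer (HVL) by log-linear interpolation of a
          measured transmission curve (kerma with absorber / kerma without)."""
          t = np.asarray(thickness_mm, dtype=float)
          log_tr = np.log(np.asarray(transmission, dtype=float))
          # np.interp needs increasing x values; transmission decreases with thickness
          return float(np.interp(np.log(0.5), log_tr[::-1], t[::-1]))

      # Hypothetical aluminium transmission measurements for a CT beam
      print(hvl_from_transmission([0, 1, 3, 5, 7, 10],
                                  [1.00, 0.93, 0.80, 0.69, 0.60, 0.49]))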

  17. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are applied extensively to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to the application failure probability. In an optical grid, when a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement, improve network resource utilization, and realize a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
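
    The record does not give the underlying failure model. As a minimal sketch, assuming each task of the DAG fails independently with a known probability and any single task failure fails the whole application (i.e. no backup), the application failure probability is one minus the product of the task success probabilities. The function name and numbers below are illustrative only.

      def application_failure_probability(task_failure_probs):
          """Failure probability of a DAG application, assuming independent task
          failures and no backup: the application fails if any task fails."""
          success = 1.0
          for p in task_failure_probs:
              success *= (1.0 - p)
          return 1.0 - success

      # Three tasks with 1%, 2% and 5% failure probability
      print(application_failure_probability([0.01, 0.02, 0.05]))  # ~0.078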

  18. Grids, Clouds, and Virtualization

    Science.gov (United States)

    Cafaro, Massimo; Aloisio, Giovanni

    This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.

  19. Cost-effective pediatric head and body phantoms for computed tomography dosimetry and its evaluation using pencil ion chamber and CT dose profiler

    Directory of Open Access Journals (Sweden)

    A Saravanakumar

    2015-01-01

    Full Text Available In the present work, pediatric head and body phantoms were fabricated using polymethyl methacrylate (PMMA) at a low cost when compared to commercially available phantoms, for the purpose of computed tomography (CT) dosimetry. The dimensions of the head and body phantoms were 10 cm diameter, 15 cm length and 16 cm diameter, 15 cm length, respectively. The doses received by the head and body phantoms at the center and periphery from a 128-slice CT machine were measured using a 100 mm pencil ion chamber and a 150 mm CT dose profiler (CTDP). Using these values, the weighted computed tomography dose index (CTDIw) and in turn the volumetric CTDI (CTDIv) were calculated for various combinations of tube voltage and current-time product. A similar study was carried out using standard calibrated phantoms, and the results have been compared with those of the fabricated ones to ascertain that the performance of the latter is equivalent to that of the former. Finally, the CTDIv values measured using the fabricated and standard phantoms were compared with the respective values displayed on the console. The difference between the values was well within the limits specified by the Atomic Energy Regulatory Board (AERB), India. These results indicate that the cost-effective pediatric phantoms can be employed for CT dosimetry.
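
    The record does not spell out the formulas; in the convention commonly used for CT dosimetry, the weighted CTDI combines the central and peripheral CTDI100 measurements and the volumetric CTDI divides by the helical pitch. A minimal sketch, with illustrative values:

      def ctdi_w(ctdi_center, ctdi_periphery):
          """Weighted CTDI (mGy): 1/3 of the central plus 2/3 of the peripheral CTDI100."""
          return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

      def ctdi_vol(ctdi_w_value, pitch):
          """Volumetric CTDI (mGy): weighted CTDI divided by the helical pitch."""
          return ctdi_w_value / pitch

      # Illustrative head-phantom readings (mGy) at pitch 1.0
      w = ctdi_w(30.0, 32.0)
      print(w, ctdi_vol(w, 1.0))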

  20. Organ dosimetry

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Otis, Mark D.; Kuhn, Thomas; Kerr, George D.; Eckerman, Keith F.; Cristy, Mark; Ryman, Jeffrey C.; Tang, Jabo S.; Maruyama, Takashi

    1987-01-01

    This chapter describes the technical approach, complicating factors, and sensitivities and uncertainties of calculations of doses to the organs of the A-bomb survivors. The object of the effort described here is to provide data that enable the dosimetry system to determine the fluence, kerma, absorbed dose, and similar quantities in 14 organs and the fetus, specified as being of radiobiological interest. This object was accomplished through the use of adjoint Monte Carlo computations, which use a number of random particle histories to determine the relationship of incident neutrons and gamma rays to those transported to a target organ. The system uses these histories to correlate externally incident energy- and angle-differential fluences with the fluence spectrum (energy differential only) within the target organ. In order for the system to work in the most efficient manner possible, two levels of data were provided. The first level, represented by approximately 6,000 random adjoint-particle histories, enables the computation of the fluence spectrum with sufficient precision to provide statistically reliable (±6%) mean doses within any given organ. With this limited history inventory, the system can be run rapidly for all survivors. Mean organ dose and dose uncertainty are obtainable in this mode. The second mode of operation enables the system to produce a good approximation to the fluence spectrum within any organ, or to produce the dose in each of an array of organ subvolumes. To be statistically reliable, this level of detail requires far more random histories, approximately 40,000 per organ. Thus, operation of the dosimetry system in this mode (i.e., with this data set) is intended to be on an as-needed, organ-specific basis, since the system run time is eight times that in the mean dose mode. (author)
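
    The record describes correlating the externally incident energy- and angle-differential fluence with the fluence, and hence the dose, in a target organ by means of precomputed adjoint histories. Conceptually, once the organ response has been tabulated, the remaining step is a folding of the incident fluence with that response. The sketch below shows only that folding step, with hypothetical array shapes and values; it is not the survivor dosimetry system itself.

      import numpy as np

      def organ_dose(incident_fluence, response):
          """Fold an incident energy- and angle-differential fluence (n_E x n_angle)
          with an organ dose-response matrix of the same shape (e.g. tabulated from
          adjoint Monte Carlo histories) to obtain the mean organ dose."""
          return float(np.sum(np.asarray(incident_fluence) * np.asarray(response)))

      # Hypothetical 3 energy groups x 2 angular bins
      phi = np.array([[1.0e9, 5.0e8], [8.0e8, 4.0e8], [2.0e8, 1.0e8]])               # particles/cm^2
      resp = np.array([[3.0e-11, 2.5e-11], [2.0e-11, 1.8e-11], [1.0e-11, 9.0e-12]])  # Gy per unit fluence
      print(organ_dose(phi, resp))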

  1. BIG: a Grid Portal for Biomedical Data and Images

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2004-06-01

    Full Text Available Modern management of biomedical systems involves the use of many distributed resources, such as high performance computational resources to analyze biomedical data, mass storage systems to store them, medical instruments (microscopes, tomographs, etc.), and advanced visualization and rendering tools. Grids offer the computational power, security and availability needed by such novel applications. This paper presents BIG (Biomedical Imaging Grid), a Web-based Grid portal for the management of biomedical information (data and images) in a distributed environment. BIG is an interactive environment that deals with complex users' requests regarding the acquisition of biomedical data and the "processing" and "delivering" of biomedical images, using the power and security of Computational Grids.

  2. Data Grids and High Energy Physics: A Melbourne Perspective

    Science.gov (United States)

    Winton, Lyle

    2003-04-01

    The University of Melbourne, Experimental Particle Physics group recognises that the future of computing is an important issue for the scientific community. It is in the nature of research for the questions posed to become more complex, requiring larger computing resources for each generation of experiment. As institutes and universities around the world increasingly pool their resources and work together to solve these questions, the need arises for more sophisticated computing techniques. One such technique, grid computing, is under investigation by many institutes across many disciplines and is the focus of much development in the computing community. ‘The Grid’, as it is commonly named, is heralded as the future of computing for research, education, and industry alike. This paper will introduce the basic concepts of grid technologies including the Globus toolkit and data grids as of July 2002. It will highlight the challenges faced in developing appropriate resource brokers and schedulers, and will look at the future of grids within high energy physics.

  3. Understanding and Mastering Dynamics in Computing Grids Processing Moldable Tasks with User-Level Overlay

    CERN Document Server

    Moscicki, Jakub Tomasz

    Scientific communities are using a growing number of distributed systems, from local batch systems, community-specific services and supercomputers to general-purpose, global grid infrastructures. Increasing the research capabilities for science is the raison d'être of such infrastructures, which provide access to diversified computational, storage and data resources at large scales. Grids are rather chaotic, highly heterogeneous, decentralized systems where unpredictable workloads, component failures and variability of execution environments are commonplace. Understanding and mastering the heterogeneity and dynamics of such distributed systems is prohibitive for end users if they are not supported by appropriate methods and tools. The time cost to learn and use the interfaces and idiosyncrasies of different distributed environments is another challenge. Obtaining more reliable application execution times and boosting parallel speedup are important to increase the research capabilities of scientific communities. L...

  4. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    Science.gov (United States)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  5. Dosimetry in radiodiagnosis. Individual irradiation card. Dosimetric application of electrets

    International Nuclear Information System (INIS)

    Lisbona, Albert.

    1981-09-01

    This study deals with dosimetry in radiodiagnosis and contains two parts. First, combining dosimetric data acquisition from an ionization chamber with a microcomputer allows individual irradiation cards to be produced for a well-established examination. The method can be extended to almost all radiological examinations. The second part describes original work on the application of electrets to dosimetry in radiodiagnosis. Finally, a theoretical study is presented; it takes into account the different phenomena involved and allows a first interpretation of the experimental results [fr]

  6. Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids

    Science.gov (United States)

    Ma, Xinrong; Duan, Zhijian

    2018-04-01

    High-order discontinuous Galerkin finite element methods (DGFEM) are known to be good methods for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand substantial computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used in order to improve the computational efficiency of the DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. To keep the load balanced across processors, a domain decomposition method was employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that the parallel algorithm improves speedup and efficiency significantly and is suitable for calculating complex flows.
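
    The record names a three-stage, third-order TVD Runge-Kutta scheme without reproducing it. A widely used scheme of that description (often attributed to Shu and Osher) advances the semi-discrete system du/dt = L(u) as in the sketch below; whether the paper uses exactly this variant is an assumption.

      import numpy as np

      def tvd_rk3_step(u, dt, L):
          """One step of the three-stage, third-order TVD Runge-Kutta scheme for
          du/dt = L(u), where L is the (DG) spatial residual operator."""
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

      # Toy example: linear decay du/dt = -u over ten steps
      u = np.array([1.0, 2.0])
      for _ in range(10):
          u = tvd_rk3_step(u, 0.1, lambda v: -v)
      print(u)  # close to exp(-1) times the initial values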

  7. Search for β2 adrenergic receptor ligands by virtual screening via grid computing and investigation of binding modes by docking and molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Qifeng Bai

    Full Text Available We designed a program called MolGridCal that can be used to screen small-molecule databases by grid computing on the basis of the JPPF grid environment. Based on the MolGridCal program, we propose an integrated strategy for virtual screening and binding mode investigation that combines molecular docking, molecular dynamics (MD) simulations and free energy calculations. To test the effectiveness of MolGridCal, we screened potential ligands for the β2 adrenergic receptor (β2AR) from a database containing 50,000 small molecules. MolGridCal can not only send tasks to the grid server automatically, but can also distribute tasks using the screensaver function. In the virtual screening results, the known β2AR agonist BI-167107 ranks among the top 2% of the screened candidates, indicating that MolGridCal can give reasonable results. To further study the binding mode and refine the results of MolGridCal, more accurate docking and scoring methods are used to estimate the binding affinity of the top three molecules (the agonist BI-167107, the neutral antagonist alprenolol and the inverse agonist ICI 118,551). The results indicate that the agonist BI-167107 has the best binding affinity. MD simulation and free energy calculation are employed to investigate the dynamic interaction mechanism between the ligands and β2AR. The results show that the agonist BI-167107 also has the lowest binding free energy. This study can provide a new way to perform virtual screening effectively by integrating molecular docking based on grid computing, MD simulations and free energy calculations. The source codes of MolGridCal are freely available at http://molgridcal.codeplex.com.

  8. The thermoluminscent dosimetry service of the radiation protection bureau

    International Nuclear Information System (INIS)

    1978-12-01

    Thermoluminescent materials have been used in radiation dosimetry for many years, but their application to nationwide personnel dosimetry has been scarce. An undertaking of this nature requires that methods be established for the identification of dosimeters and for fast interpretation and communication of dose to users across the country. It is also necessary that records of the cumulative dose of individual radiation workers be continuously updated, and that such records be maintained for a prolonged period. To do this, many problems pertaining to the associated equipment, viz. the computer, the TL reader and their interfacing, and to the operational procedures of the service had to be resolved. Since April 1977, the Radiation Protection Bureau has been providing a Thermoluminescent Dosimetry Service to Canadian radiation workers. This document describes the RPB dosimeter, its characteristics, various aspects of the service, the objectives of the service, and how the goals of the service are achieved. (auth)

  9. Services on Application Level in Grid for Scientific Calculations

    OpenAIRE

    Goranova, Radoslava

    2010-01-01

    AMS Subj. Classification: 00-02, (General) The Grid is a hardware and software infrastructure that coordinates access to distributed computational and data resources, shared by different institutes, computational centres and organizations. The Open Grid Services Architecture (OGSA) describes an architecture for a service-oriented grid computing environment, based on Web service technologies, WSDL and SOAP. In this article we investigate possibilities for realization of business process com...

  10. Development of the JAERI computational dosimetry system (JCDS) for boron neutron capture therapy. Cooperative research

    Energy Technology Data Exchange (ETDEWEB)

    Kumada, Hiroaki; Yamamoto, Kazuyoshi; Torii, Yoshiya; Uchiyama, Junzo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Matsumura, Akira; Yamamoto, Tetsuya; Nose, Tadao [Tsukuba Univ., Tsukuba, Ibaraki (Japan); Nakagawa, Yoshinobu [National Sanatorium Kagawa-Children' s Hospital, Kagawa (Japan); Kageji, Teruyoshi [Tokushima Univ., Tokushima (Japan)

    2003-03-01

    The Neutron Beam Facility at JRR-4 enables us to carry out boron neutron capture therapy (BNCT) with an epithermal neutron beam. In order to make treatment plans for epithermal neutron beam BNCT, it is necessary to estimate the radiation doses in a patient's head in advance. The JAERI Computational Dosimetry System (JCDS), which supports treatment planning for epithermal neutron beam BNCT by estimating the distributions of radiation doses in a patient's head through simulation, was developed. JCDS is software that creates a three-dimensional head model of a patient from CT and MRI images, automatically generates an input data file for calculating the neutron flux and gamma-ray dose distributions in the brain with the Monte Carlo code MCNP, and displays these dose distributions on the head model for dosimetry using the MCNP calculation results. JCDS has several advantages. Because it uses CT and MRI data, which are medical images, a detailed three-dimensional model of the patient's head can be made easily. The three-dimensional head image can be edited to simulate the state of the head after surgical procedures such as skin flap opening and bone removal, as required for the BNCT with craniotomy being performed in Japan. JCDS can provide information to the Patient Setting System, which helps set the patient in the actual irradiation position swiftly and accurately. This report describes the basic design of JCDS, its functions in several processing steps, its calculation methods, and its characteristics and performance. (author)

  11. Dosimetry Service

    CERN Multimedia

    2005-01-01

    Please remember to read your dosimeter at least once a month. Regular read-outs are vital to ensure that your personal dose is periodically monitored. Dosimeters should be read even if you have not visited the controlled areas. Dosimetry Service - Tel. 72155 http://cern.ch/rp-dosimetry

  12. SU-F-T-562: Validation of EPID-Based Dosimetry for FSRS Commissioning

    International Nuclear Information System (INIS)

    Song, Y; Saleh, Z; Obcemea, C; Chan, M; Tang, X; Lim, S; Lovelock, D; Ballangrud, A; Mueller, B; Zinovoy, M; Gelblum, D; Mychalczak, B; Both, S

    2016-01-01

    Purpose: The prevailing approach to frameless SRS (fSRS) small field dosimetry is Gafchromic film. Though providing continuous information, its intrinsic uncertainties in fabrication, response, scan, and calibration often make film dosimetry subject to different interpretations. In this study, we explored the feasibility of using EPID portal dosimetry as a viable alternative to film for small field dosimetry. Methods: Plans prescribing a dose of 21 Gy were created on a flat solid water phantom with Eclipse V11 and iPlan for small static square fields (1.0 to 3.0 cm). In addition, two clinical test plans were computed by employing iPlan on a CIRS Kesler head phantom for target dimensions of 1.2 cm and 2.0 cm. Corresponding portal dosimetry plans were computed using the Eclipse TPS and delivered on a Varian TrueBeam machine. EBT-XD film dosimetry was performed as a reference. The isocenter doses were measured using EPID, OSLD, a stereotactic diode, and a CC01 ion chamber. Results: EPID doses at the center of the square field were higher than the Eclipse TPS predicted portal doses, with the mean difference being 2.42±0.65%. Doses measured by EBT-XD film, OSLD, stereotactic diode, and CC01 ion chamber revealed smaller differences (except OSLDs), with mean differences being 0.36±3.11%, 4.12±4.13%, 1.7±2.76%, 1.45±2.37% for Eclipse and −1.36±0.85%, 2.38±4.2%, −0.03±0.50%, −0.27±0.78% for iPlan. The profiles measured by EPID and EBT-XD film matched the TPS (Eclipse and iPlan) predicted ones within 3.0%. For the two clinical test plans, the EPID mean doses at the center of the field were 2.66±0.68% and 2.33±0.32% higher than the TPS predicted doses. Conclusion: We found that results obtained with EPID portal dosimetry were slightly higher (∼2%) than those obtained with EBT-XD film, diode, and CC01 ion chamber, with the exception of OSLDs, but well within the IROC tolerance (5.0%). Therefore, EPID has the potential to become a viable real-time alternative to film dosimetry.

  13. SU-F-T-562: Validation of EPID-Based Dosimetry for FSRS Commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Song, Y; Saleh, Z; Obcemea, C; Chan, M; Tang, X; Lim, S; Lovelock, D; Ballangrud, A; Mueller, B; Zinovoy, M; Gelblum, D; Mychalczak, B; Both, S [Memorial Sloan Kettering Cancer Center, NY (United States)

    2016-06-15

    Purpose: The prevailing approach to frameless SRS (fSRS) small field dosimetry is Gafchromic film. Though providing continuous information, its intrinsic uncertainties in fabrication, response, scan, and calibration often make film dosimetry subject to different interpretations. In this study, we explored the feasibility of using EPID portal dosimetry as a viable alternative to film for small field dosimetry. Methods: Plans prescribing a dose of 21 Gy were created on a flat solid water phantom with Eclipse V11 and iPlan for small static square fields (1.0 to 3.0 cm). In addition, two clinical test plans were computed by employing iPlan on a CIRS Kesler head phantom for target dimensions of 1.2 cm and 2.0 cm. Corresponding portal dosimetry plans were computed using the Eclipse TPS and delivered on a Varian TrueBeam machine. EBT-XD film dosimetry was performed as a reference. The isocenter doses were measured using EPID, OSLD, a stereotactic diode, and a CC01 ion chamber. Results: EPID doses at the center of the square field were higher than the Eclipse TPS predicted portal doses, with the mean difference being 2.42±0.65%. Doses measured by EBT-XD film, OSLD, stereotactic diode, and CC01 ion chamber revealed smaller differences (except OSLDs), with mean differences being 0.36±3.11%, 4.12±4.13%, 1.7±2.76%, 1.45±2.37% for Eclipse and −1.36±0.85%, 2.38±4.2%, −0.03±0.50%, −0.27±0.78% for iPlan. The profiles measured by EPID and EBT-XD film matched the TPS (Eclipse and iPlan) predicted ones within 3.0%. For the two clinical test plans, the EPID mean doses at the center of the field were 2.66±0.68% and 2.33±0.32% higher than the TPS predicted doses. Conclusion: We found that results obtained with EPID portal dosimetry were slightly higher (∼2%) than those obtained with EBT-XD film, diode, and CC01 ion chamber, with the exception of OSLDs, but well within the IROC tolerance (5.0%). Therefore, EPID has the potential to become a viable real-time alternative to film dosimetry.

  14. Thermoluminescence in medical dosimetry

    International Nuclear Information System (INIS)

    Rivera, T.

    2011-10-01

    Thermoluminescence (TL) dosimetry is applied worldwide to the dosimetry of ionizing radiation, especially personal and medical dosimetry. The method is of particular interest for in vivo measurements because TL dosimeters have the advantage of being very sensitive within a very small volume, are tissue-equivalent, and do not need additional accessories (for example, cables or an electrometer). The main characteristics of the various TL materials relevant to radiation measurements and practical applications are: the glow curve, batch homogeneity, signal stability after irradiation, precision and accuracy, the dose response, and the energy dependence. In this work a brief summary of advances in radiation dosimetry by means of thermally stimulated luminescence and its application to dosimetry in radiotherapy is presented. (Author)

  15. A study of authorization architectures for grid security

    International Nuclear Information System (INIS)

    Pang Yanguang; Sun Gongxing; Pei Erming; Ma Nan

    2006-01-01

    Grid security is one of the key issues in grid computing, and current research focuses on grid authorization. The drawbacks of the common GSI (Grid Security Infrastructure) authorization are briefly discussed first; then several of the latest grid authorization architectures are analyzed with respect to their structures, policy descriptions, engines and applications; finally, their features are summarized. (authors)

  16. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Full Text Available Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms to achieve optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The basic step is to sort the list of completion times of each task and obtain its average value; the maximum average is then determined, and the task with the maximum average is allocated to the machine that gives it the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
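
    A minimal sketch of this allocation loop, following our reading of the abstract rather than the authors' reference code (in particular, updating machine ready times between iterations is our assumption):

      def sort_mid_schedule(exec_times):
          """exec_times[t][m] is the execution time of task t on machine m.
          Repeatedly pick the unallocated task with the largest average completion
          time and assign it to the machine that finishes it earliest."""
          n_tasks, n_machines = len(exec_times), len(exec_times[0])
          ready = [0.0] * n_machines          # when each machine becomes free (assumption)
          unallocated = set(range(n_tasks))
          schedule = {}
          while unallocated:
              completion = {t: [ready[m] + exec_times[t][m] for m in range(n_machines)]
                            for t in unallocated}
              task = max(unallocated, key=lambda t: sum(completion[t]) / n_machines)
              machine = min(range(n_machines), key=lambda m: completion[task][m])
              schedule[task] = machine
              ready[machine] = completion[task][machine]
              unallocated.remove(task)
          return schedule, max(ready)          # assignment and makespan

      # Three tasks on two machines (illustrative execution times)
      print(sort_mid_schedule([[4, 7], [3, 9], [8, 2]]))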

  17. Neutron dosimetry - A review

    Energy Technology Data Exchange (ETDEWEB)

    Baum, J W

    1955-03-29

    This review summarizes information on the following subjects: (1) physical processes of importance in neutron dosimetry; (2) biological effects of neutrons; (3) neutron sources; and (4) instruments and methods used in neutron dosimetry. Also, possible improvements in dosimetry instrumentation are outlined and discussed. (author)

  18. JENDL Dosimetry File

    International Nuclear Information System (INIS)

    Nakazawa, Masaharu; Iguchi, Tetsuo; Kobayashi, Katsuhei; Iwasaki, Shin; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo.

    1992-03-01

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of the cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. The data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in graphical form. (author) 76 refs

  19. JENDL Dosimetry File

    Energy Technology Data Exchange (ETDEWEB)

    Nakazawa, Masaharu; Iguchi, Tetsuo [Tokyo Univ. (Japan). Faculty of Engineering; Kobayashi, Katsuhei [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Iwasaki, Shin [Tohoku Univ., Sendai (Japan). Faculty of Engineering; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1992-03-15

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of the cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. The data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in graphical form.

  20. Dosimetry Service

    CERN Multimedia

    2005-01-01

    Please remember to read your dosimeter at least once a month. Regular read-outs are vital to ensure that your personal dose is periodically monitored. Dosimeters should be read even if you have not visited the controlled areas. Dosimetry Service - Tel. 7 2155 http://cern.ch/rp-dosimetry

  1. Dosimetry Service

    CERN Multimedia

    Dosimetry Service

    2005-01-01

    Please remember to read your dosimeter at least once a month. Regular read-outs are vital to ensure that your personal dose is periodically monitored. Dosimeters should be read even if you have not visited the controlled areas. Dosimetry Service Tel. 7 2155 http://cern.ch/rp-dosimetry

  2. The impact of the reassessment of A-bomb dosimetry

    International Nuclear Information System (INIS)

    Kopecky, K.J.; Preston, D.L.

    1988-07-01

    This report examines the anticipated impact of the adoption by RERF of a new atomic bomb radiation dosimetry system to replace the revised tentative 1965 dosimetry system (T65DR). The current binational effort to reassess A-bomb dosimetry will eventually produce information about air doses and attenuation due to shielding by structures and body tissue. A method for computing individual survivors' total body surface exposure doses and organ doses from such data was developed, and a set of interim 1985 dosimetry (I85D) estimates was computed by this method using the data available to RERF in late 1984. Estimates of I85D total body surface exposure doses could be computed for 64,804 of 91,231 exposed survivors with T65DR dose estimates; following present plans, revised dose estimates may become available for an additional group of 10,000 to 12,000 exposed survivors. Mortality from leukemia and from all cancers except leukemia was examined in relation to I85D total body surface exposure doses (gamma plus neutron); parallel analyses using T65DR exposure doses were also conducted for the same set of survivors. Overall estimates of radiogenic excess risk based on I85D total body surface doses were about 50 % greater than those based on T65DR doses. Nonsignificant differences of only 3 % or less between the radiogenic excess risks for Hiroshima and Nagasaki survivors were observed in relation to I85D doses. Modification of the radiation dose response by sex, age at the time of the bombing, or time since exposure was qualitatively similar for I85D and T65DR. For both leukemia and nonleukemic cancer mortality, the radiogenic excess risk was found to increase as a linear function of I85D total body surface dose; significantly poorer fits were obtained with pure quadratic dose-response functions, while linear-quadratic dose responses did not provide significantly better fits. (J.P.N.)

  3. Dosimetry for radiation processing

    DEFF Research Database (Denmark)

    Miller, Arne

    1986-01-01

    During the past few years significant advances have taken place in the different areas of dosimetry for radiation processing, mainly stimulated by the increased interest in radiation for food preservation, plastic processing and sterilization of medical products. Reference services both...... and sterilization dosimetry, optichromic dosimeters in the shape of small tubes for food processing, and ESR spectroscopy of alanine for reference dosimetry. In this paper the special features of radiation processing dosimetry are discussed, several commonly used dosimeters are reviewed, and factors leading...

  4. A study of computational dosimetry and boron biodistribution for ex – situ lung BNCT at RA-3 Reactor

    International Nuclear Information System (INIS)

    Garabalino, M.A.; Trivillin, V. A.; Monti Hughes, A.; Pozzi, E.C.C.; Thorp, S.; Curotto, P; Miller, M.; Santa Cruz, G.A.; Saint Martin, G.; Schwint, A.E.; González, S.J.; Farías, R.O; Portu, A.; Ferraris, S.; Santa María, J.; Lange, F.; Bortolussi, S.; Altieri, S.

    2013-01-01

    Within the context of the preclinical ex-situ BNCT Project for the treatment of diffuse lung metastases, we performed boron biodistribution studies in a sheep model and computational dosimetry studies in human lung to evaluate the potential therapeutic efficacy of the proposed technique. Herein we report preliminary data that supports the use of the sheep model as an adequate human surrogate in terms of boron kinetics and uptake in clinically relevant tissues. Furthermore, the estimation of the potential therapeutic efficacy of the proposed treatment in humans, based on boron uptake values in the large animal model, yields promising tumor control probability values even in the most conservative scenario considered. (author)

  5. ENHANCED HYBRID PSO – ACO ALGORITHM FOR GRID SCHEDULING

    Directory of Open Access Journals (Sweden)

    P. Mathiyalagan

    2010-07-01

    Full Text Available Grid computing is a high performance computing environment for meeting large-scale computational demands. Grid computing involves resource management, task scheduling, security, information management and so on. Task scheduling is a fundamental issue in achieving high performance in grid computing systems. A computational grid is typically heterogeneous in the sense that it combines clusters of varying sizes, and different clusters typically contain processing elements with different levels of performance. In this work, heuristic approaches based on particle swarm optimization and ant colony optimization algorithms are adopted for solving task scheduling problems in a grid environment. Particle Swarm Optimization (PSO) is one of the latest nature-inspired evolutionary optimization techniques. It has good global search ability and has been successfully applied to many areas, such as neural network training. Because the inertia weight in PSO decreases linearly, the convergence rate becomes faster, which leads to a minimal makespan when the algorithm is used for scheduling. To make the convergence rate faster, the PSO algorithm is improved by modifying the inertia parameter, such that it produces better performance and gives an optimized result. The ACO algorithm is improved by modifying the pheromone updating rule. The ACO algorithm is hybridized with the PSO algorithm for efficient results and better convergence of the PSO algorithm.
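
    For background, the standard PSO velocity update with a linearly decreasing inertia weight is sketched below; the paper's specific modification of the inertia parameter and of the pheromone rule is not reproduced, so the parameter values and names here are illustrative only.

      import random

      def pso_velocity_update(v, x, pbest, gbest, iteration, max_iter,
                              w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
          """Standard PSO velocity update with a linearly decreasing inertia weight:
          w falls from w_max to w_min over the run, shifting the search from
          exploration towards exploitation."""
          w = w_max - (w_max - w_min) * iteration / max_iter
          r1, r2 = random.random(), random.random()
          return [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
                  for vi, xi, pb, gb in zip(v, x, pbest, gbest)]

      # One particle in two dimensions, halfway through a 100-iteration run
      print(pso_velocity_update([0.1, -0.2], [1.0, 2.0], [0.8, 1.5], [0.5, 1.0], 50, 100))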

  6. The EURADOS/CONRAD activities on radiation protection dosimetry in medicine

    International Nuclear Information System (INIS)

    Vanhavere, F.; Struelens, L.; Bordy, J.M.; Daures, J.; Denozieres, M.; Buls, N.; Clerinx, P.; Carinou, E.; Clairand, I.; Debroas, J.; Donadille, L.; Itie, C.; Ginjaume, M.; Jansen, J.; Jaervinen, H.; Miljanic, S.; Ranogajec-Komor, M.; Nikodemova, D.; Rimpler, A.; Sans Merce, M.; D'Errico, F.

    2008-01-01

    Full text: This presentation gives an overview on the research activities that EURADOS coordinates in the field of radiation protection dosimetry in medicine. EURADOS is an organization founded in 1981 to advance the scientific understanding and the technical development of the dosimetry of ionising radiation in the fields of radiation protection, radiobiology, radiation therapy and medical diagnosis by promoting collaboration between European laboratories. EURADOS operates by setting up Working Groups dealing with particular topics. Currently funded through the CONRAD project of the 6th EU Framework Programme, EURADOS has working groups on Computational Dosimetry, Internal Dosimetry, Complex mixed radiation fields at workplaces, and Radiation protection dosimetry of medical staff. The latter working group coordinates and promotes European research for the assessment of occupational exposures to staff in therapeutic and diagnostic radiology workplaces. Research is coordinated by sub-groups covering three specific areas: 1: Extremity dosimetry in nuclear medicine and interventional radiology: this sub-group coordinates investigations in the specific fields of the hospitals and studies of doses to different parts of the hands, arms, legs and feet; 2: Practice of double dosimetry: this sub-group reviews and evaluates the different methods and algorithms for the use of dosemeters placed above and below lead aprons, especially to determine personal doses to cardiologists during cardiac catheterisation, but also in CT-fluoroscopy and some nuclear medicine developments (e.g. use of Re-188); and 3: Use of electronic personal dosemeters in interventional radiology: this sub-group coordinates investigations in laboratories and hospitals, and intercomparisons with passive dosemeters with the aim to enable the formulation of standards. (author)

  7. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)

    2015-12-15

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility of accurately simulating scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry

  8. Optorsim: A Grid Simulator for Studying Dynamic Data Replication Strategies

    CERN Document Server

    Bell, William H; Millar, A Paul; Capozza, Luigi; Stockinger, Kurt; Zini, Floriano

    2003-01-01

    Computational grids process large, computationally intensive problems on small data sets. In contrast, data grids process large computational problems that in turn require evaluating, mining and producing large amounts of data. Replication, creating geographically disparate identical copies of data, is regarded as one of the major optimization techniques for reducing data access costs. In this paper, several replication algorithms are discussed. These algorithms were studied using the Grid simulator: OptorSim. OptorSim provides a modular framework within which optimization strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimization techniques. We detail the design and implementation of OptorSim and analyze various replication algorithms based on different Grid workloads.

  9. Grid workflow job execution service 'Pilot'

    Science.gov (United States)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal for the service is to automate computations with multiple stages since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for the tasks execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on the REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in a Russian national grid project GridNNN.
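
    A 'Pilot' job is described above as a directed acyclic graph of tasks. A natural way to drive such a DAG is to submit each task only after all of its predecessors have finished; the sketch below shows that dependency-driven loop as an illustration only, not the GridNNN implementation, and run_task is a stand-in for submission to a WS-GRAM computing element.

      def run_workflow(tasks, parents, run_task):
          """Execute a DAG of tasks: tasks is a list of task names, parents maps a
          task to the set of tasks it depends on, and run_task(name) performs the
          actual execution (in Pilot's case, submission to a grid resource)."""
          done = set()
          pending = list(tasks)
          while pending:
              # tasks whose parents have all finished are ready to run
              ready = [t for t in pending if parents.get(t, set()) <= done]
              if not ready:
                  raise ValueError("cycle or unsatisfiable dependency in workflow")
              for t in ready:
                  run_task(t)        # e.g. submit and wait for completion
                  done.add(t)
                  pending.remove(t)

      # Illustrative three-step workflow
      run_workflow(["fetch", "simulate", "merge"],
                   {"simulate": {"fetch"}, "merge": {"simulate"}},
                   lambda name: print("running", name))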

  10. Grid workflow job execution service 'Pilot'

    International Nuclear Information System (INIS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-01-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal for the service is to automate computations with multiple stages since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for the tasks execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on the REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in a Russian national grid project GridNNN.

  11. Software, component, and service deployment in computational Grids

    International Nuclear Information System (INIS)

    von Laszewski, G.; Blau, E.; Bletzinger, M.; Gawor, J.; Lane, P.; Martin, S.; Russell, M.

    2002-01-01

    Grids comprise an infrastructure that enables scientists to use a diverse set of distributed remote services and resources as part of complex scientific problem-solving processes. We analyze some of the challenges involved in deploying software and components transparently in Grids. We report on three practical solutions used by the Globus Project. Lessons learned from this experience lead us to believe that it is necessary to support a variety of software and component deployment strategies. These strategies are based on the hosting environment

  12. Removal of apparent singularity in grid computations

    International Nuclear Information System (INIS)

    Jakubovics, J.P.

    1993-01-01

    A self-consistency test for magnetic domain wall models was suggested by Aharoni. The test consists of evaluating the ratio S = ε_wall/ε'_wall, where ε_wall is the wall energy and ε'_wall is the integral of a certain function of the direction cosines of the magnetization, α, β, γ, over the volume occupied by the domain wall. If the computed configuration is a good approximation to one corresponding to an energy minimum, the ratio is close to 1. The integrand of ε'_wall contains terms that are inversely proportional to γ. Since γ passes through zero at the centre of the domain wall, these terms have a singularity at these points. The integral is finite and its evaluation does not usually present any problems when the direction cosines are known in terms of continuous functions. In many cases, significantly better results for magnetization configurations of domain walls can be obtained by computations using finite element methods. The direction cosines are then only known at a set of discrete points, and integration over the domain wall is replaced by summation over these points. Evaluation of ε'_wall becomes inaccurate if the terms in the summation are taken to be the values of the integrand at the grid points, because of the large contribution of points close to where γ changes sign. The self-consistency test has recently been generalised to a larger number of cases. The purpose of this paper is to suggest a method of improving the accuracy of the evaluation of integrals in such cases. Since the self-consistency test has so far only been applied to two-dimensional magnetization configurations, the problem and its solution will be presented for that specific case. Generalisation to three or more dimensions is straightforward

  13. Alanine dosimetry for clinical applications. Proceedings

    International Nuclear Information System (INIS)

    Anton, M.

    2006-05-01

    The following topics are dealt with: Therapy level alanine dosimetry at the UK National Physical Laboratory, alanine as a precision validation tool for reference dosimetry, composition of alanine pellet dosimeters, the angular dependence of the alanine ESR spectrum, the CIAE alanine dosimeter for radiotherapy level, a correction for temporal evolution effects in alanine dosimetry, next-generation services for e-traceability to ionizing radiation national standards, establishing e-traceability to NIST high-dose measurement standards, alanine dosimetry of dose delivery from clinical accelerators, the e-scan alanine dosimeter reader, alanine dosimetry at ISS, verification of the integral delivered dose for IMRT treatment in the head and neck region with ESR/alanine dosimetry, alanine dosimetry in helical tomotherapy beams, ESR dosimetry research and development at the University of Palermo, lithium formate as a low-dose EPR radiation dosimeter, sensitivity enhancement of alanine/EPR dosimetry. (HSI)

  14. Adaptive grid generation in a patient-specific cerebral aneurysm

    Science.gov (United States)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and is often neglected by researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce
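
    A minimal sketch of one possible grid-size rule in the spirit of the abstract: refine where velocity and vorticity are strongly non-aligned (complex flow) and near the wall. The 10% and 1% fractions of the artery radius come from the abstract; the angle threshold, the coarse default size and the boundary-layer fraction are assumed values, not the authors' actual equation.

        import numpy as np

        def element_size(velocity, vorticity, artery_radius, wall_distance,
                         boundary_layer_fraction=0.05):
            """Return a local element size from the angle between velocity and
            vorticity and from the distance to the vessel wall (illustrative only)."""
            v = np.asarray(velocity, dtype=float)
            w = np.asarray(vorticity, dtype=float)
            cos_angle = abs(np.dot(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-30)
            if wall_distance < boundary_layer_fraction * artery_radius:
                return 0.01 * artery_radius       # boundary-layer refinement (<1% of radius)
            if cos_angle < 0.9:                   # assumed threshold for "complex" flow
                return 0.10 * artery_radius       # bleb, bifurcation, arterial turns (<10%)
            return 0.25 * artery_radius           # assumed coarser size elsewhere

        # Example call for one point in the lumen (all values illustrative).
        print(element_size([0.3, 0.1, 0.0], [0.0, 0.0, 5.0], artery_radius=2.0, wall_distance=0.5))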

  15. Developments in physical dosimetry and radiation protection; Entwicklungen in der physikalischen Dosimetrie im Strahlenschutz

    Energy Technology Data Exchange (ETDEWEB)

    Fiebich, Martin [Technische Hochschule Mittelhessen, Giessen (Germany). Inst. fuer Medizinische Physik und Strahlenschutz

    2017-07-01

    In the frame of physical dosimetry, new dose units have been defined: the depth personal dose (equivalent dose at 10 mm depth) and the surface personal dose (equivalent dose at 0.07 mm depth). Physical dosimetry is applied for the determination of occupational radiation exposure, the monitoring of radiation controlled areas, the estimation of the radiation exposure of patients during radiotherapy, for quality assurance, and in research projects and optimization challenges. Developments have appeared with respect to point-type measuring chambers, eye lens dosimetry, OSL (optically stimulated luminescence) dosimetry, real-time dosimetry and Monte Carlo methods. New detection limits of about 1 μGy have been reached.

  16. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configurat...

  17. Textbook of dosimetry. 4. ed.

    International Nuclear Information System (INIS)

    Ivanov, V.I.

    1999-01-01

    This textbook of dosimetry is intended for students of physics and technical physics at higher education institutions who are confronted with different applications of atomic energy as well as with the protection of the population and the environment against ionizing radiation. Atomic energy is highly beneficial for man but unfortunately incorporates potential dangers which manifest in accidents, the source of which is either insufficient training of the personnel, criminal negligence or insufficient reliability of the nuclear facilities. The majority of incident and accident events have had personnel errors as their origin. This was the case with both the 'Three Mile Island' (1979) and Chernobyl (1986) NPP accidents. The science of dosimetry acquires a vital significance in accident situations, since the data obtained by its procedures are essential in choosing the correct immediate actions, behaviour tactics and orientation of the liquidation of accident consequences, as well as in ensuring the health of the population. An important accent is placed in this manual on clarifying the nature of the physical processes taking place in dosimetric detectors, on establishing the relation between radiation field characteristics and the detector response, and on defining the different dosimetric quantities. The terminology and the units of physical quantities are based on the international system of units. The book contains the following 15 chapters: 1. Ionizing radiation field; 2. Radiation doses; 3. Physical bases of gamma radiation dosimetry; 4. Ionization dosimetric detectors; 5. Semiconductor dosimetric detectors; 6. Scintillation detection in the gamma radiation dosimetry; 7. Luminescent methods in dosimetry; 8. The photographic and chemical methods of gamma radiation dosimetry; 9. Neutron dosimetry; 10. Dosimetry of high intensity radiation; 11. Dosimetry of high energy Bremsstrahlung; 12. Measurement of the linear energy transfer; 13. Microdosimetry; 14. Dosimetry of incorporated

  18. The spatial resolution in dosimetry with normoxic polymer-gels investigated with the dose modulation transfer approach

    International Nuclear Information System (INIS)

    Bayreder, Christian; Schoen, Robert; Wieland, M.; Georg, Dietmar; Moser, Ewald; Berg, Andreas

    2008-01-01

    The verification of dose distributions with high dose gradients, as appearing in brachytherapy or stereotactic radiotherapy for example, calls for dosimetric methods with sufficiently high spatial resolution. Polymer gels in combination with an MR or optical scanner as a readout device have the potential of performing the verification of a three-dimensional dose distribution within a single measurement. The purpose of this work is to investigate the spatial resolution achievable in MR-based polymer gel dosimetry. The authors show that dosimetry on a very small spatial scale (voxel size: 94 × 94 × 1000 μm³) can be performed with normoxic polymer gels using parameter-selective T2 imaging. In order to prove the spatial resolution obtained, the authors rely on the dose-modulation transfer function (DMTF) concept based on very fine dose modulations at half periods of 200 μm. Very fine periodic dose modulations of a ⁶⁰Co photon field were achieved by means of an absorption grid made of tungsten carbide, specifically designed for quality control. The dose modulation in the polymer gel is compared with that of film dosimetry in one plane via the DMTF concept for general access to the spatial resolution of a dose imaging system. Additionally, Monte Carlo simulations were performed and used for the calculation of the DMTF of both the polymer gel and film dosimetry. The results obtained by film dosimetry agree well with those of the Monte Carlo simulations, whereas polymer gel dosimetry overestimates the amplitude of the fine dose modulations. The authors discuss possible reasons. The in-plane resolution achieved in this work competes with the spatial resolution of standard clinical film-scanner systems

  19. Personnel neutron dosimetry

    International Nuclear Information System (INIS)

    Hankins, D.

    1982-04-01

    This edited transcript of a presentation on personnel neutron dosimetry discusses the accuracy of present dosimetry practices, requirements, calibration, dosemeter types, quality factors, operational problems, and dosimetry for a criticality accident. 32 figs

  20. SU-F-T-508: A Collimator-Based 3-Dimensional Grid Therapy Technique in a Small Animal Radiation Research Platform

    International Nuclear Information System (INIS)

    Jin, J; Kong, V; Zhang, H

    2016-01-01

    Purpose: Three-dimensional (3D) Grid Therapy using MLC-based inverse planning has been proposed to achieve the features of both conformal radiotherapy and spatially fractionated radiotherapy, which may deliver a very high dose in a single fraction to portions of a large tumor with relatively low normal tissue dose. However, the technique requires relatively long delivery times. This study aims to develop a collimator-based 3D grid therapy technique. Here we report the development of the technique in a small animal radiation research platform. Methods: As in the MLC-based technique, 9 non-coplanar beams in special channeling directions were used for the 3D grid therapy technique. Two specially designed grid collimators were fabricated, and one of them was selectively used to match the corresponding gantry/couch angles so that the grid openings of all 9 beams meet in 3D space in the target. A stack of EBT3 films was used as a 3D dosimeter to demonstrate the 3D grid-like dose distribution in the target. Three 1-mm beams were delivered to the stack of films in the area outside the target for alignment when all the films were scanned to reconstruct the 3D dosimetric image. Results: 3D film dosimetry showed a lattice-like dose distribution in the 3D target as well as in the axial, sagittal and coronal planes. The dose outside the target also showed a grid-like dose distribution, and the average dose gradually decreased with the distance to the target. The peak-to-valley ratio was approximately 5:1. The delivery time was 7 minutes for an 18 Gy peak dose, compared to 6 minutes to deliver an 18-Gy 3D conformal plan. Conclusion: We have demonstrated the feasibility of the collimator-based 3D grid therapy technique, which can significantly reduce delivery time compared to the MLC-based inverse planning technique.

  1. SU-F-T-508: A Collimator-Based 3-Dimensional Grid Therapy Technique in a Small Animal Radiation Research Platform

    Energy Technology Data Exchange (ETDEWEB)

    Jin, J; Kong, V; Zhang, H [Georgia Regents University, Augusta, GA (Georgia)

    2016-06-15

    Purpose: Three-dimensional (3D) Grid Therapy using MLC-based inverse planning has been proposed to achieve the features of both conformal radiotherapy and spatially fractionated radiotherapy, which may deliver a very high dose in a single fraction to portions of a large tumor with relatively low normal tissue dose. However, the technique requires relatively long delivery times. This study aims to develop a collimator-based 3D grid therapy technique. Here we report the development of the technique in a small animal radiation research platform. Methods: As in the MLC-based technique, 9 non-coplanar beams in special channeling directions were used for the 3D grid therapy technique. Two specially designed grid collimators were fabricated, and one of them was selectively used to match the corresponding gantry/couch angles so that the grid openings of all 9 beams meet in 3D space in the target. A stack of EBT3 films was used as a 3D dosimeter to demonstrate the 3D grid-like dose distribution in the target. Three 1-mm beams were delivered to the stack of films in the area outside the target for alignment when all the films were scanned to reconstruct the 3D dosimetric image. Results: 3D film dosimetry showed a lattice-like dose distribution in the 3D target as well as in the axial, sagittal and coronal planes. The dose outside the target also showed a grid-like dose distribution, and the average dose gradually decreased with the distance to the target. The peak-to-valley ratio was approximately 5:1. The delivery time was 7 minutes for an 18 Gy peak dose, compared to 6 minutes to deliver an 18-Gy 3D conformal plan. Conclusion: We have demonstrated the feasibility of the collimator-based 3D grid therapy technique, which can significantly reduce delivery time compared to the MLC-based inverse planning technique.

  2. GENMOD - A program for internal dosimetry calculations

    International Nuclear Information System (INIS)

    Dunford, D.W.; Johnson, J.R.

    1987-12-01

    The computer code GENMOD was created to calculate the retention and excretion, and the integrated retention, for selected radionuclides under a variety of exposure conditions. Since its creation, new models have been developed and interfaced to GENMOD. This report describes the models now included in GENMOD and the dosimetry factors database, and gives a brief description of the GENMOD program
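
    The abstract does not describe GENMOD's actual biokinetic models, so the following is only a minimal sketch of the quantities named there (retention and integrated retention), assuming a generic multi-exponential retention function with illustrative parameter values.

        import math

        def retention(t, terms):
            """R(t) for an assumed multi-exponential retention model:
            sum of a_i * exp(-lambda_i * t); `terms` is a list of
            (fraction a_i, rate constant lambda_i per day) pairs."""
            return sum(a * math.exp(-lam * t) for a, lam in terms)

        def integrated_retention(T, terms):
            """Analytic integral of R(t) from 0 to T (days)."""
            return sum(a / lam * (1.0 - math.exp(-lam * T)) for a, lam in terms)

        # Illustrative two-compartment parameters (not actual GENMOD model values).
        terms = [(0.7, math.log(2) / 10.0), (0.3, math.log(2) / 100.0)]
        print(retention(30.0, terms))                   # fraction retained after 30 days
        print(integrated_retention(50 * 365.0, terms))  # 50-year integrated retention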

  3. LHCb: The Evolution of the LHCb Grid Computing Model

    CERN Multimedia

    Arrabito, L; Bouvet, D; Cattaneo, M; Charpentier, P; Clarke, P; Closier, J; Franchini, P; Graciani, R; Lanciotti, E; Mendez, V; Perazzini, S; Nandkumar, R; Remenska, D; Roiser, S; Romanovskiy, V; Santinelli, R; Stagni, F; Tsaregorodtsev, A; Ubeda Garcia, M; Vedaee, A; Zhelezov, A

    2012-01-01

    The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. Taking advantage of these changed conditions, LHCb ran with a higher pileup as well as a much larger charm physics programme, introducing a bigger event size and longer processing times. These changes led to shortages in the offline distributed data processing resources, an increased need of CPU capacity by a factor of 2 for reconstruction, 70% higher storage needs at T1 sites, and subsequently problems with data throughput for file access from the storage elements. To accommodate these changes, the online running conditions and the Computing Model for offline data processing had to be adapted accordingly. This paper describes the changes implemented for the offline data processing on the Grid, relaxing the Monarc model in a first step and going beyond it subsequently. It further describes other operational issues discovered and solved during 2011, present the ...

  4. HP advances Grid Strategy for the adaptive enterprise

    CERN Multimedia

    2003-01-01

    "HP today announced plans to further enable its enterprise infrastructure technologies for grid computing. By leveraging open grid standards, HP plans to help customers simplify the use and management of distributed IT resources. The initiative will integrate industry grid standards, including the Globus Toolkit and Open Grid Services Architecture (OGSA), across HP's enterprise product lines" (1 page).

  5. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centres, such as GoeGrid, a Tier-2 centre of the Worldwide LHC Computing Grid. The main motivation of this thesis was the optimization of GoeGrid performance by efficient monitoring. The goal has been achieved by means of the analysis of GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...
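
    A minimal sketch of the linear SVM idea mentioned in the abstract, applied to made-up monitoring metrics. The feature set, labels and values are assumptions for illustration; the thesis's actual features, data and ANFIS component are not reproduced here.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Hypothetical monitoring samples: [cpu_load, failed_job_fraction, network_error_rate]
        X = np.array([[0.35, 0.01, 0.00],
                      [0.90, 0.20, 0.05],
                      [0.50, 0.02, 0.01],
                      [0.95, 0.35, 0.10]])
        # Labels: 1 = service degraded, 0 = service healthy (illustrative only)
        y = np.array([0, 1, 0, 1])

        model = make_pipeline(StandardScaler(), LinearSVC())
        model.fit(X, y)
        print(model.predict([[0.80, 0.25, 0.04]]))   # classify a new monitoring sample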

  6. Chimera grids in the simulation of three-dimensional flowfields in turbine-blade-coolant passages

    Science.gov (United States)

    Stephens, M. A.; Rimlinger, M. J.; Shih, T. I.-P.; Civinskas, K. C.

    1993-01-01

    When computing flows inside geometrically complex turbine-blade coolant passages, the structure of the grid system used can affect significantly the overall time and cost required to obtain solutions. This paper addresses this issue while evaluating and developing computational tools for the design and analysis of coolant-passages, and is divided into two parts. In the first part, the various types of structured and unstructured grids are compared in relation to their ability to provide solutions in a timely and cost-effective manner. This comparison shows that the overlapping structured grids, known as Chimera grids, can rival and in some instances exceed the cost-effectiveness of unstructured grids in terms of both the man hours needed to generate grids and the amount of computer memory and CPU time needed to obtain solutions. In the second part, a computational tool utilizing Chimera grids was used to compute the flow and heat transfer in two different turbine-blade coolant passages that contain baffles and numerous pin fins. These computations showed the versatility and flexibility offered by Chimera grids.

  7. Radiation processing dosimetry - past, present and future

    International Nuclear Information System (INIS)

    McLaughlin, W.L.

    1999-01-01

    Since the two United Nations Conferences were held in Geneva in 1955 and 1958 on the Peaceful Uses of Atomic Energy and the concurrent foundation of the International Atomic Energy Agency in 1957, the IAEA has fostered high-dose dosimetry and its applications. This field is represented in industrial radiation processing, agricultural programmes, and therapeutic and preventative medicine. Such dosimetry is needed specifically for pest and quarantine control and in the processing of medical products, pharmaceuticals, blood products, foodstuffs, solid, liquid and gaseous wastes, and a variety of useful commodities, e.g. polymers, composites, natural rubber and elastomers, packaging, electronic, and automotive components, as well as in radiotherapy. Improvements and innovations of dosimetry materials and analytical systems and software continue to be important goals for these applications. Some of the recent advances in high-dose dosimetry include tetrazolium salts and substituted polydiacetylene as radiochromic media, on-line real-time as well as integrating semiconductor and diamond-detector monitors, quantitative label dosimeters, photofluorescent sensors for broad dose range applications, and improved and simplified parametric and computational codes for imaging and simulating 3D radiation dose distributions in model products. The use of certain solid-state devices, e.g. optical quality LiF, at low (down to 4K) and high (up to 500 K) temperatures, is of interest for materials testing. There have also been notable developments in experimental dose mapping procedures, e.g. 2D and 3D dose distribution analyses by flat-bed optical scanners and software applied to radiochromic and photofluorescent images. In addition, less expensive EPR spectrometers and new EPR dosimetry materials and high-resolution semiconductor diode arrays, charge injection devices, and photostimulated storage phosphors have been introduced. (author)

  8. Skin dosimetry - radiological protection aspects of skin dosimetry

    International Nuclear Information System (INIS)

    Dennis, J.A.

    1991-01-01

    Following a Workshop on Skin Dosimetry, a summary of the radiological protection aspects is given. Aspects discussed include routine skin monitoring and dose limits, the need for careful skin dosimetry in high accidental exposures, techniques for assessing skin dose at all relevant depths, and the specification of dose quantities to be measured by personal dosemeters and the appropriate methods to be used in their calibration. (UK)

  9. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    International Nuclear Information System (INIS)

    Cass, Tony

    2012-01-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  10. Computerized planning and dosimetry for brachytherapy in carcinomas cervix

    International Nuclear Information System (INIS)

    Kizilbash, N.A.; Jabeen, K.; Hussain, R.

    1996-01-01

    A project on the use of computerized planning and dosimetry for brachytherapy in carcinoma of the cervix was started at NORI (Nuclear Medicine, Oncology and Radiotherapy Institute, Islamabad) in September 1990. A total of 182 patients were included in the study over a period of three years. All of these patients were treated with external radiation as well as intracavitary therapy. Planning and dosimetry were done according to ICRU 38 recommendations. 70 patients were planned with two computers: TP-II (Dr. J. Cunningham's software) and a PC-based system (Dr. Kallinger's software, BTI system). From the results of the two computers, TP-II and PC, it can be seen that the difference in absorbed dose for all recommended points is not going to harm the patient. The dose to the bladder and the rectum in our studies is quite low because of the low activity of the ovoid sources. (author)

  11. ESR Dosimetry

    International Nuclear Information System (INIS)

    Baffa, Oswaldo; Rossi, Bruno; Graeff, Carlos; Kinoshita, Angela; Chen Abrego, Felipe; Santos, Adevailton Bernardo dos

    2004-01-01

    ESR dosimetry is widely used for several applications such as dose assessment in accidents, medical applications and sterilization of food and other materials. In this work the dosimetric properties of natural and synthetic Hydroxyapatite, Alanine, and 2-Methylalanine are presented. Recent results on the use of a K-Band (24 GHz) ESR spectrometer in dosimetry are also presented

  12. Roadmap for the ARC Grid Middleware

    DEFF Research Database (Denmark)

    Kleist, Josva; Eerola, Paula; Ekelöf, Tord

    2006-01-01

    The Advanced Resource Connector (ARC) or the NorduGrid middleware is an open source software solution enabling production quality computational and data Grids, with special emphasis on scalability, stability, reliability and performance. Since its first release in May 2002, the middleware is depl...

  13. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed, a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  14. Agent-Mining of Grid Log-Files: A Case Study

    NARCIS (Netherlands)

    Stoter, A.; Dalmolen, Simon; Mulder, .W.

    2013-01-01

    Grid monitoring requires analysis of large amounts of log files across multiple domains. An approach is described for automated extraction of job-flow information from large computer grids, using software agents and genetic computation. A prototype was created as a first step towards communities of

  15. Dosimetry of ionizing radiation

    International Nuclear Information System (INIS)

    Musilek, L.; Seda, J.; Trousil, J.

    1992-01-01

    The publication deals with a major field of ionizing radiation dosimetry, viz., integrating dosimetric methods, which are the basic means of operative dose determination. It is divided into the following sections: physical and chemical effects of ionizing radiation; integrating dosimetric methods for low radiation doses (film dosimetry, nuclear emulsions, thermoluminescence, radiophotoluminescence, solid-state track detectors, integrating ionization dosemeters); dosimetry of high ionizing radiation doses (chemical dosimetric methods, dosemeters based on the coloring effect, activation detectors); additional methods applicable to integrating dosimetry (exoelectron emission, electron spin resonance, lyoluminescence, etc.); and calibration techniques for dosimetric instrumentation. (Z.S.). 422 refs

  16. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Directory of Open Access Journals (Sweden)

    Olivier Terzo

    2012-10-01

    This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfall worldwide, which can be used by the WFP or other humanitarian assistance organizations to evaluate the event and understand the potentially floodable areas where their assistance is needed. This system is based on precipitation analysis and uses satellite rainfall data with worldwide coverage. This project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for current rainfall condition monitoring over the world. Given the large amount of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture in terms of performance for this specific purpose.

  17. Scheduling in Heterogeneous Grid Environments: The Effects of Data Migration

    Energy Technology Data Exchange (ETDEWEB)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this goal can be fully realized. One problem critical to the effective utilization of computational grids is efficient job scheduling. Our prior work addressed this challenge by defining a grid scheduling architecture and several job migration strategies. The focus of this study is to explore the impact of data migration under a variety of demanding grid conditions. We evaluate our grid scheduling algorithms by simulating compute servers, various groupings of servers into sites, and inter-server networks, using real workloads obtained from leading supercomputing centers. Several key performance metrics are used to compare the behavior of our algorithms against reference local and centralized scheduling schemes. Results show the tremendous benefits of grid scheduling, even in the presence of input/output data migration - while highlighting the importance of utilizing communication-aware scheduling schemes.

  18. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    Marten, H; Koenig, T

    2011-01-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de facto standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault-tolerant and error-correcting design features reduce the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
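
    A minimal sketch of availability calculations for a service composed of several components, assuming the standard series/parallel availability formulas; the abstract does not state which calculation scheme SCC actually uses, so this is illustrative only.

        from functools import reduce

        def availability_series(components):
            """All components are required: availabilities multiply."""
            return reduce(lambda acc, a: acc * a, components, 1.0)

        def availability_parallel(components):
            """Redundant components: the service fails only if all of them fail."""
            unavailability = reduce(lambda acc, a: acc * (1.0 - a), components, 1.0)
            return 1.0 - unavailability

        # Example: a grid service needing a compute element (99.5% available) and a
        # storage element built from two redundant servers (98% each) - values assumed.
        storage = availability_parallel([0.98, 0.98])
        service = availability_series([0.995, storage])
        print(round(service, 4))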

  19. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    Science.gov (United States)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bezier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
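
    GENIE itself is not reproduced here; the sketch below only illustrates the standard (unweighted) 2D transfinite interpolation that the abstract builds on, generating interior grid points from four boundary curves. Function and variable names are illustrative.

        import numpy as np

        def transfinite_interpolation(bottom, top, left, right):
            """Standard 2D transfinite interpolation from four boundary curves given
            as (n,2) / (m,2) arrays of x,y points; matching corner points assumed."""
            n, m = bottom.shape[0], left.shape[0]
            xi = np.linspace(0.0, 1.0, n)
            eta = np.linspace(0.0, 1.0, m)
            grid = np.zeros((m, n, 2))
            for j, e in enumerate(eta):
                for i, x in enumerate(xi):
                    grid[j, i] = ((1 - e) * bottom[i] + e * top[i]
                                  + (1 - x) * left[j] + x * right[j]
                                  - ((1 - x) * (1 - e) * bottom[0] + x * e * top[-1]
                                     + x * (1 - e) * bottom[-1] + (1 - x) * e * top[0]))
            return grid

        # Example: a simple curved channel (boundary curves are made up for illustration).
        s = np.linspace(0.0, 1.0, 21)
        bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
        top = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)
        left = np.stack([np.zeros(11), np.linspace(0.0, 1.0, 11)], axis=1)
        right = np.stack([np.ones(11), np.linspace(0.0, 1.0, 11)], axis=1)
        print(transfinite_interpolation(bottom, top, left, right).shape)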

  20. Reactor Dosimetry State of the Art 2008

    Science.gov (United States)

    Voorbraak, Wim; Debarberis, Luigi; D'Hondt, Pierre; Wagemans, Jan

    2009-08-01

    Oral session 1: Retrospective dosimetry. Retrospective dosimetry of VVER 440 reactor pressure vessel at the 3rd unit of Dukovany NPP / M. Marek ... [et al.]. Retrospective dosimetry study at the RPV of NPP Greifswald unit 1 / J. Konheiser ... [et al.]. Test of prototype detector for retrospective neutron dosimetry of reactor internals and vessel / K. Hayashi ... [et al.]. Neutron doses to the concrete vessel and tendons of a magnox reactor using retrospective dosimetry / D. A. Allen ... [et al.]. A retrospective dosimetry feasibility study for Atucha I / J. Wagemans ... [et al.]. Retrospective reactor dosimetry with zirconium alloy samples in a PWR / L. R. Greenwood and J. P. Foster -- Oral session 2: Experimental techniques. Characterizing the Time-dependent components of reactor n/γ environments / P. J. Griffin, S. M. Luker and A. J. Suo-Anttila. Measurements of the recoil-ion response of silicon carbide detectors to fast neutrons / F. H. Ruddy, J. G. Seidel and F. Franceschini. Measurement of the neutron spectrum of the HB-4 cold source at the high flux isotope reactor at Oak Ridge National Laboratory / J. L. Robertson and E. B. Iverson. Feasibility of cavity ring-down laser spectroscopy for dose rate monitoring on nuclear reactor / H. Tomita ... [et al.]. Measuring transistor damage factors in a non-stable defect environment / D. B. King ... [et al.]. Neutron-detection based monitoring of void effects in boiling water reactors / J. Loberg ... [et al.] -- Poster session 1: Power reactor surveillance, retrospective dosimetry, benchmarks and inter-comparisons, adjustment methods, experimental techniques, transport calculations. Improved diagnostics for analysis of a reactor pulse radiation environment / S. M. Luker ... [et al.]. Simulation of the response of silicon carbide fast neutron detectors / F. Franceschini, F. H. Ruddy and B. Petrović. NSV A-3: a computer code for least-squares adjustment of neutron spectra and measured dosimeter responses / J. G

  1. Use of computational methods for substitution and numerical dosimetry of real bones

    International Nuclear Information System (INIS)

    Silva, I.C.S.; Gonzalez, K.M.L.; Barbosa, A.J.A.; Lucindo Junior, C.R.; Vieira, J.W.; Lima, F.R.A.

    2017-01-01

    Estimating the dose that ionizing radiation deposits in the soft tissues of the skeleton within the cavities of trabecular bones represents one of the greatest difficulties faced by numerical dosimetry. The Numerical Dosimetry Group (GDN/CNPq), Recife-PE, Brazil, has used a method based on micro-CT images. The problem with the implementation of micro-CT is the difficulty of obtaining samples of real bones (OR). The objective of this work was to evaluate a sample of a virtual block of trabecular bone obtained through the nonparametric method based on voxel frequencies (VF), and samples of the climbing plant Luffa aegyptica, whose dry fruit is known as vegetable sponge (BV), as substitutes for OR samples. For this, a theoretical study of the two techniques developed by the GDN was made. For both techniques, the study showed, after the dosimetric evaluations, that the real sample can be replaced by the synthetic samples, since they yielded dose estimates close to those of the real one

  2. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier-3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier-3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling the global view of the contribution from Tier-3 sites to the ATLAS computing activities. Special attention in p...

  3. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  4. The self-adaptation to dynamic failures for efficient virtual organization formations in grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. However, due to the nature of heterogeneous and dynamic resources, dynamic failures in the distributed grid environment occur more often than in traditional computing platforms, which causes failed VO formations. In this paper, we develop a novel self-adaptation mechanism for dynamic failures during VO formation. Such a self-adaptive scheme allows an individual member of a VO to automatically find another available or replaceable one once a failure happens, and therefore makes systems recover automatically from dynamic failures. We define the dynamic failure situations of a system by using two standard indicators: mean time between failures (MTBF) and mean time to recover (MTTR). We model both MTBF and MTTR as Poisson distributions. We investigate and analyze the efficiency of the proposed self-adaptation mechanism by comparing the success probability of VO formations before and after adopting it in three different cases: (1) different failure situations; (2) different organizational structures and scales; (3) different task complexities. The experimental results show that the proposed scheme can automatically adapt to dynamic failures and effectively improve VO formation performance in the event of node failures, which provides a valuable addition to the field.
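
    A minimal sketch of the comparison described in the abstract, simplified to steady-state node availability MTBF/(MTBF+MTTR) rather than the paper's Poisson-based model; the member counts, spare pool and parameter values are assumptions for illustration.

        import random

        def vo_formation_success(n_members, mtbf, mttr, n_spares,
                                 trials=100_000, self_adaptive=True):
            """Estimate the probability that a VO of n_members forms successfully.
            Each node is up with steady-state availability MTBF / (MTBF + MTTR);
            with self-adaptation a failed member may be replaced from n_spares."""
            availability = mtbf / (mtbf + mttr)
            successes = 0
            for _ in range(trials):
                failed = sum(random.random() > availability for _ in range(n_members))
                if self_adaptive:
                    spares_up = sum(random.random() <= availability for _ in range(n_spares))
                    failed = max(0, failed - spares_up)
                if failed == 0:
                    successes += 1
            return successes / trials

        # Example: 10-member VO, MTBF = 200 h, MTTR = 10 h, 3 spare nodes (assumed values).
        print(vo_formation_success(10, 200.0, 10.0, 3, self_adaptive=False))
        print(vo_formation_success(10, 200.0, 10.0, 3, self_adaptive=True))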

  5. Clinical dosimetry

    International Nuclear Information System (INIS)

    Rassow, J.

    1973-01-01

    The main focus of this paper on clinical dosimetry, which is to be understood here as the application of physical dosimetry to accelerators in medical practice, is dosimetric methodology. Following an explanation of the dose parameters and a description of the dose distributions important for clinical practice, as well as of the geometric irradiation parameters, the significance of a series of physical parameters such as accelerator energy, surface energy, average stopping power etc. is dealt with in detail. Following a section on field homogenization with bremsstrahlung and electron radiation, details on dosimetry in clinical practice are given. Finally, a few problems of dosemeter or monitor calibration on accelerators are described. The explanations are supplemented by a series of diagrams and tables. (ORU/LH) [de]

  6. A 3-D chimera grid embedding technique

    Science.gov (United States)

    Benek, J. A.; Buning, P. G.; Steger, J. L.

    1985-01-01

    A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.

  7. Finite volume methods for the incompressible Navier-Stokes equations on unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Meese, Ernst Arne

    1998-07-01

    Most solution methods of computational fluid dynamics (CFD) use structured grids based on curvilinear coordinates for compliance with complex geometries. In a typical industry application, about 80% of the time used to produce the results is spent constructing computational grids. Recently the use of unstructured grids has been strongly advocated. For unstructured grids there are methods for generating them automatically on quite complex domains. This thesis focuses on the design of Navier-Stokes solvers that can cope with unstructured grids and 'low quality grids', thus reducing the need for human intervention in the grid generation.

  8. Fast neutron spectrometry and dosimetry; Spectrometrie et dosimetrie des neutrons rapides

    Energy Technology Data Exchange (ETDEWEB)

    Blaize, S; Ailloud, J; Mariani, J; Millot, J P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1958-07-01

    We have studied fast neutron spectrometry and dosimetry using the recoil protons they produce in hydrogenated material. For spectrometry, we used nuclear emulsions; for dosimetry, we used polyethylene coated with zinc sulphide placed in front of a photomultiplier. (author)

  9. Development and Operation of the D-Grid Infrastructure

    Science.gov (United States)

    Fieseler, Thomas; Gürich, Wolfgang

    D-Grid is the German national grid initiative, funded by the German Federal Ministry of Education and Research. In this paper we present the Core D-Grid, which acts as a condensation nucleus to build a production grid, and the latest developments of the infrastructure. The main difference compared to other international grid initiatives is the support of three middleware systems, namely LCG/gLite, Globus, and UNICORE, for compute resources. Storage resources are connected via SRM/dCache and OGSA-DAI. In contrast to homogeneous communities, the partners in Core D-Grid have different missions and backgrounds (computing centres, universities, research centres), providing heterogeneous hardware from single processors to high-performance supercomputing systems with different operating systems. We present methods to integrate these resources and services for the D-Grid infrastructure, such as a point of information, centralized user and virtual organization management, resource registration, software provision, and policies for the implementation (firewalls, certificates, user mapping).

  10. Dosimetry Control: Technic and methods. Proceedings of the international workshop 'Actual problems of dosimetry'

    International Nuclear Information System (INIS)

    Lyutsko, A.M.; Nesterenko, V.B.; Chudakov, V.A.; Konoplya, E.F.; Milyutin, A.A.

    1997-10-01

    There are a number of unsolved problems in both dosimetric and radiometric control, in biological dosimetry, and in the reconstruction of irradiation doses to the population from radiation incidents, which require the coordination of efforts of scientists in various areas of science. The submitted materials are grouped into five sections: dosimetry engineering, biological dosimetry and markers of radiation impact, dosimetry of medical irradiation, normative and measurement assurance of dosimetric control, and monitoring and reconstruction of doses from radiation incidents

  11. Job scheduling in a heterogenous grid environment

    Energy Technology Data Exchange (ETDEWEB)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
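
    The abstract says the migration policies use system availability and performance, inter-site network bandwidth, and job input/output data volume. A minimal sketch of one plausible cost-based site selection rule follows; the actual algorithms are not reproduced here, and all names and values are illustrative.

        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            perf_factor: float        # relative compute speed (>1 is faster)
            queue_wait_hours: float   # estimated wait until the job can start
            bandwidth_mb_s: float     # bandwidth from the job's current site

        def estimated_turnaround(job_cpu_hours, job_data_mb, site):
            """Turnaround = queue wait + compute time + time to migrate input/output data."""
            compute = job_cpu_hours / site.perf_factor
            transfer = job_data_mb / site.bandwidth_mb_s / 3600.0   # seconds -> hours
            return site.queue_wait_hours + compute + transfer

        def pick_site(job_cpu_hours, job_data_mb, sites):
            return min(sites, key=lambda s: estimated_turnaround(job_cpu_hours, job_data_mb, s))

        sites = [Site("local", 1.0, 6.0, 1e9),        # effectively no transfer cost
                 Site("remote-A", 2.0, 1.0, 50.0),
                 Site("remote-B", 1.5, 0.5, 10.0)]
        print(pick_site(job_cpu_hours=12.0, job_data_mb=200_000, sites=sites).name)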

  12. Dosimetry applications in GATE Monte Carlo toolkit.

    Science.gov (United States)

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Nevada test site neutron dosimetry-problems/solutions

    International Nuclear Information System (INIS)

    Sygitowicz, L.S.; Bastian, C.T.; Wells, I.J.; Koch, P.N.

    1991-01-01

    Historically, neutron dosimetry at the NTS was done using NTA film and albedo LiF TLDs. In 1987 the dosimeter type was changed from the albedo TLD-based system to a CR-39 track-etch-based system modeled after the program developed by D. Hankins at LLNL. Routine issue and return are performed quarterly for selected personnel using bar-code readers at permanent locations. The capability exists for work-site issue as needed. Issue data are transmitted by telephone to a central computer where they are stored until the dosimeter is returned, processed and read, and the dose calculation is performed. Dose equivalent calculations are performed using LOTUS 123 and the results are printed as a hard-copy record. The issue and dose information are hand-entered into the dosimetry database. An application is currently being developed to automate this sequence

  14. Dosimetry and Calibration Section

    International Nuclear Information System (INIS)

    Otto, T.

    1999-01-01

    The Dosimetry and Calibration Section fulfils two tasks within CERN's Radiation Protection Group: the Individual Dosimetry Service monitors more than 5000 persons potentially exposed to ionizing radiation on the CERN sites, and the Calibration Laboratory verifies throughout the year, at regular intervals, over 1000 instruments, monitors, and electronic dosimeters used by RP Group. The establishment of a Quality Assurance System for the Individual Dosimetry Service, a requirement of the new Swiss Ordinance for personal dosimetry, put a considerable workload on the section. Together with an external consultant it was decided to identify and then describe the different 'processes' of the routine work performed in the dosimetry service. The resulting Quality Manual was submitted to the Federal Office for Public Health in Bern in autumn. The CERN Individual Dosimetry Service will eventually be officially endorsed after a successful technical test in March 1999. On the technical side, the introduction of an automatic development machine for gamma films was very successful. It processes the dosimetric films without an operator being present, and its built-in regeneration mechanism keeps the concentration of the processing chemicals at a constant level

  15. Dosimetry for radiation processing

    International Nuclear Information System (INIS)

    McLaughlin, W.L.; Boyd, A.W.; Chadwick, K.H.; McDonald, J.C.; Miller, A.

    1989-01-01

    Radiation processing is a relatively young industry with broad applications and considerable commercial success. Dosimetry provides an independent and effective way of developing and controlling many industrial processes. In the sterilization of medical devices and in food irradiation, where the radiation treatment impacts directly on public health, the measurements of dose provide the official means of regulating and approving its use. In this respect, dosimetry provides the operator with a means of characterizing the facility, of proving that products are treated within acceptable dose limits and of controlling the routine operation. This book presents an up-to-date review of the theory, data and measurement techniques for radiation processing dosimetry in a practical and useful way. It is hoped that this book will lead to improved measurement procedures, more accurate and precise dosimetry and a greater appreciation of the necessity of dosimetry for radiation processing. (author)

  16. One grid to rule them all

    CERN Multimedia

    2004-01-01

    Efforts are under way to create a computer the size of the world. The stated goal of grid computing is to create a worldwide network of computers interconnected so well and so fast that they act as one (1 page)

  17. World Wide Grid

    CERN Multimedia

    Grätzel von Grätz, Philipp

    2007-01-01

    Whether for genetic risk analysis or 3D reconstruction of the cerebral vessels: modern medicine requires ever more computing power. With a grid infrastructure, this can be obtained from the network on demand. (4 pages)

  18. Context-Aware Usage-Based Grid Authorization Framework

    Institute of Scientific and Technical Information of China (English)

    CUI Yongquan; HONG Fan; FU Cai

    2006-01-01

    Due to its inherent heterogeneity, multi-domain characteristics and highly dynamic nature, authorization is a critical concern in grid computing. This paper proposes a general authorization and access control architecture, grid usage control (GUCON), for grid computing. It is based on the next-generation access control mechanism, the usage control (UCON) model. The GUCON framework dynamically grants and adapts permissions to the subject based on a set of contextual information collected from the system environment, while retaining authorization by evaluating access requests based on subject attributes, object attributes and requests. In general, the GUCON model provides very flexible approaches to adapting to dynamically changing security requirements. The GUCON model is being implemented in our experimental prototype.
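
    A minimal sketch of a UCON-style, context-aware authorization decision in the spirit of the abstract: an attribute check on the subject, a check on the request, and an ongoing condition evaluated against live context. The attribute names, policy fields and decision rule are assumptions, not the paper's actual policy language.

        from dataclasses import dataclass
        from typing import Callable, Dict

        Context = Dict[str, float]   # e.g. {"site_load": 0.4}

        @dataclass
        class UsagePolicy:
            """Pre-authorization on attributes plus an ongoing condition
            re-evaluated against context during usage (illustrative only)."""
            required_role: str
            max_cpu_hours: float
            ongoing_condition: Callable[[Context], bool]

        def authorize(subject: Dict, obj: Dict, request: Dict,
                      policy: UsagePolicy, context: Context) -> bool:
            if subject.get("role") != policy.required_role:
                return False                              # subject-attribute pre-check
            if request.get("cpu_hours", 0) > policy.max_cpu_hours:
                return False                              # request-based check
            return policy.ongoing_condition(context)      # context-based (ongoing) check

        policy = UsagePolicy(required_role="vo-member", max_cpu_hours=48.0,
                             ongoing_condition=lambda ctx: ctx["site_load"] < 0.8)
        ok = authorize({"role": "vo-member"}, {"resource": "ce-01"},
                       {"cpu_hours": 24.0}, policy, {"site_load": 0.4})
        print(ok)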

  19. Revue of some dosimetry and dose assessment European projects

    International Nuclear Information System (INIS)

    Bolognese-Milsztajn, T.; Frank, D.; Lacoste, V.; Pihet, P.

    2006-01-01

    internal exposure monitoring programmes. Current monitoring programmes were critically reviewed, the major sources of uncertainty in assessed internal dose investigated, and guidance formulated on factors such as programme design, choice of methods/techniques, monitoring intervals, and monitoring frequency. OMINEX promoted a common and harmonized approach to the design and implementation of internal dose monitoring programmes throughout the EU. The Coordination Action 'CONRAD' of the 6th Framework Programme will continue the work initiated within the 5th Framework Programme in specific areas of dosimetry requiring coordination of research activities: computational dosimetry, internal dosimetry, complex mixed radiation fields at workplaces and radiation protection dosimetry of medical staff. (authors)

  20. Fast neutron dosimetry: Progress summary

    International Nuclear Information System (INIS)

    DeLuca, P.M. Jr.

    1988-01-01

    The purpose was to investigate the radiological physics and biology of very low energy photons derived from a 1-GeV electron synchrotron storage ring. An extensive beam line and irradiation apparatus was designed, developed, and constructed. Dosimetry measurements required the invention and testing of a miniature absolute calorimeter and a cell irradiation fixture suitable for scanning exposures under computer control. Measurements were made of the kerma factors of oxygen, aluminum and silicon for 14-20 MeV neutrons. Custom-designed miniature proportional counters of cylindrical symmetry were employed in these determinations. The oxygen kerma factor was found to be significantly lower than values calculated from microscopic cross sections. We also tested Mg- and Fe-walled conventional spherical counters. The direct neutron-counting gas interaction is significant enough for these counters that a correction is needed. We also investigated the application of Nuclear Magnetic Resonance spectroscopy to radiation dosimetry. Our purpose was to take advantage of recent developments in very high-field magnets, complex RF-pulse techniques for solvent suppression, and improved spectral analysis techniques