WorldWideScience

Sample records for computing capacity resource

  1. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  2. Computational Chemistry Capacity Building in an Underprivileged ...

    African Journals Online (AJOL)

    Bridging the gap with the other continents requires the identification of capacity ... university in South Africa), where computational chemistry research capacity has ... testifies the feasibility of such capacity building also in conditions of limited ...

  3. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  4. Challenges of human resource capacity building assistance

    International Nuclear Information System (INIS)

    Noro, Naoko

    2013-01-01

    At the first Nuclear Security Summit in Washington DC in 2010, the Integrated Support Center for Nuclear Nonproliferation and Nuclear Security (ISCN) of the Japan Atomic Energy Agency was established based on Japan's National Statement, which expressed Japan's strong commitment to contributing to the strengthening of nuclear security in the Asian region. ISCN began its activities in JFY 2011. One of its main activities is human resource capacity building support. Since JFY 2011, ISCN has offered various nuclear security training courses, seminars and workshops, and the total number of participants in ISCN events has exceeded 700. Over the past three years, ISCN has faced a variety of challenges in nuclear security human resource assistance. This paper briefly illustrates ISCN's achievements in past years and introduces the challenges and measures of ISCN in nuclear security human resource capacity building assistance. (author)

  5. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing, and the ATLAS experiment is no exception. In 2017, ATLAS steadily used almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed across geographically separated locations and connected by the Grid middleware. The ATLAS collaboration has developed several systems to manage computational jobs, data files and network transfers. The ATLAS solutions for job and data management (PanDA and Rucio) have been generalized and are now also used by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and, primarily, supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...

  6. Research on Water Resources Design Carrying Capacity

    Directory of Open Access Journals (Sweden)

    Guanghua Qin

    2016-04-01

    Water resources carrying capacity (WRCC) is a recently proposed management concept, which aims to support sustainable socio-economic development in a region or basin. However, the calculation of future WRCC is not well considered in most studies, because water resources and the socio-economic development mode for an area or city in the future are quite uncertain. This paper focuses on the limits of traditional WRCC methods and proposes a new concept, water resources design carrying capacity (WRDCC), which incorporates the concept of design. In WRDCC, the population size that the local water resources can support is calculated from the balance of water supply and water consumption, under the design water supply and the design socio-economic development mode. The WRDCC of Chengdu city in China is calculated. Results show that the WRDCC (population size) of Chengdu city under development mode I (II, III) will be 997 × 10^4 (770 × 10^4, 504 × 10^4) in 2020, and 934 × 10^4 (759 × 10^4, 462 × 10^4) in 2030. Comparing the actual population to the carrying population (WRDCC) in 2020 and 2030, a widening gap will appear, which means there will be more and more pressure on sustainable socio-economic development.
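    As a rough illustration of the supply-consumption balance underlying WRDCC, the sketch below computes a carrying population from a design water supply and a per-capita demand. All figures are hypothetical and do not reproduce the paper's design values for Chengdu.

        # Minimal WRDCC-style balance sketch (hypothetical figures).
        design_supply_m3 = 2.4e9         # design annual water supply (m^3)
        ecological_reserve_m3 = 0.4e9    # water reserved for ecological flows
        per_capita_demand_m3 = 220.0     # annual demand per person under one
                                         # assumed socio-economic development mode

        carrying_population = (design_supply_m3 - ecological_reserve_m3) / per_capita_demand_m3
        print(f"carrying population ~ {carrying_population:.2e} persons")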

  7. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  8. Aggregated Computational Toxicology Resource (ACToR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACToR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  9. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  10. Computer Resources | College of Engineering & Applied Science

    Science.gov (United States)

  11. Managing resource capacity using hybrid simulation

    Science.gov (United States)

    Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat

    2014-12-01

    Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in a hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. A holistic model built with discrete-event simulation (DES) alone would be too complex, while one built with system dynamics (SD) alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD to obtain a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources in ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate and ward admission rate.
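    A minimal sketch of the hybrid idea, under hypothetical rates and a toy ward model (not the authors' AnyLogic model): a DES event calendar handles individual patient arrivals, while an SD-style stock-flow update governs aggregate ward occupancy.

        import heapq, random

        # Toy DES + SD hybrid (all rates hypothetical).
        random.seed(1)
        SIM_END, STEP = 480.0, 5.0       # minutes
        WARD_BEDS = 20.0                 # SD stock capacity
        occupancy = 15.0                 # initial ward occupancy (SD level)
        waiting, admitted = 0, 0
        events = [(0.0, "sd_step"), (random.expovariate(1 / 10.0), "arrival")]

        while events:
            t, kind = heapq.heappop(events)
            if t > SIM_END:
                break
            if kind == "arrival":                 # DES: individual patients
                waiting += 1
                heapq.heappush(events, (t + random.expovariate(1 / 10.0), "arrival"))
            else:                                 # SD: aggregate stock-flow update
                discharge = 0.01 * occupancy      # beds freed per minute
                admission = min(waiting / 30.0, (WARD_BEDS - occupancy) / STEP)
                occupancy = min(WARD_BEDS, occupancy + (admission - discharge) * STEP)
                moved = min(waiting, round(admission * STEP))
                waiting, admitted = waiting - moved, admitted + moved
                heapq.heappush(events, (t + STEP, "sd_step"))

        print(f"admitted={admitted} waiting={waiting} occupancy={occupancy:.1f}")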

  12. Preliminary research on quantitative methods of water resources carrying capacity based on water resources balance sheet

    Science.gov (United States)

    Wang, Yanqiu; Huang, Xiaorong; Gao, Linyun; Guo, Biying; Ma, Kai

    2018-06-01

    Water resources are not only basic natural resources, but also strategic economic resources and ecological control factors. Water resources carrying capacity constrains the sustainable development of a regional economy and society. Studies of water resources carrying capacity can provide helpful information about how the socioeconomic system is both supported and restrained by the water resources system. Based on the research of different scholars, the major problems in the study of water resources carrying capacity can be summarized as follows: the definition of water resources carrying capacity is not yet unified; quantification methods based on inconsistent definitions are poor in operability; current quantitative research methods do not fully reflect the principles of sustainable development; and it is difficult to quantify the relationship among water resources, the economy and society, and the ecological environment. Therefore, it is necessary to develop a better quantitative evaluation method to determine regional water resources carrying capacity. This paper proposes a new approach to quantifying water resources carrying capacity, through the compilation of a water resources balance sheet, to grasp regional water resources depletion and water environmental degradation (as well as regional water resources stock assets and liabilities), figure out the squeeze of socioeconomic activities on the environment, and discuss quantitative calculation methods and a technical route for a water resources carrying capacity that embodies the substance of sustainable development.
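    A toy sketch of the balance-sheet idea, with all entries hypothetical: stock assets plus inflows on one side, depletion and degradation liabilities on the other, so that a shrinking net position flags socioeconomic pressure on the water system.

        # Hypothetical water resources balance sheet (units: 10^8 m^3).
        assets = {"surface water stock": 25.0, "groundwater stock": 12.0,
                  "precipitation recharge": 8.0, "inter-basin transfers": 1.5}
        liabilities = {"over-extraction": 3.2, "volume degraded by pollution": 2.1}

        total_assets = sum(assets.values())
        total_liabilities = sum(liabilities.values())
        net_position = total_assets - total_liabilities
        print(f"assets={total_assets}  liabilities={total_liabilities}  net={net_position}")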

  13. Civil Service Human Resource Capacity and Information Technology

    African Journals Online (AJOL)

    Tesfaye

    2009-01-01

    Jan 1, 2009 ... had no impact on the size of jobs that require a high level of human resource capacity. Furthermore ... level human resource capacity has an effect on the size of supervisors, which is the main ... depreciation. This indicates ...

  14. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). An OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed across several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzeň and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They have installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools, in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on TORQUE with a custom scheduler. Clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  15. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  16. Strengthening Research Capacity to Enhance Natural Resources ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... to Enhance Natural Resources Management and Improve Rural Livelihoods ... and contribute to the food and income security of the rural poor by enhancing the ...

  17. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, involving vastly different architectures; the process is outside the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  18. Classification of CO2 Geologic Storage: Resource and Capacity

    Science.gov (United States)

    Frailey, S.M.; Finley, R.J.

    2009-01-01

    The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe very specific and more certain quantities, such as the volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk, or higher certainty, compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of

  19. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  20. Two-period resource duopoly with endogenous intertemporal capacity constraints

    International Nuclear Information System (INIS)

    Berk, Istemi

    2014-01-01

    This paper analyzes the strategic firm behavior within the context of a two-period resource duopoly model in which firms face endogenous intertemporal capacity constraints. Firms are allowed to invest in capacity in between two periods in order to increase their initial endowment of exhaustible resource stocks. Using this setup, we find that the equilibrium price weakly decreases over time. Moreover, asymmetric distribution of initial resource stocks leads to a significant change in equilibrium outcome, provided that firms do not have the same cost structure in capacity additions. It is also verified that if only one company is capable of investment in capacity, the market moves to a more concentrated structure in the second period.

  1. Two-period resource duopoly with endogenous intertemporal capacity constraints

    Energy Technology Data Exchange (ETDEWEB)

    Berk, Istemi

    2014-07-15

    This paper analyzes the strategic firm behavior within the context of a two-period resource duopoly model in which firms face endogenous intertemporal capacity constraints. Firms are allowed to invest in capacity in between two periods in order to increase their initial endowment of exhaustible resource stocks. Using this setup, we find that the equilibrium price weakly decreases over time. Moreover, asymmetric distribution of initial resource stocks leads to a significant change in equilibrium outcome, provided that firms do not have the same cost structure in capacity additions. It is also verified that if only one company is capable of investment in capacity, the market moves to a more concentrated structure in the second period.

  2. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  3. Planning for partnerships: Maximizing surge capacity resources through service learning.

    Science.gov (United States)

    Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B

    2015-01-01

    Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.

  4. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the large number of resources involved, the performance of Cloud resource management policies is difficult to evaluate and optimize efficiently. There are different simulation toolkits available for simulating and modelling the Cloud computing environment, like GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction etc. In the proposed Efficient Resource Manage...

  5. Improving resource capacity planning in hospitals with business approaches.

    NARCIS (Netherlands)

    van Lent, Wineke Agnes Marieke; van Lent, W.A.M.

    2011-01-01

    This dissertation contributed to the knowledge on the translation of approaches from businesses and services to improve resource capacity planning at the tactical and operational levels in (oncologic) hospital care. The following studies were presented: * Chapter 2 surveyed the business approaches

  6. Adaptive capacity and community-based natural resource management.

    Science.gov (United States)

    Armitage, Derek

    2005-06-01

    Why do some community-based natural resource management strategies perform better than others? Commons theorists have approached this question by developing institutional design principles to address collective choice situations, while other analysts have critiqued the underlying assumptions of community-based resource management. However, efforts to enhance community-based natural resource management performance also require an analysis of exogenous and endogenous variables that influence how social actors not only act collectively but do so in ways that respond to changing circumstances, foster learning, and build capacity for management adaptation. Drawing on examples from northern Canada and Southeast Asia, this article examines the relationship among adaptive capacity, community-based resource management performance, and the socio-institutional determinants of collective action, such as technical, financial, and legal constraints, and complex issues of politics, scale, knowledge, community and culture. An emphasis on adaptive capacity responds to a conceptual weakness in community-based natural resource management and highlights an emerging research and policy discourse that builds upon static design principles and the contested concepts in current management practice.

  7. SOCR: Statistics Online Computational Resource

    OpenAIRE

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...

  8. The state of human dimensions capacity for natural resource management: needs, knowledge, and resources

    Science.gov (United States)

    Sexton, Natalie R.; Leong, Kirsten M.; Milley, Brad J.; Clarke, Melinda M.; Teel, Tara L.; Chase, Mark A.; Dietsch, Alia M.

    2013-01-01

    The social sciences have become increasingly important in understanding natural resource management contexts and audiences, and are essential in the design and delivery of effective and durable management strategies. Yet many agencies and organizations do not have the necessary capacity to integrate the human dimensions (HD) of natural resource management. We draw on the textbook definition of HD: how and why people value natural resources, what benefits people seek and derive from those resources, and how people affect and are affected by those resources and their management (Decker, Brown, and Siemer 2001). Clearly articulating how HD information can be used and integrated into natural resource management planning and decision-making is an important challenge faced by the HD field. To address this challenge, we formed a collaborative team to explore the issue of HD capacity-building for natural resource organizations and to advance the HD field. We define HD capacity as activities, efforts, and resources that enhance the ability of HD researchers and practitioners and natural resource managers and decision-makers to understand and address the social aspects of conservation. Specifically, we sought to examine current barriers to the integration of HD into natural resource management, knowledge needed to improve HD capacity, and existing HD tools, resources, and training opportunities. We conducted a needs assessment of HD experts and practitioners, developed a framework for considering HD activities that can contribute both directly and indirectly throughout any phase of an adaptive management cycle, and held a workshop to review preliminary findings and gather additional input through breakout group discussions. This paper provides highlights from our collaborative initiative to help frame and inform future HD capacity-building efforts for natural resource organizations, and also provides a list of existing human dimensions tools and resources.

  9. Carrying capacity of water resources in Bandung Basin

    Science.gov (United States)

    Marganingrum, D.

    2018-02-01

    The concept of carrying capacity is widely used in various sectors as a management tool for sustainable development processes. The idea has also been applied at the watershed or basin scale. The Bandung Basin is the upstream part of the Citarum watershed, known as one of the national strategic areas. The area has developed into a metropolitan area loaded with various environmental problems. Therefore, research related to environmental carrying capacity in this area becomes a strategic issue. However, research on environmental carrying capacity in this area has so far been partial, whether in terms of water balance, land suitability, ecological footprint, or the balance of supply and demand of resources. This paper describes the application of the concept of integrated environmental carrying capacity in order to address increasingly complex and dynamic environmental problems. The sector in focus is water resources. The approach combines the concept of maximum balance with system dynamics. The dynamics of the proposed system are the ecological and population dynamics, which cannot be separated from one another as a unity of the Bandung Basin ecosystem.

  10. computational chemistry capacity building in an underprivileged ...

    African Journals Online (AJOL)

    ABSTRACT. Computational chemistry is a fast developing branch of modern chemistry, focusing on the study of molecules to enable better understanding of the properties of substances. Its applications comprise a variety of fields, from drug design to the design of compounds with desired properties (e.g., catalysts with ...

  11. Strengthening Capacity to Respond to Computer Security Incidents ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... in the form of spam, improper access to confidential data and cyber theft. ... These teams are usually known as computer security incident response teams ... regional capacity for preventing and responding to cyber security incidents in Latin ...

  12. Development of human resource capacity building assistance for nuclear security

    International Nuclear Information System (INIS)

    Nakamura, Yo; Noro, Naoko

    2014-01-01

    The Integrated Support Center for Nuclear Nonproliferation and Nuclear Security (ISCN) of the Japan Atomic Energy Agency (JAEA) has been providing nuclear security human resource development projects targeting nuclear emerging countries in Asia, in cooperation with the authorities concerned, including the Sandia National Laboratories (SNL) and the International Atomic Energy Agency (IAEA). In the aftermath of the attacks of Sept. 11, the threat of terrorism was internationally recognized, and human resource capacity building is thus underway as an urgent task. In order to respond to emerging threats, the human resource capacity building that ISCN has implemented thus far needs to be analyzed from multiple perspectives in order to develop more effective training programs. This paper studies ISCN's future direction by analyzing its achievements, and introduces the collaborative relationships with SNL that contribute to reflecting and maintaining international trends in the contents of nuclear security training, the nuclear security enhancement support that Japan is to provide to nuclear emerging countries in Asia, and the achievements of the nuclear security training program that ISCN implemented. (author)

  13. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important in education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms, and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  14. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role. But it is not possible for standalone clouds to handle everything as user demands increase. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its infancy. Resource management is one of the key concerns to be addressed in Intercloud computing. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model, keeping in view different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  15. Capacity of Distribution Feeders for Hosting Distributed Energy Resources

    DEFF Research Database (Denmark)

    Papathanassiou, S.; Hatziargyriou, N.; Anagnostopoulos, P.

    The last two decades have seen an unprecedented development of distributed energy resources (DER) all over the world. Several countries have adopted a variety of support schemes (feed-in tariffs, green certificates, direct subsidies, tax exemptions etc.) so as to promote distributed generation (DG) ... standards of the networks. To address this need in a timely and effective manner, simplified methodologies and practical rules of thumb are often applied to assess the DER hosting capacity of existing distribution networks, thus avoiding detailed and time-consuming analytical studies. The scope...
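    One common screening rule of thumb of the kind alluded to above (a generic voltage-rise check, not necessarily this paper's methodology) is sketched below with hypothetical feeder data.

        # Voltage-rise screening for DG hosting capacity at unity power factor:
        # dV ~ R * P / V_nom, so P_max ~ dV_max * V_nom / R (a crude rule of thumb).
        V_nom = 11_000.0          # V, medium-voltage feeder (hypothetical)
        R = 1.2                   # ohm, cumulative feeder resistance to the node
        dV_max = 0.03 * V_nom     # allowed steady-state voltage rise (3 %)

        P_max = dV_max * V_nom / R
        print(f"voltage-rise-limited hosting capacity ~ {P_max / 1e6:.1f} MW")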

  16. Cloud Provider Capacity Augmentation Through Automated Resource Bartering

    OpenAIRE

    Gohera, Syeda ZarAfshan; Bloodsworth, Peter; Rasool, Raihan Ur; McClatchey, Richard

    2018-01-01

    Growing interest in Cloud Computing places a heavy workload on cloud providers which is becoming increasingly difficult for them to manage with their primary datacenter infrastructures. Resource limitations can make providers vulnerable to significant reputational damage and it often forces customers to select services from the larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making con...

  17. Biophysical constraints on the computational capacity of biochemical signaling networks

    Science.gov (United States)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
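    For reference, Cover's function counting theorem, which the abstract invokes, gives the number of linearly separable dichotomies of P points in general position in N dimensions; the short check below reproduces the classic result that a perceptron's capacity is 2N points.

        from math import comb

        def cover_count(P: int, N: int) -> int:
            """C(P, N) = 2 * sum_{k=0}^{N-1} binom(P-1, k)  (Cover, 1965)."""
            return 2 * sum(comb(P - 1, k) for k in range(N))

        N = 10
        for P in (5, 10, 20, 40):
            frac = cover_count(P, N) / 2 ** P
            print(f"P={P:2d}: separable fraction = {frac:.3f}")
        # The fraction is 1 for P <= N, exactly 1/2 at P = 2N, and collapses beyond.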

  18. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  19. Alluvial Diamond Resource Potential and Production Capacity Assessment of Ghana

    Science.gov (United States)

    Chirico, Peter G.; Malpeli, Katherine C.; Anum, Solomon; Phillips, Emily C.

    2010-01-01

    In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflictual concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by both diamond-producing and diamond-importing countries. Over 70 countries were included as members at the end of 2007. To prevent trade in 'conflict' diamonds while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was to assess the alluvial diamond resource endowment and current production capacity of the alluvial diamond-mining sector in Ghana. A modified volume and grade methodology was used to estimate the remaining diamond reserves within the Birim and Bonsa diamond fields. The production capacity of the sector was estimated using a formulaic expression of the number of workers reported in the sector, their productivity, and the average grade of deposits mined. This study estimates that there are approximately 91,600,000 carats of alluvial diamonds remaining in both the Birim and Bonsa diamond fields: 89,000,000 carats in the Birim and 2,600,000 carats in the Bonsa. Production capacity is calculated to be 765,000 carats per year, based on the formula used and available data on the number of workers and worker productivity. Annual production is highly dependent on the international diamond market and prices, the numbers of seasonal workers actively mining in the sector, and
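    The production capacity formula described above reduces to a simple product; the sketch below uses hypothetical inputs (not the report's field data) to show its shape.

        # Capacity = workers x gravel washed per worker per year x recovered grade.
        # All inputs hypothetical; the report derives its values from field data.
        workers = 25_000             # active artisanal miners
        gravel_m3_per_worker = 80.0  # m^3 of diamondiferous gravel processed per year
        grade_carats_per_m3 = 0.30   # average recovered grade

        capacity = workers * gravel_m3_per_worker * grade_carats_per_m3
        print(f"annual production capacity ~ {capacity:,.0f} carats")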

  20. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  1. LHCb Computing Resource usage in 2017

    CERN Document Server

    Bozzi, Concezio

    2018-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data is taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  2. Function Package for Computing Quantum Resource Measures

    Science.gov (United States)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package to calculate quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently-used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.
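    As a small example of the kind of measure such a package computes, the l1-norm of coherence (the sum of absolute off-diagonal density-matrix elements) can be evaluated in a few lines; this is a generic NumPy sketch, not the paper's package.

        import numpy as np

        def l1_coherence(rho: np.ndarray) -> float:
            """l1-norm of coherence: sum of |rho_ij| over off-diagonal entries."""
            return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

        # Maximally coherent qubit state |+><+| with |+> = (|0> + |1>)/sqrt(2).
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        rho = np.outer(plus, plus.conj())
        print(l1_coherence(rho))  # 1.0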

  3. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
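    A minimal sketch of task submission with the cctools Work Queue Python bindings that Lobster builds on; module and method names follow the classic work_queue API, so treat the details as indicative rather than exact.

        import work_queue as wq

        # Master process: queue up tasks; workers launched on any resource connect back.
        q = wq.WorkQueue(port=9123)
        t = wq.Task("python analyze.py events.dat hist.out")   # hypothetical job
        t.specify_input_file("analyze.py")
        t.specify_input_file("events.dat")
        t.specify_output_file("hist.out")
        q.submit(t)

        while not q.empty():
            done = q.wait(60)          # block up to 60 s for a finished task
            if done is not None:
                print("task", done.id, "finished with status", done.return_status)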

  4. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.gov [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  5. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  6. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring of the storage resource status with fine time granularity, and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
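    The inference over test history can be pictured with a toy rule (illustrative only; the actual SAAB algorithm is more elaborate): blacklist a storage area when recent failures dominate, whitelist it again after a run of consecutive successes.

        from collections import deque

        class StorageAreaMonitor:
            """Toy blacklisting rule, not the actual SAAB inference algorithm."""
            def __init__(self, window=20, fail_fraction=0.5, recovery_run=5):
                self.history = deque(maxlen=window)
                self.fail_fraction = fail_fraction
                self.recovery_run = recovery_run
                self.blacklisted = False

            def record(self, passed: bool) -> bool:
                self.history.append(passed)
                fails = self.history.count(False) / len(self.history)
                tail = list(self.history)[-self.recovery_run:]
                if not self.blacklisted and fails >= self.fail_fraction:
                    self.blacklisted = True
                elif self.blacklisted and tail == [True] * self.recovery_run:
                    self.blacklisted = False
                return self.blacklisted

        mon = StorageAreaMonitor()
        for ok in [True, False, False, False, True, True, True, True, True]:
            print("pass" if ok else "fail", "->", "blacklisted" if mon.record(ok) else "ok")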

  7. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  8. Alluvial diamond resource potential and production capacity assessment of Mali

    Science.gov (United States)

    Chirico, Peter G.; Barthelemy, Francis; Kone, Fatiaga

    2010-01-01

    In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflictual concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by diamond-producing and diamond-importing countries. Over 70 countries were included as members of the KPCS at the end of 2007. To prevent trade in "conflict diamonds" while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was (1) to assess the naturally occurring endowment of diamonds in Mali (potential resources) based on geological evidence, previous studies, and recent field data and (2) to assess the diamond-production capacity and measure the intensity of mining activity. Several possible methods can be used to estimate the potential diamond resource. However, because there is generally a lack of sufficient and consistent data recording all diamond mining in Mali and because time to conduct fieldwork and accessibility to the diamond mining areas are limited, four different methodologies were used: the cylindrical calculation of the primary kimberlitic deposits, the surface area methodology, the volume and grade approach, and the content per kilometer approach. Approximately 700,000 carats are estimated to be in the alluvial deposits of the Kenieba region, with 540,000 carats calculated to lie within the concentration grade deposits. Additionally, 580,000 carats are estimated to have

  9. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast™.

  10. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
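    The core primitive here is a maximum-flow computation over an auxiliary network built from the temporal plan; the toy below shows only that primitive (via networkx), on a made-up graph rather than the paper's envelope construction.

        import networkx as nx

        # Made-up auxiliary network: edge capacities stand in for the amounts of
        # resource production/consumption that can be shifted between events.
        G = nx.DiGraph()
        for u, v, c in [("s", "e1", 3), ("s", "e2", 2), ("e1", "e3", 2),
                        ("e2", "e3", 3), ("e3", "t", 4)]:
            G.add_edge(u, v, capacity=c)

        max_flow, _ = nx.maximum_flow(G, "s", "t")
        print("max flow =", max_flow)   # bounds the resource-level envelope term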

  11. Refining teacher design capacity: mathematics teachers' interactions with digital curriculum resources

    NARCIS (Netherlands)

    Pepin, B.; Gueudet, G.; Trouche, L.

    2017-01-01

    The goal of this conceptual paper is to develop enhanced understandings of mathematics teacher design and design capacity when interacting with digital curriculum resources. We argue that digital resources in particular offer incentives and increasing opportunities for mathematics teachers’ design,

  12. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Contractual relations involving the use of another's property are quite common. Yet, the use of computer resources of others over the Internet, and the legal transactions arising thereof, certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gain by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  13. Environmental sustainability control by water resources carrying capacity concept: application significance in Indonesia

    Science.gov (United States)

    Djuwansyah, M. R.

    2018-02-01

    This paper reviews the use of the water resources carrying capacity concept to control environmental sustainability, with particular notes on the case of Indonesia. Carrying capacity is a measure of the capability of an environment or an area to support humans and other lives, as well as their activities, in a sustainable manner. Recurrent water-related hazards and environmental problems indicate that environments are exploited beyond their carrying capacity. Environmental carrying capacity (ECC) assessment includes land and water carrying capacity analysis of an area, and should always refer to the dimensions of the related watershed as an incorporated hydrologic unit on the basis of resource availability estimation. Many countries use this measure to forecast the future sustainability of regional development based on water availability. Direct water resources carrying capacity (WRCC) assessment involves determining the population number, together with their activities, that can be supported by the available water, whereas indirect WRCC assessment comprises analysis of the supply-demand balance status of water. Water resources, rather than land resources, are usually the primary limit on environmental carrying capacity, since land capability constraints are easier to overcome. Even though the capability of water resources is relatively perpetual, the utilization pattern of these resources may change with the socio-economic and cultural technology level of the users, which is why WRCC should be evaluated periodically to maintain the usage sustainability of water resources and the environment.

  14. [Evaluation of comprehensive capacity of resources and environments in Poyang Lake Eco-economic Zone].

    Science.gov (United States)

    Song, Yan-Chun; Yu, Dan

    2014-10-01

    With the development of society and the economy, the contradictions among population, resources and environment are becoming increasingly acute. As a result, the carrying capacity of resources and environment has become a focal issue for many countries and regions. Through investigating and analyzing the present situation and the existing problems of resources and environment in the Poyang Lake Eco-economic Zone, seven factors were chosen as the evaluation criterion layer, namely, land resources, water resources, biological resources, mineral resources, ecological-geological environment, water environment and atmospheric environment. Based on the single-factor evaluation results and with the county as the evaluation unit, the comprehensive capacity of resources and environment in the Poyang Lake Eco-economic Zone was evaluated using the state space method. The results showed that the zone boasts abundant biological resources, good atmospheric and water environments, and a relatively stable geological environment, while being constrained by land, water and mineral resources. Although the comprehensive capacity of the resources and environments in the Poyang Lake Eco-economic Zone as a whole is not yet overloaded, some counties/districts are already overloaded. The state space model, with clear indications and high accuracy, can serve as an alternative approach to evaluating the comprehensive capacity of regional resources and environment.
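
    In its simplest form, the state space method named above measures the weighted distance of a region's normalized indicator vector from the origin and compares it with a carrying threshold. A hedged sketch with invented indicators and weights:

```python
import math

# Each evaluation unit (county) is a point in a normalized indicator space;
# its carrying state is the weighted Euclidean norm of that point, compared
# against a reference threshold of 1.0. Indicators and weights are invented.

def state_space_load(values, thresholds, weights):
    # Normalize each indicator by its carrying threshold, then take the
    # weighted norm; > 1.0 means the unit exceeds its carrying capacity.
    return math.sqrt(sum(w * (v / t) ** 2
                         for v, t, w in zip(values, thresholds, weights)))

county = [0.8, 1.3, 0.6]          # e.g. land, water, mineral pressure indices
thresholds = [1.0, 1.0, 1.0]      # carrying thresholds after normalization
weights = [0.4, 0.4, 0.2]         # factor weights (sum to 1)
load = state_space_load(county, thresholds, weights)
print("overloaded" if load > 1.0 else "within capacity", round(load, 3))
```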

  15. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  16. Human resource capacity for information management in selected ...

    African Journals Online (AJOL)

    Results: it was established that capacity building was usually undertaken through on-the-job training: 85.1% (103) of health workers had received on-the-job training on filling in data collection tools, while only 10% (13) had received formal classroom training on the same. Further, only 9.1% (11) health workers had received information ...

  17. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  18. Some issues of creation of belarusian language computer resources

    OpenAIRE

    Rubashko, N.; Nevmerjitskaia, G.

    2003-01-01

    The main reason for creating computer resources for a natural language is the need to bring the means of language normalization into accord with the form of the language's existence: the computer form of language usage should correspond to the computer form in which language standards are fixed. This paper discusses various aspects of the creation of Belarusian language computer resources. It also briefly gives an overview of the objectives of the project involved.

  19. Concept and Connotation of Water Resources Carrying Capacity in Water Ecological Civilization Construction

    Science.gov (United States)

    Chao, Zhilong; Song, Xiaoyu; Feng, Xianghua

    2018-01-01

    Water ecological civilization construction is based on the water resources carrying capacity, guided by the concept of sustainable development, and adheres to the idea of human-water harmony. This paper comprehensively analyzes the concept and characteristics of water resources carrying capacity in water ecological civilization construction, discusses the research methods and evaluation index system of water carrying capacity in this context, then points out the problems and solutions of water carrying capacity in water ecological civilization construction, and finally puts forward prospects for future research.

  20. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  1. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case of study, investigates the optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c
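
    The optimal rental planning idea can be illustrated with a much smaller model than the SpringerBrief's: choose a reserved-instance fleet size so that reserved plus on-demand capacity covers a demand profile at least cost. Prices and demands below are invented:

```python
# Illustrative sketch (not the book's actual model) of optimal resource
# rental planning: brute-force search over reserved fleet sizes, paying a
# fixed leasing fee per reserved instance plus hourly rates.

def best_plan(demand, reserved_fixed, reserved_hourly, ondemand_hourly):
    best = None
    for r in range(max(demand) + 1):          # candidate reserved fleet sizes
        cost = r * reserved_fixed
        for d in demand:                      # hour-by-hour running costs
            cost += r * reserved_hourly + max(d - r, 0) * ondemand_hourly
        if best is None or cost < best[1]:
            best = (r, cost)
    return best

demand = [3, 5, 9, 4, 2, 8]                   # instances needed per hour
r, cost = best_plan(demand, reserved_fixed=2.0,
                    reserved_hourly=0.05, ondemand_hourly=0.50)
print(f"lease {r} reserved instances, total cost {cost:.2f}")
```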

  2. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing automated resource exclusion and recovery tools that help re-focus operational manpower to areas which have yet to be automated, and improve utilization of available computing resources. We present the recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in the testing machinery, machine learning algorithms for anomaly detection, categorization of resources as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  3. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Full Text Available Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at lower cost, where computing resources are available all the time over the Internet with a pay-per-use concept. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time, data processing time and resource utilization compared with the Active Monitor and VM-assign algorithms.
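
    A hedged sketch of the general DWAM idea (the paper's actual weight formula is not reproduced here; the weighting below is invented): give each VM a dynamic weight from its remaining capacity and send each incoming request to the best-weighted VM.

```python
# Weighted on-the-fly dispatch of requests to virtual machines.

class VM:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0

    def weight(self):
        # Higher remaining relative capacity -> higher weight.
        return (self.capacity - self.load) / self.capacity

def dispatch(vms, requests):
    placement = []
    for req in requests:
        vm = max(vms, key=VM.weight)   # pick the best-weighted VM on the fly
        vm.load += req
        placement.append((req, vm.name))
    return placement

vms = [VM("vm1", 100), VM("vm2", 60), VM("vm3", 80)]
for req, name in dispatch(vms, [10, 25, 5, 40, 15]):
    print(f"request({req}) -> {name}")
```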

  4. System capacity and economic modeling computer tool for satellite mobile communications systems

    Science.gov (United States)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.

  5. Evaluation of Water Resources Carrying Capacity in Shandong Province Based on Fuzzy Comprehensive Evaluation

    Directory of Open Access Journals (Sweden)

    Zhao Qiang

    2018-01-01

    Full Text Available Water resources carrying capacity is the maximum available water resources supporting social and economic development. Based on an investigation and statistical analysis of the current situation of water resources in Shandong Province, this paper selects 13 evaluation factors: per capita water resources, water resources utilization, water supply modulus, rainfall, per capita GDP, population density, per capita water consumption, water consumption per million yuan, water consumption per unit of industrial output value, agricultural output value of farmland, irrigation rate of cultivated land, water consumption rate of the ecological environment, and forest coverage rate. Then, a fuzzy comprehensive evaluation model was used to assess the status of the water resources carrying capacity. The results showed that the comprehensive evaluation results of water resources in Shandong Province were lower than 0.6 in 2001-2009 and higher than 0.6 in 2010-2015, indicating that the water resources carrying capacity of Shandong Province has improved. In addition, most years scored below 0.6 and individual years below 0.4, with relatively large interannual changes, showing that the water resources carrying capacity of Shandong Province is generally weak and varies considerably between years.
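
    The fuzzy comprehensive evaluation step can be illustrated in a few lines: a factor weight vector is composed with a membership matrix to give a grade membership vector, which is then collapsed to a single score comparable with the 0.6 cut-off. Weights, membership degrees and grade values below are invented, not the paper's 13-factor data:

```python
# Membership matrix R: for each factor, its degree of membership in the
# grades (weak, medium, strong). Three factors only, for illustration.
R = [
    [0.6, 0.3, 0.1],   # per capita water resources
    [0.2, 0.5, 0.3],   # water resources utilization
    [0.4, 0.4, 0.2],   # water supply modulus
]
W = [0.5, 0.3, 0.2]    # factor weights (sum to 1)

# Composite membership vector B = W . R using the weighted-average operator.
B = [sum(w * row[j] for w, row in zip(W, R)) for j in range(len(R[0]))]

# Collapse to one score with grade values (weak=0.2, medium=0.5, strong=0.9).
grades = [0.2, 0.5, 0.9]
score = sum(b * g for b, g in zip(B, grades))
print([round(b, 3) for b in B], round(score, 3))  # compare with 0.6 cut-off
```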

  6. Decentralized Resource Management in Distributed Computer Systems.

    Science.gov (United States)

    1982-02-01

    Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to coordinate processes without directly exchanging user state information. New mechanisms and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore. One of the first known solutions to the access synchronization problem was Dijkstra's semaphore [12]; the importance of the semaphore is that it correctly addresses the problem of mutual exclusion.
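
    For readers unfamiliar with the primitives mentioned above, here is a hedged sketch (after Reed and Kanodia's eventcounts and sequencers) of serializing a critical section without a central semaphore:

```python
import threading

# A sequencer hands out totally ordered tickets; an eventcount lets threads
# wait for a value and advance it, replacing a central semaphore.

class Sequencer:
    def __init__(self):
        self._n, self._lock = 0, threading.Lock()

    def ticket(self):
        with self._lock:              # fetch-and-increment
            t, self._n = self._n, self._n + 1
            return t

class EventCount:
    def __init__(self):
        self._n = 0
        self._cond = threading.Condition()

    def advance(self):
        with self._cond:
            self._n += 1
            self._cond.notify_all()

    def await_(self, value):
        with self._cond:
            self._cond.wait_for(lambda: self._n >= value)

# Mutual exclusion without a semaphore: take a ticket, wait your turn.
seq, ec, shared = Sequencer(), EventCount(), []

def worker(i):
    t = seq.ticket()
    ec.await_(t)                      # proceed only when it is ticket t's turn
    shared.append(i)                  # critical section
    ec.advance()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)
```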

  7. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is the Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms or on classical waves manipulated using the transformations of linear optics
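
    A short worked restatement of the dimension-counting argument (standard quantum-information bookkeeping, not notation taken from the paper):

```latex
% n qubits span a Hilbert space of dimension
\dim \mathcal{H}_n = 2^n ,
% so a "computer" built from one particle with a single degree of freedom
% must supply 2^n distinguishable levels -- an exponential physical cost.
% By contrast, k subsystems of fixed dimension d give
\dim \mathcal{H} = d^k = 2^n
\quad\Longleftrightarrow\quad
k = \frac{n}{\log_2 d},
% i.e. the number of degrees of freedom grows only linearly with n.
```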

  8. Evaluation of Resources Carrying Capacity in China Based on Remote Sensing and GIS

    Science.gov (United States)

    Liu, K.; Gan, Y. H.; Zhang, T.; Luo, Z. Y.; Wang, J. J.; Lin, F. N.

    2018-04-01

    This paper extracted information on arable land, grassland (wetland), forest land, water areas and construction land from 1:250,000 basic geographic information data. It modified a comprehensive carrying capacity of resources (CCRR) model so that the carrying capacity calculation takes resource quality into consideration, and ultimately achieved a comprehensive assessment of CCRR status in China. The ten cities where the carrying capacity of resources was most overloaded were Wenzhou, Shanghai, Chengdu, Baoding, Shantou, Jieyang, Dongguan, Fuyang, Zhoukou and Handan; these cities are basically distributed in central and southern areas with convenient transportation and more developed economies. Among the cities in surplus status, the resources carrying capacity of Hulun Buir was the most abundant, followed by Heihe, Bayingolin Mongol Autonomous Prefecture, Qiqihar, Chifeng and Jiamusi, all of which are located in northeastern China with small populations and plentiful cultivated land.

  9. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    Energy Technology Data Exchange (ETDEWEB)

    Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability range for the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resource is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk that the resource capacity exceeds 196.19 MWe is less than 10%. The power density of the prospect area, covering 17 km², is 9.41 MWe/km² with probability 80%.
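
    The conversion from a deterministic volumetric formula to a stochastic one can be sketched as follows; the simplified capacity formula and all parameter ranges are illustrative stand-ins, not the paper's SNI inputs:

```python
import random

# Stochastic volumetric estimate: sample uncertain inputs, collect the
# resulting capacity distribution, and read off percentiles.

def simulate(n=10_000):
    powers = []
    for _ in range(n):
        area = random.triangular(13, 21, 17)        # prospect area, km^2
        density = random.triangular(7, 12, 9.5)     # power density, MWe/km^2
        rf = random.triangular(0.8, 1.2, 1.0)       # recovery correction
        powers.append(area * density * rf)          # resource capacity, MWe
    return sorted(powers)

powers = simulate()
p10, p50, p90 = (powers[int(q * len(powers))] for q in (0.10, 0.50, 0.90))
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} MWe")
# A statement like "the risk that capacity exceeds X is less than 10%"
# corresponds to reading off the P90 value of this distribution.
```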

  10. Development of Computer-Based Resources for Textile Education.

    Science.gov (United States)

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  11. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  12. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has become a focus of educational and business communities. Their concerns include the need to improve the quality of service (QoS) provided, as well as properties such as reliability and performance, and to reduce costs. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring these benefits is considered to be the major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  13. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  14. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  15. TIGER-NET – enabling an Earth Observation capacity for Integrated Water Resource Management in Africa

    DEFF Research Database (Denmark)

    Walli, A.; Tøttrup, C.; Naeimi, V.

    As part of the TIGER initiative [1] the TIGER-NET project aims to support the assessment and monitoring of water resources from watershed to transboundary basin level delivering indispensable information for Integrated Water Resource Management in Africa through: 1. Development of an open-source Water Observation and Information System (WOIS) for monitoring, assessing and inventorying water resources in a cost-effective manner; 2. Capacity building and training of African water authorities and technical centers to fully exploit the increasing observation capacity offered by current and upcoming generations of satellites, including the Sentinel missions. Dedicated application case studies have been developed and demonstrated covering all EO products required by and developed with the participating African water authorities for their water resource management tasks, such as water reservoir...

  16. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Related material appeared in Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  17. A Tool and Process that Facilitate Community Capacity Building and Social Learning for Natural Resource Management

    Directory of Open Access Journals (Sweden)

    Christopher M. Raymond

    2013-03-01

    Full Text Available This study presents a self-assessment tool and process that facilitate community capacity building and social learning for natural resource management. The tool and process provide opportunities for rural landholders and project teams both to self-assess their capacity to plan and deliver natural resource management (NRM) programs and to reflect on their capacities relative to other organizations and institutions that operate in their region. We first outline the tool and process and then present a critical review of the pilot in the South Australian Arid Lands NRM region, South Australia. Results indicate that participants representing local, organizational, and institutional tiers of government were able to arrive at a group consensus position on the strength, importance, and confidence of a variety of capacities for NRM categorized broadly as human, social, physical, and financial. During the process, participants learned a lot about their current capacity as well as capacity needs. Broad conclusions are discussed with reference to the iterative process for assessing and reflecting on community capacity.

  18. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
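
    The pattern the abstract describes (one interware, many cloud back ends, VMs instantiated on demand) can be sketched as below; the class and method names are invented for illustration, not VMDIRAC's actual API:

```python
from abc import ABC, abstractmethod
from itertools import cycle, islice

# One interware-facing interface, several interchangeable cloud back ends.

class CloudEndpoint(ABC):
    @abstractmethod
    def start_vm(self, image: str) -> str: ...
    @abstractmethod
    def status(self, vm_id: str) -> str: ...
    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

class OpenStackEndpoint(CloudEndpoint):
    def start_vm(self, image):        # a real back end would call the cloud API
        return "os-vm-001"
    def status(self, vm_id):
        return "running"
    def stop_vm(self, vm_id):
        pass

def scale_out(endpoints, image, pending_jobs, jobs_per_vm=8):
    """Instantiate enough VMs across endpoints to absorb the queued jobs."""
    needed = -(-pending_jobs // jobs_per_vm)        # ceiling division
    return [ep.start_vm(image) for ep in islice(cycle(endpoints), needed)]

vms = scale_out([OpenStackEndpoint()], "lhcb-worker-image", pending_jobs=20)
print(vms)   # three stub VM ids for 20 queued jobs at 8 jobs per VM
```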

  19. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  20. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  1. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available Wide application of Internet of Things (IoT) systems has been increasingly demanding more hardware facilities for processing various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing enables storage and computing services to be performed at the edge of the network to extend cloud computing. However, there are problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types provide support for dynamically allocating network resources.

  2. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  3. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  4. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  5. Adaptive resource allocation scheme using sliding window subchannel gain computation: context of OFDMA wireless mobiles systems

    International Nuclear Information System (INIS)

    Khelifa, F.; Samet, A.; Ben Hassen, W.; Afif, M.

    2011-01-01

    Multiuser diversity combined with Orthogonal Frequency Division Multiple Access (OFDMA) is a promising technique for achieving high downlink capacities in new generations of cellular and wireless network systems. The total capacity of an OFDMA-based system is maximized when each subchannel is assigned to the mobile station with the best channel-to-noise ratio for that subchannel, with power uniformly distributed between all subchannels. A contiguous method for subchannel construction is adopted in the IEEE 802.16m standard in order to reduce OFDMA system complexity. In this context, a new subchannel gain computation method can contribute, jointly with optimal subchannel assignment, to maximizing total system capacity. In this paper, two new methods are proposed in order to achieve a better trade-off between fairness and efficient use of resources. Numerical results show that the proposed algorithms provide low complexity, higher total system capacity and fairness among users compared to other recent methods.
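
    The baseline policy stated above (each subchannel to the user with the best channel-to-noise ratio, uniform power) fits in a few lines; the CNR values below are invented:

```python
import math

# Greedy max-CNR subchannel assignment with uniform power allocation,
# summing per-subchannel Shannon capacities.

def assign_and_capacity(cnr, total_power, bandwidth_per_sub):
    """cnr[u][s] = channel-to-noise ratio of user u on subchannel s."""
    n_sub = len(cnr[0])
    p = total_power / n_sub                     # uniform power per subchannel
    assignment, capacity = [], 0.0
    for s in range(n_sub):
        u = max(range(len(cnr)), key=lambda user: cnr[user][s])
        assignment.append(u)
        capacity += bandwidth_per_sub * math.log2(1 + p * cnr[u][s])
    return assignment, capacity

cnr = [[1.2, 0.4, 2.0],        # user 0
       [0.6, 1.5, 0.9]]        # user 1
assignment, capacity = assign_and_capacity(cnr, total_power=3.0,
                                           bandwidth_per_sub=1.0)
print(assignment, round(capacity, 2))   # [0, 1, 0] and the total capacity
```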

  6. Building Human Resources Management Capacity for University Research: The Case at Four Leading Vietnamese Universities

    Science.gov (United States)

    Nguyen, T. L.

    2016-01-01

    At research-intensive universities, building human resources management (HRM) capacity has become a key approach to enhancing a university's research performance. However, despite aspiring to become a research-intensive university, many teaching-intensive universities in developing countries may not have created effective research-promoted HRM…

  7. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
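
    As a reminder of what the traditional probabilistic metrics compute, here is a minimal loss-of-load probability sketch built from a capacity outage enumeration; the unit data are invented:

```python
from itertools import product

# Enumerate unit up/down states, accumulate the probability of states whose
# available capacity falls short of the load.

units = [(100, 0.05), (80, 0.08), (60, 0.10)]  # (capacity MW, forced outage rate)

def lolp(units, load):
    p_loss = 0.0
    for state in product([0, 1], repeat=len(units)):   # 1 = unit available
        prob, cap = 1.0, 0.0
        for (c, forced_out), up in zip(units, state):
            prob *= (1 - forced_out) if up else forced_out
            cap += c if up else 0.0
        if cap < load:
            p_loss += prob
    return p_loss

print(f"LOLP at 150 MW load: {lolp(units, 150):.4f}")
# Summing LOLP over (daily peak) periods gives an LOLE-style expectation.
```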

  8. Application of the Computer Capacity to the Analysis of Processors Evolution

    OpenAIRE

    Ryabko, Boris; Rakitskiy, Anton

    2017-01-01

    The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, the manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, due to the accessibility of detailed description of all their technical characteristics.

  9. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  10. Assessing employability capacities and career adaptability in a sample of human resource professionals

    Directory of Open Access Journals (Sweden)

    Melinde Coetzee

    2015-06-01

    Full Text Available Orientation: Employers have come to recognise graduates’ employability capacities and their ability to adapt to new work demands as important human capital resources for sustaining a competitive business advantage. Research purpose: The study sought (1) to ascertain whether a significant relationship exists between a set of graduate employability capacities and a set of career adaptability capacities and (2) to identify the variables that contributed the most to this relationship. Motivation for the study: Global competitive markets and technological advances are increasingly driving the demand for graduate knowledge and skills in a wide variety of jobs. Contemporary career theory further emphasises career adaptability across the lifespan as a critical skill for career management agency. Despite the apparent importance attached to employees’ employability and career adaptability, there seems to be a general lack of research investigating the association between these constructs. Research approach, design and method: A cross-sectional, quantitative research design approach was followed. Descriptive statistics, Pearson product-moment correlations and canonical correlation analysis were performed to achieve the objective of the study. The participants (N = 196) were employed in professional positions in the human resource field and were predominantly early career black people and women. Main findings: The results indicated positive multivariate relationships between the variables and showed that lifelong learning capacities and problem solving, decision-making and interactive skills contributed the most to explaining the participants’ career confidence, career curiosity and career control. Practical/managerial implications: The study suggests that developing professional graduates’ employability capacities may strengthen their career adaptability. These capacities were shown to explain graduates’ active engagement in career management

  11. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacities (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture for IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, consisting of the decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreements (SLAs) and quality of service (QoS) requirements, as well as to optimize big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation efficiency, optimize big data distribution, and achieve better performance than other existing methods.
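
    A hedged sketch of three-condition delegation rules in the spirit of the linearized decision tree described above (the thresholds and the exact rule shape are invented for illustration):

```python
# Decide, per request, whether to serve it at the fog edge or delegate it
# to the cloud, using service size, completion deadline, and fog VM capacity.

def delegate(service_size_mb, completion_deadline_s, fog_vm_capacity_mb):
    """Return 'fog' or 'cloud' for one incoming request."""
    if service_size_mb > fog_vm_capacity_mb:
        return "cloud"                    # too large for the edge VM
    if completion_deadline_s < 1.0:
        return "fog"                      # delay-sensitive: stay at the edge
    if service_size_mb > 0.5 * fog_vm_capacity_mb:
        return "cloud"                    # big but not urgent: offload
    return "fog"

for req in [(50, 0.2, 512), (800, 5.0, 512), (400, 10.0, 512)]:
    print(req, "->", delegate(*req))
```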

  12. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...

  13. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  14. Computer-aided resource planning and scheduling for radiological services

    Science.gov (United States)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study was conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  15. Modeling water resources as a constraint in electricity capacity expansion models

    Science.gov (United States)

    Newmark, R. L.; Macknick, J.; Cohen, S.; Tidwell, V. C.; Woldeyesus, T.; Martinez, A.

    2013-12-01

    In the United States, the electric power sector is the largest withdrawer of freshwater in the nation. The primary demand for water from the electricity sector is for thermoelectric power plant cooling. Areas likely to see the largest near-term growth in population and energy usage, the Southwest and the Southeast, are also facing freshwater scarcity and have experienced water-related power reliability issues in the past decade. Lack of water may become a barrier for new conventionally-cooled power plants, and alternative cooling systems will impact technology cost and performance. Although water is integral to electricity generation, it has long been neglected as a constraint in future electricity system projections. Assessing the impact of water resource scarcity on energy infrastructure development is critical, both for conventional and renewable energy technologies. Efficiently utilizing all water types, including wastewater and brackish sources, or utilizing dry-cooling technologies, will be essential for transitioning to a low-carbon electricity system. This work provides the first demonstration of a national electric system capacity expansion model that incorporates water resources as a constraint on the current and future U.S. electricity system. The Regional Electricity Deployment System (ReEDS) model was enhanced to represent multiple cooling technology types and limited water resource availability in its optimization of electricity sector capacity expansion to 2050. The ReEDS model has high geographic and temporal resolution, making it a suitable model for incorporating water resources, which are inherently seasonal and watershed-specific. Cooling system technologies were assigned varying costs (capital, operations and maintenance), and performance parameters, reflecting inherent tradeoffs in water impacts and operating characteristics. Water rights supply curves were developed for each of the power balancing regions in ReEDS. Supply curves include costs

  16. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    Full Text Available In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests, based on their time constraints, are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a request of a higher type, it preempts a resource that is executing a lower-type request so that the higher-type request can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), whose users can then exploit convenient mobile network services and wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
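
    The scoring, level classification and semi-preemption described above can be sketched as follows; the weights, thresholds and node attributes are invented, not the paper's:

```python
# Score nodes from runtime attributes, bucket them into three levels, and
# let a higher-type (more urgent) request preempt a lower-type job when no
# suitable node is free.

def score(node):
    w = {"cpu": 0.3, "mem": 0.2, "queue": 0.2, "util": 0.15, "bw": 0.15}
    return (w["cpu"] * node["cpu"] + w["mem"] * node["mem"]
            - w["queue"] * node["queue"] - w["util"] * node["util"]
            + w["bw"] * node["bw"])

def level(node):                       # three resource levels from the score
    s = score(node)
    return 1 if s >= 0.6 else 2 if s >= 0.3 else 3

def allocate(nodes, req_type):
    """req_type 1 is the most tightly time-constrained request class."""
    free = [n for n in nodes if n["running"] is None and level(n) <= req_type]
    if free:
        return max(free, key=score)
    # Semi-preemption: evict a lower-type job from the best busy node.
    busy = [n for n in nodes if n["running"] is not None
            and n["running"] > req_type]
    return max(busy, key=score) if busy else None

nodes = [
    {"cpu": .9, "mem": .8, "queue": .1, "util": .2, "bw": .7, "running": 3},
    {"cpu": .5, "mem": .4, "queue": .4, "util": .6, "bw": .4, "running": None},
]
print(allocate(nodes, req_type=1))     # preempts the type-3 job on node 0
```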

  17. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style and the actor model of computation. As a result a new resources-based framework arises, which after initial use cases appears useful and worthy of further research.

  18. System dynamics model of Suzhou water resources carrying capacity and its application

    Directory of Open Access Journals (Sweden)

    Li Cheng

    2010-06-01

    Full Text Available A model of the Suzhou water resources carrying capacity (WRCC) was set up using the method of system dynamics (SD). In the model, three different water resources utilization programs were adopted: (1) continuity of existing water utilization, (2) water conservation/saving, and (3) water exploitation. The dynamic variation of the Suzhou WRCC was simulated with the supply-decided principle for the time period of 2001 to 2030, and the results were characterized based on socio-economic factors. The corresponding Suzhou WRCC values for several target years were calculated by the model. Based on these results, proper ways to improve the Suzhou WRCC are proposed. The model also produced an optimized plan, which can provide a scientific basis for the sustainable utilization of Suzhou water resources and for the coordinated development of the society, economy, and water resources.
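
    A toy system-dynamics run in the spirit of the Suzhou model: one demand stock and one supply stock stepped year by year under the three programs. All rates below are invented; the real SD model has many coupled stocks and feedback loops:

```python
# Step demand and supply stocks annually and report the supply/demand ratio
# after 30 years for each policy program (>1 means within capacity).

def simulate(years, demand0, supply0, demand_growth, saving_rate, supply_growth):
    demand, supply = demand0, supply0
    for _ in range(years):
        demand *= (1 + demand_growth) * (1 - saving_rate)
        supply *= (1 + supply_growth)
    return supply / demand

programs = {
    "continuity":   dict(demand_growth=0.03, saving_rate=0.00, supply_growth=0.00),
    "conservation": dict(demand_growth=0.03, saving_rate=0.02, supply_growth=0.00),
    "exploitation": dict(demand_growth=0.03, saving_rate=0.00, supply_growth=0.01),
}
for name, p in programs.items():
    ratio = simulate(30, demand0=90.0, supply0=100.0, **p)
    print(f"{name:12s} supply/demand after 30 yr: {ratio:.2f}")
```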

  19. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  20. Can the Teachers' Creativity Overcome Limited Computer Resources?

    Science.gov (United States)

    Nikolov, Rumen; Sendova, Evgenia

    1988-01-01

    Describes experiences of the Research Group on Education (RGE) at the Bulgarian Academy of Sciences and the Ministry of Education in using limited computer resources when teaching informatics. Topics discussed include group projects; the use of Logo; ability grouping; and out-of-class activities, including publishing a pupils' magazine. (13…

  1. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  2. Construction of an evaluation index system of water resources bearing capacity: An empirical study in Xi’an, China

    Science.gov (United States)

    Qu, X. E.; Zhang, L. L.

    2017-08-01

    In this paper, a comprehensive evaluation of the water resources bearing capacity of Xi’an is performed. By constructing a comprehensive evaluation index system of water resources bearing capacity that includes water resources, economy, society, and ecological environment, we empirically studied the dynamic change and regional differences of the water resources bearing capacities of Xi’an’s districts using the TOPSIS method (Technique for Order Preference by Similarity to an Ideal Solution). Results show that the water resources bearing capacity of Xi’an increased significantly over time, and the contributions of the subsystems from high to low are as follows: water resources subsystem, social subsystem, ecological subsystem, and economic subsystem. Furthermore, there are large differences between the water resources bearing capacities of the different districts of Xi’an; from high to low they are the urban areas, Huxian, Zhouzhi, Gaoling, and Lantian. Overall, the water resources bearing capacity of Xi’an is still at a relatively low level, which is closely related to the scarcity of water resources, population pressure, insufficient water-saving consciousness, an irrational industrial structure, low water-use efficiency, and so on.
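
    A compact TOPSIS implementation matching the method named above (indicator values and weights below are invented, not the Xi’an data):

```python
import math

# TOPSIS: normalize and weight the decision matrix, find ideal/anti-ideal
# points, and score each alternative by closeness to the ideal solution.

def topsis(matrix, weights, benefit):
    # 1. Vector-normalize each column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    V = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    # 2. Ideal / anti-ideal points per column (flip for cost criteria).
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    anti  = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]
    # 3. Closeness to the ideal solution, in [0, 1].
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in V]

districts = [[0.9, 0.3, 0.7],      # e.g. urban area
             [0.4, 0.6, 0.5],      # Huxian
             [0.2, 0.8, 0.3]]      # Lantian
print([round(s, 3) for s in topsis(districts,
                                   weights=[0.5, 0.3, 0.2],
                                   benefit=[True, False, True])])
```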

  3. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  4. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  5. Towards minimal resources of measurement-based quantum computation

    International Nuclear Information System (INIS)

    Perdrix, Simon

    2007-01-01

We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Lett. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to reply in the negative to the open question presented by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC

  6. Strengthening Compute and Data intensive Capacities of Armenia

    OpenAIRE

    Astsatryan , Hrachya; Sahakyan , Vladimir; Shoukourian , Yuri; Cros , Pierre-Henri; Daydé , Michel; Dongarra , Jack; Oster , Per

    2015-01-01

    International audience; Traditionally, Armenia has had a leading position within the computer science and Information Technology sectors in the South Caucasus region and beyond. Information Technology (IT) is also one of the fastest growing industries of the Armenian economy [1]. In 2000, the Government of Armenia recognized the IT sector as the primary constituent of the country's economic progress. Armenia is, more than ever, in need of cutting-edge and relevant e-infrastructures and e-serv...

  7. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and his experiment. In particular, on Google's new cloud platform Google Compute Engine (GCE), upload of users' own virtual machine images is not possible. This precludes application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  8. a Framework for Capacity Building in Mapping Coastal Resources Using Remote Sensing in the Philippines

    Science.gov (United States)

    Tamondong, A.; Cruz, C.; Ticman, T.; Peralta, R.; Go, G. A.; Vergara, M.; Estabillo, M. S.; Cadalzo, I. E.; Jalbuena, R.; Blanco, A.

    2016-06-01

Remote sensing has been an effective technology in mapping natural resources by reducing the costs and field data gathering time and bringing in timely information. With the launch of several earth observation satellites, an increase in the availability of satellite imagery provides an immense selection of data for the users. The Philippines has recently embarked on a program which will enable the gathering of LiDAR data in the whole country. The capacity of the Philippines to take advantage of these advancements and opportunities is lacking. There is a need to transfer the knowledge of remote sensing technology to other institutions to better utilize the available data. Being an archipelagic country with approximately 36,000 kilometers of coastline, and most of its people depending on its coastal resources, remote sensing is an optimal choice in mapping such resources. A project involving fifteen (15) state universities and colleges and higher education institutions all over the country headed by the University of the Philippines Training Center for Applied Geodesy and Photogrammetry and funded by the Department of Science and Technology was formed to carry out the task of capacity building in mapping the country's coastal resources using LiDAR and other remotely sensed datasets. This paper discusses the accomplishments and the future activities of the project.

  9. Surgical resource utilization in urban terrorist bombing: a computer simulation.

    Science.gov (United States)

    Hirshberg, A; Stein, M; Walden, R

    1999-09-01

    The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
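
    A flavor of the discrete-event approach can be given in a few lines; the sketch below queues casualties for a fixed number of surgeons using exponential arrival and treatment times. All parameters are invented for illustration and bear no relation to the incident data used in the study.

      import heapq, random

      random.seed(1)
      N_SURGEONS, N_CASUALTIES = 4, 60
      ARRIVAL_MEAN, TREAT_MEAN = 1.0, 8.0   # minutes (illustrative)

      events, free, queue, waits, t = [], N_SURGEONS, [], [], 0.0
      for _ in range(N_CASUALTIES):
          t += random.expovariate(1 / ARRIVAL_MEAN)
          heapq.heappush(events, (t, "arrival"))

      while events:
          t, kind = heapq.heappop(events)
          if kind == "arrival":
              queue.append(t)        # casualty joins the waiting line
          else:
              free += 1              # a surgeon finishes and becomes free
          while free and queue:      # start treatment whenever possible
              waits.append(t - queue.pop(0))
              free -= 1
              heapq.heappush(events, (t + random.expovariate(1 / TREAT_MEAN), "done"))

      print(f"mean wait with {N_SURGEONS} surgeons: {sum(waits) / len(waits):.1f} min")

    Re-running such a model while varying N_SURGEONS is exactly the kind of staff-profile experiment the abstract describes.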

  10. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally, dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users, as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost-efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
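
    The core scaling decision of such an on-demand cloud manager can be reduced to a reconciliation step per scheduling cycle; the sketch below is a schematic illustration only (hypothetical function and figures, not ROCED code).

      import math

      def reconcile(queued_jobs, cores_per_job, cores_per_vm, running_vms, idle_vms, quota):
          """Return (vms_to_boot, vms_to_terminate) for one scheduling cycle."""
          needed = math.ceil(queued_jobs * cores_per_job / cores_per_vm)
          target = min(needed, quota)            # respect the cloud-site quota
          if target > running_vms:
              return target - running_vms, 0     # demand exceeds supply: boot VMs
          # shrink only by draining idle VMs, never by killing busy workers
          return 0, min(running_vms - target, idle_vms)

      print(reconcile(queued_jobs=120, cores_per_job=1, cores_per_vm=8,
                      running_vms=10, idle_vms=3, quota=20))  # -> (5, 0)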

  11. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  12. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  13. Building sustainable organizational capacity to deliver HIV programs in resource-constrained settings: stakeholder perspectives.

    Science.gov (United States)

    Sharma, Anjali; Chiliade, Philippe; Michael Reyes, E; Thomas, Kate K; Collens, Stephen R; Rafael Morales, José

    2013-12-13

    In 2008, the US government mandated that HIV/AIDS care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR) should shift from US-based international partners (IPs) to registered locally owned organizations (local partners, or LPs). The US Health Resources and Services Administration (HRSA) developed the Clinical Assessment for Systems Strengthening (ClASS) framework for technical assistance in resource-constrained settings. The ClASS framework involves all stakeholders in the identification of LPs' strengths and needs for technical assistance. This article examines the role of ClASS in building capacity of LPs that can endure and adapt to changing financial and policy environments. All stakeholders (n=68) in Kenya, Zambia, and Nigeria who had participated in the ClASS from LPs and IPs, the US Centers for Disease Control and Prevention (CDC), and, in Nigeria, HIV/AIDS treatment facilities (TFs) were interviewed individually or in groups (n=42) using an open-ended interview guide. Thematic analysis revealed stakeholder perspectives on ClASS-initiated changes and their sustainability. Local organizations were motivated to make changes in internal operations with the ClASS approach, PEPFAR's competitive funding climate, organizational goals, and desired patient health outcomes. Local organizations drew on internal resources and, if needed, technical assistance from IPs. Reportedly, ClASS-initiated changes and remedial action plans made LPs more competitive for PEPFAR funding. LPs also attributed their successful funding applications to their preexisting systems and reputation. Bureaucracy, complex and competing tasks, and staff attrition impeded progress toward the desired changes. Although CDC continues to provide technical assistance through IPs, declining PEPFAR funds threaten the consolidation of gains, smooth program transition, and continuity of treatment services. The well-timed adaptation and implementation of Cl

  14. Demand response and energy efficiency in the capacity resource procurement: Case studies of forward capacity markets in ISO New England, PJM and Great Britain

    International Nuclear Information System (INIS)

    Liu, Yingqi

    2017-01-01

Demand-side resources like demand response (DR) and energy efficiency (EE) can contribute to the capacity adequacy underpinning power system reliability. Forward capacity markets are established in many liberalised markets to procure capacity, with a strong interest in procuring DR and EE. With case studies of ISO New England, PJM and Great Britain, this paper examines the process and trends of procuring DR and EE in forward capacity markets, and the design for integration mechanisms. It finds that the contribution of DR and EE varies widely across these three capacity markets, due to a set of factors regarding mechanism design, market conditions and regulatory provisions, and the offering of EE is more heavily influenced by regulatory utility EE obligations. DR and EE are complementary in targeting end-uses and customers for capacity resources, thus highlighting the value of procuring them both. System needs and resources’ market potential need to be considered in defining capacity products. Over the long term, it is important to ensure the removal of barriers for these demand-side resources and the capability of providers in addressing risks of unstable funding and forward planning. For the EDR Pilot in the UK, better coordination with the forward capacity auction needs to be achieved. - Highlights: • Trends of demand response and energy efficiency in capacity markets are analysed. • Integration mechanisms, market conditions and regulatory provisions are key factors. • Participation of energy efficiency is influenced by regulatory utility obligations. • Procuring both demand response and energy efficiency in capacity markets is valuable. • Critical analysis of the design of capacity products and integration mechanisms.

  15. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    Science.gov (United States)

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience (approaching subjective behavior as the result of mental computations instantiated in the brain) to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.
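
    The statistical notion of confidence invoked here has a compact closed form: for a binary decision between means of +mu and -mu observed through Gaussian noise, the posterior probability that the chosen hypothesis is correct is a logistic function of the evidence. A small sketch under these standard signal-detection assumptions (not the authors' code):

      import numpy as np

      def decision_confidence(x, mu=1.0, sigma=1.0):
          """P(chosen hypothesis is correct | evidence x), equal priors on mu = +/-mu."""
          p_plus = 1.0 / (1.0 + np.exp(-2.0 * mu * x / sigma**2))  # P(mu = +mu | x)
          return max(p_plus, 1.0 - p_plus)  # confidence in the MAP choice

      print(decision_confidence(0.5))   # weak evidence -> confidence near 0.73
      print(decision_confidence(3.0))   # strong evidence -> confidence near 1.0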

  16. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLAs) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS also concern themselves with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for streaming applications is addressed and extensive analysis is performed for related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
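
    The goal-programming trade-off described can be posed as a small linear program. The toy instance below (illustrative numbers, a single VM-count decision variable, scipy's linprog) is a sketch of the technique, not the paper's model.

      from scipy.optimize import linprog

      # Decision: x = number of VMs. Goals: capacity x*c >= demand (under-deviation d1),
      # cost x*p <= budget (over-deviation d2). Minimize weighted deviations.
      c_vm, demand, price, budget = 100.0, 850.0, 3.0, 24.0   # illustrative figures
      w1, w2 = 1.0, 2.0                                       # goal weights

      # Variables: [x, d1, d2], all >= 0
      objective = [0.0, w1, w2]
      A_ub = [[-c_vm, -1.0,  0.0],   # c*x + d1 >= demand
              [price,  0.0, -1.0]]   # p*x - d2 <= budget
      b_ub = [-demand, budget]
      res = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
      x, d1, d2 = res.x
      print(f"VMs: {x:.2f}, capacity shortfall: {d1:.1f}, budget overrun: {d2:.1f}")

    Raising w2 relative to w1 pushes the solution toward the budget goal at the expense of QoS, which is the balancing act the abstract refers to.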

  17. Building Capacity to Use NASA Earth Observations in the Water Resource Sector

    Science.gov (United States)

    Childs-Gleason, L. M.; Ross, K. W.; Crepps, G.; Clayton, A.; Ruiz, M. L.; Rogers, L.; Allsbrook, K. N.

    2017-12-01

    The NASA DEVELOP National Program builds capacity to use and apply NASA Earth observations to address environmental concerns around the globe. The DEVELOP model builds capacity in both participants (students, recent graduates, and early and transitioning career professionals) who conduct the projects and partners (decision and policy makers) who are recipients of project methodologies and results. Projects focus on a spectrum of thematic topics, including water resource management which made up 30% of the DEVELOP FY2017 portfolio. During this period, DEVELOP conducted water-focused feasibility studies in collaboration with 22 partners across 13 U.S. states and five countries. This presentation will provide an overview of needs identified, DEVELOP's response, data sources, challenges, and lessons learned.

  18. A Multi-Tiered Approach for Building Capacity in Hydrologic Modeling for Water Resource Management in Developing Regions

    Science.gov (United States)

    Markert, K. N.; Limaye, A. S.; Rushi, B. R.; Adams, E. C.; Anderson, E.; Ellenburg, W. L.; Mithieu, F.; Griffin, R.

    2017-12-01

    Water resource management is the process by which governments, businesses and/or individuals reach and implement decisions that are intended to address the future quantity and/or quality of water for societal benefit. The implementation of water resource management typically requires the understanding of the quantity and/or timing of a variety of hydrologic variables (e.g. discharge, soil moisture and evapotranspiration). Often times these variables for management are simulated using hydrologic models particularly in data sparse regions. However, there are several large barriers to entry in learning how to use models, applying best practices during the modeling process, and selecting and understanding the most appropriate model for diverse applications. This presentation focuses on a multi-tiered approach to bring the state-of-the-art hydrologic modeling capabilities and methods to developing regions through the SERVIR program, a joint NASA and USAID initiative that builds capacity of regional partners and their end users on the use of Earth observations for environmental decision making. The first tier is a series of trainings on the use of multiple hydrologic models, including the Variable Infiltration Capacity (VIC) and Ensemble Framework For Flash Flood Forecasting (EF5), which focus on model concepts and steps to successfully implement the models. We present a case study for this in a pilot area, the Nyando Basin in Kenya. The second tier is focused on building a community of practice on applied hydrology modeling aimed at creating a support network for hydrologists in SERVIR regions and promoting best practices. The third tier is a hydrologic inter-comparison project under development in the SERVIR regions. The objective of this step is to understand model performance under specific decision-making scenarios, and to share knowledge among hydrologists in SERVIR regions. The results of these efforts include computer programs, training materials, and new

  19. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
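
    One standard way to realize the stratified sampling strategy sketched above is Neyman-style allocation, which spends more calibration points in strata with larger error variability; the snippet below uses synthetic numbers, not the Bay Area data.

      import numpy as np

      # Per-stratum calibration-error standard deviations (synthetic): error
      # varies across space, so sampling effort should too
      stratum_sd = np.array([0.1, 0.5, 2.0, 0.3])
      stratum_size = np.array([1000, 1000, 1000, 1000])
      budget = 400                                  # total calibration points

      # Neyman allocation: n_h proportional to N_h * sd_h
      alloc = stratum_size * stratum_sd
      n_h = np.round(budget * alloc / alloc.sum()).astype(int)
      print(dict(enumerate(n_h)))   # most points go to the high-error stratum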

  20. Focused attention improves working memory: implications for flexible-resource and discrete-capacity models.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Lin, Hsuan-Yu; Oberauer, Klaus

    2014-10-01

Performance in working memory (WM) tasks depends on the capacity for storing objects and on the allocation of attention to these objects. Here, we explored how capacity models need to be augmented to account for the benefit of focusing attention on the target of recall. Participants encoded six colored disks (Experiment 1) or a set of one to eight colored disks (Experiment 2) and were cued to recall the color of a target on a color wheel. In the no-delay condition, the recall-cue was presented after a 1,000-ms retention interval, and participants could report the retrieved color immediately. In the delay condition, the recall-cue was presented at the same time as in the no-delay condition, but the opportunity to report the color was delayed. During this delay, participants could focus attention exclusively on the target. Responses deviated less from the target's color in the delay than in the no-delay condition. Mixture modeling assigned this benefit to a reduction in guessing (Experiments 1 and 2) and transposition errors (Experiment 2). We tested several computational models implementing flexible or discrete capacity allocation, aiming to explain both the effect of set size, reflecting the limited capacity of WM, and the effect of delay, reflecting the role of attention to WM representations. Both models fit the data better when a spatially graded source of transposition error is added to their assumptions. The benefits of focusing attention could be explained by allocating to this object a higher proportion of the capacity to represent color.
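
    For orientation, the mixture modeling mentioned above typically combines a von Mises "memory" component with a uniform "guessing" component; here is a minimal maximum-likelihood sketch on simulated data (the standard two-parameter mixture, without the transposition component the authors add).

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import vonmises

      rng = np.random.default_rng(0)
      # Simulate recall errors: 80% memory responses (kappa=8), 20% uniform guesses
      errors = np.where(rng.random(500) < 0.8,
                        rng.vonmises(0.0, 8.0, 500),
                        rng.uniform(-np.pi, np.pi, 500))

      def nll(params):
          g, kappa = params   # guessing rate, memory precision
          pdf = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
          return -np.sum(np.log(pdf))

      fit = minimize(nll, x0=[0.5, 2.0], bounds=[(1e-3, 0.999), (0.01, 100.0)])
      print(f"guess rate ~ {fit.x[0]:.2f}, kappa ~ {fit.x[1]:.1f}")

    A delay benefit of the kind reported would show up as a lower fitted guess rate in the delay condition.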

  1. Building sustainable organizational capacity to deliver HIV programs in resource-constrained settings: stakeholder perspectives

    Directory of Open Access Journals (Sweden)

    Anjali Sharma

    2013-12-01

Full Text Available Background: In 2008, the US government mandated that HIV/AIDS care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR) should shift from US-based international partners (IPs) to registered locally owned organizations (local partners, or LPs). The US Health Resources and Services Administration (HRSA) developed the Clinical Assessment for Systems Strengthening (ClASS) framework for technical assistance in resource-constrained settings. The ClASS framework involves all stakeholders in the identification of LPs’ strengths and needs for technical assistance. Objective: This article examines the role of ClASS in building capacity of LPs that can endure and adapt to changing financial and policy environments. Design: All stakeholders (n=68) in Kenya, Zambia, and Nigeria who had participated in the ClASS from LPs and IPs, the US Centers for Disease Control and Prevention (CDC), and, in Nigeria, HIV/AIDS treatment facilities (TFs) were interviewed individually or in groups (n=42) using an open-ended interview guide. Thematic analysis revealed stakeholder perspectives on ClASS-initiated changes and their sustainability. Results: Local organizations were motivated to make changes in internal operations with the ClASS approach, PEPFAR's competitive funding climate, organizational goals, and desired patient health outcomes. Local organizations drew on internal resources and, if needed, technical assistance from IPs. Reportedly, ClASS-initiated changes and remedial action plans made LPs more competitive for PEPFAR funding. LPs also attributed their successful funding applications to their preexisting systems and reputation. Bureaucracy, complex and competing tasks, and staff attrition impeded progress toward the desired changes. Although CDC continues to provide technical assistance through IPs, declining PEPFAR funds threaten the consolidation of gains, smooth program transition, and continuity of treatment services

  2. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, there is an optimization problem of mobile device and cloud resource allocation. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution with constrained execution time.

  3. Negative quasi-probability as a resource for quantum computation

    International Nuclear Information System (INIS)

    Veitch, Victor; Ferrie, Christopher; Emerson, Joseph; Gross, David

    2012-01-01

    A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. (paper)
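
    The discrete Wigner function at the center of this connection can be computed directly from phase-point operators; the qutrit sketch below, written from the standard construction rather than taken from the paper, shows a stabilizer state staying nonnegative while a "magic" state exhibits negativity.

      import numpy as np

      d = 3                                   # odd prime dimension (qutrit)
      w = np.exp(2j * np.pi / d)
      X = np.roll(np.eye(d), 1, axis=0)       # shift: X|j> = |j+1 mod d>
      Z = np.diag(w ** np.arange(d))          # clock: Z|j> = w^j |j>
      A0 = np.eye(d)[:, [(-k) % d for k in range(d)]]   # parity: |j> -> |-j mod d>

      def wigner(rho):
          W = np.empty((d, d))
          for q in range(d):
              for p in range(d):
                  D = np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
                  W[q, p] = np.real(np.trace(rho @ D @ A0 @ D.conj().T)) / d
          return W

      stab = np.zeros((d, d)); stab[0, 0] = 1.0     # stabilizer state |0><0|
      psi = np.array([0, 1, -1]) / np.sqrt(2)       # a "magic" state
      magic = np.outer(psi, psi.conj())
      print(wigner(stab).min())    # >= 0: the efficiently simulable regime
      print(wigner(magic).min())   # < 0 (about -1/3): negativity as a resource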

  4. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  5. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  6. Radiologic total lung capacity measurement. Development and evaluation of a computer-based system

    Energy Technology Data Exchange (ETDEWEB)

    Seeley, G.W.; Mazzeo, J.; Borgstrom, M.; Hunter, T.B.; Newell, J.D.; Bjelland, J.C.

    1986-11-01

    The development of a computer-based radiologic total lung capacity (TLC) measurement system designed to be used by non-physician personnel is detailed. Four operators tested the reliability and validity of the system by measuring inspiratory PA and lateral pediatric chest radiographs with a Graf spark pen interfaced to a DEC VAX 11/780 computer. First results suggest that the ultimate goal of developing an accurate and easy to use TLC measurement system for non-physician personnel is attainable.

  7. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.

  8. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  9. An entropy theorem for computing the capacity of weakly (d, k)-constrained sequences

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Schouhamer Immink, K.A.

    2000-01-01

We find an analytic expression for the maximum of the normalized entropy -∑_{i∈T} p_i ln p_i / ∑_{i∈T} i p_i, where the set T is the disjoint union of sets S_n of positive integers that are assigned probabilities P_n, with ∑_n P_n = 1. This result is applied to the computation of the capacity of weakly (d,k)-constrained sequences.
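
    For context, the capacity of an ordinary (d,k)-constrained system is log2 of the largest eigenvalue of the constraint graph's adjacency matrix (Shannon's construction); the sketch below computes it numerically and is not the paper's weak-constraint analysis.

      import numpy as np

      def dk_capacity(d, k):
          """Capacity (bits/symbol) of binary sequences whose runs of 0s between
          consecutive 1s are constrained to lie in [d, k]."""
          n = k + 1                      # state = current run length of zeros
          A = np.zeros((n, n))
          for i in range(n):
              if i < k:
                  A[i, i + 1] = 1        # emit '0': the run grows
              if i >= d:
                  A[i, 0] = 1            # emit '1': allowed once the run >= d
          lam = max(abs(np.linalg.eigvals(A)))   # Perron root of the graph
          return np.log2(lam)

      print(f"{dk_capacity(1, 3):.4f}")  # classic (1,3) RLL constraint: ~0.5515
      print(f"{dk_capacity(2, 7):.4f}")  # (2,7) RLL constraint: ~0.5174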

  10. Estimating social carrying capacity through computer simulation modeling: an application to Arches National Park, Utah

    Science.gov (United States)

    Benjamin Wang; Robert E. Manning; Steven R. Lawson; William A. Valliere

    2001-01-01

    Recent research and management experience has led to several frameworks for defining and managing carrying capacity of national parks and related areas. These frameworks rely on monitoring indicator variables to ensure that standards of quality are maintained. The objective of this study was to develop a computer simulation model to estimate the relationships between...

  11. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

Full Text Available This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. The input-output prediction models for the nonlinear Hammerstein-Wiener (NHW) and delay inputs for the adaptive neurofuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least square support vector machine techniques for the same case study. The in situ data collection and statistical performances are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model performance is proven to be better than that of the other models used to detect the pullout capacity of ground anchors.

  12. Computer calculation of heat capacity of natural gases over a wide range of pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Dranchuk, P.M. (Alberta Univ., Edmonton, AB (Canada)); Abou-Kassem, J.H. (Pennsylvania State Univ., University Park, PA (USA))

    1992-04-01

A method is presented whereby specific heats or heat capacities of natural gases, both sweet and sour, at elevated pressures and temperatures may be made suitable to modern-day machine calculation. The method involves developing a correlation for ideal isobaric heat capacity as a function of gas gravity and pseudo-reduced temperature over the temperature range of 300 to 1500 K, and a mathematical equation for the isobaric heat capacity departure based on accepted thermodynamic principles applied to an equation of state that adequately describes the behavior of gases to which the Standing and Katz Z factor correlation applies. The heat capacity departure equation is applicable over the range of 0.2 ≤ Pr ≤ 15 and 1.05 ≤ Tr ≤ 3, where Pr and Tr refer to the reduced pressure and temperature respectively. The significance of the method presented lies in its utility and adaptability to computer applications. 25 refs., 2 figs., 4 tabs.

  13. International Conference on Human Resource Development for Nuclear Power Programmes: Building and Sustaining Capacity. Presentations

    International Nuclear Information System (INIS)

    2014-01-01

    The objectives of the conference are to: • Review developments in the global status of HRD since the 2010 international conference; • Emphasize the role of human resources and capacity building programmes at the national and organizational level for achieving safe, secure and sustainable nuclear power programmes; • Discuss the importance of building competence in nuclear safety and security; • Provide a forum for information exchange on national, as well as international, policies and practices; • Share key elements and best practices related to the experience of Member States that are introducing, operating or expanding nuclear power programmes; • Highlight the practices and issues regarding HRD at the organizational and national level; • Highlight education and training programmes and practices; • Emphasize the role of nuclear knowledge management for knowledge transfer and HRD; and • Elaborate on the role and scope of various knowledge networks

  14. Uranium from Coal Ash: Resource Assessment and Outlook on Production Capacities

    International Nuclear Information System (INIS)

    Monnet, Antoine

    2014-01-01

Conclusion: Uranium production from coal-ash is technically feasible: in some situations, it could reach commercial development, in which case a fast lead time will be a plus. Technically accessible resources are significant (1.1 to 4.5 MtU). Yet most of those are low grade. Potential reserves don’t exceed 200 ktU (cut-off grade = 200 ppm). • By-product uranium production => constrained production capacities; • Realistic production potential < 700 tU/year; • ~ 1% of current needs. → Coal ash will not be a significant source of uranium for the 21st century – even if production constraints are released (increase in coal consumption

  15. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  16. Resource Planning in Glaucoma: A Tool to Evaluate Glaucoma Service Capacity.

    Science.gov (United States)

    Batra, Ruchika; Sharma, Hannah E; Elaraoud, Ibrahim; Mohamed, Shabbir

    2017-12-28

The National Patient Safety Agency (2009) publication advising timely follow-up of patients with established glaucoma followed several reported instances of visual loss due to postponed appointments and patients lost to follow-up. The Royal College of Ophthalmologists Quality Standards Development Group stated that all hospital appointments should occur within 15% of the intended follow-up period. To determine whether: 1. Glaucoma follow-up appointments at a teaching hospital occur within the requested time; 2. Appointments are requested at appropriate intervals based on the NICE Guidelines; 3. The capacity of the glaucoma service is adequate. Methods: A two-part audit was undertaken of 98 and 99 consecutive patients respectively attending specialist glaucoma clinics. In the first part, the reasons for delayed appointments were recorded. In the second part the requested follow-up was compared with NICE guidelines where applicable. Based on the findings, changes were implemented and a re-audit of 100 patients was carried out. The initial audit found that although clinical decisions regarding follow-up intervals were 100% compliant with NICE guidelines where applicable, 24% of appointments were delayed beyond 15% of the requested period, due to administrative errors and inadequate capacity, leading to significant clinical deterioration in two patients. Following the introduction of an electronic appointment tracker and increased clinical capacity created by extra clinics and clinicians, the re-audit found a marked decrease in the percentage of appointments being delayed (9%). This audit is a useful tool to evaluate glaucoma service provision, assist in resource planning for the service and bring about change in a non-confrontational way. It can be widely applied and adapted for use in other medical specialities.

  17. Exploring the impact of reduced hydro capacity and lignite resources on the Macedonian power sector development

    Directory of Open Access Journals (Sweden)

Taseska-Gjorgievska Verica

    2014-01-01

Full Text Available The reference development pathway of the Macedonian energy sector highlights the important role that lignite and hydro power play in the power sector, each accounting for 40% of total capacity in 2021. In 2030, this dominance continues, although hydro has a higher share due to the retirement of some of the existing lignite plants. Three sensitivity runs of the MARKAL-Macedonia energy system model have been undertaken to explore the importance of these technologies to the system, considering that their resources may be reduced with time: (1) reducing the availability of lignite from domestic mines by 50% in 2030 (with limited capacity of imports), (2) removing three large hydro options, which account for 310 MW in the business-as-usual case, and (3) both of the above restrictions. The reduction in lignite availability is estimated to lead to additional overall system costs of 0.7%, compared to hydro restrictions at only 0.1%. With both restrictions applied, the additional costs rise to over 1%, amounting to 348 M€ over the 25 year planning horizon. In particular, costs are driven up by an increasing reliance on electricity imports. In all cases, the total electricity generation decreases, but imports increase, which leads to a drop in capacity requirements. In both the lignite and the hydro restricted cases, it is primarily gas-fired generation and imports that “fill the gap”. This highlights the importance of an increasingly diversified and efficient supply, which should be promoted through initiatives on renewables, energy efficiency, and lower carbon emissions.

  18. Anesthesia Capacity in Ghana: A Teaching Hospital's Resources, and the National Workforce and Education.

    Science.gov (United States)

    Brouillette, Mark A; Aidoo, Alfred J; Hondras, Maria A; Boateng, Nana A; Antwi-Kusi, Akwasi; Addison, William; Hermanson, Alec R

    2017-12-01

Quality anesthetic care is lacking in low- and middle-income countries (LMICs). Global health leaders call for perioperative capacity reports in limited-resource settings to guide improved health care initiatives. We describe a teaching hospital's resources and the national workforce and education in this LMIC capacity report. A prospective observational study was conducted at Komfo Anokye Teaching Hospital (KATH) in Kumasi, Ghana, during 4 weeks in August 2016. Teaching hospital data were generated from observations of hospital facilities and patient care, review of archival records, and interviews with KATH personnel. National data were obtained from interviews with KATH personnel, correspondence with Ghana's anesthesia society, and review of public records. The practice of anesthesia at KATH incorporated preanesthesia clinics, intraoperative management, and critical care. However, there were not enough physicians to consistently supervise care, especially in postanesthesia care units (PACUs) and the critical care unit (CCU). Clean water and electricity were usually reliable in all 16 operating rooms (ORs) and throughout the hospital. Equipment and drugs were inventoried in detail. While much basic infrastructure, equipment, and medications were present in ORs, patient safety was hindered by hospital-wide oxygen supply failures and a shortage of vital signs monitors and working ventilators in PACUs and the CCU. In 2015, there were 10,319 anesthetics administered, with obstetric and gynecologic, general, and orthopedic procedures comprising 62% of surgeries. From 2011 to 2015, the all-cause perioperative mortality rate in ORs and PACUs was 0.65%, or 1 death per 154 anesthetics, with 99% of deaths occurring in PACUs. Workforce and education data at KATH revealed 10 anesthesia attending physicians, 61 nurse anesthetists (NAs), and 7 anesthesia resident physicians in training. At the national level, 70 anesthesia attending physicians and 565 NAs cared for Ghana's population

  19. Alluvial diamond resource potential and production capacity assessment of the Central African Republic

    Science.gov (United States)

    Chirico, Peter G.; Barthelemy, Francis; Ngbokoto, Francois A.

    2010-01-01

In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflict concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by diamond-producing and diamond-importing countries. Over 70 countries were included as members of the KPCS at the end of 2007. To prevent trade in "conflict diamonds" while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was (1) to assess the naturally occurring endowment of diamonds in the Central African Republic (potential resources) based on geological evidence, previous studies, and recent field data and (2) to assess the diamond-production capacity and measure the intensity of mining activity. Several possible methods can be used to estimate the potential diamond resource. However, because there is generally a lack of sufficient and consistent data recording all diamond mining in the Central African Republic and because time to conduct fieldwork and accessibility to the diamond mining areas are limited, two different methodologies were used: the volume and grade approach and the content per kilometer approach. Estimates are that approximately 39,000,000 carats of alluvial diamonds remain in the eastern and western zones of the CAR combined. This amount is roughly twice the total amount of diamonds reportedly exported from the Central African Republic since 1931. Production capacity is

  20. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

Full Text Available Modern-day advancements are increasingly digitizing our lives, which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations to plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing larger datasets, there are a number of other computing infrastructures available to use in various application domains. The primary focus of the study is how to classify major big data resource management systems in the context of a cloud computing environment. We identify some key features which characterize big data frameworks as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study came up with some interesting findings which are in contradiction with the available literature on the Internet.

  1. Weakly and strongly polynomial algorithms for computing the maximum decrease in uniform arc capacities

    Directory of Open Access Journals (Sweden)

    Ghiyasvand Mehdi

    2016-01-01

Full Text Available In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
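
    The weakly polynomial bound can be made concrete by bisecting on t and testing feasibility of each t-network with one maximum-flow computation; the toy instance below (hypothetical balances and arcs, networkx) is a sketch of that idea, not the authors' algorithm.

      import networkx as nx

      # Toy instance: node balances (supply > 0, demand < 0); all arcs have capacity U
      balances = {"a": 2.0, "b": 0.0, "c": -2.0}
      arcs = [("a", "b"), ("b", "c"), ("a", "c")]
      U = 3.0

      def feasible(t):
          """Is the network with uniform arc capacities U - t still feasible?"""
          G = nx.DiGraph()
          for u, v in arcs:
              G.add_edge(u, v, capacity=U - t)
          for n, b in balances.items():   # reduce to max-flow via super source/sink
              if b > 0:
                  G.add_edge("S", n, capacity=b)
              elif b < 0:
                  G.add_edge(n, "T", capacity=-b)
          supply = sum(b for b in balances.values() if b > 0)
          return nx.maximum_flow_value(G, "S", "T") >= supply - 1e-9

      lo, hi = 0.0, U
      for _ in range(50):                 # bisection: O(log(U/eps)) max-flow calls
          mid = (lo + hi) / 2
          lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
      print(f"largest t ~ {lo:.4f}")      # here both a->c paths need capacity 1, so t = 2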

  2. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  3. Developing capacity in health informatics in a resource poor setting: lessons from Peru.

    Science.gov (United States)

    Kimball, Ann Marie; Curioso, Walter H; Arima, Yuzo; Fuller, Sherrilynne; Garcia, Patricia J; Segovia-Juarez, Jose; Castagnetto, Jesus M; Leon-Velarde, Fabiola; Holmes, King K

    2009-10-27

    The public sectors of developing countries require strengthened capacity in health informatics. In Peru, where formal university graduate degrees in biomedical and health informatics were lacking until recently, the AMAUTA Global Informatics Research and Training Program has provided research and training for health professionals in the region since 1999. The Fogarty International Center supports the program as a collaborative partnership between Universidad Peruana Cayetano Heredia in Peru and the University of Washington in the United States of America. The program aims to train core professionals in health informatics and to strengthen the health information resource capabilities and accessibility in Peru. The program has achieved considerable success in the development and institutionalization of informatics research and training programs in Peru. Projects supported by this program are leading to the development of sustainable training opportunities for informatics and eight of ten Peruvian fellows trained at the University of Washington are now developing informatics programs and an information infrastructure in Peru. In 2007, Universidad Peruana Cayetano Heredia started offering the first graduate diploma program in biomedical informatics in Peru.

  4. Education and Training Networks as a Tool for Nuclear Security Human Resource Development and Capacity Building

    International Nuclear Information System (INIS)

    Nikonov, D.

    2014-01-01

Human Resource Development for Capacity Building for Nuclear Security: • Comprehensive Training Programme Objective: To raise awareness, to fill gaps between the actual performance of personnel and the required competencies and skills, and to build up qualified instructors/trainers. • Promoting Nuclear Security Education Objective: To support the development of teaching material, faculty expertise and preparedness, and the promotion of nuclear security education in collaboration with the academic and scientific community. Ultimate Goal: To develop capabilities for supporting sustainable implementation of the international legal instruments and IAEA guidelines for nuclear security worldwide, and to foster nuclear security culture. Education priorities for the future: • Incorporate feedback from the first pilot program into future academic activities in nuclear security; • Based on feedback from pilot program: • Revise the NSS12 guidance document; • Update educational materials and textbooks. • Support INSEN members, which consider launching MSc programs at their institutions; • Continue promoting nuclear security education as part of existing degree programs (through certificate or concentration options); • Support the use of new forms of teaching and learning in nuclear security education: • Online e-learning degree programmes and modules; • Learning by experience; • Problem-oriented learning tailored to nuclear security functions

  5. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for transfer/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
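
    The RPC paradigm described maps directly onto today's standard libraries; as a self-contained illustration (modern Python XML-RPC rather than the UNIX RPC/XDR toolchain the paper used), a registered procedure can be called as if it were local:

      import threading
      from xmlrpc.server import SimpleXMLRPCServer
      from xmlrpc.client import ServerProxy

      # Server side: register a remote procedure, exactly as an RPC service would
      server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
      server.register_function(lambda a, b: a + b, "add")
      threading.Thread(target=server.serve_forever, daemon=True).start()

      # Client side: the call looks local, but executes in the server process
      proxy = ServerProxy("http://localhost:8000")
      print(proxy.add(2, 3))   # -> 5; arguments are marshalled much as XDR does
      server.shutdown()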

  6. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRHs and BBUs and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network has been proposed in our previous work already. In order to overcome the complexity and latency, in this paper, we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering the network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless, and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  7. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

Modern day continued demand for resource hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on one hand and needs large-scale computational resources on the other hand. These resources need to be provisioned (allocated and scheduled) to the end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  8. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has been widely used. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and graphics technology is applied in more and more domains. In recent years, with the development of the social economy and especially the rapid development of information technology, traditional ways of managing communication resources can no longer effectively meet the needs of resource management. Communication resource management still relies on the original tools and methods for managing and maintaining equipment, which has caused many problems: it is very difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  9. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queue, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of the computing resources increased significantly when compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
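
    A minimal sketch of a dual-threshold elastic policy of the kind described is given below; the thresholds, quota cap, and sizing rules are illustrative assumptions, not the IHEP implementation.

        # Hypothetical dual-threshold elastic pool policy: expand the
        # virtual-machine pool when demand per node is high, shrink it
        # when utilization drops. All parameter values are illustrative.
        UPPER, LOWER, QUOTA = 0.8, 0.3, 100   # thresholds and max pool size

        def resize(pool_size, queued_jobs, running_jobs):
            capacity = max(pool_size, 1)
            utilization = running_jobs / capacity
            demand = (queued_jobs + running_jobs) / capacity
            if demand > UPPER and pool_size < QUOTA:
                return min(QUOTA, pool_size + max(1, queued_jobs // 2))  # expand
            if utilization < LOWER and pool_size > 1:
                return max(1, running_jobs)   # shrink to what is in use
            return pool_size

        print(resize(pool_size=10, queued_jobs=30, running_jobs=9))  # grows
        print(resize(pool_size=10, queued_jobs=0, running_jobs=2))   # shrinks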

  10. SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    B. Muni Lavanya; C. Shoba Bindu

    2016-01-01

    The objective of the work is to highlight the key features of, and afford the finest future directions for, the research community in Resource Allocation, Resource Scheduling and Resource Management from 2009 to 2016, exemplifying how research on Resource Allocation, Resource Scheduling and Resource Management has progressively increased in the past decade by inspecting articles and papers from scientific and standard publications. The survey materialized in a three-fold process. Firstly, investigate on t...

  11. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs...... in determining the relative value of cloud computing....

  12. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and for future research.

  13. Multi-scale research of time and space differences about ecological footprint and ecological carrying capacity of the water resources

    Science.gov (United States)

    Li, Jiahong; Lei, Xiaohui; Fu, Qiang; Li, Tianxiao; Qiao, Yu; Chen, Lei; Liao, Weihong

    2018-03-01

    A multi-scale framework for assessing and comparing water resource sustainability based on the ecological footprint (EF) is introduced. The study aims to manage the water resources of Heilongjiang Province from different perspectives. First, at the scale of individual cities, the water ecological carrying capacity (ECC) was calculated from 2000 to 2011 and its spatial distribution was mapped for the most recent 3 years, showing that the water ECC is unevenly distributed and has a downward trend year by year. Then, from the perspective of the five secondary partition basins in Heilongjiang Province, the paper calculated the ECC, the EF, and the ecological surplus and deficit (S&D) of water resources from 2000 to 2011, showing that the ecological deficit is most prominent in the Nenjiang and Suifenhe basins, which are in an unsustainable state of development. Finally, at the provincial scale, the paper calculated the ECC, EF, and ecological S&D of water resources from 2000 to 2011 in Heilongjiang Province, showing that the EF is on a rising trend and that the correlation coefficient between the ECC and precipitation is 0.8. Heilongjiang experienced 5 years of unsustainable development during this period. The proposed multi-scale assessment of the water ecological footprint (WEF) aims to evaluate the complex relationship between water resource supply and consumption at different spatial scales and over time series, and provides more reasonable assessment results for managers and regulators.
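
    As a worked illustration of the quantities involved, the sketch below computes the ecological surplus/deficit as the gap between carrying capacity and footprint (S&D = ECC - EF) and a Pearson correlation between ECC and precipitation. All figures are invented for the example; the study itself reports a correlation of about 0.8.

        # Illustrative S&D and correlation computation (made-up data).
        from math import sqrt

        ecc = [5.1, 4.8, 4.5, 4.9, 4.2]     # carrying capacity per year
        ef = [4.0, 4.3, 4.7, 4.6, 4.9]      # ecological footprint per year
        precip = [610, 560, 520, 590, 480]  # annual precipitation (mm)

        surplus = [c - f for c, f in zip(ecc, ef)]  # negative => deficit

        def pearson(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = sqrt(sum((a - mx) ** 2 for a in x))
            sy = sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy)

        print(surplus)
        print(round(pearson(ecc, precip), 2))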

  14. Water Resource Management Mechanisms for Intrastate Violent Conflict Resolution: the Capacity Gap and What To Do About It.

    Science.gov (United States)

    Workman, M.; Veilleux, J. C.

    2014-12-01

    Violent conflict and issues surrounding available water resources are both global problems, and they are connected. Violent conflict is increasingly intrastate in nature, and coupled with increased hydrological variability as a function of climate change, this will place increased pressure on water resource use. The majority of mechanisms designed to secure water resources are based on the presence of a governance framework or another type of institutional capacity, such as that offered through a supra- or sub-national organization like the United Nations or a river basin organization. However, institutional frameworks are not present, or lose functionality, during violent conflict. It will therefore likely be extremely difficult to secure water resources for a significant proportion of populations in Fragile and Conflict Affected States. At the same time, the capacity of Organisation for Economic Co-operation and Development nations to make the appropriate interventions to address this problem is reduced by an increasing reluctance to participate in interventionist operations following a decade of expeditionary warfighting, mainly in Iraq and Afghanistan, and by related defence cuts. Future interventions in violent conflict and efforts to secure water resources may therefore be more indirect in nature. This paper assesses the state of understanding in key areas of the present literature and highlights the gap of securing water resources during violent conflict in the absence of institutional capacity. There is a need to close this gap as a matter of urgency by formulating frameworks to assess the lack of institutional oversight of water resources in areas where violent conflict is prevalent; developing inclusive resource management platforms through transparency and reconciliation mechanisms; and developing endogenous confidence-building measures and evaluating how these may be encouraged by exogenous initiatives, including those facilitated by the international community. This effort

  15. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    In order to replace traditional Internet software usage patterns and enterprise management modes, this paper proposes a new business computation mode: cloud computing. Resource scheduling strategy is the key technology in cloud computing. Based on a study of the cloud computing system structure and mode of operation, the key research addresses the work scheduling process and resource allocation problems in cloud computing using an ant colony algorithm. Detailed analysis and design of the...
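
    A minimal sketch of ant-colony task scheduling in this spirit (an illustration, not the paper's algorithm): ants repeatedly assign tasks to machines, and pheromone accumulates on the assignments that produced a short makespan.

        # Toy ant-colony scheduler: minimize makespan of tasks on machines.
        import random

        tasks = [4, 7, 2, 9, 3, 6]   # task lengths (illustrative)
        machines = 3
        ALPHA, RHO, ANTS, ITERS = 1.0, 0.1, 20, 50
        pher = [[1.0] * machines for _ in tasks]  # pheromone per (task, machine)

        def makespan(assign):
            load = [0.0] * machines
            for t, m in enumerate(assign):
                load[m] += tasks[t]
            return max(load)

        best, best_cost = None, float("inf")
        for _ in range(ITERS):
            for _ in range(ANTS):
                assign = [random.choices(range(machines),
                                         weights=[p ** ALPHA for p in pher[t]])[0]
                          for t in range(len(tasks))]
                cost = makespan(assign)
                if cost < best_cost:
                    best, best_cost = assign, cost
            # evaporate, then reinforce the best assignment found so far
            pher = [[(1 - RHO) * p for p in row] for row in pher]
            for t, m in enumerate(best):
                pher[t][m] += 1.0 / best_cost

        print(best, best_cost)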

  16. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Science.gov (United States)

    Anupama, Jigisha; Francescatto, Margherita; Rahman, Farzana; Fatima, Nazeefa; DeBlasio, Dan; Shanmugam, Avinash Kumar; Satagopam, Venkata; Santos, Alberto; Kolekar, Pandurang; Michaut, Magali; Guney, Emre

    2018-01-01

    Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  17. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Directory of Open Access Journals (Sweden)

    Jigisha Anupama

    2018-01-01

    Full Text Available Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  18. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. Grid resources are diverse in terms of their underlying attributes. The majority of state-of-the-art resource discovery techniques rely on static resource attributes during resource selection. However, resources matched on static attributes may not be the most appropriate for executing user applications, because they may have heavy job loads, less storage space or less working memory (RAM). Hence, the current state of the resources must be considered in order to find the most suitable ones. In this paper, we propose a two-phased multi-attribute decision making (MADM) approach for the discovery of grid resources using a P2P formalism. The proposed approach considers multiple resource attributes when deciding on resource selection and provides the most suitable resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase of our methodology applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from the different super-peers. Pairwise comparisons of the resources with respect to their attributes are made, and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. Our methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces search complexity by filtering out less suitable resources during resource discovery.
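
    The first-phase SAW step can be illustrated with a small, hedged sketch: benefit attributes are normalized against the column maximum, cost attributes against the column minimum, and weighted sums rank the resources. The resource names, attribute values, and weights below are invented for the example.

        # Simple Additive Weighting (SAW) over three grid resources.
        resources = {                   # (free RAM GB, free disk GB, job load)
            "gridnode-a": (16, 500, 12),
            "gridnode-b": (8, 900, 4),
            "gridnode-c": (32, 200, 20),
        }
        weights = (0.4, 0.2, 0.4)       # illustrative attribute weights
        benefit = (True, True, False)   # job load is a cost attribute

        cols = list(zip(*resources.values()))

        def norm(v, j):
            return v / max(cols[j]) if benefit[j] else min(cols[j]) / v

        scores = {name: sum(w * norm(v, j)
                            for j, (v, w) in enumerate(zip(vals, weights)))
                  for name, vals in resources.items()}
        for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {s:.3f}")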

  19. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning]

    Science.gov (United States)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  20. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the Internet. In cloud computing, there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total completion time, the shortest response time, and efficient utilization of resources. Hence, job scheduling is the most important concern, aiming to ensure that a user's requirements are ...

  1. Impact of changing computer technology on hydrologic and water resource modeling

    OpenAIRE

    Loucks, D.P.; Fedra, K.

    1987-01-01

    The increasing availability of substantial computer power at relatively low cost, together with the increasing ease of using computer graphics, of communicating with other computers and databases, and of programming in high-level problem-oriented computer languages, is providing new opportunities and challenges for those developing and using hydrologic and water resources models. This paper reviews some of the progress made towards the development and application of computer support systems designe...

  2. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data-taking period, the request of computing resource needs for the 2012 data-taking period, and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations in the estimates from previously presented results are stressed.

  3. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2012 data-taking period, the request of computing resource needs for 2013, and a first forecast of the 2014 needs, when the restart of data taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations in the estimates from previously presented results are stressed.

  4. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.

  5. Participatory monitoring and evaluation to aid investment in natural resource manager capacity at a range of scales.

    Science.gov (United States)

    Brown, Peter R; Jacobs, Brent; Leith, Peat

    2012-12-01

    Natural resource (NR) outcomes at catchment scale rely heavily on the adoption of sustainable practices by private NR managers because they control the bulk of the NR assets. Public funds are invested in capacity building of private landholders to encourage adoption of more sustainable natural resource management (NRM) practices. However, prioritisation of NRM funding programmes has often been top-down with limited understanding of the multiple dimensions of landholder capacity leading to a failure to address the underlying capacity constraints of local communities. We argue that well-designed participatory monitoring and evaluation of landholder capacity can provide a mechanism to codify the tacit knowledge of landholders about the social-ecological systems in which they are embedded. This process enables tacit knowledge to be used by regional NRM bodies and government agencies to guide NRM investment in the Australian state of New South Wales. This paper details the collective actions to remove constraints to improved NRM that were identified by discrete groups of landholders through this process. The actions spanned geographical and temporal scales, and responsibility for them ranged across levels of governance.

  6. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    Science.gov (United States)

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  7. Assessing water resources adaptive capacity to climate change impacts in the Pacific Northwest Region of North America

    Directory of Open Access Journals (Sweden)

    A. F. Hamlet

    2011-05-01

    Full Text Available Climate change impacts in the Pacific Northwest Region of North America (PNW) are projected to include increasing temperatures and changes in the seasonality of precipitation (increasing precipitation in winter, decreasing precipitation in summer). Changes in precipitation are also spatially varying, with the northwestern parts of the region generally experiencing greater increases in cool season precipitation than the southeastern parts. These changes in climate are projected to cause loss of snowpack and associated streamflow timing shifts which will increase cool season (October–March) flows and decrease warm season (April–September) flows and water availability. Hydrologic extremes such as the 100 yr flood and extreme low flows are also expected to change, although these impacts are not spatially homogeneous and vary with mid-winter temperatures and other factors. These changes have important implications for natural ecosystems affected by water, and for human systems.

    The PNW is endowed with extensive water resources infrastructure and well-established and well-funded management agencies responsible for ensuring that water resources objectives (such as water supply, water quality, flood control, hydropower production, environmental services, etc.) are met. Likewise, access to observed hydrological, meteorological, and climatic data and forecasts is in general exceptionally good in the United States and Canada, and is often supported by federally funded programs that ensure that these resources are freely available to water resources practitioners, policy makers, and the general public.

    Access to these extensive resources supports the argument that, at a technical level, the PNW has high capacity to deal with the potential impacts of natural climate variability on water resources. To the extent that climate change will manifest itself as moderate changes in variability or extremes, we argue that existing water resources

  8. Assessing water resources adaptive capacity to climate change impacts in the Pacific Northwest Region of North America

    Science.gov (United States)

    Hamlet, A. F.

    2011-05-01

    Climate change impacts in Pacific Northwest Region of North America (PNW) are projected to include increasing temperatures and changes in the seasonality of precipitation (increasing precipitation in winter, decreasing precipitation in summer). Changes in precipitation are also spatially varying, with the northwestern parts of the region generally experiencing greater increases in cool season precipitation than the southeastern parts. These changes in climate are projected to cause loss of snowpack and associated streamflow timing shifts which will increase cool season (October-March) flows and decrease warm season (April-September) flows and water availability. Hydrologic extremes such as the 100 yr flood and extreme low flows are also expected to change, although these impacts are not spatially homogeneous and vary with mid-winter temperatures and other factors. These changes have important implications for natural ecosystems affected by water, and for human systems. The PNW is endowed with extensive water resources infrastructure and well-established and well-funded management agencies responsible for ensuring that water resources objectives (such as water supply, water quality, flood control, hydropower production, environmental services, etc.) are met. Likewise, access to observed hydrological, meteorological, and climatic data and forecasts is in general exceptionally good in the United States and Canada, and is often supported by federally funded programs that ensure that these resources are freely available to water resources practitioners, policy makers, and the general public. Access to these extensive resources support the argument that at a technical level the PNW has high capacity to deal with the potential impacts of natural climate variability on water resources. To the extent that climate change will manifest itself as moderate changes in variability or extremes, we argue that existing water resources infrastructure and institutional arrangements

  9. "Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis

    Science.gov (United States)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
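
    The credit intuition behind such amortised bounds can be shown with a classic dynamic-array sketch (an illustration of the accounting idea, not the paper's type-based analysis): each operation prepays a constant number of credits, and the occasional expensive resize is paid entirely out of the accumulated balance, which never goes negative.

        # Amortised "credit" accounting for a doubling array: 3 credits per
        # append cover every resize, so the bank never goes negative.
        CREDITS_PER_OP = 3
        bank, data, capacity = 0, [], 1

        for i in range(1000):
            bank += CREDITS_PER_OP        # prepaid amortised cost
            if len(data) == capacity:     # expensive step: copy all elements
                bank -= capacity          # one credit per element moved
                capacity *= 2
            bank -= 1                     # pay for the append itself
            data.append(i)
            assert bank >= 0, "amortised bound violated"

        print("every operation covered by", CREDITS_PER_OP, "credits")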

  10. Expanding Capacity and Promoting Inclusion in Introductory Computer Science: A Focus on Near-Peer Mentor Preparation and Code Review

    Science.gov (United States)

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…

  11. The Healthy Aging Research Network: Resources for Building Capacity for Public Health and Aging Practice

    Science.gov (United States)

    Wilcox, Sara; Altpeter, Mary; Anderson, Lynda A.; Belza, Basia; Bryant, Lucinda; Jones, Dina L.; Leith, Katherine H.; Phelan, Elizabeth A.; Satariano, William A.

    2015-01-01

    There is an urgent need to translate science into practice and help enhance the capacity of professionals to deliver evidence-based programming. We describe contributions of the Healthy Aging Research Network in building professional capacity through online modules, issue briefs, monographs, and tools focused on health promotion practice, physical activity, mental health, and environment and policy. We also describe practice partnerships and research activities that helped inform product development and ways these products have been incorporated into real-world practice to illustrate possibilities for future applications. Our work aims to bridge the research-to-practice gap to meet the demands of an aging population. PMID:24000962

  12. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Nan Zhang

    Full Text Available Mobile cloud computing, which integrates cloud computing techniques into the mobile environment, is regarded as one of the enabling technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed across various devices in these networks, which can be used to support mobile cloud applications. However, a device with only a few spare resources cannot support a resource-intensive mobile application alone; if several devices cooperate and share their resources, they can support many such applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate distributed devices together as the resource provider for mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on each cooperator's contribution, according to the concept of the "Shapley value", to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
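
    A small, hedged sketch of the Shapley-value split mentioned above: each provider receives its average marginal contribution over all orders in which the coalition could form. The providers, capacities, and valuation function below are invented for the example.

        # Shapley-value revenue split over a toy resource coalition.
        from itertools import permutations

        resource = {"phone": 2, "tablet": 3, "laptop": 5}  # spare capacity

        def value(coalition):
            # Toy valuation: revenue grows with pooled capacity, but a
            # coalition needs at least 5 units to host an application.
            total = sum(resource[p] for p in coalition)
            return total * 10 if total >= 5 else 0

        providers = list(resource)
        shapley = {p: 0.0 for p in providers}
        orders = list(permutations(providers))
        for order in orders:
            seen = []
            for p in order:
                shapley[p] += (value(seen + [p]) - value(seen)) / len(orders)
                seen.append(p)

        print(shapley)  # the payoffs sum to value(all providers)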

  13. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates cloud computing techniques into the mobile environment, is regarded as one of the enabling technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed across various devices in these networks, which can be used to support mobile cloud applications. However, a device with only a few spare resources cannot support a resource-intensive mobile application alone; if several devices cooperate and share their resources, they can support many such applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate distributed devices together as the resource provider for mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on each cooperator's contribution, according to the concept of the "Shapley value", to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.

  14. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  15. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Full Text Available Mobile crowdsourcing, as an emerging service paradigm, enables the computing resource requestor (CRR to outsource computation tasks to each computing resource provider (CRP. Considering the importance of pricing as an essential incentive to coordinate the real-time interaction among the CRR and CRPs, in this paper, we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the CRR and CRPs behaviors in form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm through the exchange of control messages, which contain the information of computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with systematic optimality. Finally, we also take account of the interaction among CRPs and formulate the computing resource management as a game with Nash equilibrium achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
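
    The price-exchange idea can be sketched as a simple iterative loop in the spirit of the proposed algorithm (an illustration, not its actual update rule): the coordinator raises the price while requested demand exceeds supply, and the assumed utility and cost shapes make both sides respond smoothly.

        # Toy price coordination between one CRR and several CRPs.
        STEP, ROUNDS = 0.05, 200
        price = 1.0

        def crr_demand(p):       # requestor: demand falls with price
            return max(0.0, 10.0 - 2.0 * p)

        def crp_supply(p, a):    # provider with marginal-cost parameter a
            return max(0.0, (p - a) / 0.5)

        providers = [0.2, 0.5, 0.8]   # heterogeneous cost parameters
        for _ in range(ROUNDS):
            excess = crr_demand(price) - sum(crp_supply(price, a)
                                             for a in providers)
            price = max(0.0, price + STEP * excess)  # follow excess demand

        print(round(price, 3), round(crr_demand(price), 3))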

  16. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is two fold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16 week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  17. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that over sustained periods of exercise that LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  18. Energy-efficient cloud computing: autonomic resource provisioning for datacenters

    OpenAIRE

    Tesfatsion, Selome Kostentinos

    2018-01-01

    Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focu...

  19. TOWARDS NEW COMPUTATIONAL ARCHITECTURES FOR MASS-COLLABORATIVE OPEN EDUCATIONAL RESOURCES

    OpenAIRE

    Ismar Frango Silveira; Xavier Ochoa; Antonio Silva Sprock; Pollyana Notargiacomo Mustaro; Yosly C. Hernandez Bieluskas

    2011-01-01

    Open Educational Resources offer several benefits, mostly in education and training. Being potentially reusable, their use can reduce the time and cost of developing educational programs, and these savings can be transferred directly to students through the production of a large range of open, freely available content, varying from hypermedia to digital textbooks. This paper discusses this issue and presents a project and a research network that, in spite of being directed to Latin America'...

  20. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  1. Computer System Resource Requirements of Novice Programming Students.

    Science.gov (United States)

    Nutt, Gary J.

    The characteristics of jobs that constitute the mix for lower division FORTRAN classes in a university were investigated. Samples of these programs were also benchmarked on a larger central site computer and two minicomputer systems. It was concluded that a carefully chosen minicomputer system could offer service at least the equivalent of the…

  2. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    Full Text Available This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU virtualization and mobile agent for mobile transparent computing (MTC to devise a method of managing shared resources and services management (SRSM. It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user’s requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  3. A novel resource management method of providing operating system as a service for mobile transparent computing.

    Science.gov (United States)

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  4. Institutional capacity for health systems research in East and Central African Schools of Public Health: strengthening human and financial resources

    Science.gov (United States)

    2014-01-01

    Background: Despite its importance in providing evidence for health-related policy and decision-making, an insufficient amount of health systems research (HSR) is conducted in low-income countries (LICs). Schools of public health (SPHs) are key stakeholders in HSR. This paper, one in a series of four, examines human and financial resource capacities, policies and organizational support for HSR in seven Africa Hub SPHs in East and Central Africa. Methods: The capacity assessment included document analysis to establish staff numbers, qualifications and publications; self-assessment using a tool developed to capture individual perceptions of the capacity for HSR; and institutional dialogues. Key informant interviews (KIIs) were held with Deans from each SPH and with Ministry of Health and non-governmental officials, focusing on perceptions of the capacity of SPHs to engage in HSR, access to funding, and organizational support for HSR. Results: A total of 123 people participated in the self-assessment and 73 KIIs were conducted. Except for the National University of Rwanda and the University of Nairobi SPH, most respondents expressed confidence in the adequacy of staffing levels and HSR-related skills at their SPH. However, most researchers operate at the individual level with low outputs, and the average number of HSR-related publications was low. This study underscores the need to form effective multidisciplinary teams to enhance research of immediate and local relevance. Capacity strengthening in the SPHs needs to focus on knowledge translation and communication of findings to relevant audiences. Advocacy is needed to influence the respective governments to allocate adequate funding for HSR, to avoid donor dependency that distorts the local research agenda. PMID:24888371

  5. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing the resources required for transaction processing in the common node of a distributed function computer system is given. The scheme has been found to be satisfactory for all common node services provided so far.

  6. Regional research exploitation of the LHC: a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the computing resources required for research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumptions for the distribution of researchers across the institutions involved, an analysis model, and two different functional structures of the computing resources.

  7. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VEs) for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources, which are owned by different organizations or peers. The owners of these resources have different usage or access policies, different cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, various economic models exist for setting the price of goods based on supply and demand and their value to the user, including commodity markets, posted prices, tenders, and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value, and the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
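
    One of the auction models mentioned can be sketched in a few lines; here a sealed-bid second-price auction allocates a block of CPU time. The bidders and valuations are invented, and real brokers such as Nimrod/G layer deadlines and budgets on top of mechanisms like this.

        # Sealed-bid second-price (Vickrey) auction for one resource slot.
        bids = {"consumer-a": 12.0, "consumer-b": 9.5, "consumer-c": 11.0}

        winner = max(bids, key=bids.get)
        second_price = max(v for k, v in bids.items() if k != winner)
        print(f"{winner} wins the slot and pays {second_price}")

    Charging the second-highest bid makes truthful bidding the dominant strategy, which is why this mechanism is attractive for automated resource brokering.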

  8. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  9. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  10. MCPLOTS: a particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.; Skands, P.Z.

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform. (orig.)

  11. MCPLOTS. A particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.

    2013-07-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  12. Are world uranium resources sufficient to fuel global growth in nuclear generating capacity?

    International Nuclear Information System (INIS)

    Cameron, R.; Vance, R.E.

    2012-01-01

    Increased uranium prices since 2003 have produced more activity in the sector than in the previous 20 years. Nuclear reactor construction is proceeding in some countries, ambitious expansion plans have been announced in others, and several countries, particularly in the developing world, are considering introducing nuclear power as a means of meeting rising electricity demand without increasing greenhouse gas emissions. Others have recently decided either to withdraw from the use of nuclear power or not to proceed with development plans following the accident at the Fukushima Dai-ichi nuclear power plant in Japan in March 2011. Since the mid-1960s, the OECD Nuclear Energy Agency and the International Atomic Energy Agency have jointly prepared a comprehensive update of global uranium resources, production and demand (commonly known as the 'Red Book'). The Red Book is based on government responses to a questionnaire that requests information on uranium exploration and mine development activity, resources, and plans for nuclear development to 2035. This presentation provides an overview of the global situation based on the recently published 2011 edition. It features a compilation of global uranium resources and projected mine development and production capability in all countries currently producing uranium or planning to do so in the near future. This is compared with updated, post-Fukushima demand projections, reflecting the nuclear phase-out plans announced in some countries and the ambitious expansion plans of others. The 2011 Red Book shows that currently defined uranium resources are sufficient to meet high-case projections of nuclear power development to 2035. (authors)

  13. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started in March 2015 a collaboration to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with a seamless access to an integrated infrastructure offering both EGI and EUDAT services and, then, pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, from the beginning, this activity has been really driven by the end users. The identified user communities are

  14. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  15. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized, while researchers on campus could greatly benefit from the unused capacity. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grid framework enables jobs submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper highlights 18 months of operational experience creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.
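    To make the overflow idea concrete, the following minimal Python sketch treats each cluster as a pool of job slots and sends a job elsewhere on campus only when its home cluster is full. The Cluster class and its metrics are invented stand-ins for whatever batch systems a campus actually runs, not the framework described in the paper.

    from dataclasses import dataclass

    @dataclass
    class Cluster:
        name: str
        capacity: int      # total job slots
        running: int = 0   # slots currently in use

        def free_slots(self) -> int:
            return self.capacity - self.running

    def submit(job_id: str, home: Cluster, others: list) -> str:
        """Prefer the home cluster; otherwise overflow to the emptiest peer."""
        for cluster in [home] + sorted(others, key=lambda c: -c.free_slots()):
            if cluster.free_slots() > 0:
                cluster.running += 1
                return f"job {job_id} -> {cluster.name}"
        return f"job {job_id} held on {home.name}"  # every cluster is full

    hep = Cluster("hep", capacity=2, running=2)    # saturated home cluster
    chem = Cluster("chem", capacity=8, running=3)  # underutilized neighbour
    print(submit("mc-0001", home=hep, others=[chem]))  # -> job mc-0001 -> chem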

  16. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lempert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation with analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting the valuable interaction between decisionmakers and analytics.

  17. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. Recent HEP experiments have used grid middleware to organize their services and resources, but that middleware relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources; a major challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach that detaches the payload execution from the Belle II DIRAC pilot, a customized pilot that pulls and processes jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server, which handles all the operations requiring X.509 authentication. So far, we have developed and deployed a prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. The approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity needed to interact with the DIRAC system directly.
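    A toy, in-process illustration of this credential detachment is sketched below. All class and method names are hypothetical, not the real BelleII@home or DIRAC interfaces: the point is simply that only the trusted gateway ever touches the X.509 proxy, while the volunteer host sees payload descriptions alone.

    class Gateway:
        """Trusted server: performs every operation that needs X.509 auth."""
        def __init__(self, x509_proxy: str):
            self._proxy = x509_proxy      # credential never leaves this object
            self._queue = [{"job_id": 1, "app": "sim-payload"}]

        def next_job(self) -> dict:
            # in reality: pull a job from the grid platform using self._proxy
            return self._queue.pop(0)

        def upload_result(self, result: dict) -> None:
            # in reality: push output back to the grid using self._proxy
            print(f"gateway stored result for job {result['job_id']}")

    class Volunteer:
        """Untrusted host: receives payloads only, holds no grid credential."""
        def work(self, gw: Gateway) -> None:
            job = gw.next_job()
            gw.upload_result({"job_id": job["job_id"], "status": "done"})

    Volunteer().work(Gateway(x509_proxy="/tmp/x509up_u1000"))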

  18. Decision making in water resource planning: Models and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Fedra, K; Carlsen, A J [ed.

    1987-01-01

    This paper describes some basic concepts of simulation-based decision support systems for water resources management and the role of symbolic, graphics-based user interfaces. Designed to allow direct and easy access to advanced methods of analysis and decision support for a broad and heterogeneous group of users, these systems combine data base management, system simulation, operations research techniques such as optimization, interactive data analysis, elements of advanced decision technology, and artificial intelligence, with a friendly and conversational, symbolic display oriented user interface. Important features of the interface are the use of several parallel or alternative styles of interaction and display, including colour graphics and natural language. Combining quantitative numerical methods with qualitative and heuristic approaches, and giving the user direct and interactive control over the system's function, human knowledge, experience and judgement are integrated with formal approaches into a tightly coupled man-machine system through an intelligent and easily accessible user interface. 4 drawings, 42 references.

  19. Monitoring of computing resource utilization of the ATLAS experiment

    International Nuclear Information System (INIS)

    Rousseau, David; Vukotic, Ilija; Schaffer, RD; Dimitrov, Gancho; Aidel, Osman; Albrand, Solveig

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  20. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud computing.

  1. Increasing human resource capacity in African countries: A nursing and midwifery Research Summit

    Directory of Open Access Journals (Sweden)

    Carolyn Sun

    2017-01-01

    Conclusions: Evaluations provided favorable feedback on both the process leading up to the Research Summit and its content. While further long-term evaluation will be needed to determine the sustainability of this initiative, the Summit format afforded the opportunity for regional experts to meet, examine research priorities, and develop strategic action and mentorship plans. This paper describes a replicable method that could be utilized in other regions, using available resources and modest grant funding to minimize costs.

  2. Virtual partitioning for robust resource sharing: computational techniques for heterogeneous traffic

    NARCIS (Netherlands)

    Borst, S.C.; Mitra, D.

    1998-01-01

    We consider virtual partitioning (VP), a scheme for sharing a resource among several traffic classes in an efficient, fair, and robust manner. In the preliminary design stage, each traffic class is allocated a nominal capacity, which is based on its expected offered traffic and required quality of service.
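    The flavour of such a scheme can be sketched in a few lines of Python. The admission rule below, with invented numbers, is a simplified stand-in rather than the authors' exact policy: a class below its nominal allocation is admitted whenever capacity is free, while a class above its nominal allocation may only use capacity beyond a small protective reserve.

    C = 10                             # total shared capacity (e.g. circuits)
    NOMINAL = {"voice": 6, "data": 4}  # nominal per-class allocations
    RESERVE = 1                        # headroom protecting underloaded classes
    in_use = {"voice": 0, "data": 0}

    def admit(klass: str) -> bool:
        total = sum(in_use.values())
        if in_use[klass] < NOMINAL[klass]:
            ok = total < C             # underloaded class: full access
        else:
            ok = total < C - RESERVE   # overloaded class: leave the reserve free
        if ok:
            in_use[klass] += 1
        return ok

    for _ in range(6):
        admit("data")  # "data" pushes past its nominal allocation of 4
    print(in_use)      # it overflows only while unreserved capacity remains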

  3. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and their broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz has enabled researchers to meet project milestones and achieve breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  4. Sensor and computing resource management for a small satellite

    Science.gov (United States)

    Bhatia, Abhilasha; Goehner, Kyle; Sand, John; Straub, Jeremy; Mohammad, Atif; Korvald, Christoffer; Nervold, Anders Kose

    A small satellite in a low-Earth orbit (e.g., approximately 300 to 400 km altitude) has an orbital velocity of approximately 7.7 km/s and completes an orbit approximately every 90 minutes. For a satellite with minimal attitude control, this presents a significant challenge in obtaining multiple images of a target region. Presuming an inclination in the range of 50 to 65 degrees, a limited number of opportunities to image a given target or communicate with a given ground station are available over the course of a 24-hour period. For imaging needs (where solar illumination is required), the number of opportunities is further reduced. Given these short windows of opportunity for imaging, data transfer, and sending commands, scheduling must be optimized. In addition to the high-level scheduling performed for spacecraft operations, payload-level scheduling is also required. The mission requires that images be post-processed to maximize spatial resolution and minimize data transfer (by removing overlapping regions). The payload unit includes GPS and inertial measurement unit (IMU) hardware to aid in image alignment for these tasks. The payload scheduler must thus split its energy and computing-cycle budgets between determining an imaging sequence (required to capture the highly overlapping data needed for super-resolution and the adjacent areas needed for mosaicking), processing the imagery (to perform the super-resolution and mosaicking), and preparing the data for transmission (compressing it, etc.). This paper presents an approach for satellite control, scheduling and operations that allows the cameras, GPS and IMU to be used in conjunction to acquire higher-resolution imagery of a target region.
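    The quoted velocity and period follow from the standard circular-orbit relations; as a worked check in LaTeX form, with Earth's gravitational parameter and a 350 km altitude:

    v = \sqrt{\frac{\mu_\oplus}{R_\oplus + h}}
      = \sqrt{\frac{3.986 \times 10^{5}\ \mathrm{km^3/s^2}}{(6378 + 350)\ \mathrm{km}}}
      \approx 7.7\ \mathrm{km/s},
    \qquad
    T = \frac{2\pi\,(R_\oplus + h)}{v} \approx 91\ \mathrm{min}.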

  5. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Mobile cloud computing (MCC), which combines mobile computing with the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually according to its own computation, storage, and bandwidth requirements. Because of user mobility, the provided resources have different performance metrics that may affect the choice of destination. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. How to choose the appropriate offload destination and orchestrate the resources for multiple tasks is therefore a challenging problem. This paper realizes programmable resource provisioning for heterogeneous energy-constrained computing environments, in which a software-defined controller is responsible for resource orchestration, offload, and migration. Resource orchestration is formulated as a multiobjective optimization problem over the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximately optimal solutions. Simulation results show that the solutions for all of the studied cases nearly reach the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
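    As an illustration of the optimisation step, the sketch below implements a bare-bones particle swarm minimising a scalarised cost over per-task offload fractions. The three objective terms and every weight are invented placeholders, not the paper's actual model or algorithm settings.

    import random

    def cost(x):
        # x[i] in [0, 1]: fraction of task i offloaded to the cloud
        energy = sum(1.0 - xi for xi in x)        # local execution burns energy
        money = sum(0.8 * xi for xi in x)         # offloading costs money
        unavail = sum(0.3 * xi * xi for xi in x)  # remote resources may fail
        return 1.0 * energy + 0.5 * money + 2.0 * unavail

    def pso(dim=4, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        X = [[random.random() for _ in range(dim)] for _ in range(particles)]
        V = [[0.0] * dim for _ in range(particles)]
        P = [x[:] for x in X]              # personal best positions
        g = min(P, key=cost)               # global best position
        for _ in range(iters):
            for i in range(particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                               + c2 * r2 * (g[d] - X[i][d]))
                    X[i][d] = min(1.0, max(0.0, X[i][d] + V[i][d]))
                if cost(X[i]) < cost(P[i]):
                    P[i] = X[i][:]
            g = min(P, key=cost)
        return g, cost(g)

    print(pso())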

  6. School nutritional capacity, resources and practices are associated with availability of food/beverage items in schools.

    Science.gov (United States)

    Mâsse, Louise C; de Niet, Judith E

    2013-02-19

    The school food environment is important to target as less healthful food and beverages are widely available at schools. This study examined whether the availability of specific food/beverage items was associated with a number of school environmental factors. Principals from elementary (n=369) and middle/high schools (n=118) in British Columbia (BC), Canada completed a survey measuring characteristics of the school environment. Our measurement framework integrated constructs from the Theories of Organizational Change and elements from Stillman's Tobacco Policy Framework adapted for obesity prevention. Our measurement framework included assessment of policy institutionalization of nutritional guidelines at the district and school levels, climate, nutritional capacity and resources (nutritional resources and participation in nutritional programs), nutritional practices, and school community support for enacting stricter nutritional guidelines. We used hierarchical mixed-effects logistic regression analyses to examine associations with the availability of fruit, vegetables, pizza/hamburgers/hot dogs, chocolate candy, sugar-sweetened beverages, and french fried potatoes. In elementary schools, fruit and vegetable availability was more likely among schools that have more nutritional resources (OR=6.74 and 5.23, respectively). In addition, fruit availability in elementary schools was highest in schools that participated in the BC School Fruit and Vegetable Nutritional Program and the BC Milk program (OR=4.54 and OR=3.05, respectively). In middle/high schools, having more nutritional resources was associated with vegetable availability only (OR=5.78). Finally, middle/high schools that have healthier nutritional practices (i.e., which align with upcoming provincial/state guidelines) were less likely to have the following food/beverage items available at school: chocolate candy (OR= .80) and sugar-sweetened beverages (OR= .76). School nutritional capacity, resources

  7. School nutritional capacity, resources and practices are associated with availability of food/beverage items in schools

    Science.gov (United States)

    2013-01-01

    Background The school food environment is important to target as less healthful food and beverages are widely available at schools. This study examined whether the availability of specific food/beverage items was associated with a number of school environmental factors. Methods Principals from elementary (n = 369) and middle/high schools (n = 118) in British Columbia (BC), Canada completed a survey measuring characteristics of the school environment. Our measurement framework integrated constructs from the Theories of Organizational Change and elements from Stillman’s Tobacco Policy Framework adapted for obesity prevention. Our measurement framework included assessment of policy institutionalization of nutritional guidelines at the district and school levels, climate, nutritional capacity and resources (nutritional resources and participation in nutritional programs), nutritional practices, and school community support for enacting stricter nutritional guidelines. We used hierarchical mixed-effects logistic regression analyses to examine associations with the availability of fruit, vegetables, pizza/hamburgers/hot dogs, chocolate candy, sugar-sweetened beverages, and french fried potatoes. Results In elementary schools, fruit and vegetable availability was more likely among schools that have more nutritional resources (OR = 6.74 and 5.23, respectively). In addition, fruit availability in elementary schools was highest in schools that participated in the BC School Fruit and Vegetable Nutritional Program and the BC Milk program (OR = 4.54 and OR = 3.05, respectively). In middle/high schools, having more nutritional resources was associated with vegetable availability only (OR = 5.78). Finally, middle/high schools that have healthier nutritional practices (i.e., which align with upcoming provincial/state guidelines) were less likely to have the following food/beverage items available at school: chocolate candy (OR = .80) and sugar

  8. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  9. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This reflects not only the power of ribosome profiling itself but also the extensive range of computational resources now available for it. At present, however, a comprehensive review of these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference for making appropriate choices among existing resources for the question at hand.

  10. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the Worldwide LHC Computing Grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements the different communities place on the hardware and software setup, the challenge of operating the cluster jointly is detailed. The benefit is an efficient use of computing and personnel resources.

  11. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  12. Open Educational Resources: The Role of OCW, Blogs and Videos in Computer Networks Classroom

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2012-09-01

    This paper analyzes the learning experiences and opinions of a group of undergraduate students regarding their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. These new educational resources are based on the Web 2.0 approach, such as blogs, videos and virtual labs, and have been gathered on a web-site for distance self-learning.

  13. AN ENHANCED METHOD FOR EXTENDING COMPUTATION AND RESOURCES BY MINIMIZING SERVICE DELAY IN EDGE CLOUD COMPUTING

    OpenAIRE

    Bavishna, B.; Agalya, M.; Kavitha, G.

    2018-01-01

    A lot of research has been done in the field of cloud computing. For effective performance, a variety of algorithms has been proposed. The role of virtualization is significant, and its performance depends on VM migration and allocation. Clouds absorb a great deal of energy; therefore, numerous algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm has been considered with ...

  14. Transportation Energy Futures Series: Alternative Fuel Infrastructure Expansion: Costs, Resources, Production Capacity, and Retail Availability for Low-Carbon Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, W. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heath, Garvin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Steward, Darlene [National Renewable Energy Lab. (NREL), Golden, CO (United States); Vimmerstedt, Laura [National Renewable Energy Lab. (NREL), Golden, CO (United States); Warner, Ethan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Webster, Karen W. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-04-01

    The petroleum-based transportation fuel system is complex and highly developed, in contrast to the nascent low-petroleum, low-carbon alternative fuel system. This report examines how expansion of the low-carbon transportation fuel infrastructure could contribute to deep reductions in petroleum use and greenhouse gas (GHG) emissions across the U.S. transportation sector. Three low-carbon scenarios, each using a different combination of low-carbon fuels, were developed to explore infrastructure expansion trends consistent with a study goal of reducing transportation sector GHG emissions to 80% less than 2005 levels by 2050. These scenarios were compared to a business-as-usual (BAU) scenario and were evaluated with respect to four criteria: fuel cost estimates, resource availability, fuel production capacity expansion, and retail infrastructure expansion.

  15. Load/resource matching for period-of-record computer simulation

    International Nuclear Information System (INIS)

    Lindsey, E.D. Jr.; Robbins, G.E. III

    1991-01-01

    The Southwestern Power Administration (Southwestern), an agency of the Department of Energy, is responsible for marketing the power and energy produced at Federal hydroelectric power projects developed by the U.S. Army Corps of Engineers in the southwestern United States. This paper reports that, in order to maximize benefits from limited resources, to evaluate proposed changes in the operation of existing projects, and to determine the feasibility and marketability of proposed new projects, Southwestern utilizes a period-of-record computer simulation model created in the 1960s. Southwestern is constructing a new computer simulation model to take advantage of changes in computers, policy, and procedures. Within any hydroelectric power reservoir system, the ability of the resources to match the load demand is critical and presents complex problems. Therefore, the method used to compare available energy resources to energy load demands is a very important aspect of the new model. Southwestern has developed an innovative method that compares a resource duration curve with a load duration curve, adjusting the resource duration curve to make the most efficient use of the available resources.
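    The duration-curve comparison can be illustrated in a few lines of Python (with invented numbers; Southwestern's model is of course far more detailed): both hourly series are sorted in descending order and compared position by position, revealing any shortfall in the exceedance sense.

    load = [820, 640, 910, 700, 530, 760]      # hourly load demands (MW)
    resource = [750, 750, 900, 900, 600, 600]  # hourly available capability (MW)

    load_curve = sorted(load, reverse=True)          # load duration curve
    resource_curve = sorted(resource, reverse=True)  # resource duration curve

    # deficit at each point along the duration curves
    shortfall = [max(0, l - r) for l, r in zip(load_curve, resource_curve)]
    print(shortfall)  # nonzero entries mark where resources fall short of load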

  16. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  17. Efficient Computation of Buffer Capacities for Cyclo-Static Dataflow Graphs

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Bekooij, Marco J.G.; Smit, Gerardus Johannes Maria

    A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multi-processor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently buffer capacities affect the throughput. This requires the derivation of

  18. Efficient Computation of Buffer Capacities for Cyclo-Static Dataflow Graphs

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2006-01-01

    A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multi-processor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently buffer capacities affect the throughput. This requires the derivation of

  19. The use of Minilabs to improve the testing capacity of regulatory authorities in resource limited settings: Tanzanian experience.

    Science.gov (United States)

    Risha, Peter Gasper; Msuya, Zera; Clark, Malcolm; Johnson, Keith; Ndomondo-Sigonda, Margareth; Layloff, Thomas

    2008-08-01

    The Tanzania Food and Drugs Authority piloted the use of Minilab kits, a thin-layer-chromatography-based drug quality testing technique, in a two-tier quality assurance program. The program is intended to improve testing capacity with timely screening of the quality of medicines as they enter the market. After one week of training on Minilab screening techniques, inspectors were stationed at key Ports-of-Entry (POE) to screen the quality of imported medicines. In addition, three non-Port-of-Entry centres were established to screen samples collected during post-marketing surveillance. Standard operating procedures (SOPs) were developed to structure and standardize the implementation process. Over 1200 samples were tested using the Minilab outside the central quality control laboratory (QCL), almost doubling the previous testing capacity. The program contributed to the increased regulatory reach and visibility of the Authority throughout the country, serving as a deterrent against the entry of substandard medicines into the market. The use of the Minilab for quality screening was inexpensive and provided high sample throughput. However, it suffers from the limitation that it can reliably detect only grossly substandard or wrong-drug samples; it should therefore not be used as an independent testing resource but in conjunction with a full-service quality control laboratory capable of auditing reported substandard results.

  20. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

  1. Possible Climate Change/Variability and Human Impacts, Vulnerability of African Drought Prone Regions, its Water Resources and Capacity Building

    Science.gov (United States)

    Gan, T. Y. Y.; Qin, X.; Ito, M.; Hülsmann, S.; Xixi, L.; Liong, S. Y.; Disse, M.; Koivusalo, H. J.

    2017-12-01

    This review article discusses the climate, water resources and historical droughts of Africa, drought indices, vulnerability, and the impact of global warming and land use on drought-prone regions in West Africa, Southern Africa, and the Greater Horn of Africa, which have suffered recurrent severe droughts in the past. Recent studies have detected warming and drying trends in Africa since the mid-20th century. Based on the 4th Assessment Report of the Intergovernmental Panel on Climate Change and on the 5th Coupled Model Intercomparison Project (CMIP5), both northern and southern Africa are projected to experience drying, such as decreasing precipitation, runoff and soil moisture, in the 21st century and could become more vulnerable to the impact of droughts. The daily maximum temperature is projected to increase by up to 8°C (RCP8.5 of CMIP5), precipitation indices such as total wet-day precipitation (PRCPTOT) and heavy precipitation days (R10mm) could decrease, while warm spell duration (WSDI) and consecutive dry days (CDD) could increase. Uncertainties in these long-term projections, teleconnections to climate anomalies such as ENSO and the Madden-Julian Oscillation which could also affect the water resources of Africa, and capacity building in terms of physical infrastructure and non-structural solutions are also discussed. Given that traditional climate and hydrologic data observed in Africa are generally limited, satellite data should also be exploited to fill in the data gap for Africa in the future.
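    As an example of how such indices are computed, the sketch below evaluates CDD, the maximum run of consecutive days with precipitation below the conventional 1 mm wet-day threshold, over an invented daily series.

    def consecutive_dry_days(precip_mm, wet_threshold=1.0):
        longest = run = 0
        for p in precip_mm:
            run = run + 1 if p < wet_threshold else 0
            longest = max(longest, run)
        return longest

    daily_precip = [0.0, 0.2, 5.1, 0.0, 0.0, 0.4, 0.9, 12.3, 0.0]
    print(consecutive_dry_days(daily_precip))  # -> 4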

  2. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Science.gov (United States)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  3. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT participates in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the use of virtualization technologies. The OpenStack project has become a widely adopted solution for virtualizing hardware and offering additional services such as storage and virtual machine management. This contribution reports on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows is presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences are discussed.
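    In the spirit of that setup, booting an extra worker VM with the openstacksdk cloud layer might look like the sketch below. The cloud name, image, flavor and network are placeholders for a site's own configuration, not the EKP production values.

    import openstack

    # named cloud entry taken from clouds.yaml; "campus-cloud" is a placeholder
    conn = openstack.connect(cloud="campus-cloud")

    server = conn.create_server(
        name="worker-01",
        image="hep-worker-image",  # image carrying the standardized HEP stack
        flavor="m1.large",
        network="private",
        wait=True,                 # block until the VM reaches ACTIVE
    )
    print(server.status)

    # when demand drops, return the capacity to the desktop pool
    conn.delete_server(server.id)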

  4. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  5. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent
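    The general shape of such bounds, stated here in textbook form since the abstract above is truncated, is to relax the coupling resource constraints with nonnegative multipliers, keep the easy temporal constraints, and maximise over the multipliers:

    L(\lambda) \;=\; \min_{x \in X}\; c(x) + \lambda^{\top}\bigl(g(x) - b\bigr),
    \qquad
    \max_{\lambda \ge 0} L(\lambda) \;\le\; \min\{\, c(x) : g(x) \le b,\ x \in X \,\},

    where X encodes the precedence constraints and g(x) <= b the resource constraints; in the approach above, the inner minimisation is the start-time-dependent-cost scheduling subproblem that the authors solve in polynomial time via minimum cuts.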

  6. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Criteria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  7. The Usage of informal computer based communication in the context of organization’s technological resources

    OpenAIRE

    Raišienė, Agota Giedrė; Jonušauskas, Steponas

    2011-01-01

    The purpose of the article is to analyze, theoretically and practically, the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture, and feelings of interdependence and affinity. Also, informal communication widens the ...

  8. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises to extend the benefits of a traditional learning-centred approach toward a collaborative learning-centred one that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  9. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Science.gov (United States)

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  10. Photonic entanglement as a resource in quantum computation and quantum communication

    OpenAIRE

    Prevedel, Robert; Aspelmeyer, Markus; Brukner, Caslav; Jennewein, Thomas; Zeilinger, Anton

    2008-01-01

    Entanglement is an essential resource in current experimental implementations for quantum information processing. We review a class of experiments exploiting photonic entanglement, ranging from one-way quantum computing over quantum communication complexity to long-distance quantum communication. We then propose a set of feasible experiments that will underline the advantages of photonic entanglement for quantum information processing.

  11. Relationship between human resource ability and market access capacity on business performance. (case study of wood craft micro- and small-scale industries in Gianyar Regency, Bali)

    Science.gov (United States)

    Sukartini, N. W.; Sudarmini, N. M.; Lasmini, N. K.

    2018-01-01

    The aims of this research are to: (1) analyze the influence of human resource ability on market access capacity in the wood craft micro- and small-scale industry; (2) analyze the effect of market access capacity on business performance; and (3) analyze the influence of human resource ability on business performance. Data were collected using questionnaires, interviews, observations, and literature studies, and were analyzed using Structural Equation Modeling (SEM). The results of the analysis show that (1) there is a positive and significant influence of human resource ability on market access capacity in wood craft micro- and small-scale industries in Gianyar; (2) there is a positive and significant influence of market access capacity on business performance; and (3) there is a positive and significant influence of human resource ability on business performance. To improve market access capacity and business performance, it is recommended that human resource ability be improved through training; government and higher education institutions are expected to play a role in improving the ability of human resources (craftsmen) through the provision of training programs.

  12. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In systems with non-deterministic behaviour, the management of resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.
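    For reference, the two standard forms that such a model builds on are the stretched-exponential (Kohlrausch) relaxation function and the Tsallis q-exponential of non-extensive statistics; how the paper's modified variant departs from them is detailed in the full text:

    f(t) = \exp\!\bigl[-(t/\tau)^{\beta}\bigr], \quad 0 < \beta \le 1,
    \qquad
    e_q(x) = \bigl[1 + (1-q)\,x\bigr]^{\frac{1}{1-q}},

    with \beta = 1 recovering the ordinary exponential and q -> 1 recovering the usual exponential of Boltzmann-Gibbs statistics.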

  13. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  14. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    Science.gov (United States)

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software has first become a utilitarian interest, and now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate

  15. Pediatric emergency care capacity in a low-resource setting: An assessment of district hospitals in Rwanda.

    Directory of Open Access Journals (Sweden)

    Celestin Hategeka

    Full Text Available Health system strengthening is crucial to improving infant and child health outcomes in low-resource countries. While the knowledge related to improving newborn and child survival has advanced remarkably over the past few decades, many healthcare systems in such settings remain unable to effectively deliver pediatric advance life support management. With the introduction of the Emergency Triage, Assessment and Treatment plus Admission care (ETAT+-a locally adapted pediatric advanced life support management program-in Rwandan district hospitals, we undertook this study to assess the extent to which these hospitals are prepared to provide this pediatric advanced life support management. The results of the study will shed light on the resources and support that are currently available to implement ETAT+, which aims to improve care for severely ill infants and children.A cross-sectional survey was undertaken in eight district hospitals across Rwanda focusing on the availability of physical and human resources, as well as hospital services organizations to provide emergency triage, assessment and treatment plus admission care for severely ill infants and children.Many of essential resources deemed necessary for the provision of emergency care for severely ill infants and children were readily available (e.g. drugs and laboratory services. However, only 4/8 hospitals had BVM for newborns; while nebulizer and MDI were not available in 2/8 hospitals. Only 3/8 hospitals had F-75 and ReSoMal. Moreover, there was no adequate triage system across any of the hospitals evaluated. Further, guidelines for neonatal resuscitation and management of malaria were available in 5/8 and in 7/8 hospitals, respectively; while those for child resuscitation and management of sepsis, pneumonia, dehydration and severe malnutrition were available in less than half of the hospitals evaluated.Our assessment provides evidence to inform new strategies to enhance the capacity of

  16. Pediatric emergency care capacity in a low-resource setting: An assessment of district hospitals in Rwanda

    Science.gov (United States)

    Shoveller, Jean; Tuyisenge, Lisine; Kenyon, Cynthia; Cechetto, David F.; Lynd, Larry D.

    2017-01-01

    Background Health system strengthening is crucial to improving infant and child health outcomes in low-resource countries. While the knowledge related to improving newborn and child survival has advanced remarkably over the past few decades, many healthcare systems in such settings remain unable to effectively deliver pediatric advanced life support management. With the introduction of the Emergency Triage, Assessment and Treatment plus Admission care (ETAT+), a locally adapted pediatric advanced life support management program, in Rwandan district hospitals, we undertook this study to assess the extent to which these hospitals are prepared to provide this care. The results of the study will shed light on the resources and support that are currently available to implement ETAT+, which aims to improve care for severely ill infants and children. Methods A cross-sectional survey was undertaken in eight district hospitals across Rwanda focusing on the availability of physical and human resources, as well as on hospital service organization, to provide emergency triage, assessment and treatment plus admission care for severely ill infants and children. Results Many of the essential resources deemed necessary for the provision of emergency care for severely ill infants and children were readily available (e.g. drugs and laboratory services). However, only 4/8 hospitals had bag-valve-mask (BVM) units for newborns, while nebulizers and metered-dose inhalers (MDI) were not available in 2/8 hospitals. Only 3/8 hospitals had F-75 therapeutic milk and ReSoMal. Moreover, there was no adequate triage system in any of the hospitals evaluated. Further, guidelines for neonatal resuscitation and management of malaria were available in 5/8 and 7/8 hospitals, respectively, while those for child resuscitation and management of sepsis, pneumonia, dehydration and severe malnutrition were available in fewer than half of the hospitals evaluated. Conclusions Our assessment provides evidence to inform new strategies

  17. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, and owned by different individuals or organizations with their own scheduling policies; they have different access-cost models, with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project addresses the issue of optimizing the distribution of jobs onto Grid resources based on knowledge of the status and characteristics of these resources that is necessarily out of date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach of integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy.

  18. Resource-poor settings: infrastructure and capacity building: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement.

    Science.gov (United States)

    Geiling, James; Burkle, Frederick M; Amundson, Dennis; Dominguez-Cherit, Guillermo; Gomersall, Charles D; Lim, Matthew L; Luyckx, Valerie; Sarani, Babak; Uyeki, Timothy M; West, T Eoin; Christian, Michael D; Devereaux, Asha V; Dichter, Jeffrey R; Kissoon, Niranjan

    2014-10-01

    Planning for mass critical care (MCC) in resource-poor or constrained settings has been largely ignored, despite the large populations in such settings that are prone to suffer disproportionately from natural disasters. Addressing MCC in these settings has the potential to help vast numbers of people and also to inform planning for better-resourced areas. The Resource-Poor Settings panel developed five key question domains. After defining the term resource-poor and using the traditional phases of disaster (mitigation/preparedness/response/recovery), literature searches were conducted to identify evidence on which to answer the key questions in these areas. Given a lack of data upon which to develop evidence-based recommendations, expert-opinion suggestions were developed, and consensus was achieved using a modified Delphi process. The five key questions were then separated as follows: definition, infrastructure and capacity building, resources, response, and reconstitution/recovery of host nation critical care capabilities and research. Addressing these questions led the panel to offer 33 suggestions. Because of the large number of suggestions, the results have been separated into two sections: part 1, Infrastructure/Capacity, in this article, and part 2, Response/Recovery/Research, in the accompanying article. The lack, or at best rudimentary presence, of ICU resources and the limited capacity to enhance services further challenge resource-poor and constrained settings. Hence, capacity building entails preventative strategies and the strengthening of primary health services. Assistance from other countries and organizations is needed to mount a surge response. Moreover, planning should include when to disengage and how the host nation can provide capacity beyond the mass casualty care event.

  19. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process were speculated to have a measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and the quantity of direct access storage may play a major role in the effort required to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data were extracted and examined from nearly 50 software development projects, all related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product, as exemplified by the NASA data, was examined. Based upon the results, a number of computer-resource-related implications are provided.

  20. Energetic-economic dynamic computational analysis of plants with small capacity - Gera model

    International Nuclear Information System (INIS)

    Storfer, A.F.; Demanboro, A.C. de; Campello, C.A.G.B.

    1990-01-01

    A methodology and a mathematical model for the energy and economic analysis of low- and medium-capacity hydroelectric power plants are presented. The methodology applies to isolated or grid-integrated hydroelectric power plants, including plants in which part of the energy produced goes to the local market and part to the regional electric system. (author)

  1. Computation of Buffer Capacities for Throughput Constrained and Data Dependent Inter-Task Communication

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Bekooij, Marco J.G.; Smit, Gerardus Johannes Maria

    2008-01-01

    Streaming applications are often implemented as task graphs. Currently, techniques exist to derive buffer capacities that guarantee satisfaction of a throughput constraint for task graphs in which the inter-task communication is data-independent, i.e. the amount of data produced and consumed is

  2. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via machine learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk, the ATLAS Computing Agora (ACA) web platform is presented, along with some of the material developed for specific projects.

  3. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier for data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  4. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for optimal to suboptimal resource assignments at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
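
    To make the optimization procedure concrete, the following toy sketch (our illustration, with an assumed topology, assumed loads, and a simple latency proxy, not the authors' code) applies the Metropolis acceptance rule to task placement on a small graph, lowering the temperature toward an optimal assignment:

```python
# Metropolis-style task placement on a communication/computation network.
# Latency proxy: hop distance from each task's origin to its host node,
# plus a quadratic penalty for node overload. All parameters are assumed.
import math
import random
import networkx as nx  # assumed available; any graph library would do

random.seed(1)
G = nx.random_regular_graph(3, 20)              # toy network topology
capacity = {v: 2.0 for v in G}                  # per-node compute capacity
tasks = [{"load": random.uniform(0.5, 1.5),
          "origin": random.choice(list(G))} for _ in range(30)]
host = [random.choice(list(G)) for _ in tasks]  # initial random placement
dist = dict(nx.all_pairs_shortest_path_length(G))

def global_latency(host):
    """Communication hops plus quadratic overload penalty."""
    used = {v: 0.0 for v in G}
    comm = 0.0
    for t, h in zip(tasks, host):
        used[h] += t["load"]
        comm += dist[t["origin"]][h]
    over = sum(max(0.0, used[v] - capacity[v]) ** 2 for v in G)
    return comm + 10.0 * over

def metropolis_step(host, T):
    """Propose rehosting one task on a neighbour; accept via Metropolis rule."""
    i = random.randrange(len(tasks))
    old = host[i]
    before = global_latency(host)
    host[i] = random.choice(list(G[old]))
    delta = global_latency(host) - before
    if delta > 0 and random.random() >= math.exp(-delta / T):
        host[i] = old  # reject the move

for T in [5.0, 2.0, 1.0, 0.5, 0.1]:             # crude annealing schedule
    for _ in range(2000):
        metropolis_step(host, T)
    print(f"T={T:4.1f}  latency={global_latency(host):8.2f}")
```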

  5. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  6. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  7. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing model predictive control (MPC) approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
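
    The event-driven idea can be illustrated with a toy sketch (ours, not the paper's formulation): re-solve the MPC optimization only when the measured state drifts from the last prediction by more than a threshold, otherwise keep executing the stored input plan. The model, horizon and threshold below are all assumed.

```python
# Event-driven MPC sketch: skip re-optimization while the model tracks reality.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # assumed double-integrator model
B = np.array([[0.0], [0.1]])
H = 10                                     # prediction horizon
THRESH = 0.05                              # event-trigger threshold

def solve_mpc(x0):
    """Unconstrained finite-horizon regulator: least squares over the stacked
    predictions x_{k+1} = A^(k+1) x0 + sum_j A^(k-j) B u_j, driving states to 0."""
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    Gam = np.zeros((2 * H, H))
    for k in range(H):
        for j in range(k + 1):
            Gam[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    return np.linalg.lstsq(Gam, -Phi @ x0, rcond=None)[0]  # open-loop input plan

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
plan, step, solves = solve_mpc(x), 0, 1
for t in range(100):
    u = plan[min(step, H - 1)]
    x_pred = A @ x + B.ravel() * u               # model prediction for this step
    x = x_pred + rng.normal(scale=0.01, size=2)  # plant with small disturbance
    step += 1
    # Event condition: re-optimize only if reality drifted from the prediction.
    if np.linalg.norm(x - x_pred) > THRESH or step >= H:
        plan, step, solves = solve_mpc(x), 0, solves + 1
print(f"solved the MPC problem {solves} times in 100 steps instead of 100")
```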

  8. Piping data bank and erection system of Angra 2: structure, computational resources and systems

    International Nuclear Information System (INIS)

    Abud, P.R.; Court, E.G.; Rosette, A.C.

    1992-01-01

    The piping data bank of Angra 2, called the Erection Management System, was developed to manage the piping erection of the Angra 2 nuclear power plant. Beyond the erection follow-up of piping and supports, it manages the piping design, material procurement, the flow of fabrication documents, the testing of welds, and material stocks at the warehouse. The work carried out to define the structure of the data bank, the computational resources, and the systems is described here. (author)

  9. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

    Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. In particular, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns in designing a cooperative D2D computational resource sharing system, since mutual communication may not be stably available due to user mobility. A previous endeavour demonstrated how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower the average task execution time. There are doubts, however, about whether this type of task scheduling scheme, though effective, is fair to all users. In other words, it can be unfair to users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users' computational task cooperation is recorded on the public blockchain ledger as transactions, and each user's credit balance can be easily read from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that, with a minor sacrifice in average task execution time, the level of fairness can be greatly enhanced.
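
    As a rough illustration of the mechanism (hypothetical field names and credit values; the paper's actual transaction format is not reproduced here), cooperation records can sit on an append-only hash-chained ledger from which the supernode reads credit balances and serves requesters in decreasing-credit order:

```python
# Toy credit ledger: helpers earn credit, requesters spend it, and the
# scheduler prioritizes requesters by their on-ledger balance.
import hashlib
import json

ledger = []  # each block records one act of task cooperation

def add_block(helper, requester, credit):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"helper": helper, "requester": requester, "credit": credit, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)

def balance(user):
    """Credit earned as helper minus credit spent as requester."""
    return sum(b["credit"] * (1 if b["helper"] == user else -1)
               for b in ledger if user in (b["helper"], b["requester"]))

def schedule(requests, helpers):
    """Supernode: serve requesters in decreasing credit-balance order."""
    for req in sorted(requests, key=balance, reverse=True):
        if helpers:
            add_block(helpers.pop(), req, credit=1)

add_block("alice", "bob", 3)      # alice helped bob: alice +3, bob -3
add_block("bob", "carol", 1)
schedule(["bob", "alice", "carol"], ["dave"])
print({u: balance(u) for u in ["alice", "bob", "carol", "dave"]})
```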

  10. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-12-01

    The higher-order statistics (HOS) of the channel capacity, $\mu_n = \mathbb{E}[\log^n(1+\gamma_{\mathrm{end}})]$, where $n \in \mathbb{N}$ denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on the correlated generalized fading environments. © 2012 IEEE.
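
    The formula above is amenable to a quick Monte Carlo cross-check. The sketch below (ours, not the letter's MGF method) estimates the first few capacity moments for L-branch MRC over i.i.d. Rayleigh fading, where the combined SNR is Gamma-distributed; the branch count and mean SNR are assumed for illustration.

```python
# Monte Carlo estimate of mu_n = E[log^n(1 + gamma_end)] for MRC/Rayleigh.
import numpy as np

rng = np.random.default_rng(0)
L, mean_snr, N = 4, 10.0, 1_000_000      # branches, per-branch mean SNR, samples
gamma_end = rng.gamma(shape=L, scale=mean_snr, size=N)  # MRC output SNR
logC = np.log(1.0 + gamma_end)           # capacity in nats per channel use
for n in (1, 2, 3):
    print(f"mu_{n} = E[log^{n}(1+gamma)] = {np.mean(logC ** n):.4f}")
# mu_1 is the ergodic capacity; mu_2 - mu_1**2 gives the capacity variance.
```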

  11. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2012-01-01

    The higher-order statistics (HOS) of the channel capacity, $\mu_n = \mathbb{E}[\log^n(1+\gamma_{\mathrm{end}})]$, where $n \in \mathbb{N}$ denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on the correlated generalized fading environments. © 2012 IEEE.

  12. On the computation of the higher order statistics of the channel capacity for amplify-and-forward multihop transmission

    KAUST Repository

    Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim

    2014-01-01

    Higher order statistics (HOS) of the channel capacity provide useful information regarding the level of reliability of signal transmission at a particular rate. In this paper, we propose a novel and unified analysis, which is based on the moment-generating function (MGF) approach, to efficiently and accurately compute the HOS of the channel capacity for amplify-and-forward (AF) multihop transmission over generalized fading channels. More precisely, our easy-to-use and tractable mathematical formalism requires only the reciprocal MGFs of the transmission hop signal-to-noise ratio (SNR). Numerical and simulation results, which are performed to exemplify the usefulness of the proposed MGF-based analysis, are shown to be in perfect agreement. © 2013 IEEE.

  13. Human Resources Capacity Building as a Strategy in Strengthening Nuclear Knowledge Sustainability in the Experimental Fuel Element Installation of BATAN-Indonesia

    International Nuclear Information System (INIS)

    Ratih Langenati; Bambang, Herutomo; Arief Sasongko Adhi

    2014-01-01

    Strategy in Strengthening Nuclear Knowledge Sustainability: • In order to maintain human resources capacity related to nuclear fuel production technology, a nuclear knowledge preservation program is implemented in the EFEI. • The program includes coaching/training, mentoring and documenting important knowledge. • The program activities are monitored and evaluated quarterly for its improvement in the following year

  14. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for Japanese, on the one hand, and in Japanese language textbooks and wordlists, on the other. After introducing the importance of learning collocational relations in a foreign language, we examine their coverage in various learner resources for Japanese. We concentrate in particular on a few collocations at the beginner level and demonstrate their treatment across the various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a greater foreign-language learning burden than predictable ones.

  15. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
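
    The flavour of the allocation problem can be sketched as follows (hypothetical sites, demands and utilities; a greedy utility-density heuristic stands in for the paper's exact 0-1 multichoice multidimensional knapsack formulation): each task offers several options, at most one option per task may be chosen, subject to per-site resource capacities.

```python
# Greedy sketch of task-option allocation under multidimensional capacities.
capacity = {"siteA": {"cpu": 8, "mem": 16}, "siteB": {"cpu": 4, "mem": 32}}
tasks = {  # task -> list of (site, resource demand, utility) options
    "t1": [("siteA", {"cpu": 4, "mem": 4}, 10.0), ("siteB", {"cpu": 2, "mem": 8}, 8.0)],
    "t2": [("siteA", {"cpu": 6, "mem": 8}, 9.0),  ("siteB", {"cpu": 3, "mem": 16}, 7.0)],
    "t3": [("siteB", {"cpu": 2, "mem": 8}, 6.0)],
}

def density(option):
    """Utility per unit of the scarcest resource dimension the option consumes."""
    site, need, util = option
    frac = max(need[r] / capacity[site][r] for r in need)
    return util / frac

# Flatten to (density, task, option), best first; take at most one option per task.
pool = [(density(o), t, o) for t, opts in tasks.items() for o in opts]
pool.sort(key=lambda e: e[0], reverse=True)

chosen, free = {}, {s: dict(c) for s, c in capacity.items()}
for _, t, (site, need, util) in pool:
    if t in chosen or any(free[site][r] < need[r] for r in need):
        continue
    chosen[t] = (site, util)
    for r in need:
        free[site][r] -= need[r]

print(chosen, "total utility:", sum(u for _, u in chosen.values()))
```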

  16. The Trope Tank: A Laboratory with Material Resources for Creative Computing

    Directory of Open Access Journals (Sweden)

    Nick Montfort

    2014-12-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2014v10n2p53 Principles for organizing and making use of a laboratory with material computing resources are articulated. This laboratory, the Trope Tank, is a facility for teaching, research, and creative collaboration that offers hardware (in working condition and set up for use) from the 1970s, 1980s, and 1990s, including videogame systems, home computers, and an arcade cabinet. To aid in investigating the material history of texts, the lab has a small 19th-century letterpress, a typewriter, a print terminal, and dot-matrix printers. Other resources include controllers, peripherals, manuals, books, and software on physical media. These resources are used for teaching, loaned for local exhibitions and presentations, and accessed by researchers and artists. The space is primarily a laboratory (rather than a library, studio, or museum), so materials are organized by platform and intended use. Textual information about the historical contexts of the available systems is provided, and resources are set up to allow easy operation, and even casual use, by researchers, teachers, students, and artists.

  17. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year of acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post-test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline mean confidence and perceived knowledge scores averaged 3.8; after viewing the resource program, post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post-test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post-testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  18. Optimal nonlinear information processing capacity in delay-based reservoir computers

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-01

    Reservoir computing is a recently introduced, brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme, but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
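
    A minimal software stand-in for such a reservoir computer is an echo state network: a fixed random recurrent layer whose linear readout alone is trained, with performance hinging on architecture parameters such as spectral radius and input scaling. The sketch below uses assumed parameter values and a toy target task.

```python
# Echo state network sketch: fixed random reservoir, ridge-regression readout.
import numpy as np

rng = np.random.default_rng(42)
N, T, washout = 200, 2000, 200
u = rng.uniform(0, 0.5, T)               # input stream
y_target = np.sin(4 * np.pi * u) * u     # toy nonlinear target to reproduce

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])     # reservoir state update
    states.append(x.copy())
X = np.array(states)[washout:]
Y = y_target[washout:]

# Only the linear readout is trained; the reservoir parameters above are the
# "architecture parameters" whose choice the paper's functional link addresses.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)
pred = X @ W_out
print("NMSE:", np.mean((pred - Y) ** 2) / np.var(Y))
```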

  19. Transmitting Information by Propagation in an Ocean Waveguide: Computation of Acoustic Field Capacity

    Science.gov (United States)

    2015-06-17

    [Fragmentary abstract excerpt] ... and optics literature [6, 7]. Primary motivation for the resurgence of interest in the study of fundamental relationships between (classical) wave ... approximated by a finite number of degrees of freedom, and the information-theoretic analysis used for studying MIMO capacity can be applied to determine the ... vectors are given by $\nu_{m,n}(\rho',z',\varphi') = a_{m,n}\, u_n(z')\, J_m(k_n \rho')\, e^{-im\varphi'}$ (Eq. 42), with $m,n$ corresponding to azimuthal and depth eigenvalue indices. Finally

  20. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)
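
    A small numerical illustration of the underlying idea (ours, not the paper's operator-space scheme): the two-site Fermi-Hubbard Hamiltonian acts on 4 spin-orbitals (4 qubits, dimension 16), but it conserves the particle number of each spin species, so the half-filling physics lives in a 4-dimensional block that fits on 2 qubits.

```python
# Block-diagonal reduction of the two-site Fermi-Hubbard Hamiltonian.
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

def lower(mode, n_modes=4):
    """Jordan-Wigner annihilation operator for one of n_modes fermion modes."""
    ops = [Z] * mode + [a] + [I2] * (n_modes - mode - 1)
    return reduce(np.kron, ops)

t, U = 1.0, 4.0
c = [lower(m) for m in range(4)]          # mode order: 1up, 2up, 1dn, 2dn
n = [ci.T @ ci for ci in c]
H = (-t * (c[0].T @ c[1] + c[1].T @ c[0] + c[2].T @ c[3] + c[3].T @ c[2])
     + U * (n[0] @ n[2] + n[1] @ n[3]))

# Keep only the (N_up, N_dn) = (1, 1) block: H is block diagonal in these numbers.
nup = (n[0] + n[1]).diagonal()
ndn = (n[2] + n[3]).diagonal()
keep = np.where((np.round(nup) == 1) & (np.round(ndn) == 1))[0]
H_block = H[np.ix_(keep, keep)]

print("full dim:", H.shape[0], "-> block dim:", H_block.shape[0])
print("ground energy:", np.linalg.eigvalsh(H_block)[0],
      "exact:", U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2))
```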

  1. Improving the capacity, reliability & life of mobile devices with cloud computing

    CSIR Research Space (South Africa)

    Nkosi, MT

    2011-05-01

    Full Text Available devices. The approach in this paper is to model the mobile cloud computing process in a 3GPP IMS software development and emulator environment, and to show that multimedia and security operations can be performed in the cloud, allowing mobile service...

  2. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    Science.gov (United States)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

    The development of a computer simulation/optimization model is described to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high-speed turnoffs. The model, named REDIM 2.0, is a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimated weighted average runway occupancy time for a user-defined aircraft population. The location and geometric characteristics of each turnoff are also provided to the user.
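
    For a fixed set of turnoff locations, the model's headline output reduces to a weighted average of per-class runway occupancy times over the aircraft mix, as in this tiny sketch with hypothetical numbers:

```python
# Weighted average runway occupancy time (WROT) for an assumed aircraft mix.
fleet = {           # aircraft class -> (share of operations, occupancy time, s)
    "small": (0.30, 45.0),
    "large": (0.55, 52.0),
    "heavy": (0.15, 60.0),
}
wrot = sum(share * rot for share, rot in fleet.values())
print(f"weighted average runway occupancy time = {wrot:.1f} s")
```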

  3. An Investigation of the Relationship between College Chinese EFL Students' Autonomous Learning Capacity and Motivation in Using Computer-Assisted Language Learning

    Science.gov (United States)

    Pu, Minran

    2009-01-01

    The purpose of the study was to investigate the relationship between college EFL students' autonomous learning capacity and motivation in using web-based Computer-Assisted Language Learning (CALL) in China. This study included three questionnaires: the student background questionnaire, the questionnaire on student autonomous learning capacity, and…

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  5. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction, and choice of technology to complete assignments. Learning-styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  6. Life satisfaction in 6 European countries: the relationship to health, self-esteem, and social and financial resources among people (Aged 65-89) with reduced functional capacity.

    Science.gov (United States)

    Borg, Christel; Fagerström, Cecilia; Balducci, Cristian; Burholt, Vanessa; Ferring, Dieter; Weber, Germain; Wenger, Clare; Holst, Göran; Hallberg, Ingalill R

    2008-01-01

    The aim of this study was to investigate how overall health, participation in physical activities, self-esteem, and social and financial resources are related to life satisfaction among people aged 65 and older with reduced activities of daily living (ADL) capacity in 6 European countries. A subsample of the European Study of Adults' Well-Being (ESAW), consisting of 2,195 people with reduced ADL capacity from Sweden, the United Kingdom, the Netherlands, Luxembourg, Austria, and Italy, was included. The Older Americans' Resources Schedule (OARS), the Life Satisfaction Index Z, and the Self-Esteem Scale were used. In all national samples, overall health, self-esteem, and feeling worried, rather than ADL capacity, were significantly associated with life satisfaction. The findings indicate the importance of taking not only the reduction in functional capacity into account but also the individual's perception of health and self-esteem when outlining health care and nursing aimed at improving life satisfaction. The study thus suggests that personal rather than environmental factors are important for life satisfaction among people with reduced ADL capacity living in Europe.

  7. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) is becoming an important component of the smart city, contributing to safer roads, better traffic control, and on-demand services by utilizing and processing the information collected from the sensors of vehicles and roadside infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirements of complex services against the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between a vehicle and the Vehicular Cloud (VC) while the vehicle is computing for a service. More importantly, such connection faults seriously disturb the normal services of VCC and affect the safe operation of the transportation system. In this paper, a safety resource allocation mechanism against connection faults in VCC is proposed, using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high-dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  8. SELF-HEALING CAPACITY OF CONCRETE - COMPUTER SIMULATION STUDY OF UNHYDRATED CEMENT STRUCTURE

    Directory of Open Access Journals (Sweden)

    Huan He

    2011-05-01

    Full Text Available Aggregate occupies at least three-quarters of the volume of concrete, so its impact on concrete's properties is large. In this paper, the aggregate's influence on the non-hydrated part of the matured paste is assessed with the concurrent-algorithm-based computer simulation system SPACE. A distinction is made between interfacial zones (ITZs) and bulk paste. Containers with rigid boundaries were employed for the production of a series of cement pastes, which were subjected to quantitative microstructure analysis. The relevant gradient structures in the ITZ and bulk are presented and discussed, and the relevance of this structural information for the possible self-healing of cracks is briefly addressed.

  9. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of their computing facilities. The computing infrastructure for Run-II is dimensioned to cope, at most, with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however generate large backlogs, delaying the completion of data reconstruction and ultimately the availability of data for physics analysis. In order to cope with production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases were already explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present a proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, onto an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools while also serving as an extension of the farm for local usage. The amount of resources allocated can thus be elastically adapted to the needs of the CMS experiment and local users. Moreover, direct access/integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.

  10. DrugSig: A resource for computational drug repositioning utilizing gene expression signatures.

    Directory of Open Access Journals (Sweden)

    Hongyu Wu

    Full Text Available Computational drug repositioning has proven to be an effective approach to developing new uses for existing drugs. However, currently existing strategies rely heavily on drug-response gene signatures that are scattered across separate, individual experimental datasets, which makes these methods inefficient. A comprehensive database of drug-response gene signatures would therefore be very helpful. We collected drug-response microarray data and annotated the related drug and target information from public databases and the scientific literature. By selecting the top 500 up-regulated and the top 500 down-regulated genes as drug signatures, we manually established the DrugSig database. Currently, DrugSig contains more than 1300 drugs, 7000 microarrays and 800 targets. Moreover, we developed signature-based and target-based functions to aid drug repositioning. The constructed database can serve as a resource to speed up computational drug repositioning. Database URL: http://biotechlab.fudan.edu.cn/database/drugsig/.
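
    A sketch of the kind of signature-based matching such a resource enables (hypothetical gene sets; a real query would use the DrugSig up/down lists): score a drug highly when its signature reverses the disease signature, i.e. drug-up genes overlap disease-down genes and vice versa.

```python
# Toy signature-reversal score for drug repositioning.
drug_sigs = {
    "drugA": {"up": {"TP53", "CDKN1A", "BAX"}, "down": {"MYC", "CCND1"}},
    "drugB": {"up": {"MYC", "CCND1"}, "down": {"TP53", "BAX"}},
}
disease = {"up": {"MYC", "CCND1", "E2F1"}, "down": {"TP53", "BAX", "CDKN1A"}}

def reversal_score(drug, disease):
    """Fraction of the disease signature the drug moves in the opposite
    direction, minus the fraction it moves in the same direction."""
    size = len(disease["up"] | disease["down"])
    rev = len(drug["up"] & disease["down"]) + len(drug["down"] & disease["up"])
    same = len(drug["up"] & disease["up"]) + len(drug["down"] & disease["down"])
    return (rev - same) / size

for name, sig in drug_sigs.items():
    print(name, round(reversal_score(sig, disease), 3))
# drugA reverses the disease signature (positive score) and would rank as a candidate.
```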

  11. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available Purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover the sharing of work-related information, the coordination of team activities, the spread of organizational culture, and feelings of interdependence and affinity. Informal communication also widens individuals' recognition of reality, creates a shared context between interlocutors, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations, because it helps to ensure the efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or pushes them outside the organization, so electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts' administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and acquaintances shows that workers of the courts administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  13. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available Purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover the sharing of work-related information, the coordination of team activities, the spread of organizational culture, and feelings of interdependence and affinity. Informal communication also widens individuals' recognition of reality, creates a shared context between interlocutors, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations, because it helps to ensure the efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or pushes them outside the organization, so electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts' administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and acquaintances shows that workers of the courts administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  14. A computational fluid dynamics analysis on stratified scavenging system of medium capacity two-stroke internal combustion engines

    Directory of Open Access Journals (Sweden)

    Pitta Srinivasa Rao

    2008-01-01

    Full Text Available The main objective of the present work is to carry out a computational study of the stratified scavenging system in medium-capacity two-stroke engines, with the aim of curbing emissions from such engines. The 3-D flows within the cylinder are simulated using computational fluid dynamics with the code Fluent 6. Flow structures in the transfer ports and the exhaust port are predicted both with and without stratification. The total pressure and velocity maps from the computation provide comprehensive information on the scavenging and stratification phenomena. The analysis covers the flow in the transfer ports, the extra port within the transfer port, and the exhaust port as the piston moves from top dead center to bottom dead center, with the ports closed, half open, three-fourths open, and fully open. An unstructured mesh is adopted for the geometry created in the CATIA software. The flow is simulated by solving the governing equations, namely conservation of mass, momentum and energy, using the SIMPLE algorithm. Turbulence is modeled by the high-Reynolds-number version of the k-epsilon model. Experimental measurements were made to validate the numerical predictions. Good agreement is observed between the predicted results and the experimental data: stratification significantly reduced emissions and improved fuel economy.

  15. Development of urbanization in arid and semi arid regions based on the water resource carrying capacity -- a case study of Changji, Xinjiang

    Science.gov (United States)

    Xiao, H.; Zhang, L.; Chai, Z.

    2017-07-01

    Arid and semi-arid regions in China have a relatively weak economic foundation, limited capacity for independent development, and a low level of urbanization. The new urbanization within these regions faces severe challenges brought by resource constraints. In this paper, we selected the Changji Hui Autonomous Prefecture, Xinjiang Uyghur Autonomous Region, as the study area. Based on research into the main water demands of households, agriculture, and industry, we found that the agricultural planting structure is the key water-consumption factor. Finally, we suggest that more attention should be paid to the rational utilization of water resources and the population carrying capacity, and to adjusting and upgrading the industrial structure, in coordination with the Silk Road Economic Belt.

  16. Computer modelling of the UK wind energy resource. Phase 2. Application of the methodology

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Makari, M; Newton, K; Ravenscroft, F; Whittaker, J

    1993-12-31

    This report presents the results of the second phase of a programme to estimate the UK wind energy resource. The overall objective of the programme is to provide quantitative resource estimates using a mesoscale numerical model (resolution about 1 km) for the prediction of wind flow over complex terrain, in conjunction with digitised terrain data and wind data from surface meteorological stations. A network of suitable meteorological stations was established and long-term wind data obtained. Digitised terrain data for the whole UK were obtained, and wind flow modelling using the NOABL computer program was performed. Maps of extractable wind power have been derived for various assumptions about wind turbine characteristics. Validation of the methodology indicates that the results are internally consistent and in good agreement with available comparison data. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10 m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK, with only a few coastal areas over 6 m/s. The present study indicates that 28% of the UK land area has speeds over 6 m/s, with many hill sites having 10 m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. The results will be useful for broad resource studies and initial site screening. Detailed resource evaluation for local sites will require more detailed local modelling or, ideally, long-term field measurements. (12 figures, 14 tables, 21 references). (Author)
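
    The sensitivity to the reported mean-speed thresholds follows from the cube law for wind power, P = (1/2) rho A v^3; the back-of-envelope sketch below (with an assumed rotor size) shows why sites above 6-7 m/s are far more attractive than 4.5 m/s sites:

```python
# Cube-law illustration: power through a rotor at different mean wind speeds.
rho = 1.225               # air density, kg/m^3
area = 3.14159 * 20 ** 2  # swept area of a hypothetical 40 m diameter rotor, m^2
for v in (4.5, 6.0, 7.0, 10.0):
    p_kw = 0.5 * rho * area * v ** 3 / 1000.0
    print(f"v = {v:4.1f} m/s -> wind power through rotor = {p_kw:7.1f} kW")
```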

  17. An analysis of health system resources in relation to pandemic response capacity in the Greater Mekong Subregion

    Directory of Open Access Journals (Sweden)

    Hanvoravongchai Piya

    2012-12-01

    Full Text Available Abstract Background There is an increasing perception that countries cannot work in isolation to mitigate the threat of pandemic influenza. In the Greater Mekong Subregion (GMS) of Asia, high socio-economic diversity and fertile conditions for the emergence and spread of infectious diseases underscore the importance of transnational cooperation. Investigation of healthcare resource distribution and inequalities can help determine the need for, and inform decisions regarding, resource sharing and mobilisation. Methods We collected data on healthcare resources deemed important for responding to pandemic influenza through surveys of hospitals and district health offices across four countries of the GMS (Cambodia, Lao PDR, Thailand, Vietnam). Focusing on four key resource types (oseltamivir, hospital beds, ventilators, and health workers), we mapped and analysed resource distributions at province level to identify relative shortages, mismatches, and clustering of resources. We analysed inequalities in resource distribution using the Gini coefficient and Theil index. Results Three-quarters of the Cambodian population and two-thirds of the Laotian population live in relatively underserved provinces (those with resource densities in the lowest quintile across the region) with respect to health workers, ventilators, and hospital beds. More than a quarter of the Thai population is relatively underserved for health workers and oseltamivir. Approximately one-fifth of the Vietnamese population is underserved for beds and ventilators. All Cambodian provinces are underserved for at least one resource. In Lao PDR, 11 percent of the population is underserved by all four resource items. Of the four resources, ventilators and oseltamivir were the most unequally distributed. Cambodia generally showed higher levels of inequality in resource distribution than the other countries. Decomposition of the Theil index suggests that inequalities result principally from
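
    The inequality measure used in the study is straightforward to compute; the sketch below implements the Gini coefficient on made-up provincial resource densities:

```python
# Gini coefficient of a per-capita resource across provinces (values assumed).
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-index formulation:
    G = sum_i (2i - n - 1) x_i / (n * sum_i x_i), i = 1..n on ascending data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

ventilators_per_100k = [0.2, 0.3, 0.4, 1.1, 2.5, 4.0, 6.8]  # hypothetical provinces
print(f"Gini = {gini(ventilators_per_100k):.3f}")  # ~0 is equal, toward 1 is unequal
```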

  18. AN INVESTIGATION OF RELATIONSHIP BETWEEN LEADERSHIP STYLES OF HUMAN RESOURCES MANAGER, CREATIVE PROBLEM SOLVING CAPACITY AND CAREER SATISFACTION: AN EMPIRICAL STUDY

    OpenAIRE

    Hüseyin YILMAZ

    2016-01-01

    The aim of this study is to empirically examine the relationships between the leadership behaviors of human resources managers, the organization's creative problem-solving capacity, and employees' career satisfaction. The data required for the research were obtained via a structured questionnaire from 130 employees working in five-star hotels in the province of Aydin. According to the factor analysis, democratic leadership style, easygoing, participants c...

  19. A Conceptual Model for the Sustainable Governance of Integrated Management of National Water Resources with a Focus on Training and Capacity Building

    Directory of Open Access Journals (Sweden)

    Alaleh Ghaemi

    2017-09-01

    Full Text Available The instabilities of the past two decades in the governance of water resources have led to the need for an integrated approach to the problem. Moreover, decent and sustainable governance of water resources has come to be recognized as the complement to the integrated management of water resources. The present study strives to develop a conceptual model of sustainable water-resources governance with emphasis on training and capacity-building. For this purpose, expert views presented at different international meetings and world conferences on water were reviewed to develop a comprehensive and all-embracing conceptual model of sustainable governance for the integrated management of water resources with a focus on training and capacity-building. In the second stage of the study, both the internationally published literature and the regulatory documents on water management approved at the national level were consulted to derive appropriate standards, criteria, and indicators for the implementation of the proposed conceptual model. The relevance of these indicators was validated by soliciting expert views, while their reliability was calculated via Cronbach's alpha to be 0.94. The third stage of the study involved the ranking and grading of the indicators using the relevant software in a fuzzy decision-making environment, based on interviews with 110 senior water executives, academics working in the field, senior agricultural managers, water experts in local communities, and NGO activists. The emerging model finally consisted of 9 criteria and 52 indicators, among which the criterion of public participation and the indicator of training and capacity-building won the highest scores. It may be claimed that the proposed conceptual model is relevant and well adapted to the sustainable governance presently sought. The key roles in this model are played by public participation as well as training and capacity-building, which must be on the priority

  20. Building Surgical Research Capacity Globally: Efficacy of a Clinical Research Course for Surgeons in Low-Resource Settings

    OpenAIRE

    Theodore A. Miclau; Kathryn Chomsky-Higgins; Alfredo Ceballos; Roberto Balmaseda; Saam Morshed; Mohit Bhandari; Fernando de la Huerta; Theodore Miclau

    2017-01-01

    Musculoskeletal injury confers an enormous burden of preventable disability and mortality in low- and middle-income countries (LMICs), where appropriate orthopedic and trauma care services are lacking. Leading international health agencies emphasize the critical need to create and sustain research capacity in the developing world as a strategic factor in the establishment of functional, independent health systems. One aspect of building research capacity is partnership between developing and deve...

  1. Laboratory capacity building for the International Health Regulations (IHR[2005]) in resource-poor countries: the experience of the African Field Epidemiology Network (AFENET).

    Science.gov (United States)

    Masanza, Monica Musenero; Nqobile, Ndlovu; Mukanga, David; Gitta, Sheba Nakacubo

    2010-12-03

    Laboratory is one of the core capacities that countries must develop for the implementation of the International Health Regulations (IHR[2005]), since laboratory services play a major role in all the key processes of detection, assessment, response, notification, and monitoring of events. While developed countries easily adapt their well-organized routine laboratory services, resource-limited countries need considerable capacity building as many gaps still exist. In this paper, we discuss some of the efforts made by the African Field Epidemiology Network (AFENET) in supporting laboratory capacity development in the Africa region. The efforts range from promoting graduate-level training programs to building advanced technical, managerial and leadership skills to in-service short-course training for peripheral laboratory staff. A number of specific projects focus on external quality assurance, basic laboratory information systems, strengthening laboratory management towards accreditation, equipment calibration, harmonization of training materials, networking and provision of pre-packaged laboratory kits to support outbreak investigation. Available evidence indicates a positive effect of these efforts on laboratory capacity in the region. However, many opportunities exist, especially to support the roll-out of these projects as well as attending to some additional critical areas such as biosafety and biosecurity. We conclude that AFENET's approach of strengthening national and sub-national systems provides a model that could be adopted in resource-limited settings such as sub-Saharan Africa.

  2. AN INVESTIGATION OF RELATIONSHIP BETWEEN LEADERSHIP STYLES OF HUMAN RESOURCES MANAGER, CREATIVE PROBLEM SOLVING CAPACITY AND CAREER SATISFACTION: AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Hüseyin YILMAZ

    2016-12-01

    Full Text Available The aim of this study is to examine empirically the relationships between the leadership behaviors of human resources managers, the organization's creative problem-solving capacity, and employees' career satisfaction. The data required for the research were obtained through a structured questionnaire from 130 employees working in five-star hotels operating in the province of Aydin. Factor analysis identified democratic, easygoing, participative, transformational, laissez-faire, and autocratic leadership dimensions. The analysis found significant, positive relationships between leadership style and the dependent variables of the research. Regression analysis revealed that the relationship between leadership and creative problem-solving capacity was stronger for the democratic leadership style than for the other styles, and that the organization's creative problem-solving capacity was the variable that best explained employees' level of career satisfaction. Studies analyzing the relationships between leadership behavior, creative problem-solving capacity, and career satisfaction, variables of great importance to organizations in the context of human resources, appear to be quite limited. Analyzing the relationships between these variables can make a significant contribution to knowledge in the literature and is expected to form a basis for future research.

  3. Thermodynamic properties of xanthone: Heat capacities, phase-transition properties, and thermodynamic-consistency analyses using computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 520) K. • The enthalpy of combustion was measured and the enthalpy of formation was derived. • Thermodynamic-consistency analysis resolved inconsistencies in literature enthalpies of sublimation. • An inconsistency in literature enthalpies of combustion was resolved. • Application of computational chemistry in consistency analysis was demonstrated successfully. - Abstract: Heat capacities and phase-transition properties for xanthone (IUPAC name 9H-xanthen-9-one and Chemical Abstracts registry number [90-47-1]) are reported for the temperature range 5 < T/K < 524. Statistical calculations were performed and thermodynamic properties for the ideal gas were derived based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. These results are combined with sublimation pressures from the literature to allow critical evaluation of inconsistent enthalpies of sublimation for xanthone, also reported in the literature. Literature values for the enthalpy of combustion of xanthone are re-assessed, a revision is recommended for one result, and a new value for the enthalpy of formation of the ideal gas is derived. Comparisons with thermophysical properties reported in the literature are made for all other reported and derived properties, where possible.

  4. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    Science.gov (United States)

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including the operating system (OS), programs and data on the server side for IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer, and interface and management layer. We also present a detailed analysis of the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experimental prototype system.

  5. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  6. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  7. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  8. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  9. Uranium supply/demand projections to 2030 in the OECD/NEA-IAEA ''Red Book''. Nuclear growth projections, global uranium exploration, uranium resources, uranium production and production capacity

    International Nuclear Information System (INIS)

    Vance, Robert

    2009-01-01

    World demand for electricity is expected to continue to grow rapidly over the next several decades to meet the needs of an increasing population and economic growth. The recognition by many governments that nuclear power can produce competitively priced, base load electricity that is essentially free of greenhouse gas emissions, combined with the role that nuclear can play in enhancing security of energy supplies, has increased the prospects for growth in nuclear generating capacity. Since the mid-1960s, with the co-operation of their member countries and states, the OECD Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA) have jointly prepared periodic updates (currently every 2 years) on world uranium resources, production and demand. These updates have been published by the OECD/NEA in what is commonly known as the ''Red Book''. The 2007 edition replaces the 2005 edition and reflects information current as of 1 January 2007. Uranium 2007: Resources, Production and Demand presents, in addition to updated resource figures, the results of a recent review of world uranium market fundamentals and provides a statistical profile of the world uranium industry. It contains official data provided by 40 countries (and one Country Report prepared by the IAEA Secretariat) on uranium exploration, resources, production and reactor-related requirements. Projections of nuclear generating capacity and reactor-related uranium requirements to 2030 as well as a discussion of long-term uranium supply and demand issues are also presented. (orig.)

  10. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.

  11. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
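
    The recursive equations referred to in the abstract are the standard integral image recurrence, ii(y, x) = i(y, x) + ii(y-1, x) + ii(y, x-1) - ii(y-1, x-1), whose serial dependency is what the paper's row-parallel decomposition works around. Below is a minimal software sketch of that recurrence and of the resulting constant-time box sum; it illustrates the technique only and is not the authors' hardware design.

    ```python
    import numpy as np

    def integral_image(img):
        """Compute the integral image ii, where ii[y, x] is the sum of all
        pixels of img at or above row y and at or left of column x, using
        the standard recurrence (serial along rows and columns)."""
        h, w = img.shape
        ii = np.zeros((h, w), dtype=np.int64)
        for y in range(h):
            row_sum = 0
            for x in range(w):
                row_sum += int(img[y, x])                 # cumulative sum along the row
                ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
        return ii

    def box_sum(ii, top, left, bottom, right):
        """Sum of the image over an inclusive rectangle using at most four
        lookups, independent of the rectangle's size."""
        total = int(ii[bottom, right])
        if top > 0:
            total -= int(ii[top - 1, right])
        if left > 0:
            total -= int(ii[bottom, left - 1])
        if top > 0 and left > 0:
            total += int(ii[top - 1, left - 1])
        return total
    ```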

  12. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  13. Developing nursing and midwifery students' capacity for coping with bullying and aggression in clinical settings: Students' evaluation of a learning resource.

    Science.gov (United States)

    Hogan, Rosemarie; Orr, Fiona; Fox, Deborah; Cummins, Allison; Foureur, Maralyn

    2018-03-01

    An innovative blended learning resource for undergraduate nursing and midwifery students was developed in a large urban Australian university, following a number of concerning reports by students on their experiences of bullying and aggression in clinical settings. The blended learning resource included interactive online learning modules, comprising film clips of realistic clinical scenarios, related readings, and reflective questions, followed by in-class role-play practice of effective responses to bullying and aggression. On completion of the blended learning resource 210 participants completed an anonymous survey (65.2% response rate). Qualitative data was collected and a thematic analysis of the participants' responses revealed the following themes: 'Engaging with the blended learning resource'; 'Responding to bullying' and 'Responding to aggression'. We assert that developing nursing and midwifery students' capacity to effectively respond to aggression and bullying, using a self-paced blended learning resource, provides a solution to managing some of the demands of the clinical setting. The blended learning resource, whereby nursing and midwifery students were introduced to realistic portrayals of bullying and aggression in clinical settings, developed their repertoire of effective responding and coping skills for use in their professional practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Examining Extension's Capacity in Community Resource and Economic Development: Viewpoints of Extension Administrators on the Role of Community Resource and Economic Development in the Extension Portfolio

    Science.gov (United States)

    Urbanowitz, Seth C.; Wilcox, Michael D., Jr.

    2013-01-01

    The survey-based research reported here offers insights on community, resource, and economic development (CRED) Extension programming at the national and regional level. The results present a national picture of CRED programming, research, and potential future programming opportunities that Extension could capitalize on. The research shows that…

  15. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a possible resource at the level of other educational resources seem more frequent. The use of computer games in processes of teaching and learning at the high school level, however, seems almost non-existent. This paper reports on a study of incorporating the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” in the final year of a Danish high school. The study adopts an explorative research design approach and investigates...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  17. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  18. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
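
    For orientation, the robustness of magic referred to above is standardly defined as a minimal l1-norm over quasiprobability decompositions of a state into pure stabilizer states; the notation below is the conventional one and is an assumption about, not a quotation of, the paper's exact formulation.

    ```latex
    \mathcal{R}(\rho) \,=\, \min_x \Big\{ \lVert x \rVert_1 \;:\;
        \rho = \sum_i x_i\, |s_i\rangle\langle s_i| ,\;\;
        \textstyle\sum_i x_i = 1,\; x_i \in \mathbb{R} \Big\}
    ```

    Here the |s_i⟩ run over the pure stabilizer states; a larger R(ρ) corresponds to a larger classical-simulation overhead in a Gottesman-Knill-type scheme.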

  19. CLOUD COMPUTING OVERVIEW AND CHALLENGES: A REVIEW PAPER

    OpenAIRE

    Satish Kumar*, Vishal Thakur, Payal Thakur, Ashok Kumar Kashyap

    2017-01-01

    Cloud computing is the most resourceful, elastic, and scalable era of internet technology, allowing computing resources to be used over the internet successfully. Cloud computing has provided not only speed, accuracy, storage capacity and efficiency for computing, but it has also helped propagate green computing and resource utilization. In this research paper, a brief description of cloud computing, cloud services and cloud security challenges is given. Also the literature review o...

  20. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    Science.gov (United States)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland.
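
    The gap computation described in this record reduces to simple ratio arithmetic: expected patients needing radiotherapy divided by benchmark workloads per machine and per staff member, minus existing capacity. A minimal sketch follows; the benchmark ratios are illustrative placeholders, not the ESTRO-QUARTS or IAEA figures applied in the study.

    ```python
    def staffing_gap(patients_needing_rt, existing,
                     patients_per_trt=450, patients_per_ro=250,
                     ros_per_mp=2.0, rtts_per_trt=4.0):
        """Estimate additional radiotherapy capacity needed.

        All benchmark ratios here are illustrative placeholders, NOT the
        ESTRO-QUARTS / IAEA figures used in the study."""
        required = {
            "TRT": patients_needing_rt / patients_per_trt,   # teleradiotherapy units
            "RO":  patients_needing_rt / patients_per_ro,    # radiation oncologists
        }
        required["MP"] = required["RO"] / ros_per_mp         # medical physicists
        required["RTT"] = required["TRT"] * rtts_per_trt     # radiotherapy technologists
        return {k: max(0, round(v - existing.get(k, 0))) for k, v in required.items()}

    # Hypothetical example (the existing-capacity numbers are invented,
    # not the Swiss figures from the record):
    print(staffing_gap(30999, {"TRT": 70, "RO": 90, "MP": 50, "RTT": 250}))
    ```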

  1. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology ''Quantification of Radiation Therapy Infrastructure and Staffing'' guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO ''Health Economics in Radiation Oncology'' (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)

  2. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    Science.gov (United States)

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on the one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal considering budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theory mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the lowest service-level-agreement violations and provides the most utility to the provider.
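
    As a toy illustration of the repeated bidding loop the abstract describes (not the paper's actual model), the sketch below runs a crude best-response dynamic in which each user raises its bid toward a private valuation until no one can profitably deviate, a stand-in for the Nash equilibrium stopping condition.

    ```python
    def repeated_auction(valuations, step=0.05, max_rounds=10_000):
        """Toy best-response bidding loop (NOT the paper's model).

        Each user repeatedly raises its bid toward its private valuation
        while it is not the current leader; the loop stops when no user
        changes its bid, a crude stand-in for a Nash equilibrium."""
        bids = {user: 0.0 for user in valuations}
        for _ in range(max_rounds):
            leader = max(bids, key=bids.get)
            changed = False
            for user, valuation in valuations.items():
                if user != leader and bids[user] + step <= valuation:
                    bids[user] += step        # outbid attempt, capped by valuation
                    changed = True
            if not changed:                   # no profitable deviation remains
                break
        winner = max(bids, key=bids.get)
        return winner, round(bids[winner], 2)

    # Hypothetical valuations; the user with the highest valuation wins.
    print(repeated_auction({"user1": 0.8, "user2": 1.2, "user3": 0.6}))
    ```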

  3. Computer modelling of the UK wind energy resource: UK wind speed data package and user manual

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Ravenscroft, F

    1993-12-31

    A software package has been developed for IBM-PC or true compatibles. It is designed to provide easy access to the results of a programme of work to estimate the UK wind energy resource. Mean wind speed maps and quantitative resource estimates were obtained using the NOABL mesoscale (1 km resolution) numerical model for the prediction of wind flow over complex terrain. NOABL was used in conjunction with digitised terrain data and wind data from surface meteorological stations for a ten year period (1975-1984) to provide digital UK maps of mean wind speed at 10m, 25m and 45m above ground level. Also included in the derivation of these maps was the use of the Engineering Science Data Unit (ESDU) method to model the effect on wind speed of the abrupt change in surface roughness that occurs at the coast. With the wind speed software package, the user is able to obtain a display of the modelled wind speed at 10m, 25m and 45m above ground level for any location in the UK. The required co-ordinates are simply supplied by the user, and the package displays the selected wind speed. This user manual summarises the methodology used in the generation of these UK maps and shows computer generated plots of the 25m wind speeds in 200 x 200 km regions covering the whole UK. The uncertainties inherent in the derivation of these maps are also described, and notes given on their practical usage. The present study indicated that 23% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. (18 figures, 3 tables, 6 references). (author)
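
    The package's core query (the user supplies coordinates, the package returns the modelled mean speed at 10 m, 25 m or 45 m) amounts to a cell lookup on the 1 km NOABL raster. A minimal sketch under assumed conventions follows; the grid layout, origin and all names are hypothetical, not the package's actual file format.

    ```python
    import numpy as np

    def wind_speed_at(grid, easting, northing, origin_e, origin_n, cell=1000.0):
        """Nearest-cell lookup of a modelled mean wind speed (m/s) on a
        1 km raster. grid[j, i] is assumed to hold the speed for the cell
        whose lower-left corner is (origin_e + i*cell, origin_n + j*cell)."""
        i = int((easting - origin_e) // cell)
        j = int((northing - origin_n) // cell)
        if not (0 <= j < grid.shape[0] and 0 <= i < grid.shape[1]):
            raise ValueError("coordinates fall outside the modelled area")
        return float(grid[j, i])

    # Hypothetical 25 m above-ground-level grid and query point.
    grid_25m = np.full((1300, 700), 6.5)
    print(wind_speed_at(grid_25m, easting=351000, northing=412000,
                        origin_e=0, origin_n=0))
    ```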

  4. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.

  5. CDMA systems capacity engineering

    CERN Document Server

    Kim, Kiseon

    2004-01-01

    This new hands-on resource tackles capacity planning and engineering issues that are crucial to optimizing wireless communication systems performance. Going beyond the system physical level and investigating CDMA system capacity at the service level, this volume is a single source for engineering and analyzing systems capacity and resources.

  6. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  7. A comprehensive overview of computational resources to aid in precision genome editing with engineered nucleases.

    Science.gov (United States)

    Periwal, Vinita

    2017-07-01

    Genome editing with engineered nucleases (zinc finger nucleases, TAL effector nucleases and clustered regularly interspaced short palindromic repeats/CRISPR-associated) has recently been shown to have great promise in a variety of therapeutic and biotechnological applications. However, their exploitation in genetic analysis and clinical settings largely depends on their specificity for the intended genomic target. Large and complex genomes often contain highly homologous/repetitive sequences, which limits the specificity of genome editing tools and could result in off-target activity. Over the past few years, various computational approaches have been developed to assist the design process and predict/reduce the off-target activity of these nucleases. These tools could be efficiently used to guide the design of constructs for engineered nucleases and evaluate results after genome editing. This review provides a comprehensive overview of various databases, tools, web servers and resources for genome editing and compares their features and functionalities. Additionally, it also describes tools that have been developed to analyse post-genome editing results. The article also discusses important design parameters that could be considered while designing these nucleases. This review is intended to be a quick reference guide for experimentalists as well as computational biologists working in the field of genome editing with engineered nucleases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Demonstration and evaluation of the 20-ton-capacity load-cell-based weighing system, Eldorado Resources, Ltd., Port Hope, Ontario, September 3-4, 1986

    International Nuclear Information System (INIS)

    Cooley, J.N.; Huxford, T.J.

    1986-01-01

    On September 3 and 4, 1986, the prototype 20-ton-capacity load-cell-based weighing system (LCBWS) developed by the US Enrichment Safeguards Program (ESP) at Martin Marietta Energy Systems, Inc., was field tested at the Eldorado Resources, Ltd., (ERL) facility in Port Hope, Ontario. The 20-ton-capacity LCBWS has been designed and fabricated for use by the International Atomic Energy Agency (IAEA) for verifying the masses of large-capacity UF6 cylinders during IAEA safeguards inspections at UF6 handling facilities. The purpose of the Canadian field test was to demonstrate and to evaluate with IAEA inspectorates and with UF6 bulk handling facility operators at Eldorado the principles, procedures, and hardware associated with using the 20-ton-capacity LCBWS as a portable means for verifying the masses of 10- and 14-ton UF6 cylinders. Session participants included representatives from the IAEA, Martin Marietta Energy Systems, Inc., Eldorado Resources, Ltd., the Atomic Energy Control Board (AECB), and the International Safeguards Project Office (ISPO) at Brookhaven National Laboratory (BNL). Appendix A presents the list of participants and their organization affiliation. The two-day field test involved a formal briefing by ESP staff, two cylinder weighing sessions, IAEA critiques of the LCBWS hardware and software, and concluding discussions on the field performance of the system. Appendix B cites the meeting agenda. Summarized in this report are (1) the technical information presented by the system developers, (2) results from the weighing sessions, and (3) observations, suggestions, and concluding statements from meeting participants.

  9. Building Surgical Research Capacity Globally: Efficacy of a Clinical Research Course for Surgeons in Low-Resource Settings

    Directory of Open Access Journals (Sweden)

    Theodore A. Miclau

    2017-11-01

    Full Text Available Musculoskeletal injury confers an enormous burden of preventable disability and mortality in low- and moderate-income countries (LMICs). Appropriate orthopedic and trauma care services are lacking. Leading international health agencies emphasize the critical need to create and sustain research capacity in the developing world as a strategic factor in the establishment of functional, independent health systems. One aspect of building research capacity is partnership between developing and developed countries, and knowledge sharing via these collaborations. This study evaluated the efficacy of a short, intensive course designed to educate surgeons on fundamental aspects of clinical research using evidence-based medicine (EBM) principles. Orthopedic surgeons from the United States and Canada presented a one-day course on the fundamentals of clinical research in Havana, Cuba. Knowledge acquisition on the part of course participants was assessed, and their current involvement with and attitudes toward clinical research were surveyed. Questionnaires were presented to participants immediately preceding and following the course. The mean pre-test score was 43.9% (95% CI: 41.1–46.6%). The mean post-test score was 59.3% (95% CI: 56.5–62.1%). There were relative score increases in each subgroup based on professional level, subjective level of familiarity with EBM concepts, and subjective level of experience in research. This study establishes the short-term efficacy of an intensive course designed to impart knowledge in EBM and clinical research. Further study is necessary to determine the long-term benefits of this type of course. This may be a useful part of an overall strategy to build health research capacity in LMICs, ultimately contributing to improved access to high-quality surgical care.

  10. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  11. Hiding Electronic Patient Record (EPR) in medical images: A high capacity and computationally efficient technique for e-healthcare applications.

    Science.gov (United States)

    Loan, Nazir A; Parah, Shabir A; Sheikh, Javaid A; Akhoon, Jahangir A; Bhat, Ghulam M

    2017-09-01

    A high capacity and semi-reversible data hiding scheme based on the Pixel Repetition Method (PRM) and hybrid edge detection for scalable medical images has been proposed in this paper. PRM has been used to scale up the small sized image (seed image) and hybrid edge detection ensures that no important edge information is missed. The scaled up version of the seed image has been divided into 2×2 non-overlapping blocks. In each block there is one seed pixel whose status decides the number of bits to be embedded in the remaining three pixels of that block. The Electronic Patient Record (EPR)/data have been embedded by using Least Significant and Intermediate Significant Bit Substitution (ISBS). RC4 encryption has been used to add an additional security layer for the embedded EPR/data. The proposed scheme has been tested for various medical and general images and compared with some state-of-the-art techniques in the field. The experimental results reveal that the proposed scheme, besides being semi-reversible and computationally efficient, is capable of handling a high payload and as such can be used effectively for electronic healthcare applications. Copyright © 2017. Published by Elsevier Inc.
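
    Pixel Repetition Method scaling, as named in the abstract, enlarges the seed image by duplicating each pixel into a 2×2 block, after which data bits are embedded in the three non-seed pixels of each block. The sketch below shows PRM upscaling and a plain LSB embed; the seed-dependent bit allocation, hybrid edge detection and RC4 layer of the actual scheme are omitted.

    ```python
    import numpy as np

    def prm_upscale(seed):
        """Pixel Repetition Method: scale the seed image 2x in both
        directions by copying each pixel into a 2x2 block."""
        return np.repeat(np.repeat(seed, 2, axis=0), 2, axis=1)

    def embed_bits(cover, bits):
        """Embed one bit per non-seed pixel of each 2x2 block by plain LSB
        substitution (a simplified stand-in for the paper's LSB/ISB scheme)."""
        out, k = cover.copy(), 0
        height, width = out.shape
        for y in range(0, height, 2):
            for x in range(0, width, 2):
                for dy, dx in ((0, 1), (1, 0), (1, 1)):   # (0, 0) is the seed pixel
                    if k < len(bits):
                        out[y + dy, x + dx] = (out[y + dy, x + dx] & 0xFE) | bits[k]
                        k += 1
        return out

    seed = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = embed_bits(prm_upscale(seed), [1, 0, 1, 1, 0, 0, 1, 0])
    ```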

  12. CARRYING CAPACITY MODEL OF FOOD MANUFACTURING SECTORS FOR SUSTAINABLE DEVELOPMENT FROM USING ENVIRONMENTAL AND NATURAL RESOURCES OF THAILAND

    Directory of Open Access Journals (Sweden)

    Pruethsan Sutthichaimethee

    2015-11-01

    Full Text Available The objective of this research is to propose an indicator to assess and rank environmental problems caused by production within the food manufacturing sector of Thailand. The factors used to calculate the real benefit included the costs of natural resources, energy and transportation, fertilizer and pesticides, and sanitary and similar service. The highest environmental cost in terms of both natural resources materials and energy and transportation was ice, while the highest environmental cost for fertilizer and pesticides was coconut and palm oil. Confectionery had the highest environmental cost for sanitary and similar services. Overall, real estate gained the highest real benefit, while repair not classified elsewhere had the lowest real benefit for the company. If Thailand uses an indicator of environmental harm, especially within the food manufacturing sector, it could help to formulate efficient policies and strategies for the country in three areas of development, which are social, economic, and environmental development.

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  14. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Full Text Available Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentration in groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely multi-layer perceptron (MLP) and radial basis function (RBF) networks, for forecasting of heavy metals concentration was investigated. In addition, Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software. The MLP performs better than the other models for heavy metals concentration estimation. The simulation results revealed that the MLP model was able to model heavy metals concentration in groundwater resources favorably, and it can generally be utilized effectively in environmental applications and water quality estimation. In addition, of the three algorithms, Levenberg-Marquardt performed best. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metals concentration in groundwater resources of Asadabad Plain. Based on collected data from the plain, MLP and RBF models were developed for each heavy metal. MLP can be utilized effectively in applications of prediction of heavy metals concentration in groundwater resources of Asadabad Plain.
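
    As a rough illustration of the modelling approach described (the study used MATLAB; the scikit-learn sketch below substitutes a quasi-Newton solver for Levenberg-Marquardt, and the data are synthetic stand-ins for the Asadabad Plain measurements):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-ins: 4 hypothetical hydrochemical inputs and one
    # measured heavy-metal concentration per sample.
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = X @ np.array([0.5, 1.2, -0.3, 0.8]) + 0.05 * rng.standard_normal(200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       solver="lbfgs", max_iter=2000,
                                       random_state=0))
    model.fit(X_tr, y_tr)
    print("R^2 on held-out samples:", model.score(X_te, y_te))
    ```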

  15. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was building an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language and the application was built and developed in CERN GitLab. This application will facilitate the calculation of resources required by LHCb in both qualitative and quantitative aspects. The granularity of computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
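
    A toy version of the kind of weekly-granularity estimate such an application automates; the formulas and parameter values are illustrative assumptions, not the LHCb computing model's.

    ```python
    def weekly_cpu_need(events_per_week, hs06_sec_per_event, efficiency=0.85):
        """CPU power (in HS06) needed to keep up with one week of data,
        assuming processing is spread evenly over the week."""
        seconds_per_week = 7 * 24 * 3600
        return events_per_week * hs06_sec_per_event / (seconds_per_week * efficiency)

    def weekly_disk_need(events_per_week, kb_per_event, replicas=2):
        """Disk (in TB) added per week for a given event size and replica count
        (1 TB = 1e9 kB)."""
        return events_per_week * kb_per_event * replicas / 1e9

    # Hypothetical week: 1e9 events, 50 HS06.s/event, 60 kB/event.
    print(weekly_cpu_need(1e9, 50), "HS06;", weekly_disk_need(1e9, 60), "TB/week")
    ```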

  16. Capacity-Approaching Modulation Formats for Optical Transmission Systems: Signal shaping and advanced de/muxing for efficient resource exploitation

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel

    Aiming for efficient fiber-optic data transport, this thesis addresses three scenario-specific modulation and/or multiplexing techniques which, leveraging digital signal processing, can further exploit the available resources. The considered environments are: (i) (ultra) long-haul networks, where we focus on improving the receiver sensitivity; (ii) metropolitan area networks, where the target is providing spectral and rate adaptability with fine granularity and easy reconfigurability; and (iii) short-haul networks, where facilitating more affordable throughput scaling is pursued. Functioning...

  17. Agricultural production and evaluation in terms of water resources carrying capacity

    Institute of Scientific and Technical Information of China (English)

    虞祎; 张晖; 胡浩

    2016-01-01

    Based on the evaluation of water resources carrying capacity, especially taking into account the impact of agricultural pollution on sustainable use of water resources, a comprehensive analysis was conducted on the strains on water resources due to farming and animal production in different regions of China, to provide a reference for rational estimation of potential agricultural growth and correct approaches for structural adjustments in agriculture. Excess nitrogen and grey water were calculated as indicators to quantify the impact of agricultural pollution on water resources. Following nutrient balance theory, excess nitrogen was the difference between the sum of nitrogen provided by chemical fertilizer, livestock manure and soil, and the total nitrogen needed by farming. Grey water was the amount of water required for diluting an excessively high concentration of nitrogen in water to a more environmentally friendly level. The agricultural water footprint was the sum of agricultural water use and grey water. The huge quantity of excess nitrogen produced by farming and livestock consequently led to an excessive amount of grey water, which more than doubled the amount of water used in agriculture. There was therefore the need to reserve enough environmental space for diluting pollution when estimating water resources carrying capacity based on water sustainability and healthy development. A water surplus indicator was constructed to reflect the potential of water resources to support agricultural production with detailed environmental consideration. Water surplus was the difference between water resources and the agricultural water footprint. Using 2003-2012 nationwide samples, a panel data model was constructed to analyze the impact of change in sown area and livestock head on water surplus. The results suggested that the nationwide water resources of China could carry a maximum of 168.89 million hm2 of sown area or 3.57 billion pigs. The water resources carrying capacity model results also showed that the
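
    The grey-water quantity described here corresponds to the standard grey water footprint formula; the notation below is the conventional one and is assumed, not quoted from the paper.

    ```latex
    GWF \,=\, \frac{L}{c_{\max} - c_{\mathrm{nat}}},
    \qquad
    \text{water surplus} \,=\, WR - \big(W_{\mathrm{agri}} + GWF\big)
    ```

    Here L is the pollutant load (the excess nitrogen), c_max the ambient water-quality standard for nitrogen, c_nat its natural background concentration, WR the available water resource and W_agri direct agricultural water use.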

  18. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the national economic production departments. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, wastewater may be effectively controlled. The study also demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from heavy pollution to light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  19. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter plate parallel to the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  20. Reconnaissance investigation of the rough diamond resource potential and production capacity of Côte d’Ivoire

    Science.gov (United States)

    Chirico, Peter G.; Malpeli, Katherine C.

    2013-01-01

    guéla and a total of 1,100,000 carats remain in Tortiya. Production capacity was calculated for the two study areas for the years 2006–2010 and 2012–2013, and was found to range from 38,000 to 375,000 carats in Séguéla and from 13,000 to 20,000 carats in Tortiya. Further, this study demonstrates that artisanal mining activities can be successfully monitored by using remote sensing and geologic modeling techniques. The production capacity estimates presented here fill a significant data gap and provide policy makers, the UN, and the KP with important information not otherwise available.

  1. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, is still the subject of considerable interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  2. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer-based resource units were developed in the areas of set theory, relations and functions, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  3. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States and, based on Internet data centers, provides a standard and open approach to shared network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in current higher education. Against the background of a cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources among independent colleges in Jiangxi Province. Drawing on the sharing characteristics of cloud computing (mass storage, efficient operation and low cost), the author explores and designs a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  4. Perception and Interpretation of Climate Change among Quechua Farmers of Bolivia: Indigenous Knowledge as a Resource for Adaptive Capacity

    Directory of Open Access Journals (Sweden)

    Sébastien Boillat

    2013-12-01

    We aim to explore how indigenous peoples observe and ascribe meaning to change. The case study involves two Quechua-speaking farmer communities from mountainous areas near Cochabamba, Bolivia. Taking climate change as a starting point, we found that, first, farmers often associate their observations of climate change with other social and environmental changes, such as value change in the community, population growth, out-migration, urbanization, and land degradation. Second, some of the people interpret change as part of a cycle, which includes a belief in the return of some characteristics of ancient or mythological times. Third, environmental change is also perceived as the expression of "extra-human intentionalities," a reaction of natural or spiritual entities that people consider living beings. On the basis of these interpretations of change and their adaptive strategies, we discuss the importance of indigenous knowledge as a component of adaptive capacity. Even in the context of living with modern science and mass media, indigenous patterns of interpreting phenomena tend to be persistent. Our results support the view that indigenous knowledge must be acknowledged as process, emphasizing ways of observing, discussing, and interpreting new information. In this case, indigenous knowledge can help address complex relationships between phenomena, and help design adaptation strategies based on experimentation and knowledge coproduction.

  5. The impact of rationing of health resources on capacity of Australian public sector nurses to deliver nursing care after-hours: a qualitative study.

    Science.gov (United States)

    Henderson, Julie; Willis, Eileen; Toffoli, Luisa; Hamilton, Patricia; Blackman, Ian

    2016-12-01

    Australia, along with other countries, has introduced New Public Management (NPM) into public sector hospitals in an effort to contain healthcare costs. NPM is associated with outsourcing of service provision, the meeting of government performance indicators, workforce flexibility and rationing of resources. This study explores the impact of rationing of staffing and other resources upon the delivery of care outside of business hours. Data were collected through semi-structured interviews with 21 nurses working in two large Australian metropolitan hospitals. Participants identified four strategies associated with NPM which add to the after-hours workload and impact on the capacity to deliver nursing care: functional flexibility, vertical substitution of staff, meeting externally established performance indicators, and outsourcing. We conclude that cost containment, alongside the meeting of performance indicators, has extended work traditionally performed during business hours into hours when fewer staffing and material resources are available. This adds to nursing workload and potentially contributes to incomplete nursing care. © 2016 John Wiley & Sons Ltd.

  6. Efficient Computation of Buffer Capacities for Cyclo-Static Real-Time Systems with Back-Pressure

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco; Bekooij, Marco Jan Gerrit; Jansen, P.G.; Smit, Gerardus Johannes Maria

    2006-01-01

    A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multiprocessor system we apply back-pressure, which means that tasks wait for space in their output buffers. Consequently, buffer capacities affect the throughput. This requires the derivation of
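    Although the abstract is truncated, the effect it describes, buffer capacities bounding throughput under back-pressure, can be illustrated with a toy two-task pipeline. The timing model below is invented and far simpler than the cyclo-static analysis of the paper.

```python
"""Toy illustration of back-pressure: a producer that waits for space in a
bounded output buffer, so throughput depends on the buffer capacity."""

import random

def simulate(capacity, n_tokens=10_000, seed=1):
    """Producer -> bounded buffer -> consumer, with variable task times.
    Returns the steady-state throughput in tokens per time unit."""
    rng = random.Random(seed)
    prod_free = 0.0   # time the producer can next start
    cons_free = 0.0   # time the consumer can next start
    done = []         # consumption-completion time of each token, in order
    for _ in range(n_tokens):
        start = prod_free
        if len(done) >= capacity:
            # Back-pressure: wait until token i-capacity has been consumed,
            # freeing a buffer slot for this token.
            start = max(start, done[-capacity])
        finish_prod = start + rng.uniform(0.5, 1.5)   # invented service time
        cons_start = max(finish_prod, cons_free)      # FIFO consumer
        cons_free = cons_start + rng.uniform(0.5, 1.5)
        done.append(cons_free)
        prod_free = finish_prod
    return n_tokens / done[-1]

for cap in (1, 2, 4, 8):
    print(f"buffer capacity {cap}: throughput {simulate(cap):.3f} tokens/unit")
```

    With variable task times, larger buffers decouple the two tasks and raise throughput, which is the qualitative effect the paper analyses exactly.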

  7. Transportation Energy Futures Series: Alternative Fuel Infrastructure Expansion: Costs, Resources, Production Capacity, and Retail Availability for Low-Carbon Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, M. W.; Heath, G.; Sandor, D.; Steward, D.; Vimmerstedt, L.; Warner, E.; Webster, K. W.

    2013-04-01

    Achieving the Department of Energy target of an 80% reduction in greenhouse gas emissions by 2050 depends on transportation-related strategies combining technology innovation, market adoption, and changes in consumer behavior. This study examines expanding low-carbon transportation fuel infrastructure to achieve deep GHG emissions reductions, with an emphasis on fuel production facilities and retail components serving light-duty vehicles. Three distinct low-carbon fuel supply scenarios are examined: Portfolio: Successful deployment of a range of advanced vehicle and fuel technologies; Combustion: Market dominance by hybridized internal combustion engine vehicles fueled by advanced biofuels and natural gas; Electrification: Market dominance by electric drive vehicles in the LDV sector, including battery electric, plug-in hybrid, and fuel cell vehicles, that are fueled by low-carbon electricity and hydrogen. A range of possible low-carbon fuel demand outcomes are explored in terms of the scale and scope of infrastructure expansion requirements and evaluated based on fuel costs, energy resource utilization, fuel production infrastructure expansion, and retail infrastructure expansion for LDVs. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored transportation-related strategies for abating GHGs and reducing petroleum dependence.

  8. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging, as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in the Protein Data Bank (PDB). To elucidate MP structures computationally, we developed a novel web resource, denoted PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is leveraged by state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D properties via DeepCNF (Deep Convolutional Neural Fields), which models not only the complex sequence-structure relationship but also the interdependency between adjacent property labels; (ii) we predict the 2D contact/distance map through deep transfer learning, which learns the patterns as well as the complex relationships between contacts/distances and protein features from non-membrane proteins; and (iii) we model the 3D structure by feeding the predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length < 700 residues) from human. These transmembrane proteins are classified according to the IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  9. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and to compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs in Switzerland. (orig.)
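    The machine and staffing arithmetic described here reduces to dividing the number of patients needing radiotherapy by per-resource workload norms. The sketch below uses QUARTS-style norms whose exact values are assumptions for illustration, not the figures used in the study.

```python
import math

def radiotherapy_needs(cancer_incidence, rtu_rate,
                       patients_per_unit=450, patients_per_ro=250,
                       patients_per_mp=500, patients_per_rtt=150):
    """Estimate machine and staffing needs from incidence and the
    radiotherapy utilisation (RTU) rate. The per-resource norms above are
    illustrative assumptions, not the ESTRO-QUARTS values themselves."""
    rt_patients = cancer_incidence * rtu_rate
    return {
        "patients_needing_rt": round(rt_patients),
        "teletherapy_units": math.ceil(rt_patients / patients_per_unit),
        "radiation_oncologists": math.ceil(rt_patients / patients_per_ro),
        "medical_physicists": math.ceil(rt_patients / patients_per_mp),
        "radiotherapy_technologists": math.ceil(rt_patients / patients_per_rtt),
    }

# Using the 2020 projection quoted in the abstract (50,427 incident cases,
# of whom 34,041 would need radiotherapy, i.e. an RTU of ~0.675):
print(radiotherapy_needs(50_427, 34_041 / 50_427))
```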

  10. Computation of groundwater resources and recharge in Chithar River Basin, South India.

    Science.gov (United States)

    Subramani, T; Babu, Savithri; Elango, L

    2013-01-01

    Groundwater recharge and available groundwater resources in the Chithar River basin, Tamil Nadu, India, spread over an area of 1,722 km², have been estimated by considering various hydrological, geological, and hydrogeological parameters, such as rainfall infiltration, drainage, geomorphic units, land use, rock types, depth of weathered and fractured zones, nature of soil, water level fluctuation, saturated thickness of aquifer, and groundwater abstraction. The digital ground elevation models indicate that the regional slope of the basin is towards the east. The Proterozoic (Post-Archaean) basement of the study area consists of quartzite, calc-granulite, crystalline limestone, charnockite, and biotite gneiss with or without garnet. Three major soil types were identified, namely black cotton, deep red, and red sandy soils. The rainfall intensity gradually decreases from west to east. Groundwater occurs under water table conditions in the weathered zone and fluctuates between 0 and 25 m. The water table peaks during January, after the northeast monsoon, and is at its lowest during October. Groundwater abstraction for domestic/stock and irrigation needs in the Chithar River basin has been estimated as 148.84 MCM (million m³). Groundwater recharge due to monsoon rainfall infiltration has been estimated as 170.05 MCM based on the water level rise during the monsoon period; it is estimated as 173.9 MCM using the rainfall infiltration factor. An amount of 53.8 MCM of water is contributed to groundwater from surface water bodies. Recharge of groundwater due to return flow from irrigation has been computed as 147.6 MCM. The static groundwater reserve in the Chithar River basin is estimated as 466.66 MCM and the dynamic reserve as about 187.7 MCM. In the present scenario, the aquifer is under safe conditions for extraction of groundwater for domestic and irrigation purposes. If the existing water bodies are maintained properly, the extraction rate can be increased in future by about 10% to 15%.
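    The reported balance can be checked with simple arithmetic; the sketch below merely recombines the figures quoted in the abstract (all values in MCM) to compare annual draft with annual recharge.

```python
# Recombining the figures quoted in the abstract (all values in MCM).
recharge = {
    "monsoon_rainfall_infiltration": 170.05,
    "surface_water_bodies": 53.8,
    "irrigation_return_flow": 147.6,
}
abstraction = 148.84  # domestic/stock and irrigation draft

total_recharge = sum(recharge.values())
stage_of_development = abstraction / total_recharge  # draft vs. recharge

print(f"total annual recharge: {total_recharge:.2f} MCM")   # 371.45 MCM
print(f"stage of development: {stage_of_development:.0%}")  # ~40% => 'safe'
```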

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  13. Developing capacities of community health workers in sexual and reproductive, maternal, newborn, child, and adolescent health: a mapping and review of training resources.

    Science.gov (United States)

    Tran, Nguyen Toan; Portela, Anayda; de Bernis, Luc; Beek, Kristen

    2014-01-01

    Given country demands for support in the training of community health workers (CHWs) to accelerate progress towards reaching the Millennium Development Goals in sexual and reproductive health and maternal, newborn, child, and adolescent health (SR/MNCAH), the United Nations health agencies conducted a synthesis of existing training resource packages for CHWs in different components of SR/MNCAH, to identify gaps and opportunities and to inform efforts to harmonize approaches to developing the capacity of CHWs. A mapping of training resource packages for CHWs was undertaken, with documents retrieved online and from key informants. Materials were classified by health theme and analysed using agreed parameters. Ways forward were informed by a subsequent expert consultation. We identified 31 relevant packages. They covered different components of the SR/MNCAH continuum in varying breadth (integrated packages) and depth (focused packages), including family planning, antenatal and childbirth care (mainly postpartum haemorrhage), newborn care, childhood care, and HIV. There is little or no coverage of interventions related to safe abortion, adolescent health, and gender-based violence. No training package addresses the full range of evidence-based interventions that can be delivered by CHWs as per World Health Organization guidance. Gaps include weaknesses in the assessment of trainees' competencies, in supportive supervision, and in impact assessment of packages. Many packages represent individual programme efforts rather than national programme materials, which could reflect weak integration into national health systems. There is a wealth of training packages on SR/MNCAH for CHWs, which reflects interest in strengthening the capacity of CHWs. This offers an opportunity for governments and partners to mount a synergistic response to address the gaps and ensure an evidence-based comprehensive package of interventions to be delivered by CHWs. Packages with defined

  14. Use of Google Earth to strengthen public health capacity and facilitate management of vector-borne diseases in resource-poor environments.

    Science.gov (United States)

    Lozano-Fuentes, Saul; Elizondo-Quiroga, Darwin; Farfan-Ale, Jose Arturo; Loroño-Pino, Maria Alba; Garcia-Rejon, Julian; Gomez-Carro, Salvador; Lira-Zumbardo, Victor; Najera-Vazquez, Rosario; Fernandez-Salas, Ildefonso; Calderon-Martinez, Joaquin; Dominguez-Galera, Marco; Mis-Avila, Pedro; Morris, Natashia; Coleman, Michael; Moore, Chester G; Beaty, Barry J; Eisen, Lars

    2008-09-01

    Novel, inexpensive solutions are needed for improved management of vector-borne and other diseases in resource-poor environments. Emerging free software providing access to satellite imagery and simple editing tools (e.g. Google Earth) complements existing geographic information system (GIS) software and provides new opportunities for: (i) strengthening overall public health capacity through development of information for city infrastructures; and (ii) displaying public health data directly on an image of the physical environment. We used freely accessible satellite imagery and a set of feature-making tools included in the software (allowing for production of polygons, lines and points) to generate information for city infrastructure and to display disease data in a dengue decision support system (DDSS) framework. Two cities in Mexico (Chetumal and Merida) were used to demonstrate that a basic representation of city infrastructure, useful as a spatial backbone in a DDSS, can be rapidly developed at minimal cost. Data layers generated included labelled polygons representing city blocks, lines representing streets, and points showing the locations of schools and health clinics. City blocks were colour-coded to show the presence of dengue cases. The data layers were successfully imported, in shapefile format, into GIS software. The combination of Google Earth and free GIS software (e.g. HealthMapper, developed by WHO, and SIGEpi, developed by PAHO) has tremendous potential to strengthen overall public health capacity and to facilitate decision support system approaches to the prevention and control of vector-borne diseases in resource-poor environments.
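    The workflow described, colour-coding city-block polygons by dengue presence for display in Google Earth, can be reproduced with plain KML and no external libraries. The coordinates, block names and styling below are invented for illustration.

```python
# Minimal KML writer for colour-coded city blocks (stdlib only).
# Coordinates and case data are invented for illustration.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>{placemarks}
</Document></kml>"""

def block_placemark(name, ring, dengue_present):
    # KML colours are aabbggrr; semi-transparent red vs. green fill.
    colour = "7f0000ff" if dengue_present else "7f00ff00"
    coords = " ".join(f"{lon},{lat},0" for lon, lat in ring)
    return (f"<Placemark><name>{name}</name>"
            f"<Style><PolyStyle><color>{colour}</color></PolyStyle></Style>"
            f"<Polygon><outerBoundaryIs><LinearRing><coordinates>{coords}"
            f"</coordinates></LinearRing></outerBoundaryIs></Polygon></Placemark>")

blocks = [  # (name, closed ring of (lon, lat), dengue case present?)
    ("Block 1", [(-88.30, 18.50), (-88.30, 18.51), (-88.29, 18.51),
                 (-88.29, 18.50), (-88.30, 18.50)], True),
    ("Block 2", [(-88.29, 18.50), (-88.29, 18.51), (-88.28, 18.51),
                 (-88.28, 18.50), (-88.29, 18.50)], False),
]

with open("blocks.kml", "w") as f:
    f.write(KML_TEMPLATE.format(placemarks="".join(
        block_placemark(n, r, d) for n, r, d in blocks)))
```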

  15. Workforce capacity to address obesity: a Western Australian cross-sectional study identifies the gap between health priority and human resources needed.

    Science.gov (United States)

    Begley, Andrea; Pollard, Christina Mary

    2016-08-25

    The disease burden due to poor nutrition, physical inactivity and obesity is high and increasing. An adequately sized and skilled workforce is required to respond to this issue. This study describes the public health nutrition and physical activity (NAPA) practice priorities and explores health managers' and practitioners' beliefs regarding workforce capacity to deliver on these priorities. A workforce audit was conducted, including a telephone survey of all managers and a postal survey of practitioners working in the area of NAPA promotion in Western Australia in 2004. Managers gave their perspective on workforce priorities, current competencies and future needs, with a 70 % response rate. Practitioners reported on public health workforce priorities, qualifications and needs, with a 56 % response rate. The top practice priorities for managers were diabetes (35 %), alcohol and other drugs (33 %), and cardiovascular disease (27 %). Obesity (19 %), poor nutrition (15 %) and inadequate physical activity (10 %) were of lower priority. For nutrition, managers identified lack of staff (60.4 %), organisational and management factors (39.5 %) and insufficient financial resources (30.2 %) as the major barriers to adequate service delivery. For physical activity services, insufficient financial resources (41.7 %), staffing (35.4 %) and a lack of specific physical activity service specifications (25.0 %) were the main barriers. Practitioners identified inadequate staffing as the main barrier to service delivery for nutrition (42.3 %) and physical activity (23.3 %). Ideally, managers said they required 152 % more specialist nutritionists and 131 % more physical activity specialists in the workforce to meet health outcomes, in addition to other generalist staff. Human and financial resources and organisational factors were the main barriers to meeting obesity and public health nutrition and physical activity outcomes. Services were being delivered by

  16. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    International Nuclear Information System (INIS)

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered

  17. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Cotton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tape.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape-to-Buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations laid the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  19. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  20. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing applications is the full virtualization of scientific instruments, undertaken in order to increase their availability and decrease operating and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widespread industrial standards.

  1. A REVIEW ON SECURITY ISSUES AND CHALLENGES IN CLOUD COMPUTING MODEL OF RESOURCE MANAGEMENT

    OpenAIRE

    T. Vaikunth Pai; Dr. P. S. Aithal

    2017-01-01

    Cloud computing services refer to a set of IT-enabled services delivered to a customer over the Internet on a leased basis, with the capability to scale service requirements up or down according to needs. Usually, cloud computing services are delivered by third-party vendors who own the infrastructure. Cloud computing has several advantages, including scalability, elasticity, flexibility, efficiency and the outsourcing of an organization's non-core activities. Cloud computing offers an innovative busines...

  2. Using Free Computational Resources to Illustrate the Drug Design Process in an Undergraduate Medicinal Chemistry Course

    Science.gov (United States)

    Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.

    2015-01-01

    Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…

  3. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  4. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts of pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made cooperatively by research investigators in their respective areas of expertise, on such topics as sensors and sensor networks, wireless communications and systems, and computational Grids, particularly as relevant to petroleum applications.

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  6. Lung diffusing capacity for nitric oxide and carbon monoxide in relation to morphological changes as assessed by computed tomography in patients with cystic fibrosis

    Directory of Open Access Journals (Sweden)

    Nowak Dennis

    2009-06-01

    Abstract Background: Due to large-scale destruction, changes in membrane diffusion (Dm) may occur in cystic fibrosis (CF), in correspondence with alterations observed by computed tomography (CT). Dm can be easily quantified via the diffusing capacity for nitric oxide (DLNO), as opposed to the conventional diffusing capacity for carbon monoxide (DLCO). We thus studied the relationship between DLNO as well as DLCO and a CF-specific CT score in patients with stable CF. Methods: Simultaneous single-breath determinations of DLNO and DLCO were performed in 21 CF patients (mean ± SD age 35 ± 9 y, FEV1 66 ± 28 %pred). Patients also underwent spirometry and bodyplethysmography. CT scans were evaluated via the Brody score, and rank correlations (rS) with z-scores of the functional measures were computed. Results: CT scores correlated best with DLNO (rS = -0.83), compared with DLCO (rS = -0.79) and a further index (rS = -0.63); z-scores for DLNO were significantly lower than those for DLCO. Correlations with spirometric (e.g., FEV1, IVC) or bodyplethysmographic (e.g., SRaw, RV/TLC) indices were weaker than those for DLNO or DLCO, but most of them were also significant. Conclusion: In this cross-sectional study in patients with CF, DLNO and DLCO reflected CT-morphological alterations of the lung better than other measures. Thus the combined diffusing capacity for NO and CO may play a future role in the non-invasive, functional assessment of structural alterations of the lung in CF.

  7. The impact of a human resource management intervention on the capacity of supervisors to support and supervise their staff at health facility level.

    Science.gov (United States)

    Uduma, Ogenna; Galligan, Marie; Mollel, Henry; Masanja, Honorati; Bradley, Susan; McAuliffe, Eilish

    2017-08-30

    A systematic and structured approach to the support and supervision of health workers can strengthen the human resource management function at the district and health facility levels and may help address the current crisis in human resources for health in sub-Saharan Africa by improving health workers' motivation and retention. A supportive supervision programme including (a) a workshop, (b) intensive training and (c) action learning sets was designed to improve human resource management in districts and health facilities in Tanzania. We used a randomised experimental design to evaluate the impact of the intervention. Data on the same measures were collected before and after the intervention in order to identify any changes that occurred (between baseline and end of project) in the capacity of supervisors in intervention a + b and intervention a + b + c to support and supervise their staff. These were compared to supervisors in a control group in each of the Tanga, Iringa and Tabora regions (n = 9). A quantitative survey of 95 and 108 supervisors and 196 and 187 health workers, sampled at baseline and end-line respectively, also contained open-ended responses which were analysed separately. Supervisors assessed their own competency levels pre- and post-intervention. End-line samples generally scored higher than the corresponding baseline in both intervention groups for competence activities. Significant differences between baseline and end-line were observed in the total scores on 'maintaining high levels of performance', 'dealing with performance problems', 'counselling a troubled employee' and 'time management' in intervention a + b. In contrast, for intervention a + b + c, a significant difference in the distribution of scores was only found for 'counselling a troubled employee', although the end-line mean scores were higher than the corresponding baseline mean scores in all cases. Similar trends to those in the supervisors' reports are seen in

  8. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
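    Isopach-based resource arithmetic of this kind reduces to summing area × thickness × a tonnage factor over grid cells. The sketch below assumes the conventional figure of about 1,770 short tons per acre-foot for subbituminous coal; that factor and the mini-grid are illustrative and should be checked against the report itself.

```python
# Sketch of isopach-based tonnage: sum area x thickness x tonnage factor
# over grid cells. The 1,770 short tons per acre-foot figure is the
# conventional factor for subbituminous coal (verify against the report).

TONS_PER_ACRE_FOOT = 1_770   # subbituminous coal (assumed)
ACRES_PER_CELL = 2.47        # e.g. a 100 m x 100 m cell is ~2.47 acres

def coal_tonnage(thickness_grid):
    """thickness_grid: iterable of coal thickness values in feet, one per
    grid cell (0 where the bed is absent)."""
    return sum(t * ACRES_PER_CELL * TONS_PER_ACRE_FOOT for t in thickness_grid)

# Invented mini-grid (feet of coal per cell):
demo_grid = [0, 12.5, 25.0, 31.0, 18.0, 0, 9.5]
print(f"{coal_tonnage(demo_grid):,.0f} short tons")
```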

  9. Recommendations for protecting National Library of Medicine Computing and Networking Resources

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, R.

    1994-11-01

    Protecting Information Technology (IT) involves a number of interrelated factors. These include mission, available resources, technologies, existing policies and procedures, internal culture, contemporary threats, and strategic enterprise direction. In the face of this formidable list, a structured approach provides cost-effective actions that allow the organization to manage its risks. We face fundamental challenges that will persist for at least the next several years. It is difficult if not impossible to precisely quantify risk. IT threats and vulnerabilities change rapidly and continually. Limited organizational resources, combined with mission constraints such as availability and connectivity requirements, ensure that most systems will not be absolutely secure (if such security were even possible). In short, there is no technical (or administrative) "silver bullet." Protection means employing a stratified series of recommendations, matching protection levels against information sensitivities. Adaptive and flexible risk management is the key to effective protection of IT resources. The cost of the protection must be kept less than the expected loss, and one must take into account that an adversary will not expend more to attack a resource than the value of its compromise to that adversary. Notwithstanding the difficulty, if not impossibility, of precisely quantifying risk, this approach allows us to avoid the trap of choosing a course of action simply because "it's safer" or ignoring an area because no one has explored its potential risk. The recommendations for protecting IT resources begin with a discussion of contemporary threats and vulnerabilities, and then proceed from general to specific preventive measures. From a risk management perspective, it is imperative to understand that today the vast majority of threats are against UNIX hosts connected to the Internet.
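    The report's rule of thumb, that protection should cost less than the expected loss it averts, is the classic annualized-loss comparison. A minimal sketch with invented figures:

```python
def annualized_loss(single_loss, annual_rate):
    """ALE = single-loss expectancy x annualized rate of occurrence."""
    return single_loss * annual_rate

def control_is_worthwhile(ale_before, ale_after, annual_control_cost):
    """A control pays off when the loss it averts exceeds its cost."""
    return (ale_before - ale_after) > annual_control_cost

# Invented figures: a $200k incident expected once every 4 years, halved
# by a control costing $20k per year.
before = annualized_loss(200_000, 0.25)    # $50k/yr expected loss
after = annualized_loss(200_000, 0.125)    # $25k/yr with the control
print(control_is_worthwhile(before, after, 20_000))  # True: averts $25k/yr
```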

  10. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    Science.gov (United States)

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, and the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system, illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that, of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can ultimately be recovered through the metallurgical process. The results also demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products may be preferable to shredding, but it remains a technological and logistic challenge for the existing system.

  11. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor the battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices, and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter transmitting videos directly to TV audiences with varied video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
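    One way to read the "benchmark, then allocate" approach is as a lookup from (input format, output format) to a measured resource cost, followed by choosing the smallest worker that sustains real time. The benchmark numbers and worker sizes below are invented, not the paper's measurements.

```python
# Illustrative allocator: map a transcoding job to the smallest worker able
# to sustain real-time throughput, based on pre-measured benchmarks.
# All benchmark numbers and worker flavours are invented.

BENCHMARK_CPU_UNITS = {
    # (input profile, output profile) -> CPU units for real-time transcoding
    ("h264_1080p30", "h264_720p30"): 2.0,
    ("h264_1080p30", "h264_480p30"): 1.2,
    ("h264_2160p30", "h264_1080p30"): 6.5,
}

WORKER_SIZES = [1, 2, 4, 8]  # CPU units per cloud worker flavour

def allocate(input_fmt, output_fmt):
    """Return the smallest worker flavour that meets the benchmarked need."""
    need = BENCHMARK_CPU_UNITS[(input_fmt, output_fmt)]
    for size in WORKER_SIZES:
        if size >= need:
            return size
    raise ValueError(f"no single worker can sustain {need} CPU units")

print(allocate("h264_2160p30", "h264_1080p30"))  # -> 8
```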

  12. Mobile clusters of single board computers: an option for providing resources to student projects and researchers.

    Science.gov (United States)

    Baun, Christian

    2016-01-01

    Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes such as student or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy-efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that makes them mobile and easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the cluster's performance and energy-efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the system's total main memory utilized.
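    Cluster comparisons of this kind usually come down to GFLOPS per watt. A minimal calculation is sketched below; the HPL results and power figures are invented, not the paper's measurements.

```python
def efficiency(gflops, watts):
    """Energy efficiency in GFLOPS per watt."""
    return gflops / watts

# Invented HPL results for a single-board cluster at several node counts:
runs = [  # (nodes, HPL GFLOPS, measured power in watts)
    (1, 3.6, 4.5),
    (4, 13.2, 18.0),
    (8, 24.1, 36.5),
]
for nodes, gflops, watts in runs:
    print(f"{nodes} nodes: {gflops:5.1f} GFLOPS, "
          f"{efficiency(gflops, watts):.2f} GFLOPS/W")
```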

  13. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    Science.gov (United States)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Like other common asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations: the private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute to the calculation process. Our objective is to demonstrate how volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
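    The record does not include the authors' implementation; a toy sketch of the idea, splitting a trial-division search across volunteer devices as disjoint work units, is given below. The key size and work-unit scheme are illustrative only and would not scale to realistic RSA moduli.

```python
# Toy sketch of distributing RSA factorization: each volunteer device gets
# a disjoint range of candidate divisors. Only practical for very weak keys.

from math import isqrt

def work_units(n, n_workers):
    """Split the odd candidate divisors 3..sqrt(n) into roughly
    n_workers contiguous ranges."""
    hi = isqrt(n) + 1
    step = max(2, (hi - 3) // n_workers)
    ranges, lo = [], 3
    while lo < hi:
        ranges.append((lo, min(lo + step, hi)))
        lo += step
    return ranges

def factor_in_range(n, lo, hi):
    """What one device runs: trial division over its assigned range."""
    for d in range(lo | 1, hi, 2):   # odd candidates only
        if n % d == 0:
            return d
    return None

if __name__ == "__main__":
    n = 1_000_003 * 1_000_033        # weak 'RSA modulus' for the demo
    for lo, hi in work_units(n, n_workers=4):
        p = factor_in_range(n, lo, hi)
        if p:
            print(f"found factor {p}, so q = {n // p}")
            break
```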

  14. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Today in Brazil there are many cases of incompatibility between water use and water availability. Due to the increase in the variety and volume of demand, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs, under several qualitative and quantitative restrictions, creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on the MOHID software, Water Modelling System) as a tool for water resources management.

  15. The Rise of Computing Research in East Africa: The Relationship between Funding, Capacity and Research Community in a Nascent Field

    Science.gov (United States)

    Harsh, Matthew; Bal, Ravtosh; Wetmore, Jameson; Zachary, G. Pascal; Holden, Kerry

    2018-01-01

    The emergence of vibrant research communities of computer scientists in Kenya and Uganda has occurred in the context of neoliberal privatization, commercialization, and transnational capital flows from donors and corporations. We explore how this funding environment configures research culture and research practices, which are conceptualized as…

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and flexibility of resource usage. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   Tape utilisation was a focus for the operations teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  17. [Inhibition and resource capacity during normal aging: a confrontation of the dorsal-ventral and frontal models in a modified version of negative priming].

    Science.gov (United States)

    Martin, S; Brouillet, D; Guerdoux, E; Tarrago, R

    2006-01-01

    ...to-be-ignored properties' responsiveness. In contrast, information matching the subject's goal is enhanced through an automatic excitatory imbalance. The accurate functioning of the Match/Mismatch field requires efficient executive functioning, responsible for the upholding of goals and correct responses. In the case of negative priming, manipulating the efficiency of working memory is of interest as it should affect the triggering of slowing, i.e., an indirect inhibitory deficit, when the task is resource demanding [Conway et al. (6)]. Moreover, if inhibition, as reflected by negative priming, is mediated by individual resource capacity, then NP should disappear during aging only when individuals are engaged in a resource-demanding task. To address this issue, we examined whether cognitive control load in a gender decision task contributed to the presence or absence of NP during aging. According to the dorsal-ventral model, task complexity should not have any impact on performance, since the gender decision task relies on a conceptual analysis of information. In turn, the frontal model predicts that age differences in performance profiles will only emerge when individual resource capacity is overloaded. Sixty-four participants (32 young and 32 older adults) performed a gender categorisation task in two experiments. Trials involved two stimuli presented successively at the same location: a word served as a prime and a word as a target. Both prime and target could be male or female. When prime and target matched on gender, we speak of VALID (compatible) pairs; when prime and target mismatched on the manipulated features, we speak of INVALID (incompatible) pairs. Participants' task was to identify the gender of the target. They were explicitly instructed not to respond to primes but to read them silently. Our interest was in response latencies for valid versus invalid pairs. We manipulated task complexity by the absence (experiment 1) or presence (experiment 2) of a distractor during

  18. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2016-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical... studies have tried to answer by focusing on ritualized behavior instead of ritual. Ritualized behavior (i.e., a set of behavioral features embedded in rituals) increases attention to detail and induces cognitive resource depletion, which together support distinct modes of action categorization. While... patterns and the simulation data were subjected to linear and non-linear analysis. The results are used to exemplify how action perception of ritualized behavior a) might influence allocation of attentional resources; and b) can be modulated by cultural priors. Further explorations of the model show why...

  19. Computer and Video Games in Family Life: The Digital Divide as a Resource in Intergenerational Interactions

    Science.gov (United States)

    Aarsand, Pal Andre

    2007-01-01

    In this ethnographic study of family life, intergenerational video and computer game activities were videotaped and analysed. Both children and adults invoked the notion of a digital divide, i.e. a generation gap between those who master and do not master digital technology. It is argued that the digital divide was exploited by the children to…

  20. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    Science.gov (United States)

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  1. Computer modelling of the UK wind energy resource: final overview report

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Ravenscroft, F

    1993-12-31

    This report describes the results of a programme of work to estimate the UK wind energy resource. Mean wind speed maps and quantitative resource estimates were obtained using the NOABL mesoscale (1 km resolution) numerical model for the prediction of wind flow over complex terrain. NOABL was used in conjunction with digitised terrain data and wind data from surface meteorological stations for a ten year period (1975-1984) to provide digital UK maps of mean wind speed at 10m, 25m and 45m above ground level. Also included in the derivation of these maps was the use of the Engineering Science Data Unit (ESDU) method to model the effect on wind speed of the abrupt change in surface roughness that occurs at the coast. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicated that 23% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. (20 figures, 7 tables, 10 references). (author)

  2. Evaluation of the level of skill required of operators of a computer-assisted radiologic total lung capacity measurement system

    International Nuclear Information System (INIS)

    Mazzeo, J.

    1985-01-01

    This research was conducted to obtain information regarding the feasibility of using non-medical personnel to obtain measurements of radiologic total lung capacity (TLC). Operators from each of four groups (general undergraduates, nursing students, medical students, radiologists), differing in the amount of medical training and/or experience reading x-rays, performed each of two tasks. The first task was the measurement of radiologic TLC for a set of twenty x-rays. The second task consisted of tracing the outline of the anatomical structures that must be identified in the execution of the radiologic TLC measurement task. Data from the radiologic TLC measurement task were used to identify possible group differences in the reliability and validity of the measures. The reliability analyses were performed within the framework of Generalizability Theory. While the results are not conclusive, due to small sample sizes, the analyses suggest that group differences in reliability of the measures, if they exist, are small.

  3. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    International Nuclear Information System (INIS)

    Kirk, Bernadette Lugue

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries. An important activity of RSICC is its participation in international efforts on computational and experimental benchmarks. An example is the Shielding Integral Benchmarks Archival Database (SINBAD), which includes shielding benchmarks for fission, fusion and accelerators. RSICC is funded by the United States Department of Energy, Department of Homeland Security and Nuclear Regulatory Commission.

  4. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries.

  5. A resource letter CSSMD-1: computer simulation studies by the method of molecular dynamics

    International Nuclear Information System (INIS)

    Goel, S.P.; Hockney, R.W.

    1974-01-01

    A comprehensive bibliography on computer simulation studies by the method of Molecular Dynamics is presented. The bibliography includes references to relevant literature published up to mid-1973, starting from the first paper of Alder and Wainwright, published in 1957. The procedure of the method of Molecular Dynamics, the main fields of study in which it has been used, its limitations and how these have been overcome in some cases are also discussed.

  6. Research on the Resource and Environmental Carrying Capacity of Karst Regions: A Case Study of Guizhou Province

    Institute of Scientific and Technical Information of China (English)

    王金凤; 代稳; 马士彬; 王立威

    2017-01-01

    With the rapid development of the social economy, the contradiction between population, resources and environment is becoming more and more serious, and the carrying capacity of resources and environment is under great pressure, especially in karst areas. Using the analytic hierarchy process and the state space method, this paper constructs an evaluation index system for the resource and environmental carrying capacity of karst areas from three aspects: resource carrying capacity, environmental carrying capacity and socio-economic coordination. Water resources, land resources, tourism resources, the water environment, the atmospheric environment, population, economy and society are selected as the evaluation index layer, and the resource and environmental carrying capacity of Guizhou Province is evaluated quantitatively for four periods: 2000, 2004, 2008 and 2013. The results show that the carrying capacity indices for these four years were 0.09364, 0.08957, 0.09230 and 0.0113 respectively; the province's resource and environmental carrying capacity remained at a low level, with a trend towards the middle level in 2013. Among the nine administrative regions, the resource and environmental carrying capacity of Zunyi City, Qiandongnan Prefecture and Qianxinan Prefecture is at a medium level, while the remaining six administrative regions are at a lower level.

  7. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which imposes major limitations: limited processing and storage power, restricted accessibility and availability, and so on. A feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The cloud application is developed using free and open source software, open standards and prototype code, and presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. It is an effective collaboration platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The application is available at all times, accessible from everywhere, scalable, runs in a distributed computing environment, provides a real-time multi-user collaboration platform, uses interoperable programming-language code and components, and is flexible in accommodating additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on the two VMs, which communicate over the internet to provide services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state…

  8. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    Science.gov (United States)

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure.

  9. The ESTHER hospital partnership initiative: a powerful levy for building capacities to combat the HIV pandemic in low-resource countries

    OpenAIRE

    Raguin, Gilles

    2016-01-01

    Partnerships between hospitals in high income countries and low resource countries are uniquely capable of fulfilling the tripartite needs of care, training, and research required to address health care crises in low resource countries. Of particular interest, at a time when the Ebola crisis highlights the weaknesses of health systems in resource-poor settings, the institutional resources and expertise of hospitals can also contribute to strengthening health systems with long-term sustainabil...

  10. Capacity Expansion and Reliability Evaluation on the Networks Flows with Continuous Stochastic Functional Capacity

    Directory of Open Access Journals (Sweden)

    F. Hamzezadeh

    2014-01-01

    Full Text Available In many systems such as computer network, fuel distribution, and transportation system, it is necessary to change the capacity of some arcs in order to increase maximum flow value from source s to sink t, while the capacity change incurs minimum cost. In real-time networks, some factors cause loss of arc’s flow. For example, in some flow distribution systems, evaporation, erosion or sediment in pipes waste the flow. Here we define a real capacity, or the so-called functional capacity, which is the operational capacity of an arc. In other words, the functional capacity of an arc equals the possible maximum flow that may pass through the arc. Increasing the functional arcs capacities incurs some cost. There is a certain resource available to cover the costs. First, we construct a mathematical model to minimize the total cost of expanding the functional capacities to the required levels. Then, we consider the loss of flow on each arc as a stochastic variable and compute the system reliability.
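
    By way of orientation only (this is not the paper's model), a minimal sketch of the underlying computation: maximum s-t flow over "functional" capacities, with a toy linear cost for a uniform expansion of those capacities. The network, loss fractions, and unit cost below are invented for illustration.

```python
# Illustrative sketch (not the paper's model): max flow on lossy
# "functional" capacities plus a simple linear expansion cost.
import networkx as nx

G = nx.DiGraph()
# (u, v, nominal_capacity, loss_fraction) -- hypothetical data
arcs = [("s", "a", 10, 0.10), ("s", "b", 8, 0.00),
        ("a", "t", 7, 0.20), ("b", "t", 9, 0.05), ("a", "b", 4, 0.00)]
for u, v, cap, loss in arcs:
    # functional capacity = nominal capacity reduced by expected loss
    G.add_edge(u, v, capacity=cap * (1.0 - loss))

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(f"max s-t flow with functional capacities: {flow_value:.2f}")

# Cost of expanding every functional capacity by 20% at unit cost per
# unit of added capacity (a stand-in for the paper's cost minimisation).
unit_cost = 1.0
expansion_cost = sum(unit_cost * G[u][v]["capacity"] * 0.2
                     for u, v in G.edges)
print(f"cost of a uniform 20% expansion: {expansion_cost:.2f}")
```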

  11. Computer-assisted CI fitting: Is the learning capacity of the intelligent agent FOX beneficial for speech understanding?

    Science.gov (United States)

    Meeuws, Matthias; Pascoal, David; Bermejo, Iñigo; Artaso, Miguel; De Ceulaer, Geert; Govaerts, Paul J

    2017-07-01

    The software application FOX ('Fitting to Outcome eXpert') is an intelligent agent to assist in the programming of cochlear implant (CI) processors. The current version utilizes a mixture of deterministic and probabilistic logic which is able to improve over time through a learning effect. This study aimed at assessing whether this learning capacity yields measurable improvements in speech understanding. A retrospective study was performed on 25 consecutive CI recipients with a median CI use experience of 10 years who came for their annual CI follow-up fitting session. All subjects were assessed by means of speech audiometry with open-set monosyllables at 40, 55, 70, and 85 dB SPL in quiet with their home MAP. Other psychoacoustic tests were executed depending on the audiologist's clinical judgment. The home MAP and the corresponding test results were entered into FOX. If FOX suggested MAP changes, they were implemented and another speech audiometry was performed with the new MAP. FOX suggested MAP changes in 21 subjects (84%). The within-subject comparison showed a significant median improvement of 10, 3, 1, and 7% at 40, 55, 70, and 85 dB SPL, respectively. All but two subjects showed an instantaneous improvement in their mean speech audiometric score. Persons with long-term CI use, who received a FOX-assisted CI fitting at least 6 months ago, display improved speech understanding after MAP modifications recommended by the current version of FOX. This can only be explained by intrinsic improvements in FOX's algorithms, as they have resulted from learning. This learning is an inherent feature of artificial intelligence and it may yield measurable benefit in speech understanding even in long-term CI recipients.

  12. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  13. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  14. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    Science.gov (United States)

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.
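
    The EC2/S3 pattern the abstract describes can be sketched in a few lines. The snippet below is a hypothetical Python/boto3 illustration of launching a private compute node and storing a private script with user-controlled access; it is not CRdata's actual (Ruby) implementation, and the AMI ID, instance type, bucket, and key are placeholders.

```python
# General-purpose sketch of the EC2/S3 pattern the abstract describes;
# all identifiers below are placeholders, not CRdata's code.
import boto3

ec2 = boto3.resource("ec2")
# Launch one private compute node (AMI ID and instance type are examples).
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)

s3 = boto3.client("s3")
# Store a private R script with user-controlled access rights.
s3.put_object(Bucket="my-crdata-style-bucket",    # placeholder bucket
              Key="scripts/analysis.R",
              Body=b"# R script contents here",
              ACL="private")
```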

  15. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting, so the high cost of infrastructure need not prevent the less privileged from experiencing the benefits that this new paradigm brings. Cloud computing therefore provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  16. Computational resources to filter gravitational wave data with P-approximant templates

    International Nuclear Information System (INIS)

    Porter, Edward K

    2002-01-01

    The prior knowledge of the gravitational waveform from compact binary systems makes matched filtering an attractive detection strategy. This detection method involves the filtering of the detector output with a set of theoretical waveforms or templates. One of the most important factors in this strategy is knowing how many templates are needed in order to reduce the loss of possible signals. In this study, we calculate the number of templates and computational power needed for a one-step search for gravitational waves from inspiralling binary systems. We build on previous works by first expanding the post-Newtonian waveforms to 2.5-PN order and second, for the first time, calculating the number of templates needed when using P-approximant waveforms. The analysis is carried out for the four main first-generation interferometers, LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the computational cost of generating banks of templates for filtering GW data. We carry out the calculations for two initial conditions. In the first case we assume a minimum individual mass of 1 M☉ and in the second, we assume a minimum individual mass of 5 M☉. We find that, in general, we need more P-approximant templates to carry out a search than if we use standard PN templates. This increase varies according to the order of PN-approximation, but can be as high as a factor of 3 and is explained by the smaller span of the P-approximant templates as we go to higher masses. The promising outcome is that for 2-PN templates, the increase is small and is outweighed by the known robustness of the 2-PN P-approximant templates.
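
    For orientation, a standard Owen-style estimate of template number for a given minimal match MM on a D-dimensional parameter space; this is textbook background, not the paper's own calculation, whose figures come from its specific waveform metric.

```latex
% Owen-style template-counting estimate, with mismatch metric g on a
% D-dimensional parameter space and minimal match MM:
N \;\approx\; \frac{\displaystyle\int \mathrm{d}^{D}\lambda\,\sqrt{\det g}}
                   {\bigl(2\sqrt{(1-\mathrm{MM})/D}\bigr)^{D}}
\quad\Longrightarrow\quad
N \;\propto\; (1-\mathrm{MM})^{-D/2}.
```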

  17. SuperB R&D computing program: HTTP direct access to distributed resources

    Science.gov (United States)

    Fella, A.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Delprete, D.; Diacono, D.; Di Simone, A.; Franchini, P.; Donvito, G.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.; Tomassetti, L.

    2012-12-01

    The SuperB asymmetric energy e⁺e⁻ collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻²s⁻¹. The increasing network performance, also in the Wide Area Network environment, and the capability to read data remotely with good efficiency are providing new possibilities and opening new scenarios in the data access field. Subjects like data access and data availability in a distributed environment are key points in the definition of the computing model for an HEP experiment like SuperB. R&D efforts in this field have been carried out during the last year in order to release the Computing Technical Design Report within 2013. WAN direct access to data has been identified as one of the more interesting viable options; robust and reliable protocols such as HTTP/WebDAV and xrootd are the subjects of a specific R&D line in a mid-term scenario. In this work we present the R&D results obtained in the study of new data access technologies for typical HEP use cases, focusing on specific protocols such as HTTP and WebDAV in Wide Area Network scenarios. Reports on efficiency, performance and reliability tests performed in a data analysis context are described. The future R&D plan includes HTTP and xrootd protocol comparison tests, in terms of performance, efficiency, security and available features.
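
    The flavor of WAN direct access can be conveyed with a plain HTTP range request, one of the building blocks underlying the HTTP/WebDAV approach under study. The snippet below is an illustrative Python sketch with a made-up URL, not SuperB production code.

```python
# Minimal sketch of WAN direct data access over HTTP; the URL is a
# placeholder, not a real SuperB storage endpoint.
import requests

url = "https://storage.example.org/superb/dataset.root"  # hypothetical
# Read only the first 1 MiB of the remote file, as an analysis client
# reading part of a remote event file might.
resp = requests.get(url, headers={"Range": "bytes=0-1048575"}, timeout=30)
if resp.status_code == 206:          # 206 Partial Content
    chunk = resp.content
    print(f"fetched {len(chunk)} bytes without downloading the whole file")
else:
    print("server ignored the Range header; got full file or an error")
```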

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  19. Lean computing for the cloud

    CERN Document Server

    Bauer, Eric

    2016-01-01

    Applies lean manufacturing principles across the cloud service delivery chain to enable application and infrastructure service providers to sustainably achieve the shortest lead time, best quality, and value. This book focuses on lean in the context of cloud computing capacity management of applications and the physical and virtual cloud resources that support them. Lean Computing for the Cloud considers business, architectural and operational aspects of efficiently delivering valuable services to end users via cloud-based applications hosted on shared cloud infrastructure. The work also focuses on overall optimization of the service delivery chain to enable both application service and infrastructure service providers to adopt leaner, demand driven operations to serve end users more efficiently. The book's early chapters analyze how capacity management morphs with cloud computing into interlocked physical infrastructure capacity management, virtual resource capacity management, and application capacity ma...

  20. Internet resources for dentistry: computer, Internet, reference, and sites for enhancing personal productivity of the dental professional.

    Science.gov (United States)

    Guest, G F

    2000-08-15

    At the onset of the new millennium the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice, "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet Resources published in this journal, by presenting information about web sites that present information on computer and Internet technologies, reference materials, news information, and information that lets us improve personal productivity. Neither the author, nor the Journal endorses any of the

  1. THE VALUE OF CLOUD COMPUTING IN THE BUSINESS ENVIRONMENT

    OpenAIRE

    Mircea GEORGESCU; Marian MATEI

    2013-01-01

    Without any doubt, cloud computing has become one of the most significant trends in any enterprise, not only for IT businesses. Besides the fact that the cloud can offer access to low cost, considerably flexible computing resources, cloud computing also provides the capacity to create a new relationship between business entities and corporate IT departments. The value added to the business environment is given by the balanced use of resources, offered by cloud computing. The cloud mentality i...

  2. A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2016-01-01

    Full Text Available The economic costs caused by electric power account for the most significant part of a data center's total cost; thus energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may lose some performance points on energy saving and the Quality of Service (QoS) for dynamic workloads. Fortunately, Dynamic Frequency and Voltage Scaling (DVFS) is an efficient technique to save energy in dynamic environments. In this paper, combined with the DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of homogeneous processors in each server at run-time based on practical energy prediction. On the other hand, the Global Scheduler assigns VMs onto designated servers based on cooperation with the local DVFS controller. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
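
    To make the local tier concrete, here is a minimal sketch of a DVFS-style frequency selector, assuming a fixed set of P-states, a cubic dynamic-power proxy, and an externally supplied load prediction; it illustrates the idea only and is not the authors' controller.

```python
# Toy DVFS policy: run at the lowest frequency whose capacity covers
# the predicted demand. Frequencies and the power model are assumptions.
FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # available P-states (example)

def pick_frequency(predicted_load_ghz: float) -> float:
    """Lowest frequency meeting predicted demand; max frequency if the
    demand exceeds every available P-state."""
    for f in FREQS_GHZ:
        if f >= predicted_load_ghz:
            return f
    return FREQS_GHZ[-1]

def power_watts(freq_ghz: float) -> float:
    # Simple cubic dynamic-power proxy: P ~ f^3 (illustrative only).
    return 10.0 * freq_ghz ** 3 / FREQS_GHZ[-1] ** 3

for demand in (0.9, 1.7, 2.6, 3.5):
    f = pick_frequency(demand)
    print(f"demand {demand:.1f} GHz -> run at {f:.1f} GHz "
          f"(~{power_watts(f):.1f} W)")
```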

  3. Reconfiguration of Computation and Communication Resources in Multi-Core Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pezzarossa, Luca

    This thesis investigates the use of reconfiguration in the context of multicore real-time systems targeting embedded applications. We address the reconfiguration of both the computation and the communication resources of a multi-core platform. Our approach is to associate reconfiguration with operational mode changes where the system, during normal operation, changes a subset of the executing tasks to adapt its behaviour to new conditions. Reconfiguration is therefore used during a mode change to modify the real-time guaranteed services of the communication channels between the tasks that are affected by the reconfiguration. … by the communication fabric between the cores of the platform. To support this, we present a new network-on-chip architecture, named Argo 2, that allows instantaneous and time-predictable reconfiguration of the communication channels. Our reconfiguration-capable architecture is prototyped using the existing time…

  4. Building capacity for co-operative governance as a basis for integrated water resource managing in the Inkomati and Mvoti catchments, South Africa

    OpenAIRE

    Colvin, J; Ballim, F; Chimbuya, S; Everard, M; Goss, J; Klarenberg, G; Ndlovu, S; Ncala, D; Weston, D

    2008-01-01

    South Africa's National Water Act and National Water Resource Strategy set out an ambitious vision for Integrated Water Resources Management including a strong focus on the redistribution of water resources towards the poor and on empowering historically disadvantaged communities. To achieve this vision the Department of Water Affairs & Forestry (DWAF) has been pursuing a programme for devolving powers to 19 stakeholder-led catchment management agencies (CMAs) and more locally, transforming i...

  5. Computing System Construction of Water Environment Carrying Capacity in Huaihe River Basin%淮河流域水环境承载能力计算系统的构建

    Institute of Scientific and Technical Information of China (English)

    严子奇; 夏军; 左其亭; 张永勇

    2009-01-01

    This paper presents the overall design of the water environment carrying capacity computing system for the Huaihe River Basin, the construction of the carrying capacity mathematical model, and the structure and application of the software system. The carrying capacity of the basin's water environment is obtained by calculating the upper limit of the socio-economic scale that the basin can support while maintaining a "good water environment status". The system couples a socio-economic system model, a water resource transformation model and a water environment simulation model, and uses database, geographic information system and visual programming technologies to integrate the many boundary conditions, control conditions, variable parameters and result analyses of the "socio-economy-water resources-water environment" system model, the ecological water requirement model and the carrying capacity metric model. Population size is a key indicator of the basin's water environment carrying capacity, and an iterative algorithm is used to obtain the maximum population size under different carrying capacity conditions. The established system can effectively compute the basin's current water environment carrying capacity and perform scenario analyses for inflow conditions of different future times and frequencies, providing decision support for managing the basin's water environment carrying capacity. The system also contains a subroutine for the ecological water requirement model. The mean value of the results from the Tennant method, the base flow method and…
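
    The iterative search for the maximum supportable population can be illustrated with a simple bisection, assuming a toy pollutant-load model and assimilative-capacity target; the system's actual coupled socio-economic and water-environment models are far richer.

```python
# Sketch of the iterative idea only: find the largest population whose
# pollutant load still meets a water-quality target. All constants are
# placeholders, not the Huaihe system's calibrated values.
def pollutant_load(population: float) -> float:
    per_capita = 0.05          # t/person/yr (example)
    treated = 0.6              # fraction removed by treatment (example)
    return population * per_capita * (1.0 - treated)

TARGET_LOAD = 1.0e5            # assimilative capacity target (example)

lo, hi = 0.0, 1.0e8            # population search bracket
for _ in range(60):            # bisection to locate the carrying limit
    mid = 0.5 * (lo + hi)
    if pollutant_load(mid) <= TARGET_LOAD:
        lo = mid               # feasible: try a larger population
    else:
        hi = mid               # infeasible: shrink the bracket
print(f"max supportable population ~ {lo:,.0f}")
```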

  6. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    Science.gov (United States)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps, while the W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators. The W-BPS subject could not precisely construct the initial solution table, so the completion phase using Polya's steps was constrained.

  7. Capacitated dynamic lot sizing with capacity acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    2011-01-01

    One of the fundamental problems in operations management is determining the optimal investment in capacity. Capacity investment consumes resources and the decision, once made, is often irreversible. Moreover, the available capacity level affects the action space for production and inventory…
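
    For orientation, a textbook-style formulation of this problem class (single item, discrete periods); the paper's exact model and notation may differ.

```latex
% Single-item capacitated lot sizing with capacity acquisition
% (textbook form): x_t production, y_t in {0,1} setup, I_t inventory,
% z_t >= 0 newly acquired capacity, d_t demand; s, c, h, a are setup,
% production, holding, and acquisition costs.
\min \sum_{t=1}^{T}\bigl(s_t y_t + c_t x_t + h_t I_t + a_t z_t\bigr)
\quad\text{s.t.}\quad
I_t = I_{t-1} + x_t - d_t,\qquad
x_t \le K_t\, y_t,\qquad
K_t = K_{t-1} + z_t .
```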

  8. ForWarn: A Cross-Cutting Forest Resource Management and Decision Support System Providing the Capacity to Identify and Track Forest Disturbances Nationally

    Science.gov (United States)

    Hargrove, W. W.; Spruce, J.; Norman, S.; Christie, W.; Hoffman, F. M.

    2012-12-01

    web through the ForWarn Change Assessment Viewer at http://forwarn.forestthreats.org/fcav. No user id or password is required, and there is no cost. The Assessment Viewer operates within any popular web browser using nearly any type of computer. It lets users pan, zoom, and scroll around within ForWarn maps, and also contains an up-to-date library of co-registered, near real-time ancillary maps from diverse sources that allows users to assess the nature of particular forest disturbances and ascribe their most-likely causes. Users can check the current week's U.S. Drought Monitor, USGS VegDRI maps, FHM Historical Aerial Disturbance Surveys, MODIS Cumulative Current Year Fire Detections, and many others. A "Share this map" feature lets users save the current map view and extent into a web URL, so that users can easily share what they are looking at inside the Assessment Viewer with others via an email, a document, or a web page. The ForWarn Rapid National Assessment Team examined more than 60 ForWarn forest disturbance events in 2011-2012, and issued over 30 alerts. We hope to automate forest disturbance alerts and supply them through various subscription services. Forest owners and managers would only be alerted to disturbances occurring near their own forest resources.

  9. Building Capacity Through Hands-on Computational Internships to Assure Reproducible Results and Implementation of Digital Documentation in the ICERT REU Program

    Science.gov (United States)

    Gomez, R.; Gentle, J.

    2015-12-01

    Modern data pipelines and computational processes require that meticulous methodologies be applied to ensure that the source data, algorithms, and results are properly curated, managed and retained while remaining discoverable, accessible, and reproducible. Given the complexity of understanding the scientific problem domain being researched, combined with the overhead of learning to use advanced computing technologies, it becomes paramount that the next generation of scientists and researchers learn to embrace best practices. The Integrative Computational Education and Research Traineeship (ICERT) is a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at the Texas Advanced Computing Center (TACC). During Summer 2015, two ICERT interns joined the 3DDY project. 3DDY converts geospatial datasets into file types that can take advantage of new formats, such as natural user interfaces, interactive visualization, and 3D printing. Mentored by TACC researchers for ten weeks, students with no previous background in computational science learned to use scripts to build the first prototype of the 3DDY application, and leveraged Wrangler, the newest high performance computing (HPC) resource at TACC. Test datasets for quadrangles in central Texas were used to assemble the 3DDY workflow and code. Test files were successfully converted into the stereolithography (STL) format, which is amenable for use with 3D printers. Test files and the scripts were documented and shared using the Figshare site, while metadata was documented for the 3DDY application using OntoSoft. These efforts validated a straightforward set of workflows to transform geospatial data and established the first prototype version of 3DDY. Adding the data and software management procedures helped students realize a broader set of tangible results (e.g. Figshare entries) and better document their progress and the final state of their work for the research group and community.
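
    The heightmap-to-STL conversion at the core of a 3DDY-style workflow can be sketched without special libraries. The following is illustrative Python (the grid is synthetic, standing in for real DEM data), not the interns' actual scripts.

```python
# Sketch of the heightmap-to-STL idea behind a 3DDY-style conversion.
import numpy as np

def heightmap_to_ascii_stl(z: np.ndarray, path: str, name: str = "terrain"):
    """Write the top surface of a 2-D height grid as an ASCII STL file.
    Each grid cell becomes two triangles; normals are left at 0 0 0,
    which most slicers recompute on import."""
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                quad = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
                for tri in ((0, 1, 2), (0, 2, 3)):
                    f.write("  facet normal 0 0 0\n    outer loop\n")
                    for k in tri:
                        r, c = quad[k]
                        f.write(f"      vertex {c} {r} {z[r, c]:.3f}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Tiny synthetic grid, standing in for real quadrangle DEM data.
dem = np.outer(np.sin(np.linspace(0, 3, 20)), np.cos(np.linspace(0, 3, 20)))
heightmap_to_ascii_stl(dem * 5.0, "quad.stl")
```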

  10. "Climate change impact on water resources - a challenge for IWRM". BRAHMATWINN - Twinning European and South Asian River Basins to enhance capacity and implement adaptive management approaches

    Science.gov (United States)

    Bartosch, A.; Pechstädt, J.; Müller Schmied, H.; Flügel, W.-A.

    2009-04-01

    BRAHMATWINN addresses the climate change impact on the hydrology of two macro-scale river basins having headwaters in alpine mountain massifs. The project will elaborate on the consequential vulnerability of the present IWRM and river basin management that have been persistent in these basins during the past decades, and will develop tested approaches and technologies for adaptive IWRM and resilience. The overall objective of BRAHMATWINN is to enhance and improve the capacity to carry out a harmonized integrated water resources management (IWRM) approach, as addressed by the European Water Initiative (EWI), in headwater river systems of alpine mountain massifs with respect to impacts from likely climate change, and to transfer professional IWRM expertise, approaches and tools based on case studies carried out in the twinning European and Asian river basins, the Upper Danube River Basin (UDRB) and the Upper Brahmaputra River Basin (UBRB). Sustainable IWRM in river basins of this kind faces common problems: (i) floods, e.g. during spring melt or heavy storms, and droughts during summer; (ii) competing water demands for agriculture, hydropower, rural, urban and industrial development, and the environment; (iii) pollution from point as well as diffuse sources; and (iv) socio-economic and legal issues related to water allocation. Besides those common topics, the two basins also differ in other issues requiring the adaptation of the IWRM tools; these are for example climate conditions, the density of the monitoring network, the political framework and trans-boundary conflicts. An IWRM has to consider all water-related issues, such as securing the water supply for the population in sufficient quantity and quality and protecting the ecological function of water bodies, and it has to consider the probability of natural hazards like floods and droughts. Furthermore, the resource water should be treated in a way that the needs of future generations can be satisfied. Sustainable development is one of the…

  11. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
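
    As background for the result, the standard definitions involved (not specific to the semi-computable setting of the paper): the coherent information and the LSD regularised quantum capacity, which the paper recovers from the algorithmic side.

```latex
% Coherent information of a state through channel N, with N^c the
% complementary channel and S the von Neumann entropy:
I_c(\rho,\mathcal{N}) \;=\; S\bigl(\mathcal{N}(\rho)\bigr)
                       \;-\; S\bigl(\mathcal{N}^{c}(\rho)\bigr)

% LSD regularised quantum capacity:
Q(\mathcal{N}) \;=\; \lim_{n\to\infty}\frac{1}{n}
    \max_{\rho_n} I_c\bigl(\rho_n,\mathcal{N}^{\otimes n}\bigr)
```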

  12. Multimedia messages in genetics: design, development, and evaluation of a computer-based instructional resource for secondary school students in a Tay Sachs disease carrier screening program.

    Science.gov (United States)

    Gason, Alexandra A; Aitken, MaryAnne; Delatycki, Martin B; Sheffield, Edith; Metcalfe, Sylvia A

    2004-01-01

    Tay Sachs disease is a recessively inherited neurodegenerative disorder, for which carrier screening programs exist worldwide. Education for those offered a screening test is essential in facilitating informed decision-making. In Melbourne, Australia, we have designed, developed, and evaluated a computer-based instructional resource for use in the Tay Sachs disease carrier screening program for secondary school students attending Jewish schools. The resource entitled "Genetics in the Community: Tay Sachs disease" was designed on a platform of educational learning theory. The development of the resource included formative evaluation using qualitative data analysis supported by descriptive quantitative data. The final resource was evaluated within the screening program and compared with the standard oral presentation using a questionnaire. Knowledge outcomes were measured both before and after either of the educational formats. Data from the formative evaluation were used to refine the content and functionality of the final resource. The questionnaire evaluation of 302 students over two years showed the multimedia resource to be equally effective as an oral educational presentation in facilitating participants' knowledge construction. The resource offers a large number of potential benefits, which are not limited to the Tay Sachs disease carrier screening program setting, such as delivery of a consistent educational message, short delivery time, and minimum financial and resource commitment. This article outlines the value of considering educational theory and describes the process of multimedia development providing a framework that may be of value when designing genetics multimedia resources in general.

  13. Evaluation of Marine Resource Carrying Capacity in the Construction of the Qingdao Blue Economy Zone

    Institute of Scientific and Technical Information of China (English)

    李京梅; 许玲

    2013-01-01

    From the standpoints of marine resource supply and marine industry demand, this article builds a comprehensive evaluation indicator system and measures the marine resource carrying capacity of Qingdao from 2001 to 2010 using the fuzzy comprehensive evaluation method. The results show that the development of the marine industry exceeded the marine resource carrying capacity from 2001 to 2006 and in 2008, and was within the carrying capacity in the remaining three years. The construction of the seaports has increased the supply capability of marine resources to some extent, but the pollution caused by traditional marine industries is still large and is a major cause of the overloading of the marine resource carrying capacity. It is suggested that traditional aquaculture be reformed, and that environment-friendly industries such as recreational fishery and tourism be developed in the process of constructing the blue economy zone.

  14. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows significant impact due to the utilization and processing of ERTS CCT data.

  15. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    International Nuclear Information System (INIS)

    Seager, M.

    2007-01-01

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as

  16. A case of mydriatic fixed pupil with diabetes mellitus: on the limited resolution capacity of computed tomography and the twin light reflex (Nozaki)

    International Nuclear Information System (INIS)

    Nozaki, Hisashi; Ayakawa, Yoshio; Okamoto, Toshiko; Awaya, Shinobu.

    1982-01-01

    A 56-year-old housewife with a ten-year history of diabetes mellitus was admitted with visual impairment of both eyes. The pupil of the right eye was larger than that of the left eye and did not react to light. Examination revealed diabetic retinopathy of both eyes, with the right pupil dilated and fixed; ocular movements were not abnormal except for convergence. Computed tomography did not show abnormal findings. It is necessary, however, to keep in mind that a normal appearance on CT scan does not always mean normal conditions, because of its limited resolution capacity. From the clinical signs and symptoms, the mydriatic fixed pupil could be diagnosed as being of diabetic origin. Thus, despite outstanding technical advances such as the CT scan, it should be emphasized that the most important diagnostic elements are clinical signs, symptoms, an accurate history, and clinical examination. The twin light reflex (Nozaki) appears to be a useful procedure when applied to a fixed pupil with retinal or optic nerve involvement, alongside the direct and consensual light reactions and the swinging flashlight test. (author)

  17. Telemedicine Based on Mobile Devices and Mobile Cloud Computing

    OpenAIRE

    Lidong Wang; Cheryl Ann Alexander

    2014-01-01

    Mobile devices such as smartphones and tablets support various kinds of mobile computing and services. They can access the cloud or offload the computation-intensive part to the cloud computing resources. Mobile cloud computing (MCC) integrates cloud computing into the mobile environment, which extends mobile devices' battery lifetime, improves their data storage capacity and processing power, and improves their reliability and information security. In this paper, the applications of smartphon...

  18. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, James [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-01

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes the (1) contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data is available, greatly improving the representation of challenges
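
    The alternate 8760-hour approach can be caricatured in a few lines: score VG's capacity contribution by its output in the top net-load hours and tally curtailment hour by hour. The sketch below uses synthetic profiles and an assumed must-run floor; it is a toy illustration, not the ReEDS implementation.

```python
# Toy 8760-hour screen: capacity contribution from top net-load hours
# and a simple curtailment tally. Profiles and the must-run floor are
# synthetic stand-ins, not ReEDS data.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
load = 50 + 10 * np.sin(np.arange(hours) * 2 * np.pi / 24) \
          + rng.normal(0, 3, hours)                  # demand, GW
vg = np.clip(rng.normal(12, 6, hours), 0, None)      # variable generation, GW

net_load = load - vg
top = np.argsort(net_load)[-100:]                    # 100 tightest hours
print(f"VG output in top net-load hours: {vg[top].mean():.2f} GW "
      f"(vs {vg.mean():.2f} GW on average)")

must_run = 20.0                                      # assumed thermal floor, GW
curtailed = np.clip(vg - (load - must_run), 0, None).sum()
print(f"estimated curtailment: {curtailed:.0f} GWh "
      f"({100 * curtailed / vg.sum():.1f}% of VG energy)")
```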

  19. Uncertainty in adaptive capacity

    International Nuclear Information System (INIS)

    Neil Adger, W.; Vincent, K.

    2005-01-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  20. Why Are We Talking About Capacity Markets?

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany

    2017-06-28

    Revenue sufficiency or 'missing money' concerns in wholesale electricity markets are important because they could lead to resource (or capacity) adequacy shortfalls. Capacity markets or other capacity-based payments are among the proposed solutions to remedy these challenges. This presentation provides a high-level overview of the importance of and process for ensuring resource adequacy, and then discusses considerations for capacity markets under futures with high penetrations of variable resources such as wind and solar.

  1. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  2. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  3. The actual status of uranium ore resources at Eko Remaja Sector: the need of verification of resources computation and geometrical form of mineralization zone by mining test

    International Nuclear Information System (INIS)

    Johan Baratha; Muljono, D.S.; Agus Sumaryanto; Handoko Supalal

    1996-01-01

    Uranium ore resource calculation was done after completion of all geological work steps. The estimation of ore resources started from evaluation drilling and continued with borehole logging. The logging results were presented as anomaly graphs, which were then processed to determine the thickness and grade of the ore. The mineralization points were correlated with one another to form mineralization zones trending N 270 degrees to N 285 degrees with a 70 degree dip to the North. From grouping of the mineralization distribution, 19 mineralization planes were constructed, containing 553 t of U3O8 in the measured category. It is suggested that before expanding the measured ore deposit area, a mining test should first be done at certain mineralization planes to prove the method applied to calculate the reserve. Results from the mining test could be very useful for re-evaluating all the work steps done. (author); 4 refs; 2 tabs; 8 figs
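
    The reserve figure quoted above rests on simple plane-based tonnage arithmetic: contained metal is area times thickness times rock density times average grade, summed over the planes. The sketch below shows the mechanics; every number (areas, thicknesses, grades, density) is hypothetical and not the Eko Remaja data.

    ```python
    # Hypothetical mineralization planes: (area m^2, thickness m, U3O8 grade
    # as a weight fraction). Values are invented for illustration.
    planes = [
        (12_000, 1.5, 0.0010),
        (8_500, 2.0, 0.0008),
        (20_000, 1.2, 0.0012),
    ]
    DENSITY = 2.7  # t/m^3, assumed host-rock density

    tonnage = sum(area * thickness * DENSITY * grade
                  for area, thickness, grade in planes)
    print(f"contained U3O8 ~ {tonnage:.1f} t")
    ```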

  4. Petrol filling workers as biomonitor of PAH exposure and functional health capacity in resource-limited settings of city Rawalpindi, Pakistan.

    Science.gov (United States)

    Rashid, Audil; Tao, Shu; Uddin, Ikhtiar; Kamal, Atif

    2017-07-01

    This is the first study from Pakistan to report the exposure of petrol filling workers (n = 120) to naphthalene (Nap) and pyrene (Pyr) in relation to their functional capacities and health outcomes. A group of non-exposed subjects (controls, n = 46) was also recruited for comparison. The perceived health risk of the exposed workers was monitored using a questionnaire based on a self-reporting survey. The observed physical anomalies related to health disorders included acidity after meals, eye redness, appetite loss, skin lesions, and dryness of the oral cavity, while those related to neurasthenic symptoms included body aches, energy loss, twitching, fatigue, sleeplessness, fainting, and irritability. The mean Nap level observed in the exposed group (106 μg/L) was significantly correlated (r = 0.49; p < 0.05) with ... Workers exposed for 6 h per day or more had a significantly higher prevalence of physical disorders (OR = 2.79, 95% CI = 1.28-6.09). Neurasthenic symptoms were found in 65% of the subjects and were associated with years of involvement in the job. Ten years or more of work at petrol pumps could be associated with substantial development of neurasthenic effects (OR = 2.80, 95% CI = 1.23-6.34). In conclusion, the subjects ascribed the disturbances in physical and neurological behavior to their occupation (petrol filling) and also rated their overall health and functional capacity as poor. To promote the health of petrol pump workers, reduction in work hours and provision of masks and gloves could be introduced as occupational health interventions.
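
    The odds ratios and 95% confidence intervals reported above follow from a standard 2x2 exposure-outcome table; the sketch below computes an odds ratio with the Woolf (log-normal) interval. The counts are invented purely to demonstrate the mechanics, not the study's data.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Woolf 95% CI for a 2x2 table:
        a, b = exposed with/without the outcome;
        c, d = unexposed with/without the outcome."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
        return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

    # Hypothetical counts for long-exposure vs. short-exposure workers
    or_, lo, hi = odds_ratio_ci(40, 20, 25, 35)
    print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
    ```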

  5. A computer software system for integration and analysis of grid-based remote sensing data with other natural resource data. Remote Sensing Project

    Science.gov (United States)

    Tilmann, S. E.; Enslin, W. R.; Hill-Rowley, R.

    1977-01-01

    A computer-based information system designed to assist in the integration of commonly available spatial data for regional planning and resource analysis is described. The Resource Analysis Program (RAP) provides a variety of analytical and mapping phases for single-factor or multi-factor analyses. The unique analytical and graphic capabilities of RAP are demonstrated with a study conducted in Windsor Township, Eaton County, Michigan. Soil, land cover/use, topographic, and geological maps were used as a data base to develop an eleven-map portfolio. The major themes of the portfolio are land cover/use, non-point water pollution, waste disposal, and ground water recharge.
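
    Grid-based multi-factor analysis of the kind RAP performs can be sketched as weighted map algebra over co-registered raster layers. The layers, scores, and weights below are invented for illustration and do not reflect RAP's actual data model.

    ```python
    import numpy as np

    # Per-cell suitability scores (0-1) for three hypothetical factors
    soil = np.array([[0.9, 0.4], [0.7, 0.2]])
    landcover = np.array([[0.8, 0.6], [0.3, 0.5]])
    slope = np.array([[1.0, 0.7], [0.6, 0.1]])

    # Analyst-chosen importance weights (sum to 1)
    weights = {"soil": 0.5, "landcover": 0.3, "slope": 0.2}
    composite = (weights["soil"] * soil
                 + weights["landcover"] * landcover
                 + weights["slope"] * slope)
    print(composite)   # per-cell multi-factor score, ready for mapping
    ```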

  6. Station Capacity

    DEFF Research Database (Denmark)

    Landex, Alex

    2011-01-01

    Stations are often limiting the capacity of railway networks. This is due to the extra need of tracks when trains stand still, trains turning around, and conflicting train routes. Although stations are often the capacity bottlenecks, most capacity analysis methods focus on open line capacity. Therefore ... for platform tracks and the probability that arriving trains will not get a platform track immediately at arrival. The third method is a scalable method that analyzes the conflicts in the switch zone(s). In its simplest stage, the method just analyzes the track layout, while the more advanced stages also take the probability of conflicts and the minimum headway times into account. The last method analyzes how optimal platform tracks are used by examining the arrival and departure pattern of the trains. The developed methods can either be used separately to analyze specific characteristics of the capacity of a station...

  7. On library information resources construction under network environment

    International Nuclear Information System (INIS)

    Guo Huifang; Wang Jingjing

    2014-01-01

    Information resources construction is the primary task of, and a critical measure for, libraries. In the 21st century, the era of the knowledge economy, with the continuous development of computer network technology, information resources have become an important part of libraries and a significant indicator of their capacity. The development of information socialization, digitization, and internationalization has put forward new requirements for library information resources construction. This paper describes the impact of the network environment on the construction of library information resources and proposes corresponding measures for library information resources construction. (authors)

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  10. Capacity building for sustainable development. One of the five key areas to sustainable development where progress is possible with the resources and technologies at our disposal today

    International Nuclear Information System (INIS)

    2002-01-01

    Today, approximately one third of the world's population lack access to modern energy services. Poverty eradication and sustainable development will require not just access, but also clean and affordable energy services. Expanding access to such services requires careful planning. The International Atomic Energy Agency (IAEA) helps developing countries and economies in transition build their energy planning capabilities with respect to all three pillars of sustainable development - economic, environmental, and social. The Agency develops and transfers planning models tailored to their special circumstances. It transfers the latest data on technologies, resources, and economics. It trains local experts. It jointly analyzes national options and interprets results. And the IAEA helps establish the continuing local planning expertise needed to independently chart national paths to sustainable development

  11. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise, and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing, and processing of physics data. The October exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  15. DPSO resource load balancing in cloud computing

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    The load balancing problem is one of the hot issues in cloud computing. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. Because resource demand changes dynamically and the requirements placed on individual servers are low, each resource management node is treated as a node of the topological structure, and an appropriate resource-task allocation model is established and solved with DPSO. Verification results show that the algorithm improves resource utilization and the load balance of cloud computing resources.
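
    The abstract does not give the authors' exact formulation, so the following is a generic sketch of a discrete PSO for load balancing, under assumed update probabilities: a particle is a task-to-server assignment, and the fitness to minimize is the load of the most heavily loaded server.

    ```python
    import random

    def dpso_balance(task_loads, n_servers, n_particles=20, iters=200):
        """Generic discrete PSO: each particle is a task->server assignment;
        per-task updates probabilistically copy the personal or global best."""
        n = len(task_loads)

        def fitness(pos):
            loads = [0.0] * n_servers
            for t, s in enumerate(pos):
                loads[s] += task_loads[t]
            return max(loads)            # makespan: load of busiest server

        swarm = [[random.randrange(n_servers) for _ in range(n)]
                 for _ in range(n_particles)]
        pbest = [p[:] for p in swarm]
        gbest = min(pbest, key=fitness)[:]

        for _ in range(iters):
            for i, p in enumerate(swarm):
                for t in range(n):
                    r = random.random()
                    if r < 0.4:
                        p[t] = pbest[i][t]                   # toward personal best
                    elif r < 0.7:
                        p[t] = gbest[t]                      # toward global best
                    elif r < 0.8:
                        p[t] = random.randrange(n_servers)   # keep exploring
                if fitness(p) < fitness(pbest[i]):
                    pbest[i] = p[:]
                    if fitness(p) < fitness(gbest):
                        gbest = p[:]
        return gbest, fitness(gbest)

    random.seed(1)
    tasks = [random.uniform(1.0, 10.0) for _ in range(30)]
    assignment, makespan = dpso_balance(tasks, n_servers=5)
    print(f"most-loaded server after balancing: {makespan:.2f}")
    ```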

  16. Capacity Maximizing Constellations

    Science.gov (United States)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
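
    The capacity gap the abstract refers to can be estimated numerically. Below is a minimal Monte Carlo sketch of the mutual information (coded-modulation capacity) of an arbitrary equiprobable complex constellation over AWGN; the QPSK example and all parameters are illustrative textbook methodology, not the authors' simulation code. (The "parallel decoding" BICM capacity would additionally require a bit labeling.)

    ```python
    import numpy as np

    def awgn_capacity_mc(points, snr_db, n=200_000, seed=0):
        """Monte Carlo estimate of I(X;Y) in bits/symbol for an equiprobable
        constellation over AWGN:
        I = log2(M) - E[log2(sum_j exp(-|y-x_j|^2/N0) / exp(-|y-x_i|^2/N0))]."""
        rng = np.random.default_rng(seed)
        x = np.asarray(points, dtype=complex)
        x = x / np.sqrt(np.mean(np.abs(x) ** 2))     # normalize to unit Es
        M = len(x)
        n0 = 10.0 ** (-snr_db / 10.0)                # N0 such that Es/N0 = SNR
        xi = rng.choice(x, size=n)                   # equiprobable symbols
        noise = np.sqrt(n0 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
        y = xi + noise
        d2 = np.abs(y[:, None] - x[None, :]) ** 2    # |y - x_j|^2, shape (n, M)
        log_num = -np.abs(y - xi) ** 2 / n0          # ln of the transmitted term
        log_den = np.log(np.exp(-d2 / n0).sum(axis=1))
        return np.log2(M) + np.mean(log_num - log_den) / np.log(2)

    qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
    print(f"QPSK mutual information at 5 dB: {awgn_capacity_mc(qpsk, 5.0):.2f} bit/symbol")
    ```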

  17. Developmental memory capacity resources of typical children retrieving picture communication symbols using direct selection and visual linear scanning with fixed communication displays.

    Science.gov (United States)

    Wagner, Barry T; Jackson, Heather M

    2006-02-01

    This study examined the cognitive demands of 2 selection techniques in augmentative and alternative communication (AAC), direct selection, and visual linear scanning, by determining the memory retrieval abilities of typically developing children when presented with fixed communication displays. One hundred twenty typical children from kindergarten, 1st, and 3rd grades were randomly assigned to either a direct selection or visual linear scanning group. Memory retrieval was assessed through word span using Picture Communication Symbols (PCSs). Participants were presented various numbers and arrays of PCSs and asked to retrieve them by placing identical graphic symbols on fixed communication displays with grid layouts. The results revealed that participants were able to retrieve more PCSs during direct selection than scanning. Additionally, 3rd-grade children retrieved more PCSs than kindergarten and 1st-grade children. An analysis on the type of errors during retrieval indicated that children were more successful at retrieving the correct PCSs than the designated location of those symbols on fixed communication displays. AAC practitioners should consider using direct selection over scanning whenever possible and account for anticipatory monitoring and pulses when scanning is used in the service delivery of children with little or no functional speech. Also, researchers should continue to investigate AAC selection techniques in relationship to working memory resources.

  18. Patient flow based allocation of hospital resources.

    Science.gov (United States)

    Vissers, J M

    1995-01-01

    The current practice of allocating resources within a hospital introduces peaks and troughs in the workloads of departments and leads therefore to loss of capacity. This happens when requirements for capacity coordination are not adequately taken into account in the decision making process of allocating resources to specialties. The first part of this research involved an analysis of the hospital's production system on dependencies between resources, resulting in a number of capacity coordination requirements that need to be fulfilled for optimized resource utilization. The second, modelling, part of the study involved the development of a framework for resource management decision making, of a set of computer models to support hospital managerial decision making on resource allocation issues in various parts of the hospital, and of an implementation strategy for the application of the models to concrete hospital settings. The third part of the study was devoted to a number of case-studies, illustrating the use of the models when applied in various resource management projects, such as a reorganization of an operating theatre timetable, or the development of a master plan for activities of a group of general surgeons serving two locations of a merged hospital system. The paper summarizes the main findings of the study and concludes with a discussion of results obtained with the new allocation procedure and with recommendations for future research.

  19. Reclaimed water as a main resource to enhance the adaptive capacity to climate change in semi-arid Mediterranean agricultural areas using Earth Observation products

    Science.gov (United States)

    Pavia Rico, Ana; Lopez-Baeza, Ernesto; Matieu, Pierre-Philippe; Hernandez Sancho, Francesc; Loarte, Edwin

    Lack of water is a major problem for agricultural production in semi-arid areas. Many Mediterranean countries, such as Spain, Italy, Greece, and Cyprus, as well as other countries including Morocco, the United Arab Emirates, South American countries, and China, are starting to reuse wastewater as an adaptation to climate-change water scarcity. Drought areas are increasing, turning fertile areas unproductive. For this reason, the European trend is to work on reusing wastewater as a solution to water scarcity in agriculture. Moreover, since population is growing fast, wastewater production is increasing along with the demand for drinkable water, making reclaimed water the water guarantee for irrigation and better agricultural management. This work represents a preliminary initiative to check, analyse, and monitor the land using remote sensing techniques, in order to identify the potential lands that were productive in the past, are now abandoned, and could be recovered for socio-economic benefit. On top of this, the initiative will clearly enhance the adaptive capacity of rural/agricultural lands to climate change. Alternatives to reclaimed water, such as greenhouses, desalination plants, or transboundary water transfers, do not really eliminate the problem but only offer a temporary solution, at great expense and with irreversible damage to the environment. The pilot area in which this research is first being developed covers the Valencia and Murcia Autonomous Communities on the Spanish Mediterranean coastline. An added value of this work will be the development of a methodology transferable to other countries with similar climatic characteristics and irrigation difficulties, using remote sensing methods and techniques. The remote sensing products obtained provide full information about the current state of the potential croplands. Potential areas are then being selected to carry out a socio-economic analysis leading to: (i

  20. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users have become more demanding and expect to execute computationally intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs) such as limited battery lifetime, limited processing capabilities, and limited storage capacity by offloading the execution and workload to other rich systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented, such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
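
    A common way to frame the offloading decision discussed above is an energy break-even test in the style of Kumar and Lu: ship the input data and idle while the cloud computes, versus compute locally. The sketch below implements that test; all device parameters (speeds, powers, bandwidth) are illustrative assumptions, not measurements from the paper.

    ```python
    def should_offload(cycles, local_speed, cloud_speed,
                       data_bits, bandwidth, p_compute, p_idle, p_tx):
        """Return True when offloading costs less device energy than
        computing locally (simple break-even model after Kumar & Lu)."""
        e_local = p_compute * cycles / local_speed          # J to compute here
        e_offload = (p_tx * data_bits / bandwidth           # J to transmit input
                     + p_idle * cycles / cloud_speed)       # J idling meanwhile
        return e_offload < e_local, e_local, e_offload

    # 1 Gcycle job, 10x faster cloud, 5 MB upload over 10 Mbit/s (illustrative)
    decision, e_loc, e_off = should_offload(
        1e9, 1e9, 1e10, 4e7, 1e7, p_compute=0.9, p_idle=0.3, p_tx=1.3)
    print(decision, round(e_loc, 2), round(e_off, 2))
    ```

    In this example the large upload makes local execution cheaper; shrinking the transferred data or raising the bandwidth flips the decision, which is why partitioning level matters in the surveyed frameworks.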

  1. Building operative care capacity in a resource limited setting: The Mongolian model of the expansion of sustainable laparoscopic cholecystectomy.

    Science.gov (United States)

    Wells, Katie M; Lee, Yu-Jin; Erdene, Sandag; Erdene, Sarnai; Sanchin, Urjin; Sergelen, Orgoi; Zhang, Chong; Rodriguez, Brandon P; deVries, Catherine R; Price, Raymond R

    2016-08-01

    The benefits of laparoscopic cholecystectomy, including rapid recovery and fewer infections, have been largely unavailable to the majority of people in developing countries. Compared to other countries, Mongolia has an extremely high incidence of gallbladder disease. In 2005, only 2% of cholecystectomies were performed laparoscopically. This is a retrospective review of the transition from open to laparoscopic cholecystectomy throughout Mongolia. A cross-sectional, retrospective review was conducted of demographic patient data, diagnosis type, and operation performed (laparoscopic versus open cholecystectomy) from 2005-2013. Trends were analyzed from 6 of the 21 provinces (aimags) throughout Mongolia, and data were culled from 7 regional diagnostic referral and treatment centers and 2 tertiary academic medical centers. The data were analyzed by individual training center and by year before being compared between rural and urban centers. We analyzed and compared 14,522 cholecystectomies (n = 4,086 [28%] men, n = 10,436 [72%] women). Men and women were similar in age (men 52.2, standard deviation 14.8; women 49.4, standard deviation 15.7) and in the percentage undergoing laparoscopic cholecystectomy (men 39%, women 42%). By 2013, 58% of gallbladders were removed laparoscopically countrywide compared with only 2% in 2005. In 2011, laparoscopic cholecystectomy surpassed open cholecystectomy as the primary method for gallbladder removal countrywide. More than 315 Mongolian health care practitioners received laparoscopic training in 19 of the country's 21 aimags (states). By 2013, 58% of cholecystectomies countrywide were performed laparoscopically, a dramatic increase over 9 years. The expansion of laparoscopic cholecystectomy has transformed the care of biliary tract disease in Mongolia despite the country's limited resources. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  3. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  5. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts (monitoring the services and infrastructure as well as interfacing to the data operations tasks) are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  6. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, and conversion to RAW format; the samples were then run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  8. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    Science.gov (United States)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources on the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations of images from around the solar system, including Mars, Saturn, and Titan.
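
    The failure resilience described (tasks surviving node crashes in the middle of a transaction) is commonly achieved with leased, at-least-once task queues. The toy sketch below illustrates that pattern; it is an assumption about the general technique, not Polyphony's actual API.

    ```python
    import queue, threading, time

    class ResilientQueue:
        """A task stays 'in flight' until acknowledged; if its lease expires
        (e.g., the worker node died mid-task), it is re-queued for another
        worker. At-least-once delivery, so tasks should be idempotent."""
        def __init__(self, lease_s=2.0):
            self.q = queue.Queue()
            self.inflight = {}            # task_id -> (task, lease deadline)
            self.lock = threading.Lock()
            self.lease_s = lease_s

        def put(self, task_id, task):
            self.q.put((task_id, task))

        def get(self):
            task_id, task = self.q.get()
            with self.lock:
                self.inflight[task_id] = (task, time.time() + self.lease_s)
            return task_id, task

        def ack(self, task_id):
            with self.lock:
                self.inflight.pop(task_id, None)

        def requeue_expired(self):
            now = time.time()
            with self.lock:
                expired = [k for k, (_, dl) in self.inflight.items() if dl < now]
                for k in expired:
                    task, _ = self.inflight.pop(k)
                    self.q.put((k, task))

    rq = ResilientQueue()
    rq.put("img-001", {"op": "stitch"})
    tid, task = rq.get()          # a worker takes the task... and crashes
    time.sleep(2.1)
    rq.requeue_expired()          # the task becomes available again
    print(rq.q.qsize())           # -> 1
    ```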

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  10. Dynamic allocation of computing resources for business-oriented objects

    Institute of Scientific and Technical Information of China (English)

    尚海鹰

    2017-01-01

    This paper summarizes development trends in computer system infrastructure. For the business scenarios of transaction processing systems in the "Internet plus" era, the mainstream methods of computing resource allocation and load balancing are analyzed. To further improve transaction processing efficiency and meet differentiated service-level-agreement requirements, a method for dynamically allocating computing resources to business objects is introduced. Based on baseline values for the processing performance of the actual application system, a computing resource allocation plan and a dynamic adjustment strategy are derived for each business object. Tests with large data volumes from an actual city smart-card clearing business achieved the expected results.
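
    The abstract does not spell out the paper's exact method, so the following is only a plausible sketch of a baseline-driven allocation plan with a dynamic adjustment rule; the business objects, throughput baselines, and adjustment step are all invented for illustration.

    ```python
    def allocation_plan(baseline_tps, demand_tps, total_units):
        """Share a pool of compute units among business objects in proportion
        to demand / per-unit throughput (both taken from measured baselines)."""
        need = {k: demand_tps[k] / baseline_tps[k] for k in demand_tps}
        total_need = sum(need.values())
        return {k: total_units * v / total_need for k, v in need.items()}

    def adjust(plan, backlog, step=0.1):
        """Toy dynamic adjustment: move a fraction of the least-backlogged
        object's units to the most-backlogged one."""
        hot, cold = max(backlog, key=backlog.get), min(backlog, key=backlog.get)
        delta = step * plan[cold]
        plan[hot] += delta
        plan[cold] -= delta
        return plan

    # Invented figures: per-unit throughput (tx/s) and offered load (tx/s)
    # for three business objects of a card-clearing system.
    baseline = {"metro": 500.0, "bus": 300.0, "retail": 200.0}
    demand = {"metro": 9000.0, "bus": 3000.0, "retail": 1000.0}
    plan = allocation_plan(baseline, demand, total_units=32)
    plan = adjust(plan, backlog={"metro": 1200, "bus": 50, "retail": 10})
    print({k: round(v, 1) for k, v in plan.items()})
    ```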

  11. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operating computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. The increased data rates of future LHC operation, together with high-pileup interactions, make improved usage of the current computing facilities and new technologies necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  12. Carrying Capacity

    DEFF Research Database (Denmark)

    Schroll, Henning; Andersen, Jan; Kjærgård, Bente

    2012-01-01

    A spatial planning act was introduced in Indonesia in 1992 and renewed in 2008. It emphasised the planning role of decentralised authorities. The spatial planning act covers both spatial and environmental issues. It defines the concept of carrying capacity and includes definitions of supportive carrying capacity (SCC) and assimilative carrying capacity (ACC) ... districts/cities. Four different sectors (water, food production, waste, and forests) were selected as core areas for decentralised spatial planning. Indicators for SCC and ACC were identified and assessed with regard to relevance and quantifiability. For each of the indicators selected, a legal threshold or guiding ...

  13. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.
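
    The contrast between tens of topological qubits and about 1,000 normal qubits can be made plausible with the standard surface-code scaling argument, in which the logical error rate falls off as a power of the physical-to-threshold error ratio. The sketch below sizes a code distance under that scaling; the error rates, the 10^-15 target, and the ~2d^2 qubit count are generic textbook assumptions, not the Letter's detailed analysis.

    ```python
    def distance_and_qubits(p, p_th, target=1e-15, qubits_per_d2=2):
        """Smallest odd code distance d with (p/p_th)^((d+1)/2) <= target,
        and the roughly qubits_per_d2 * d^2 physical qubits it implies."""
        ratio = p / p_th
        d = 3
        while ratio ** ((d + 1) / 2) > target:
            d += 2                       # surface-code distances are odd
        return d, qubits_per_d2 * d * d

    # Near-threshold ordinary qubits vs. qubits whose topological charge
    # rarely changes (all rates illustrative):
    print(distance_and_qubits(p=1e-3, p_th=1e-2))  # large d, ~2k qubits
    print(distance_and_qubits(p=1e-6, p_th=1e-2))  # small d, ~100 qubits
    ```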

  14. Graduate Enrollment Increases in Science and Engineering Fields, Especially in Engineering and Computer Sciences. InfoBrief: Science Resources Statistics.

    Science.gov (United States)

    Burrelli, Joan S.

    This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)

  15. The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.

    Science.gov (United States)

    Gordon, Anne C. L.; Olson, David R.

    1998-01-01

    Tested hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r=.64, suggesting that changes in working memory capacity allow the expression of, and arguably the…

  16. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  17. Computational simulation and lean thinking as tools of process management: an assessment of different alternatives to increase capacity in a manufacturing company of aluminum electrical cables

    Directory of Open Access Journals (Sweden)

    Tiago Augusto Amarante de Souza

    2014-12-01

    Full Text Available In this study, the production chain of a manufacturing company of aluminum electrical conductors is analyzed in order to select the best strategy for increasing its production capacity. The company's production systems are highly complex and show high variability in their production flows. A quantitative modeling methodology was proposed to simulate those systems and to analyze them in a simpler manner. The simulation model considered two different strategies for increasing production: "Lean Thinking" and machinery/equipment purchase. From the current context and the results obtained from the simulation study, it was possible to conclude that the best scenario for increasing the company's production capacity was applying the Lean Thinking strategy to the critical processes. The gains in capacity are higher and the implementation costs are lower than those observed in the other strategy considered.

  18. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009), including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient...

  19. Hospitals Capability in Response to Disasters Considering Surge Capacity Approach

    Directory of Open Access Journals (Sweden)

    Gholamreza Khademipour

    2016-01-01

    Full Text Available Background: Man-made and natural disasters have adverse effects with sound, apparent, and unknown consequences. Among the various components of disaster management in the health sector, the most important role is performed by health-treatment systems, especially hospitals. Therefore, the present study aimed to evaluate the surge capacity of hospitals of Kerman Province in disasters in 2015. Materials and Methods: This is a quantitative study, conducted on the private, military, and medical sciences hospitals of Kerman Province. The sampling method was a total count, and data were collected by questionnaire. The first section of the questionnaire included demographic information on the studied hospitals, and the second part examined hospital capacity in response to disasters in 4 fields: equipment, physical space, human resources, and applied programs. The extracted data were analyzed by descriptive statistics. Results: The mean capability of implementing surge capacity programs by hospitals of Kerman Province in disasters, across the 4 fields of equipment, physical space, human resources, and applied programs, was evaluated as 7.33% (weak). The surge capacity capability of state hospitals in disasters was computed as 8%, a more suitable condition than that of private hospitals (6.07%). Conclusion: Based on the results of the study and the significance of hospital preparedness in response to disasters, it is proposed that managers of the studied hospitals take measures to promote hospital response capacity to disasters based on the 4 components of increasing hospital capacity.

  20. The new technologies and the use of telematics resources in Scientific Education: a computational simulation in Physics Teaching

    Directory of Open Access Journals (Sweden)

    Antonio Jorge Sena dos Anjos

    2009-01-01

    Full Text Available This study presents a brief and panoramic critical view of the use of Information and Communication Technologies in Education, specifically in Science Education. The focus is centred on technological resources, emphasizing the use of programs appropriate for Physics teaching.

  1. Offloading Method for Efficient Use of Local Computational Resources in Mobile Location-Based Services Using Clouds

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    Full Text Available With the development of mobile computing, location-based services (LBSs) have been developed to provide services based on location information through communication networks or the global positioning system. In recent years, LBSs have evolved into smart LBSs, which provide many services using only location information. These include basic services such as traffic, logistic, and entertainment services. However, a smart LBS may require relatively complicated operations, which may not be effectively performed by the mobile computing system. To overcome this problem, a computation offloading technique can be used to perform certain tasks on mobile devices in cloud and fog environments. Furthermore, mobile platforms exist that provide smart LBSs. The smart cross-platform is a solution based on a virtual machine (VM) that enables compatibility of content in various mobile and smart device environments. However, owing to the nature of the VM-based execution method, the execution performance is degraded compared to that of the native execution method. In this paper, we introduce a computation offloading technique that utilizes fog computing to improve the performance of VMs running on mobile devices. We applied the proposed method to smart devices with a smart VM (SVM) and an HTML5 SVM to compare their performances.

  2. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  3. Research on uranium resource models. Part IV. Logic: a computer graphics program to construct integrated logic circuits for genetic-geologic models. Progress report

    International Nuclear Information System (INIS)

    Scott, W.A.; Turner, R.M.; McCammon, R.B.

    1981-01-01

    Integrated logic circuits were described as a means of formally representing genetic-geologic models for estimating undiscovered uranium resources. The logic circuits are logical combinations of selected geologic characteristics judged to be associated with particular types of uranium deposits. Each combination takes on a value which corresponds to the combined presence, absence, or don't-know states of the selected characteristics within a specified geographic cell. Within each cell, the output of the logic circuit is taken as a measure of the favorability of occurrence of an undiscovered deposit of the type being considered. In this way, geological, geochemical, and geophysical data are incorporated explicitly into potential uranium resource estimates. The present report describes how integrated logic circuits are constructed by use of a computer graphics program. A user's guide is also included
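
    The presence/absence/don't-know combinations described above amount to Kleene three-valued logic. The sketch below shows such a favorability circuit for one cell; the geologic characteristics and the circuit wiring are invented for illustration and are not taken from the report.

    ```python
    T, F, U = 1, 0, None   # present, absent, don't know

    def AND(*xs):
        """Kleene AND: any 'absent' forces F, otherwise unknown propagates."""
        if F in xs:
            return F
        return U if U in xs else T

    def OR(*xs):
        """Kleene OR: any 'present' forces T, otherwise unknown propagates."""
        if T in xs:
            return T
        return U if U in xs else F

    # Hypothetical favorability circuit for one geographic cell
    host_rock, u_source, reductant, alteration = T, T, U, F
    favorability = AND(host_rock, u_source, OR(reductant, alteration))
    print(favorability)   # -> None: unknown until the reductant is mapped
    ```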

  4. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high-performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  5. Winning the Popularity Contest: Researcher Preference When Selecting Resources for Civil Engineering, Computer Science, Mathematics and Physics Dissertations

    Science.gov (United States)

    Dotson, Daniel S.; Franks, Tina P.

    2015-01-01

    More than 53,000 citations from 609 dissertations published at The Ohio State University between 1998-2012 representing four science disciplines--civil engineering, computer science, mathematics and physics--were examined to determine what, if any, preferences or trends exist. This case study seeks to identify whether or not researcher preferences…

  6. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    Science.gov (United States)

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm, defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  7. Becoming Technosocial Change Agents: Intersectionality and Culturally Responsive Pedagogies as Vital Resources for Increasing Girls' Participation in Computing

    Science.gov (United States)

    Ashcraft, Catherine; Eger, Elizabeth K.; Scott, Kimberly A.

    2017-01-01

    Drawing from our two-year ethnography, we juxtapose the experiences of two cohorts in one culturally responsive computing program, examining how the program fostered girls' emerging identities as technosocial change agents. In presenting this in-depth and up-close exploration, we simultaneously identify conditions that both facilitated and limited…

  8. Linear equations and rap battles: how students in a wired classroom utilized the computer as a resource to coordinate personal and mathematical positional identities in hybrid spaces

    Science.gov (United States)

    Langer-Osuna, Jennifer

    2015-03-01

    This paper draws on the constructs of hybridity, figured worlds, and cultural capital to examine how a group of African-American students in a technology-driven, project-based algebra classroom utilized the computer as a resource to coordinate personal and mathematical positional identities during group work. Analyses of several vignettes of small group dynamics highlight how hybridity was established as the students engaged in multiple on-task and off-task computer-based activities, each of which drew on different lived experiences and forms of cultural capital. The paper ends with a discussion on how classrooms that make use of student-led collaborative work, and where students are afforded autonomy, have the potential to support the academic engagement of students from historically marginalized communities.

  9. Exerting Capacity.

    Science.gov (United States)

    Leger, J Michael; Phillips, Carolyn A

    2017-05-01

    Patient safety has been at the forefront of nursing research since the release of the Institute of Medicine's report estimating the number of preventable adverse events in hospital settings; yet no research to date has incorporated the perspectives of bedside nurses using classical grounded theory (CGT) methodology. This CGT study explored the perceptions of bedside registered nurses regarding patient safety in adult acute care hospitals. Data analysis used three techniques unique to CGT-the constant comparative method, coding, and memoing-to explore the values, realities, and beliefs of bedside nurses about patient safety. The analysis resulted in a substantive theory, Exerting Capacity, which explained how bedside nurses balance the demands of keeping their patients safe. Exerting Capacity has implications for health care organization leaders, nursing leaders, and bedside nurses; it also has indications for future research into the concept of patient safety.

  10. Resources and capacity of emergency trauma care services in Peru

    Directory of Open Access Journals (Sweden)

    Edmundo Rosales-Mayor

    2011-09-01

    Full Text Available The objectives of this study were to evaluate the perceived resources and capacity of emergency trauma care services in three Peruvian cities, using the World Health Organization report Guidelines for Essential Trauma Care. This was a cross-sectional study in eight public and private healthcare facilities in the cities of Lima, Ayacucho, and Pucallpa. Semi-structured questionnaires were applied to the heads of the emergency services, who rated, according to their perception, various aspects of resources and capabilities. Considering the profiles and volume of care in each facility's emergency service, most respondents in all three cities classified their currently available resources as inadequate. Comparison of the health facilities showed a shortage of resources in the public facilities and in the provinces (Ayacucho and Pucallpa). There is a widespread perception that both human and physical resources are insufficient, especially in public health facilities and in those of the provinces.

  11. Options on capacity imbalance

    International Nuclear Information System (INIS)

    Roggen, M.

    2002-01-01

    Since the start of this year, the Dutch energy company Nuon has been using a computer system to formulate real-time responses to national capacity imbalances in the electricity supply market. The work earns Nuon a fixed fee from TenneT (the Dutch Transmission System Operator) and ensures a more stable imbalance price for everyone. The key to success has been the decision to start the project from scratch.

  12. Strengthening Research Capacity to Enhance Natural Resources ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The Science Granting Councils Initiative in sub-Saharan Africa wins the science diplomacy prize. The Councils Initiative ... A journal supplement presents 10 years of research on health systems in West Africa. Following the Ebola virus epidemic ...

  13. Capacity Building

    International Nuclear Information System (INIS)

    Molloy, Brian; Mallick, Shahid

    2014-01-01

    Outcomes & Recommendations: • A significant increase is needed in the nuclear workforce, both to replace the soon-to-retire current generation and to staff the large numbers of new units planned • The key message was the importance of an integrated approach to workforce development • The IAEA and other international organisations were asked to continue their work on knowledge management, networks and E&T activities • The IAEA was requested to conduct a global survey of HR needs; the survey was initiated, but only 50% of operating countries (30% of capacity) took part, so the results were inconclusive

  14. Capacitated Dynamic Lot Sizing with Capacity Acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    One of the fundamental problems in operations management is to determine the optimal investment in capacity. Capacity investment consumes resources and the decision is often irreversible. Moreover, the available capacity level affects the action space for production and inventory planning decisions...

  15. Direct FEM-computation of load carrying capacity of highly loaded passive components; Direkte FEM - Berechnung der Tragfaehigkeit hochbeanspruchter passiver Komponenten

    Energy Technology Data Exchange (ETDEWEB)

    Staat, M; Heitzer, M [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Sicherheitsforschung und Reaktortechnik

    1998-11-01

    Detailed, inelastic FEM analyses yield accurate information about the stresses and deformations in passive components. The local loading conditions, however, cannot be directly compared with a limit load in terms of structural mechanics. Concentrating on the load-carrying capacity simplifies the analysis. Based on plasticity theory, limit and shakedown analyses calculate the load-carrying capacities directly and exactly. The paper explains the implementation of the limit and shakedown theorems in a general FEM program and the direct calculation of the load-carrying capacities of passive components. The concepts used are explained with respect to common structural analysis. Examples assuming high local stresses illustrate the application of FEM-based limit and shakedown analyses. The calculated interaction diagrams give good insight into the admissible operational loads of individual passive components. The load-carrying analysis also opens up a structure-mechanics-based approach to assessing the load to collapse of cracked components made of highly ductile, fracture-resistant material. (orig./CB)

  16. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    International Nuclear Information System (INIS)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of four when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed-memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.
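
    For illustration, the search over the task/thread balance described above can be sketched as a brute-force sweep at a fixed core count. The launcher flags follow the Cray aprun convention of that era, but the executable name, the core budget, and the use of an OMP_NUM_THREADS-style environment variable are invented for this sketch, not taken from the study.

        # Illustrative sketch only: sweep MPI-task/thread decompositions at a
        # fixed total core count (the "isocurve" constraint) and keep the
        # fastest. "./lbmhd" and TOTAL_CORES are hypothetical placeholders.
        import os
        import subprocess
        import time

        TOTAL_CORES = 256  # assumed fixed hardware budget

        def candidate_decompositions(total_cores):
            """Yield (mpi_tasks, threads_per_task) pairs that exactly fill the cores."""
            for threads in range(1, total_cores + 1):
                if total_cores % threads == 0:
                    yield total_cores // threads, threads

        def run_benchmark(tasks, threads):
            """Launch and time one run; aprun -n sets tasks, -d sets thread depth."""
            start = time.perf_counter()
            subprocess.run(
                ["aprun", "-n", str(tasks), "-d", str(threads), "./lbmhd"],
                check=True,
                env={**os.environ, "OMP_NUM_THREADS": str(threads)},
            )
            return time.perf_counter() - start

        timings = {td: run_benchmark(*td) for td in candidate_decompositions(TOTAL_CORES)}
        best = min(timings, key=timings.get)
        print("best (MPI tasks, threads/task):", best)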

  17. Computational models can predict response to HIV therapy without a genotype and may reduce treatment failure in different resource-limited settings.

    Science.gov (United States)

    Revell, A D; Wang, D; Wood, R; Morrow, C; Tempelman, H; Hamers, R L; Alvarez-Uria, G; Streinu-Cercel, A; Ene, L; Wensing, A M J; DeWolf, F; Nelson, M; Montaner, J S; Lane, H C; Larder, B A

    2013-06-01

    Genotypic HIV drug-resistance testing is typically 60%-65% predictive of response to combination antiretroviral therapy (ART) and is valuable for guiding treatment changes. Genotyping is unavailable in many resource-limited settings (RLSs). We aimed to develop models that can predict response to ART without a genotype and evaluated their potential as a treatment support tool in RLSs. Random forest models were trained to predict the probability of response to ART (≤400 copies HIV RNA/mL) using the following data from 14 891 treatment change episodes (TCEs) after virological failure, from well-resourced countries: viral load and CD4 count prior to treatment change, treatment history, drugs in the new regimen, time to follow-up and follow-up viral load. Models were assessed by cross-validation during development, with an independent set of 800 cases from well-resourced countries, plus 231 cases from Southern Africa, 206 from India and 375 from Romania. The area under the receiver operating characteristic curve (AUC) was the main outcome measure. The models achieved an AUC of 0.74-0.81 during cross-validation and 0.76-0.77 with the 800 test TCEs. They achieved AUCs of 0.58-0.65 (Southern Africa), 0.63 (India) and 0.70 (Romania). Models were more accurate for data from the well-resourced countries than for cases from Southern Africa and India (P < 0.001), but not Romania. The models identified alternative, available drug regimens predicted to result in virological response for 94% of virological failures in Southern Africa, 99% of those in India and 93% of those in Romania. We developed computational models that predict virological response to ART without a genotype with comparable accuracy to genotyping with rule-based interpretation. These models have the potential to help optimize antiretroviral therapy for patients in RLSs where genotyping is not generally available.
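
    As a sketch of the modelling approach (not the authors' actual pipeline), a genotype-free random forest of this kind can be assembled in a few lines; the feature columns below are hypothetical stand-ins for the predictors listed above, and the data are synthetic.

        # Minimal sketch: random forest predicting treatment response without a
        # genotype, scored by cross-validated AUC. All data are synthetic, so
        # the AUC here hovers near 0.5; real TCE data would be needed to
        # reproduce the reported 0.74-0.81.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.normal(4.5, 1.0, n),      # log10 baseline viral load (hypothetical)
            rng.normal(250, 120, n),      # baseline CD4 count (hypothetical)
            rng.integers(0, 15, n),       # prior drug exposures (hypothetical)
            rng.integers(30, 400, n),     # days to follow-up viral load
        ])
        y = rng.integers(0, 2, n)         # 1 = response (<=400 copies HIV RNA/mL)

        model = RandomForestClassifier(n_estimators=500, random_state=0)
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"cross-validated AUC (synthetic data): {auc:.2f}")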

  18. Capacity Markets and Market Stability

    International Nuclear Information System (INIS)

    Stauffer, Hoff

    2006-01-01

    The good news is that market stability can be achieved through a combination of longer-term contracts, auctions for far enough in the future to permit new entry, a capacity management system, and a demand curve. The bad news is that if and when stable capacity markets are designed, the markets may seem to be relatively close to where we started - with integrated resource planning. Market ideologues will find this anathema. (author)

  19. [Follow-up of patients with good exercise capacity in stress test with myocardial single-photon emission computed tomography (SPECT)].

    Science.gov (United States)

    González, Javiera; Prat, Hernán; Swett, Eduardo; Berrocal, Isabel; Fernández, René; Zhindon, Juan Pablo; Castro, Ariel; Massardo, Teresa

    2015-11-01

    The evaluation of coronary artery disease (CAD) can be performed with stress testing and myocardial SPECT tomography. To assess the predictive value of myocardial SPECT with stress testing for cardiovascular events in patients with good exercise capacity. We included 102 males aged 56 ± 10 years and 19 females aged 52 ± 10 years, all able to achieve 10 METs, ≥ 85% of the theoretical maximum heart rate, and at least 8 min in their stress test with gated 99mTc-sestamibi SPECT. Eighty two percent of patients were followed clinically for 33 ± 17 months. Sixty seven percent of patients were studied for CAD screening and the rest for assessment of known disease. The treadmill stress test was negative in 75.4%; 37% of patients with a moderate to severe Duke score presented ischemia. Normal myocardial perfusion SPECT was observed in 70.2%. Reversible defects appeared in 24.8% of cases and were of moderate or severe degree (> 10% left ventricular extension) in 56.6% of these. Only seven cases had coronary events after the SPECT: two major events (myocardial infarction and emergency coronary revascularization) and five minor events (elective revascularization) were observed during follow-up. In a multivariate analysis, SPECT ischemia was the only statistically significant parameter that increased the probability of having a major or minor event. Nearly a quarter of our patients with good exercise capacity demonstrated reversible defects on their myocardial perfusion SPECT. In the intermediate-term follow-up, a low rate of cardiac events was observed, with isotopic ischemia the only significant predictive parameter.

  20. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  1. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... Computed tomography (CT) of the sinuses ...

  2. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller-scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post-processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large, sustained strategic investment in these capacity systems over the next ten years, starting with United States Government Fiscal Year 2007 (GFY07).

  3. The Development of Educational and/or Training Computer Games for Students with Disabilities

    Science.gov (United States)

    Kwon, Jungmin

    2012-01-01

    Computer and video games have much in common with the strategies used in special education. Free resources for game development are becoming more widely available, so lay computer users, such as teachers and other practitioners, now have the capacity to develop games using a low budget and a little self-teaching. This article provides a guideline…

  4. Image microarrays derived from tissue microarrays (IMA-TMA: New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis of a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic bodies ...

  5. Uranium resources

    International Nuclear Information System (INIS)

    1976-01-01

    This is a press release issued by the OECD on 9th March 1976. It is stated that the steep increases in demand for uranium foreseen in and beyond the 1980's, with doubling times of the order of six to seven years, will inevitably create formidable problems for the industry. Further substantial efforts will be needed in prospecting for new uranium reserves. Information is given in tabular or graphical form on the following: reasonably assured resources, country by country; uranium production capacities, country by country; world nuclear power growth; world annual uranium requirements; world annual separative requirements; world annual light water reactor fuel reprocessing requirements; distribution of reactor types (LWR, SGHWR, AGR, HWR, HJR, GG, FBR); and world fuel cycle capital requirements. The information is based on the latest report on Uranium Resources Production and Demand, jointly issued by the OECD's Nuclear Energy Agency (NEA) and the International Atomic Energy Agency. (U.K.)

  6. Installed capacity in New York

    International Nuclear Information System (INIS)

    Charlton, J.

    2006-01-01

    This presentation discussed capacity issues related to the New York Independent System Operator (NYISO). The NYISO's market volume was approximately $11 billion in 2005, and it was responsible for providing 32,075 MW of electricity at peak load to its users. Regulatory uncertainty is currently discouraging investment in new generating resources. All load serving entities are required to contract for sufficient capacity in order to meet their capacity obligations. Market participants currently determine capacity and energy revenues. The NYISO market allows suppliers to recover variable costs for providing ancillary services, and the economic value of the revenue source governs decisions made in the wholesale electricity market. The installed capacity market was designed as a spot auction deficiency auction. Phased-in demand curves are used to modify the installed capacity market's design. A sloped demand curve mechanism is used to value capacity above the minimum requirement for both reliability and competition. Participation in the day-ahead market enhances competition and exerts downward pressure on energy and ancillary service market prices. It was concluded that the market structures and design features of the installed capacity markets recognize the need for system reliability in addition to encouraging robust competition and recognizing energy price caps and regulatory oversights. tabs., figs

  7. Free energy and heat capacity

    International Nuclear Information System (INIS)

    Kurata, M.; Devanathan, R.

    2015-01-01

    Free energy and heat capacity of actinide elements and compounds are important properties for the evaluation of the safety and reliable performance of nuclear fuel. They are essential inputs for models that describe complex phenomena that govern the behaviour of actinide compounds during nuclear fuels fabrication and irradiation. This chapter introduces various experimental methods to measure free energy and heat capacity to serve as inputs for models and to validate computer simulations. This is followed by a discussion of computer simulation of these properties, and recent simulations of thermophysical properties of nuclear fuel are briefly reviewed. (authors)

  8. Relationships between diffusing capacity for carbon monoxide (DLCO) and quantitative computed tomography measurements and visual assessment for chronic obstructive pulmonary disease

    Energy Technology Data Exchange (ETDEWEB)

    Nambu, Atsushi, E-mail: nambu-a@gray.plala.or.jp [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States); Department of Radiology, Teikyo University Mizonokuchi Hospital (Japan); Zach, Jordan, E-mail: ZachJ@NJHealth.org [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States); Schroeder, Joyce, E-mail: Joyce.schroeder@stanfordalumni.org [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States); Jin, Gong Yong, E-mail: gyjin@chonbuk.ac.kr [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States); Department of Radiology, Chonbuk National University Hospital (Korea, Republic of); Kim, Song Soo, E-mail: haneul88@hanmail.net [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States); Department of Radiology, Chungnam National Hospital, Chungnam National University School of Medicine (Korea, Republic of); Kim, Yu-IL, E-mail: kyionly@chonnam.ac.kr [Department of Medicine, National Jewish Health, Denver, CO (United States); Department of Internal Medicine, Chonnam National University Hospital, Gwangju (Korea, Republic of); Schnell, Christina, E-mail: SchnellC@NJHealth.org [Department of Medicine, National Jewish Health, Denver, CO (United States); Bowler, Russell, E-mail: BowlerR@NJHealth.org [Division of Pulmonary Medicine, Department of Medicine, National Jewish Health (United States); Lynch, David A., E-mail: LynchD@NJHealth.org [Department of Radiology, National Jewish Health, 1400 Jackson Street, Denver, CO 80206 (United States)

    2015-05-15

    Highlights: • Quantitative CT measurements significantly correlated with DLCO/VA. • The 15th percentile HU had the strongest correlation with DLCO/VA. • Visual scoring of emphysema had independent significant correlations with DLCO/VA. Abstract: Purpose: To evaluate the relationships between DLCO and quantitative CT (QCT) measurements and visual assessment of pulmonary emphysema, and to test the relative roles of visual and quantitative assessment of emphysema. Materials and methods: The subjects included 199 current and former cigarette smokers from the COPDGene cohort who underwent inspiratory and expiratory CT and also had diffusing capacity for carbon monoxide corrected for alveolar volume (DLCO/VA). Quantitative CT measurements included % low attenuation areas (%LAA-950ins = voxels ≤ -950 Hounsfield units (HU), %LAA-910ins, and %LAA-856ins), mean CT attenuation and the 15th percentile HU value on inspiratory CT, and %LAA-856exp (voxels ≤ -856 HU on expiratory CT). The extent of emphysema was visually assessed using a 5-point grading system. Univariate and multiple-variable linear regression analyses were employed to evaluate the correlations between DLCO/VA and QCT parameters and the visual extent of emphysema. Results: DLCO/VA correlated most strongly with the 15th percentile HU (R² = 0.440, p < 0.001), closely followed by %LAA-950ins (R² = 0.417, p < 0.001) and the visual extent of emphysema (R² = 0.411, p < 0.001). Multiple-variable analysis showed that the visual extent of emphysema and the 15th percentile HU were independent significant predictors of DLCO/VA at an R² of 0.599. Conclusions: The 15th percentile HU seems the best parameter to represent the respiratory condition in COPD. Visual assessment of emphysema provides information complementary to QCT analysis.

  9. Summary and Conclusions by the Conference President Marta Žiaková [International Conference on Human Resource Development for Nuclear Power Programmes: Building and Sustaining Capacity, Vienna (Austria), 12-16 May 2014

    International Nuclear Information System (INIS)

    Žiaková, Marta

    2014-01-01

    • Many drivers for capacity building exist, in both mature and newcomer countries, including the Action Plan on Nuclear Safety; • Since 2010 the nuclear world has changed: countries are very active in capacity building and the IAEA has responded, e.g. with the new IAEA Capacity Building Self-Assessment Methodology; • Capacity building should cover the full nuclear programme; • All levels are important: individual (staff development, new curricula), corporate (supportive to the young generation and new employees), national (a comprehensive approach needs government support), and global (internationalization of education and careers)

  10. Comparison of capacity for diagnosis and visuality of auditory ossicles at different scanning angles in the computed tomography of temporal bone

    International Nuclear Information System (INIS)

    Ogura, Akio; Nakayama, Yoshiki

    1992-01-01

    Computed tomographic (CT) scanning has made significant contributions to the diagnosis and evaluation of temporal bone lesions through thin-section, high-resolution techniques. However, these techniques involve greater radiation exposure to the lens of the patient's eye. A means was thus sought of reducing the radiation exposure by using different scanning angles, namely +15 degrees and -10 degrees to Reid's base line. The purposes of this study were to measure radiation exposure to the lens using the two tomographic planes and to compare their ability to visualize the auditory ossicles and labyrinthine structures. Visual evaluation of tomographic images of the auditory ossicles was made blind, using four rankings, by six radiologists. The statistical significance of the intergroup difference in the visualization of tomographic planes was assessed at a significance level of 0.01. Thermoluminescent dosimeter chips were placed on the cornea of a tissue-equivalent skull phantom to evaluate radiation exposure for the two tomographic planes. As a result, the tomographic plane at an angle of -10 degrees to Reid's base line allowed better visualization than the other plane for the malleus, incus, facial nerve canal, and tuba auditiva (p<0.01). Scanning at an angle of -10 degrees to Reid's base line reduced radiation exposure to approximately one-fiftieth (1/50) of that with scans at the other angle. (author)

  11. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  12. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  13. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  14. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  15. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
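
    To make the offloading trade-off concrete, a toy decision rule can compare estimated local execution time against transfer-plus-cloud time; every cost model and constant below is invented for illustration and is not part of the framework described above.

        # Toy offloading decision: ship a component to the cloud only when the
        # estimated remote cost (uplink transfer + cloud execution) beats local
        # execution on the device. All figures are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Component:
            cycles: float        # CPU cycles the component needs
            state_bytes: float   # bytes to transfer if offloaded

        DEVICE_HZ = 1.2e9        # assumed smartphone CPU speed
        CLOUD_HZ = 2.4e10        # assumed effective cloud speed
        UPLINK_BPS = 2e6         # assumed uplink bandwidth (bits/s)

        def local_time(c: Component) -> float:
            return c.cycles / DEVICE_HZ

        def remote_time(c: Component) -> float:
            return c.state_bytes * 8 / UPLINK_BPS + c.cycles / CLOUD_HZ

        def should_offload(c: Component) -> bool:
            return remote_time(c) < local_time(c)

        heavy = Component(cycles=5e10, state_bytes=2e5)   # compute-bound: offload
        chatty = Component(cycles=1e8, state_bytes=5e7)   # data-bound: stay local
        print(should_offload(heavy), should_offload(chatty))   # True False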

  16. Future computing needs for Fermilab

    International Nuclear Information System (INIS)

    1983-12-01

    The following recommendations are made: (1) Significant additional computing capacity and capability beyond the present procurement should be provided by 1986. A working group with representation from the principal computer user community should be formed to begin immediately to develop the technical specifications. High priority should be assigned to providing a large user memory, software portability and a productive computing environment. (2) A networked system of VAX-equivalent super-mini computers should be established with at least one such computer dedicated to each reasonably large experiment for both online and offline analysis. The laboratory staff responsible for mini computers should be augmented in order to handle the additional work of establishing, maintaining and coordinating this system. (3) The laboratory should move decisively to a more fully interactive environment. (4) A plan for networking both inside and outside the laboratory should be developed over the next year. (5) The laboratory resources devoted to computing, including manpower, should be increased over the next two to five years. A reasonable increase would be 50% over the next two years increasing thereafter to a level of about twice the present one. (6) A standing computer coordinating group, with membership of experts from all the principal computer user constituents of the laboratory, should be appointed by and report to the director. This group should meet on a regularly scheduled basis and be charged with continually reviewing all aspects of the laboratory computing environment

  17. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reduction, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy-maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary-computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
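
    A stripped-down version of such a consumer/policy-maker loop is sketched below; the triggers, demand shares and inflows are all invented for illustration and do not reflect the Arlington case study.

        # Toy CAS loop: consumer agents scale back outdoor water use when
        # reservoir storage crosses drought-stage triggers set by a
        # policy-maker agent. Every constant is hypothetical.
        import random

        STAGE_TRIGGERS = {0.5: 0.8, 0.3: 0.5}   # storage fraction -> allowed outdoor fraction

        def allowed_outdoor(storage_frac):
            """Return the outdoor-use multiplier for the current drought stage."""
            for trigger, cap in sorted(STAGE_TRIGGERS.items()):
                if storage_frac <= trigger:
                    return cap
            return 1.0

        random.seed(0)
        storage, capacity = 0.9, 1.0                              # reservoir state (fractions)
        agents = [random.uniform(0.2, 0.5) for _ in range(1000)]  # outdoor share of each agent's demand

        for week in range(52):
            cap = allowed_outdoor(storage)
            # each agent's weekly demand: indoor part unaffected, outdoor part scaled
            demand = sum(0.0004 * ((1 - share) + share * cap) for share in agents)
            inflow = random.uniform(0.1, 0.6)                     # stochastic weekly inflow
            storage = min(capacity, max(0.0, storage + inflow - demand))
        print(f"end-of-year storage fraction: {storage:.2f}")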

  18. Rotina computacional e equação simplificada para modelar o transporte de sedimentos num Latossolo Vermelho Distrófico [Computational routine and simplified equation for modeling sediment transport capacity in a Dystrophic Hapludox]

    Directory of Open Access Journals (Sweden)

    Gilmar E. Cerquetani

    2006-08-01

    Full Text Available The objectives of this work were to develop a computational routine to solve the Yalin equation and the Shields diagram, and to evaluate a simplified equation for modeling sediment transport capacity in a Dystrophic Hapludox that could be used in the Water Erosion Prediction Project - WEPP, as well as in other soil erosion models. Sediment transport capacity for shallow overland flow was represented as a power function of the hydraulic shear stress, which proved to be an approximation to the Yalin equation for sediment transport capacity. This simplified equation could be applied to experimental data from complex topography. The simplified equation accurately approximated the Yalin equation when calibrated using the mean hydraulic shear stress. Validation tests using independent data showed that the simplified equation performed well in estimating sediment transport capacity.
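
    The simplified relation amounts to fitting Tc = a * tau^b in log-log space; a minimal sketch is given below, with synthetic stand-in data rather than the paper's measurements.

        # Sketch of the simplified power-function model: fit Tc = a * tau**b by
        # linear regression in log-log space. The "reference" data here are
        # synthetic placeholders for Yalin-equation output, not measured values.
        import numpy as np

        tau = np.linspace(0.5, 10.0, 50)        # hydraulic shear stress (Pa), assumed range
        tc_reference = 0.08 * tau ** 1.5        # stand-in for Yalin-equation transport capacity

        b, log_a = np.polyfit(np.log(tau), np.log(tc_reference), 1)
        a = np.exp(log_a)
        print(f"fitted Tc = {a:.3f} * tau^{b:.2f}")   # recovers a = 0.080, b = 1.50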

  19. Capacity market design and renewable energy: Performance incentives, qualifying capacity, and demand curves

    Energy Technology Data Exchange (ETDEWEB)

    Botterud, Audun; Levin, Todd; Byers, Conleigh

    2018-01-01

    A review of capacity markets in the United States in the context of increasing levels of variable renewable energy finds substantial differences with respect to incentives for operational performance, methods to calculate qualifying capacity for variable renewable energy and energy storage, and demand curves for capacity. The review also reveals large differences in historical capacity market clearing prices. The authors conclude that electricity market design must continue to evolve to achieve cost-effective policies for resource adequacy.

  20. Interference and memory capacity limitations.

    Science.gov (United States)

    Endress, Ansgar D; Szabó, Szilárd

    2017-10-01

    Working memory (WM) is thought to have a fixed and limited capacity. However, the origins of these capacity limitations are debated, and generally attributed to active, attentional processes. Here, we show that the existence of interference among items in memory mathematically guarantees fixed and limited capacity limits under very general conditions, irrespective of any processing assumptions. Assuming that interference (a) increases with the number of interfering items and (b) brings memory performance to chance levels for large numbers of interfering items, capacity limits are a simple function of the relative influence of memorization and interference. In contrast, we show that time-based memory limitations do not lead to fixed memory capacity limitations that are independent of the timing properties of an experiment. We show that interference can mimic both slot-like and continuous resource-like memory limitations, suggesting that these types of memory performance might not be as different as commonly believed. We speculate that slot-like WM limitations might arise from crowding-like phenomena in memory when participants have to retrieve items. Further, based on earlier research on parallel attention and enumeration, we suggest that crowding-like phenomena might be a common reason for the 3 major cognitive capacity limitations. As suggested by Miller (1956) and Cowan (2001), these capacity limitations might arise because of a common reason, even though they likely rely on distinct processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
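
    One toy functional form consistent with the two assumptions above (interference grows with the number of items and eventually drives recall to chance) is sketched below; it is an invented illustration, not the authors' model.

        # Toy illustration: if the probability of recalling one of n items is
        # m / (m + (n - 1) * i) -- memorization strength m against interference
        # i from the other n - 1 items -- the expected number recalled,
        # n * p(n), saturates at the fixed "capacity" m / i.
        m, i = 4.0, 1.0                     # arbitrary strengths; capacity -> m / i = 4
        for n in (1, 2, 4, 8, 16, 64, 256):
            p = m / (m + (n - 1) * i)
            print(f"n={n:>3}  p(recall)={p:.3f}  expected recalled={n * p:.2f}")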

  1. Exploratory Experimentation and Computation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-02-25

    We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of data-mining on the Internet has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure that the role of proof is properly founded remains undiminished.

  2. Linear-programming-based heuristics for project capacity planning

    NARCIS (Netherlands)

    Gademann, A.J.R.M.; Schutten, J.M.J.

    2005-01-01

    Many multi-project organizations are capacity driven, which means that their operations are constrained by various scarce resources. An important planning aspect in a capacity-driven multi-project organization is capacity planning. By capacity planning, we mean the problem of matching demand for ...
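
    The core matching problem can be posed as a small linear program, which is also the building block such LP-based heuristics repeatedly solve; the data below are hypothetical.

        # Minimal capacity-planning LP (hypothetical data): assign hours of two
        # scarce resources to three projects to maximize delivered value.
        from scipy.optimize import linprog

        value = [-3.0, -2.0, -4.0]      # value per hour per project (negated: linprog minimizes)
        A_ub = [[1, 1, 0],              # resource 1 serves projects 1 and 2
                [0, 1, 1]]              # resource 2 serves projects 2 and 3
        b_ub = [40, 60]                 # weekly capacity of each resource
        bounds = [(0, 30)] * 3          # per-project demand caps

        res = linprog(value, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print("hours per project:", res.x, "total value:", -res.fun)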

  3. The Fermilab computing farms in 2000

    International Nuclear Information System (INIS)

    Troy Dawson

    2001-01-01

    The year 2000 was a year of evolutionary change for the Fermilab computer farms. Additional compute capacity was acquired by the addition of PCs for the CDF, D0 and CMS farms. This was done in preparation for Run 2 production and for CMS Monte Carlo production. Additional I/O capacity was added for all the farms. This continues the trend to standardize the I/O systems on the SGI O2x00 architecture. Strong authentication was installed on the CDF and D0 farms. The farms continue to provide large CPU resources for experiments and those users whose calculations benefit from large CPU/low IO resources. The user community will change in 2001 now that the 1999 fixed-target experiments have almost finished processing and Run 2, SDSS, miniBooNE, MINOS, BTeV, and other future experiments and projects will be the major users in the future

  4. Self managing experiment resources

    International Nuclear Information System (INIS)

    Stagni, F; Ubeda, M; Charpentier, P; Tsaregorodtsev, A; Romanovskiy, V; Roiser, S; Graciani, R

    2014-01-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimize the probability of misbehavior, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.
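
    The assess-and-propagate cycle can be caricatured in a few lines; the monitor scores, thresholds and resource name below are invented for the sketch and are not DIRAC's actual interfaces.

        # Minimal caricature of a resource status system: aggregate independent
        # monitor readings into a status and propagate it to the topology only
        # when it changes. Names and thresholds are hypothetical.
        from statistics import mean

        def assess(monitor_scores):
            """monitor_scores: reliability readings in [0, 1] from independent sources."""
            score = mean(monitor_scores)
            if score > 0.8:
                return "active"
            return "degraded" if score > 0.4 else "banned"

        topology = {"ce.example.org": "active"}
        new_status = assess([0.9, 0.35, 0.5])        # mean 0.58 -> "degraded"
        if new_status != topology["ce.example.org"]:
            topology["ce.example.org"] = new_status  # propagate the change
        print(topology)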

  5. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 3. Embedded Computer Resources Governing Documents.

    Science.gov (United States)

    1982-11-01

    1. Validation of computer resource requirements, including software, risk analyses, planning, preliminary design, security where applicable (DoD ... Technology Base Program for software basic research, exploratory development, advanced development, and technology demonstrations addressing critical ... Management Procedures (O/S CMP). The ... configuration management approach contained in the CRISP will be ...

  6. Quantum reading capacity

    International Nuclear Information System (INIS)

    Pirandola, Stefano; Braunstein, Samuel L; Lupo, Cosmo; Mancini, Stefano; Giovannetti, Vittorio

    2011-01-01

    The readout of a classical memory can be modelled as a problem of quantum channel discrimination, where a decoder retrieves information by distinguishing the different quantum channels encoded in each cell of the memory (Pirandola 2011 Phys. Rev. Lett. 106 090504). In the case of optical memories, such as CDs and DVDs, this discrimination involves lossy bosonic channels and can be remarkably boosted by the use of nonclassical light (quantum reading). Here we generalize these concepts by extending the model of memory from single-cell to multi-cell encoding. In general, information is stored in a block of cells by using a channel-codeword, i.e. a sequence of channels chosen according to a classical code. Correspondingly, the readout of data is realized by a process of ‘parallel’ channel discrimination, where the entire block of cells is probed simultaneously and decoded via an optimal collective measurement. In the limit of a large block we define the quantum reading capacity of the memory, quantifying the maximum number of readable bits per cell. This notion of capacity is nontrivial when we suitably constrain the physical resources of the decoder. For optical memories (encoding bosonic channels), such a constraint is energetic and corresponds to fixing the mean total number of photons per cell. In this case, we are able to prove a separation between the quantum reading capacity and the maximum information rate achievable by classical transmitters, i.e. arbitrary classical mixtures of coherent states. In fact, we can easily construct nonclassical transmitters that are able to outperform any classical transmitter, thus showing that the advantages of quantum reading persist in the optimal multi-cell scenario. (paper)
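
    Schematically, and in our own notation rather than a quotation from the paper, the per-cell capacity can be written as a regularized mutual information, maximized over transmitters T obeying the mean-photon-number constraint:

        % Schematic definition (our notation): X^n is the stored channel-codeword
        % over a block of n cells, Y^n the decoder outcome, and the supremum runs
        % over transmitters T whose mean photon number per cell is at most N.
        C(N) \;=\; \lim_{n \to \infty} \, \frac{1}{n} \,
                   \sup_{T \,:\, \bar{n}(T) \le N} I\bigl(X^n : Y^n\bigr)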

  7. Online Resources

    Indian Academy of Sciences (India)

    Journal of Genetics online resources, by volume: 97 (2018), 96 (2017), 95 (2016), 94 (2015), 93 (2014), 92 (2013), ...

  8. A Constraint programming-based genetic algorithm for capacity output optimization

    Directory of Open Access Journals (Sweden)

    Kate Ean Nee Goh

    2014-10-01

    Full Text Available Purpose: The manuscript presents an investigation into a constraint-programming-based genetic algorithm (CPGA) for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study. The constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple mutually influencing constraints in resource availability, process flow and product demand. The findings prove that CPGA is a practical and efficient alternative for optimizing capacity output and allows the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.
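
    For flavour, a bare-bones genetic algorithm with the capacity constraint handled by a penalty (a crude stand-in for the constraint-programming layer) might look as follows; all product data are invented.

        # Tiny illustrative GA (hypothetical data): choose how many lots of each
        # product to run so machine-hour constraints hold and output is maximized.
        # Infeasible offspring receive a penalty fitness.
        import random

        HOURS = [1.0, 2.5, 0.5]        # machine-hours per lot, per product
        CAPACITY = 120.0               # available machine-hours
        OUTPUT = [40, 110, 15]         # units per lot, per product

        def fitness(genes):
            used = sum(h * g for h, g in zip(HOURS, genes))
            total = sum(o * g for o, g in zip(OUTPUT, genes))
            return total if used <= CAPACITY else -1   # penalize constraint violation

        def mutate(genes):
            child = list(genes)
            i = random.randrange(len(child))
            child[i] = max(0, child[i] + random.choice((-1, 1)))
            return child

        random.seed(0)
        pop = [[random.randint(0, 40) for _ in HOURS] for _ in range(30)]
        for _ in range(200):
            pop.sort(key=fitness, reverse=True)        # elitist selection
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        best = max(pop, key=fitness)
        print("lots per product:", best, "output:", fitness(best))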

  9. A Computational Model of Spatial Visualization Capacity

    Science.gov (United States)

    2008-03-07

    Don Lyon; Glenn Gunzelmann; Kevin Gluck

  10. Capacity Building in Land Management

    DEFF Research Database (Denmark)

    Enemark, Stig; Ahene, Rexford

    2003-01-01

    There is a significant need for capacity building in the interdisciplinary area of land management, especially in developing countries and countries in transition, to deal with the complex issues of building efficient land information systems and sustainable institutional infrastructures. Capacity building in land management is not only a question of establishing a sufficient technological level or sufficient economic resources. It is mainly a question of understanding the interdisciplinary and cross-sectoral nature of land administration systems, and understanding the need for human resource development ... and professionals for implementing the new land policy. The curriculum combines the diploma and the bachelor level, and it combines the key areas of land surveying, land management and physical planning.

  11. Application of microarray analysis on computer cluster and cloud platforms.

    Science.gov (United States)

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
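
    The embarrassingly parallel structure the authors exploit can be shown with a permutation test split across worker processes; multiprocessing stands in here for either a cluster scheduler or cloud instances, and the expression data are synthetic.

        # Sketch of the parallelism the article relies on: permutation-test
        # iterations are independent, so they map cleanly onto cluster or cloud
        # workers. The two "expression" groups below are synthetic.
        import numpy as np
        from multiprocessing import Pool

        rng = np.random.default_rng(1)
        group_a = rng.normal(0.0, 1.0, 50)
        group_b = rng.normal(0.3, 1.0, 50)
        observed = group_a.mean() - group_b.mean()
        pooled = np.concatenate([group_a, group_b])

        def one_permutation(seed):
            """One independent iteration: shuffle labels, recompute the statistic."""
            r = np.random.default_rng(seed)
            perm = r.permutation(pooled)
            return perm[:50].mean() - perm[50:].mean()

        if __name__ == "__main__":
            with Pool() as pool:
                null = pool.map(one_permutation, range(10_000))
            p_value = np.mean(np.abs(null) >= abs(observed))
            print(f"permutation p-value: {p_value:.4f}")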

  12. Computer Labs | College of Engineering & Applied Science

    Science.gov (United States)


  13. Computer Science | Classification | College of Engineering & Applied

    Science.gov (United States)


  14. Higher education and capacity building in Africa

    DEFF Research Database (Denmark)

    Higher education has recently been recognised as a key driver for societal growth in the Global South, and capacity building of African universities is now widely included in donor policies. The question is: how do capacity-building projects affect African universities, researchers and students? Universities and their scientific knowledges are often seen to have universal qualities; therefore, capacity building may appear straightforward. Higher Education and Capacity Building in Africa contests such universalistic notions. Inspired by ideas about the 'geography of scientific knowledge' it explores ... It is a valuable resource for researchers and postgraduate students in education, development studies, African studies and human geography, as well as anthropology and history.

  15. Herpes - resources

    Science.gov (United States)

    Genital herpes - resources; Resources - genital herpes ... The following organizations are good resources for information on genital herpes: March of Dimes -- www.marchofdimes.org/complications/sexually- ...

  16. 76 FR 39470 - Integrated Resource Plan

    Science.gov (United States)

    2011-07-06

    ... in the form of hydroelectric pumped-storage capacity. Increased load demands above the capacity of ..., biomass, and wind energy, and energy storage resources. Each portfolio was optimized for the lowest net ...

  17. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time informatio...

  18. 78 FR 77161 - Grant Program To Build Tribal Energy Development Capacity

    Science.gov (United States)

    2013-12-20

    ... Feasibility studies and energy resource assessments; Purchase of resource assessment data; Research and... used to eliminate capacity gaps or obtain the development of energy resource development capacity... eliminate any identified capacity gaps; (f) Objectives of the proposal describing how the proposed project...

  19. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze them require computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:
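
    The pattern behind such high-throughput solutions can be summarized in a few lines: split one large analysis into many small, independent tasks and let throughput come from running them concurrently across whatever machines are available. The Python sketch below tiles a synthetic grid and processes the tiles in a worker pool; the tile size, in-memory data handling, and per-tile statistic are illustrative assumptions, not FORT's actual workflow.

```python
# Minimal high-throughput sketch: one big analysis becomes many small,
# independent tile tasks, processed concurrently by a worker pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from itertools import product

TILE = 256  # tile edge length in cells (illustrative)

def process_tile(args):
    """Placeholder per-tile analysis: a summary statistic over one tile."""
    grid, r, c = args
    tile = grid[r:r + TILE, c:c + TILE]
    return (r, c, float(tile.mean()))

def run_high_throughput(grid, workers=8):
    # In a real deployment each task would read its own tile from shared
    # storage instead of receiving the whole grid through pickling.
    tasks = [(grid, r, c)
             for r, c in product(range(0, grid.shape[0], TILE),
                                 range(0, grid.shape[1], TILE))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tasks))

if __name__ == "__main__":
    grid = np.random.default_rng(0).random((1024, 1024))  # synthetic raster
    results = run_high_throughput(grid)
    print(len(results), "tiles processed")
```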

  20. Genome-Wide Study of Percent Emphysema on Computed Tomography in the General Population. The Multi-Ethnic Study of Atherosclerosis Lung/SNP Health Association Resource Study

    Science.gov (United States)

    Manichaikul, Ani; Hoffman, Eric A.; Smolonska, Joanna; Gao, Wei; Cho, Michael H.; Baumhauer, Heather; Budoff, Matthew; Austin, John H. M.; Washko, George R.; Carr, J. Jeffrey; Kaufman, Joel D.; Pottinger, Tess; Powell, Charles A.; Wijmenga, Cisca; Zanen, Pieter; Groen, Harry J. M.; Postma, Dirkje S.; Wanner, Adam; Rouhani, Farshid N.; Brantly, Mark L.; Powell, Rhea; Smith, Benjamin M.; Rabinowitz, Dan; Raffel, Leslie J.; Hinckley Stukovsky, Karen D.; Crapo, James D.; Beaty, Terri H.; Hokanson, John E.; Silverman, Edwin K.; Dupuis, Josée; O’Connor, George T.; Boezen, H. Marike; Rich, Stephen S.

    2014-01-01

    Rationale: Pulmonary emphysema overlaps partially with spirometrically defined chronic obstructive pulmonary disease and is heritable, with moderately high familial clustering. Objectives: To complete a genome-wide association study (GWAS) for the percentage of emphysema-like lung on computed tomography in the Multi-Ethnic Study of Atherosclerosis (MESA) Lung/SNP Health Association Resource (SHARe) Study, a large, population-based cohort in the United States. Methods: We determined percent emphysema and upper-lower lobe ratio in emphysema, defined by lung regions less than −950 HU, on cardiac scans. Genetic analyses were reported combined across four race/ethnic groups: non-Hispanic white (n = 2,587), African American (n = 2,510), Hispanic (n = 2,113), and Chinese (n = 704), and stratified by race and ethnicity. Measurements and Main Results: Among 7,914 participants, we identified regions at genome-wide significance for percent emphysema in or near SNRPF (rs7957346; P = 2.2 × 10^-8) and PPT2 (rs10947233; P = 3.2 × 10^-8), both of which replicated in an additional 6,023 individuals of European ancestry. Both single-nucleotide polymorphisms lie in or near genes previously implicated as influencing lung function, and analyses including lung function revealed independent associations for percent emphysema. Among Hispanics, we identified a genetic locus for upper-lower lobe ratio near the α-mannosidase–related gene MAN2B1 (rs10411619; P = 1.1 × 10^-9; minor allele frequency [MAF], 4.4%). Among Chinese participants, we identified single-nucleotide polymorphisms associated with upper-lower lobe ratio near DHX15 (rs7698250; P = 1.8 × 10^-10; MAF, 2.7%) and MGAT5B (rs7221059; P = 2.7 × 10^-8; MAF, 2.6%), which acts on α-linked mannose. Among African Americans, a locus near a third α-mannosidase–related gene, MAN1C1 (rs12130495; P = 9.9 × 10^-6; MAF, 13.3%), was associated with percent emphysema. Conclusions: Our results suggest that some genes previously identified as
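
    For readers unfamiliar with the imaging measure, "percent emphysema" as used above is simply the share of lung voxels whose CT attenuation falls below −950 Hounsfield units. A minimal Python sketch, using synthetic attenuation values rather than real segmented CT data, might look like this:

```python
# Minimal sketch of the "percent emphysema" measure: the percentage of
# lung voxels below -950 HU. The input here is synthetic; a real analysis
# would use attenuation values from segmented lung voxels on the CT scan.
import numpy as np

def percent_emphysema(lung_hu: np.ndarray, threshold: float = -950.0) -> float:
    """Percentage of lung voxels with attenuation below the threshold."""
    return 100.0 * np.mean(lung_hu < threshold)

# Synthetic lung attenuation values (HU), for illustration only.
rng = np.random.default_rng(1)
lung = rng.normal(loc=-860.0, scale=60.0, size=100_000)
print(f"percent emphysema: {percent_emphysema(lung):.2f}%")
```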