WorldWideScience

Sample records for computing capacity resource

  1. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  2. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  3. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing, and the ATLAS experiment is no exception. In 2017, ATLAS steadily exploited almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed in geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. ATLAS solutions for job and data management (PanDA and Rucio) were generalized and are now used also by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and, primarily, supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...
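    As a quick back-of-envelope check on the scale quoted above, a minimal Python sketch (the ~10 HS06-per-core conversion factor is inferred from the abstract's own two figures, not quoted from it):

```python
def hs06_to_cores(hs06_pledged: float, hs06_per_core: float = 10.0) -> float:
    """Convert an HS06 benchmark pledge into an equivalent number of standard cores."""
    return hs06_pledged / hs06_per_core

# The abstract's figures: ~3M HS06 corresponds to about 300 000 standard CPU cores.
print(hs06_to_cores(3_000_000))  # -> 300000.0
```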

  4. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). An OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed across several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated at the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources through the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on TORQUE with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and enjoy higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  5. Preliminary research on quantitative methods of water resources carrying capacity based on water resources balance sheet

    Science.gov (United States)

    Wang, Yanqiu; Huang, Xiaorong; Gao, Linyun; Guo, Biying; Ma, Kai

    2018-06-01

    Water resources are not only basic natural resources, but also strategic economic resources and ecological control factors. Water resources carrying capacity constrains the sustainable development of regional economy and society. Studies of water resources carrying capacity can provide helpful information about how the socioeconomic system is both supported and restrained by the water resources system. Based on the research of different scholars, the major problems in the study of water resources carrying capacity can be summarized as follows: the definition of water resources carrying capacity is not yet unified; quantification methods built on these inconsistent definitions are poor in operability; the current quantitative research methods do not fully reflect the principles of sustainable development; and it is difficult to quantify the relationship among water resources, the economy and society, and the ecological environment. Therefore, it is necessary to develop a better quantitative evaluation method to determine regional water resources carrying capacity. This paper proposes a new approach to quantifying water resources carrying capacity, namely compiling a water resources balance sheet, in order to grasp regional water resources depletion and water environmental degradation (as well as regional water resources stock assets and liabilities), figure out the squeeze that socioeconomic activities place on the environment, and discuss quantitative calculation methods and a technical route for water resources carrying capacity that embody the substance of sustainable development.

  6. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  7. Civil Service Human Resource Capacity and Information Technology

    African Journals Online (AJOL)

    Tesfaye

    2009-01-01

    Jan 1, 2009 ... had no impact on the size of jobs that require a high level of human resource capacity. Furthermore ... level human resource capacity has an effect on the size of supervisors, which is the main ... depreciation. This indicates ...

  8. Two-period resource duopoly with endogenous intertemporal capacity constraints

    International Nuclear Information System (INIS)

    Berk, Istemi

    2014-01-01

    This paper analyzes strategic firm behavior within the context of a two-period resource duopoly model in which firms face endogenous intertemporal capacity constraints. Firms are allowed to invest in capacity between the two periods in order to increase their initial endowment of exhaustible resource stocks. Using this setup, we find that the equilibrium price weakly decreases over time. Moreover, an asymmetric distribution of initial resource stocks leads to a significant change in the equilibrium outcome, provided that firms do not have the same cost structure in capacity additions. It is also verified that if only one company is capable of investing in capacity, the market moves to a more concentrated structure in the second period.
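    For readers who want the shape of such a model, a schematic two-period formulation (notation and functional forms are assumptions for illustration, not taken from the paper):

```latex
% Firm i holds initial stock S_i, sells q_i^t in period t = 1, 2, and may add
% capacity k_i between the periods at unit cost c_i, subject to the cumulative
% extraction limit. P is inverse demand, delta the discount factor.
\begin{align*}
\max_{q_i^1,\; q_i^2,\; k_i \,\ge\, 0} \quad
  & P\!\left(q_1^1 + q_2^1\right) q_i^1
    + \delta\, P\!\left(q_1^2 + q_2^2\right) q_i^2 - c_i k_i \\
\text{s.t.} \quad
  & q_i^1 + q_i^2 \le S_i + k_i .
\end{align*}
% The weakly decreasing equilibrium price of the abstract corresponds to p^1 >= p^2.
```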

  9. Two-period resource duopoly with endogenous intertemporal capacity constraints

    Energy Technology Data Exchange (ETDEWEB)

    Berk, Istemi

    2014-07-15

    This paper analyzes strategic firm behavior within the context of a two-period resource duopoly model in which firms face endogenous intertemporal capacity constraints. Firms are allowed to invest in capacity between the two periods in order to increase their initial endowment of exhaustible resource stocks. Using this setup, we find that the equilibrium price weakly decreases over time. Moreover, an asymmetric distribution of initial resource stocks leads to a significant change in the equilibrium outcome, provided that firms do not have the same cost structure in capacity additions. It is also verified that if only one company is capable of investing in capacity, the market moves to a more concentrated structure in the second period.

  10. Classification of CO2 Geologic Storage: Resource and Capacity

    Science.gov (United States)

    Frailey, S.M.; Finley, R.J.

    2009-01-01

    The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe the very specific and more certain quantities such as volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk or higher certainty compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of

  11. Adaptive capacity and community-based natural resource management.

    Science.gov (United States)

    Armitage, Derek

    2005-06-01

    Why do some community-based natural resource management strategies perform better than others? Commons theorists have approached this question by developing institutional design principles to address collective choice situations, while other analysts have critiqued the underlying assumptions of community-based resource management. However, efforts to enhance community-based natural resource management performance also require an analysis of exogenous and endogenous variables that influence how social actors not only act collectively but do so in ways that respond to changing circumstances, foster learning, and build capacity for management adaptation. Drawing on examples from northern Canada and Southeast Asia, this article examines the relationship among adaptive capacity, community-based resource management performance, and the socio-institutional determinants of collective action, such as technical, financial, and legal constraints, and complex issues of politics, scale, knowledge, community and culture. An emphasis on adaptive capacity responds to a conceptual weakness in community-based natural resource management and highlights an emerging research and policy discourse that builds upon static design principles and the contested concepts in current management practice.

  12. The state of human dimensions capacity for natural resource management: needs, knowledge, and resources

    Science.gov (United States)

    Sexton, Natalie R.; Leong, Kirsten M.; Milley, Brad J.; Clarke, Melinda M.; Teel, Tara L.; Chase, Mark A.; Dietsch, Alia M.

    2013-01-01

    The social sciences have become increasingly important in understanding natural resource management contexts and audiences, and are essential in the design and delivery of effective and durable management strategies. Yet many agencies and organizations do not have the necessary capacity in this area. We draw on the textbook definition of HD: how and why people value natural resources, what benefits people seek and derive from those resources, and how people affect and are affected by those resources and their management (Decker, Brown, and Seimer 2001). Clearly articulating how HD information can be used and integrated into natural resource management planning and decision-making is an important challenge faced by the HD field. To address this challenge, we formed a collaborative team to explore the issue of HD capacity-building for natural resource organizations and to advance the HD field. We define HD capacity as activities, efforts, and resources that enhance the ability of HD researchers and practitioners and natural resource managers and decision-makers to understand and address the social aspects of conservation. Specifically, we sought to examine current barriers to the integration of HD into natural resource management, the knowledge needed to improve HD capacity, and existing HD tools, resources, and training opportunities. We conducted a needs assessment of HD experts and practitioners, developed a framework for considering HD activities that can contribute both directly and indirectly throughout any phase of an adaptive management cycle, and held a workshop to review preliminary findings and gather additional input through breakout group discussions. This paper provides highlights from our collaborative initiative to help frame and inform future HD capacity-building efforts in natural resource organizations, and also provides a list of existing human dimensions tools and resources.

  13. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm consisting of linearized decision-tree rules based on three conditions (service size, completion time, and VM capacity) for managing and delegating user requests in order to balance workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance workload, improve resource allocation, optimize big data distribution, and achieve better performance than other existing methods.
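    A minimal sketch of what such linearized decision-tree delegation rules could look like (thresholds, units, and the function signature are illustrative assumptions, not the paper's algorithm):

```python
# Decide whether a user request is served by fog or delegated to the cloud,
# based on the three conditions named in the abstract: service size,
# completion time (deadline), and available VM capacity in the fog layer.
def delegate(service_size_mb: float, deadline_s: float, fog_vm_capacity_mb: float) -> str:
    if service_size_mb > fog_vm_capacity_mb:
        return "cloud"  # fog VMs cannot host the service at all
    if deadline_s < 0.5:
        return "fog"    # delay-sensitive: avoid the high latency to the centralized cloud
    if service_size_mb > 0.8 * fog_vm_capacity_mb:
        return "cloud"  # keep fog headroom free to balance the workload
    return "fog"

print(delegate(service_size_mb=120, deadline_s=0.2, fog_vm_capacity_mb=512))  # -> fog
```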

  14. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable- and value-ordering heuristics that exploit the properties of resource envelopes more directly.
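    The core primitive here is a maximum-flow computation over an auxiliary network; a minimal sketch with networkx (the construction of the auxiliary network from the 2N events of the temporal plan is omitted, and the node names and capacities below are illustrative):

```python
import networkx as nx

# Toy auxiliary flow network; in the envelope algorithm the inner nodes would
# be derived from the events of the temporal plan, with capacities taken from
# the resource usage of the activities.
G = nx.DiGraph()
G.add_edge("s", "e1", capacity=3)
G.add_edge("s", "e2", capacity=2)
G.add_edge("e1", "e3", capacity=2)
G.add_edge("e2", "e3", capacity=2)
G.add_edge("e3", "t", capacity=4)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)  # -> 4
```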

  15. Computational Chemistry Capacity Building in an Underprivileged ...

    African Journals Online (AJOL)

    Bridging the gap with the other continents requires the identification of capacity ... university in South Africa), where computational chemistry research capacity has ... testifies the feasibility of such capacity building also in conditions of limited ...

  16. Challenges of human resource capacity building assistance

    International Nuclear Information System (INIS)

    Noro, Naoko

    2013-01-01

    At the first Nuclear Security Summit in Washington DC in 2010, the Integrated Support Center for Nuclear Nonproliferation and Nuclear Security (ISCN) of the Japan Atomic Energy Agency was established based on Japan's National Statement, which expressed Japan's strong commitment to contribute to the strengthening of nuclear security in the Asian region. ISCN began its activities in JFY 2011. One of the main activities of ISCN is human resource capacity building support. Since JFY 2011, ISCN has offered various nuclear security training courses, seminars and workshops, and the total number of participants in ISCN events has exceeded 700. For the past three years, ISCN has been facing a variety of challenges in nuclear security human resource assistance. This paper will briefly illustrate ISCN's achievements in the past years and introduce the challenges and measures of ISCN in nuclear security human resource capacity building assistance. (author)

  17. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, involving vastly different architectures; the process is beyond the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  18. Environmental sustainability control by water resources carrying capacity concept: application significance in Indonesia

    Science.gov (United States)

    Djuwansyah, M. R.

    2018-02-01

    This paper reviews the use of the water resources carrying capacity concept to control environmental sustainability, with particular attention to the case of Indonesia. Carrying capacity is a measure of the capability of an environment or an area to support humans and other lives, as well as their activities, in a sustainable manner. Recurrent water-related hazards and environmental problems indicate that environments are being exploited beyond their carrying capacity. Environmental carrying capacity (ECC) assessment includes land and water carrying capacity analysis of an area, and it is suggested to always refer to the dimension of the related watershed as an incorporated hydrologic unit when estimating resource availability. Many countries use this measure to forecast the future sustainability of regional development based on water availability. Direct water resources carrying capacity (WRCC) assessment involves determining the population, together with its activities, that could be supported by available water, whereas indirect WRCC assessment comprises the analysis of the supply-demand balance status of water. Water resources, rather than land resources, primarily limit environmental carrying capacity, since land capability constraints are easier to overcome. WRCC is a crucial factor for controlling land and water resource utilization, particularly in growing, densely populated areas. Even though the capability of water resources is relatively perpetual, the utilization pattern of these resources may change with the socio-economic and cultural technology level of the users, which is why WRCC should be evaluated periodically to maintain the usage sustainability of water resources and the environment.

  19. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  20. Research on Water Resources Design Carrying Capacity

    Directory of Open Access Journals (Sweden)

    Guanghua Qin

    2016-04-01

    Water resources carrying capacity (WRCC) is a recently proposed management concept, which aims to support sustainable socio-economic development in a region or basin. However, the calculation of future WRCC is not well considered in most studies, because water resources and the socio-economic development mode for an area or city in the future are quite uncertain. This paper focuses on the limits of traditional methods of WRCC and proposes a new concept, water resources design carrying capacity (WRDCC), which incorporates the concept of design. In WRDCC, the population size that the local water resources can support is calculated based on the balance of water supply and water consumption, under the design water supply and design socio-economic development mode. The WRDCC of Chengdu city in China is calculated. Results show that the WRDCC (population size) of Chengdu city in development mode I (II, III) will be 997 × 10^4 (770 × 10^4, 504 × 10^4) in 2020, and 934 × 10^4 (759 × 10^4, 462 × 10^4) in 2030. Comparing the actual population to the carrying population (WRDCC) in 2020 and 2030, a bigger gap will appear, which means there will be more and more pressure on sustainable socio-economic development.
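    The supply/consumption balance at the heart of WRDCC reduces to a simple quotient; a minimal sketch (function name and numbers are illustrative, not the paper's data):

```python
def wrdcc_population(design_supply_m3: float, per_capita_use_m3: float) -> float:
    """Carrying population = design water supply / per-capita water consumption."""
    return design_supply_m3 / per_capita_use_m3

# Illustrative values only: 3.0e9 m3/yr design supply, 300 m3 per person per year.
print(wrdcc_population(3.0e9, 300.0))  # -> 10000000.0 people
```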

  1. Carrying capacity of water resources in Bandung Basin

    Science.gov (United States)

    Marganingrum, D.

    2018-02-01

    The concept of carrying capacity is widely used in various sectors as a management tool for sustainable development processes. The idea has also been applied at the watershed or basin scale. The Bandung Basin is the upstream part of the Citarum watershed, known as one of the national strategic areas. This area has developed into a metropolitan area loaded with various environmental problems. Therefore, research related to environmental carrying capacity in this area becomes a strategic issue. However, research on environmental carrying capacity done in this area so far is still partial, whether in terms of water balance, land suitability, ecological footprint, or the balance of supply and demand of resources. This paper describes the application of the concept of integrated environmental carrying capacity in order to cope with increasingly complex and dynamic environmental problems. The sector in focus is water resources. The approach combines the concept of maximum balance with system dynamics, namely the dynamics of ecology and population, which cannot be separated from one another as a unity of the Bandung Basin ecosystem.

  2. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  3. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
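    A toy sketch of the base-line vs. bursting cost comparison described above (all prices, lifetimes, and utilization figures are placeholders, not numbers from the study):

```python
# Dedicated capacity amortizes capital cost over its lifetime plus yearly
# operations; cloud capacity is billed purely by usage.
def yearly_cost_dedicated(cores: int, capex_per_core: float = 200.0,
                          opex_per_core: float = 50.0, lifetime_years: float = 4.0) -> float:
    return cores * (capex_per_core / lifetime_years + opex_per_core)

def yearly_cost_cloud(cores: int, hours_used: float, price_per_core_hour: float = 0.05) -> float:
    return cores * hours_used * price_per_core_hour

cores = 1000
print(yearly_cost_dedicated(cores))              # steady base-line load
print(yearly_cost_cloud(cores, hours_used=500))  # occasional bursting only
```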

  4. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side to virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  5. [Evaluation of comprehensive capacity of resources and environments in Poyang Lake Eco-economic Zone].

    Science.gov (United States)

    Song, Yan-Chun; Yu, Dan

    2014-10-01

    With the development of society and the economy, the contradictions among population, resources and environment are increasingly severe. As a result, the capacity of resources and environment has become one of the focal issues for many countries and regions. Through investigating and analyzing the present situation and the existing problems of resources and environment in the Poyang Lake Eco-economic Zone, seven factors were chosen as the evaluation criterion layer, namely, land resources, water resources, biological resources, mineral resources, ecological-geological environment, water environment and atmospheric environment. Based on the single factor evaluation results and with the county as the evaluation unit, the comprehensive capacity of resources and environment in the Poyang Lake Eco-economic Zone was evaluated by using the state space method. The results showed that the zone boasts abundant biological resources, quality atmosphere and water environment, and a relatively stable geological environment, while being restricted by land, water and mineral resources. Currently, although the comprehensive capacity of the resources and environments in the Poyang Lake Eco-economic Zone is not overloaded as a whole, it is already overloaded in some counties/districts. The state space model, with clear indication and high accuracy, could serve as another approach to evaluating the comprehensive capacity of regional resources and environment.

  6. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab]

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for, CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  7. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the very large number of resources involved, the performance of Cloud resource management policies is difficult to evaluate and optimize efficiently. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  8. Concept and Connotation of Water Resources Carrying Capacity in Water Ecological Civilization Construction

    Science.gov (United States)

    Chao, Zhilong; Song, Xiaoyu; Feng, Xianghua

    2018-01-01

    Water ecological civilization construction is based on the water resources carrying capacity, guided by the concept of sustainable development, and adheres to the idea of human-water harmony. This paper comprehensively analyzes the concept and characteristics of water resources carrying capacity in water ecological civilization construction, and discusses the research methods and evaluation index system of water carrying capacity in this context; finally, it points out the problems of water carrying capacity in water ecological civilization construction and their solutions, and puts forward prospects for future research.

  9. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Wide application of Internet of Things (IoT) systems has been increasingly demanding more hardware facilities for processing various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing enables storage and computing services to be performed at the edge of the network to extend cloud computing. However, there are problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications, and it is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.

  10. Biophysical constraints on the computational capacity of biochemical signaling networks

    Science.gov (United States)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
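    For reference, the perceptron bound invoked above is Cover's function-counting theorem:

```latex
% Number of dichotomies of n points in general position in R^d that a linear
% threshold unit can realize:
\[
  C(n, d) \;=\; 2 \sum_{k=0}^{d-1} \binom{n-1}{k},
\]
% so for large d nearly all dichotomies of up to n = 2d points are realizable,
% giving a perceptron a capacity of roughly two patterns per weight.
```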

  11. Refining teacher design capacity: mathematics teachers' interactions with digital curriculum resources

    NARCIS (Netherlands)

    Pepin, B.; Gueudet, G.; Trouche, L.

    2017-01-01

    The goal of this conceptual paper is to develop enhanced understandings of mathematics teacher design and design capacity when interacting with digital curriculum resources. We argue that digital resources in particular offer incentives and increasing opportunities for mathematics teachers’ design,

  12. Construction of an evaluation index system of water resources bearing capacity: An empirical study in Xi’an, China

    Science.gov (United States)

    Qu, X. E.; Zhang, L. L.

    2017-08-01

    In this paper, a comprehensive evaluation of the water resources bearing capacity of Xi’an is performed. By constructing a comprehensive evaluation index system of the water resources bearing capacity that includes water resources, economy, society, and ecological environment, we empirically studied the dynamic change and regional differences of the water resources bearing capacities of Xi’an districts through the TOPSIS method (Technique for Order Preference by Similarity to an Ideal Solution). Results show that the water resources bearing capacity of Xi’an significantly increased over time, and the contributions of the subsystems from high to low are as follows: water resources subsystem, social subsystem, ecological subsystem, and economic subsystem. Furthermore, there are large differences between the water resources bearing capacities of the different districts in Xi’an. The water resources bearing capacities from high to low are the urban areas, Huxian, Zhouzhi, Gaoling, and Lantian. Overall, the water resources bearing capacity of Xi’an is still at a low level, which is highly related to the scarcity of water resources, population pressure, insufficient water-saving awareness, an irrational industrial structure, low water-use efficiency, and so on.
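    A minimal TOPSIS sketch in Python (illustrative data; the paper's actual index system has more indicators and uses district-level statistics):

```python
import numpy as np

# Rows are alternatives (districts), columns are benefit-type indicators.
X = np.array([[0.7, 0.5, 0.9],
              [0.4, 0.8, 0.6],
              [0.9, 0.3, 0.5]], dtype=float)
w = np.array([0.5, 0.3, 0.2])               # indicator weights, sum to 1

V = w * X / np.linalg.norm(X, axis=0)       # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)         # higher = closer to the ideal
print(closeness)                            # relative bearing-capacity ranking
```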

  13. Computer Resources | College of Engineering & Applied Science

    Science.gov (United States)


  14. Improving resource capacity planning in hospitals with business approaches.

    NARCIS (Netherlands)

    van Lent, Wineke Agnes Marieke; van Lent, W.A.M.

    2011-01-01

    This dissertation contributes to knowledge on the translation of approaches from business and services to improve resource capacity planning at the tactical and operational levels in (oncologic) hospital care. The following studies were presented: * Chapter 2 surveyed the business approaches

  15. Planning for partnerships: Maximizing surge capacity resources through service learning.

    Science.gov (United States)

    Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B

    2015-01-01

    Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.

  16. Application of the Computer Capacity to the Analysis of Processors Evolution

    OpenAIRE

    Ryabko, Boris; Rakitskiy, Anton

    2017-01-01

    The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, the manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, due to the accessibility of detailed description of all their technical characteristics.
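    The underlying definition (as given in the computer-capacity line of work this paper extends; notation paraphrased here) mirrors Shannon's capacity of a discrete noiseless channel:

```latex
% If instruction u of the instruction set I takes tau(u) clock cycles, the
% computer capacity is
\[
  C(I) \;=\; \log_2 X_0 ,
  \qquad \text{where } X_0 \text{ is the largest real root of }
  \sum_{u \in I} X^{-\tau(u)} = 1 .
\]
```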

  17. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  18. Evaluation of Water Resources Carrying Capacity in Shandong Province Based on Fuzzy Comprehensive Evaluation

    Directory of Open Access Journals (Sweden)

    Zhao Qiang

    2018-01-01

    Water resources carrying capacity is the maximum social and economic development that the available water resources can support. Based on an investigation and statistical analysis of the current situation of water resources in Shandong Province, this paper selects 13 evaluation factors: per capita water resources, water resources utilization, water supply modulus, rainfall, per capita GDP, population density, per capita water consumption, water consumption per million yuan, water consumption per unit of industrial output value, agricultural output value of farmland, irrigation rate of cultivated land, water consumption rate of the ecological environment, and forest coverage rate. Then, a fuzzy comprehensive evaluation model was used to evaluate the water resources carrying capacity status. The results showed that the comprehensive evaluation results of water resources in Shandong Province were lower than 0.6 in 2001-2009 and higher than 0.6 in 2010-2015, indicating that the water resources carrying capacity of Shandong Province has improved. In addition, most years had a value of less than 0.6, and individual years fell below 0.4, with relatively large interannual changes; from this we can see that the water resources carrying capacity of Shandong Province is generally weak and varies considerably between years.
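    A minimal fuzzy comprehensive evaluation sketch (weights, membership matrix, and grade scores are illustrative, not the paper's 13-factor data):

```python
import numpy as np

# R[i, j] is the membership degree of evaluation factor i in grade j;
# the fuzzy evaluation vector is the weighted composition b = w . R.
w = np.array([0.4, 0.35, 0.25])          # factor weights, sum to 1
R = np.array([[0.1, 0.3, 0.6],           # factor 1 memberships per grade
              [0.2, 0.5, 0.3],           # factor 2
              [0.4, 0.4, 0.2]])          # factor 3
b = w @ R                                # fuzzy evaluation vector
grade_values = np.array([0.2, 0.5, 0.8]) # numeric score assigned to each grade
score = float(b @ grade_values)          # composite score, cf. the 0.6 threshold
print(b, score)
```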

  19. Modeling water resources as a constraint in electricity capacity expansion models

    Science.gov (United States)

    Newmark, R. L.; Macknick, J.; Cohen, S.; Tidwell, V. C.; Woldeyesus, T.; Martinez, A.

    2013-12-01

    In the United States, the electric power sector is the largest withdrawer of freshwater in the nation. The primary demand for water from the electricity sector is for thermoelectric power plant cooling. Areas likely to see the largest near-term growth in population and energy usage, the Southwest and the Southeast, are also facing freshwater scarcity and have experienced water-related power reliability issues in the past decade. Lack of water may become a barrier for new conventionally-cooled power plants, and alternative cooling systems will impact technology cost and performance. Although water is integral to electricity generation, it has long been neglected as a constraint in future electricity system projections. Assessing the impact of water resource scarcity on energy infrastructure development is critical, both for conventional and renewable energy technologies. Efficiently utilizing all water types, including wastewater and brackish sources, or utilizing dry-cooling technologies, will be essential for transitioning to a low-carbon electricity system. This work provides the first demonstration of a national electric system capacity expansion model that incorporates water resources as a constraint on the current and future U.S. electricity system. The Regional Electricity Deployment System (ReEDS) model was enhanced to represent multiple cooling technology types and limited water resource availability in its optimization of electricity sector capacity expansion to 2050. The ReEDS model has high geographic and temporal resolution, making it a suitable model for incorporating water resources, which are inherently seasonal and watershed-specific. Cooling system technologies were assigned varying costs (capital, operations and maintenance), and performance parameters, reflecting inherent tradeoffs in water impacts and operating characteristics. Water rights supply curves were developed for each of the power balancing regions in ReEDS. Supply curves include costs

  20. Evaluation of Resources Carrying Capacity in China Based on Remote Sensing and GIS

    Science.gov (United States)

    Liu, K.; Gan, Y. H.; Zhang, T.; Luo, Z. Y.; Wang, J. J.; Lin, F. N.

    2018-04-01

    Based on 1:250,000 basic geographic information data, this paper accurately extracted information on arable land, grassland (wetland), forest land, water area, and construction land. It modified the comprehensive CCRR model so that the carrying capacity calculation takes resource quality into consideration, ultimately achieving a comprehensive assessment of CCRR status in China. The ten cities where the carrying capacity of resources was most overloaded were Wenzhou, Shanghai, Chengdu, Baoding, Shantou, Jieyang, Dongguan, Fuyang, Zhoukou and Handan; these cities are mostly distributed in central and southern areas with convenient transportation and more developed economies. Among the cities in surplus status, the resources carrying capacity of Hulun Buir was the most abundant, followed by Heihe, Bayingolin Mongol Autonomous Prefecture, Qiqihar, Chifeng and Jiamusi, all of which were located in northeastern China with small populations and plentiful cultivated land.

  1. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role. But it is not possible for standalone clouds to handle everything with the increasing user demands. For scalability and better service provisioning, at times, clouds have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its early stages, and resource management is one of its key concerns. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model, keeping in view different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  2. Development of human resource capacity building assistance for nuclear security

    International Nuclear Information System (INIS)

    Nakamura, Yo; Noro, Naoko

    2014-01-01

    The Integrated Support Center for Nuclear Nonproliferation and Nuclear Security (ISCN) of the Japan Atomic Energy Agency (JAEA) has been providing nuclear security human resource development projects targeting nuclear emerging countries in Asia, in cooperation with the authorities concerned, including the Sandia National Laboratories (SNL) and the International Atomic Energy Agency (IAEA). In the aftermath of the attacks of Sept. 11, the threat of terrorism was internationally recognized, and human resource capacity building is thus underway as an urgent task. In order to respond to emerging threats, the human resource capacity building that ISCN has implemented thus far needs to be multilaterally analyzed in order to develop more effective training programs. This paper studies ISCN's future direction by analyzing its achievements, and introduces the collaborative relationship with SNL, which contributes to reflecting and maintaining international trends in the content of nuclear security training; the nuclear security enhancement support that Japan is to provide to nuclear emerging countries in Asia; and the achievements of the nuclear security training programs that ISCN has implemented. (author)

  3. System capacity and economic modeling computer tool for satellite mobile communications systems

    Science.gov (United States)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain whether a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 1-2-3 are used for the model in order to provide as universal an application as possible, such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
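    A toy sketch of coupling a capacity estimate to a revenue line, in the spirit of the combined engineering/financial model described above (all figures invented; the actual tool models up to 16 terminal types and several service qualities):

```python
# Engineering side: how many voice channels fit in the transponder bandwidth.
def channel_capacity(bandwidth_mhz: float, khz_per_channel: float = 30.0) -> int:
    return int(bandwidth_mhz * 1000 / khz_per_channel)

# Financial side: a single revenue line derived from the capacity estimate.
def annual_revenue(channels: int, utilization: float, price_per_channel_hour: float) -> float:
    return channels * utilization * price_per_channel_hour * 8760  # hours/year

ch = channel_capacity(10.0)  # 10 MHz of usable bandwidth
print(ch, annual_revenue(ch, utilization=0.3, price_per_channel_hour=1.5))
```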

  4. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing the automated resource exclusion and recovery tools, that help re-focus operational manpower to areas which have yet to be automated, and improve utilization of available computing resources. We present recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in testing machinery, machine learning algorithms for anomaly detection, categorized resources as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  5. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has become a focus of educational and business communities. Their concerns include the need to improve the Quality of Service (QoS) provided, as well as reliability, performance, and cost reduction. Cloud computing provides many benefits in terms of low cost and accessibility of data, and ensuring these benefits is considered to be a major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  6. Aggregated Computational Toxicology Resource (ACTOR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACTOR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  7. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  8. Capacity Expansion and Reliability Evaluation on the Networks Flows with Continuous Stochastic Functional Capacity

    Directory of Open Access Journals (Sweden)

    F. Hamzezadeh

    2014-01-01

    In many systems, such as computer networks, fuel distribution, and transportation systems, it is necessary to change the capacity of some arcs in order to increase the maximum flow value from source s to sink t, while the capacity change incurs minimum cost. In real-time networks, some factors cause loss of arc flow; for example, in some flow distribution systems, evaporation, erosion or sediment in pipes waste the flow. Here we define a real capacity, the so-called functional capacity, which is the operational capacity of an arc. In other words, the functional capacity of an arc equals the maximum flow that may possibly pass through the arc. Increasing the functional capacities of arcs incurs some cost, and a certain resource is available to cover the costs. First, we construct a mathematical model to minimize the total cost of expanding the functional capacities to the required levels. Then, we consider the loss of flow on each arc as a stochastic variable and compute the system reliability.
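    A Monte Carlo sketch of the reliability computation described in the last sentence (topology, loss model, and demand are illustrative assumptions):

```python
import random
import networkx as nx

# Each arc's functional capacity is its nominal capacity minus a stochastic
# loss; reliability is the probability that the max flow still meets demand.
def sample_network() -> nx.DiGraph:
    G = nx.DiGraph()
    for u, v, cap in [("s", "a", 5), ("s", "b", 4), ("a", "t", 4), ("b", "t", 5)]:
        loss = random.uniform(0.0, 0.3)  # up to 30% of the flow is wasted
        G.add_edge(u, v, capacity=cap * (1 - loss))
    return G

demand, trials = 7.0, 10_000
ok = sum(nx.maximum_flow_value(sample_network(), "s", "t") >= demand
         for _ in range(trials))
print(ok / trials)  # estimated system reliability
```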

  9. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, James [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-01

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes the (1) contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data is available, greatly improving the representation of challenges
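    A compressed sketch of the 8760-hour idea (synthetic data, not ReEDS inputs or the paper's method in full): credit VG with its average output over the top net-load hours, and count as curtailment any VG output the system cannot absorb above an inflexible thermal minimum:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(8760)
load = 800 + 200 * np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 30, 8760)  # MW
vg = np.clip(rng.normal(150, 80, 8760), 0, None)  # hourly variable generation, MW

net_load = load - vg
top = np.argsort(net_load)[-100:]       # the 100 highest net-load hours
capacity_value = vg[top].mean()         # MW credited toward system peak

must_run = 400.0                        # inflexible thermal minimum, MW
curtailed = np.clip(vg - (load - must_run), 0, None).sum()  # MWh over the year
print(capacity_value, curtailed)
```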

  10. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of users and their experiments. In particular, on Google's cloud platform, Google Compute Engine (GCE), uploading a user's own virtual machine images is not possible. This precludes the use of ready-made technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  11. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb uses its specific Dirac extension (LHCbDirac) as the interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it interacts with multiple types of infrastructures in commercial and institutional clouds through multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages the Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been weighed against the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on the operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  12. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb uses its specific Dirac extension (LHCbDirac) as the interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it interacts with multiple types of infrastructures in commercial and institutional clouds through multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages the Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been weighed against the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on the operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  13. Strengthening Capacity to Respond to Computer Security Incidents ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... in the form of spam, improper access to confidential data and cyber theft. ... These teams are usually known as computer security incident response teams ... regional capacity for preventing and responding to cyber security incidents in Latin ...

  14. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data is making cloud computing more and more popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volume, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms were designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.
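
    As a hedged illustration of the storage layer such an architecture sits on, the sketch below uploads a video file into HDFS and lists its directory using the third-party Python hdfs package (a WebHDFS client). The host, port, user, and paths are placeholders, and the paper's own five-layer model and algorithms are not reproduced here.

```python
from hdfs import InsecureClient  # pip install hdfs (WebHDFS client)

# Placeholder endpoint and user -- adjust to the actual cluster.
client = InsecureClient("http://namenode.example.org:9870", user="video")

# Store a raw video and read back the directory listing.
client.upload("/videos/raw/lecture01.mp4", "lecture01.mp4", overwrite=True)
for name in client.list("/videos/raw"):
    print(name)
```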

  15. Some issues of creation of belarusian language computer resources

    OpenAIRE

    Rubashko, N.; Nevmerjitskaia, G.

    2003-01-01

    The main reason for creating computer resources for a natural language is the need to bring the ways the language is normalized into accord with the form in which it exists: the computer form of language usage should correspond to a computer form in which the language standards are fixed. This paper discusses various aspects of the creation of Belarusian language computer resources and briefly reviews the objectives of the project involved.

  16. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms, or on classical waves manipulated using the transformations of linear optics.
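
    In symbols (standard qubit counting, not notation taken from the record itself): n qubits span a Hilbert space of dimension 2^n, so a scalable encoding needs only about n physical degrees of freedom, whereas a single particle or a classical wave field must supply one mode per dimension.

```latex
\dim \mathcal{H}_n = 2^n, \qquad
\underbrace{N_{\mathrm{dof}}(n) = O(n)}_{\text{scalable: } n \text{ two-level systems}}
\quad \text{vs.} \quad
\underbrace{N_{\mathrm{modes}}(n) = 2^n}_{\text{single particle / classical waves}}
```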

  17. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    Energy Technology Data Exchange (ETDEWEB)

    Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resources is in the range of 139.30-218.24 MWe, with the most likely value being 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km² with probability 80%.
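
    A minimal sketch of the volumetric Monte Carlo procedure, with entirely hypothetical input distributions (the SNI formula's actual parameters and the paper's values are not reproduced): sample the inputs, evaluate the deterministic volumetric formula stochastically, and read off the most likely value and the exceedance risk.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # iterations, as in the study

# Hypothetical triangular distributions for the volumetric inputs.
area = rng.triangular(12, 17, 22, n)          # km^2
thickness = rng.triangular(1.5, 2.0, 2.5, n)  # km
recoverable = rng.triangular(8, 10, 12, n)    # MWe per km^3, assumed lumped factor

power = area * thickness * recoverable        # MWe, stochastic volumetric estimate

print(f"range: {power.min():.1f}-{power.max():.1f} MWe")
print(f"most likely (median): {np.median(power):.1f} MWe")
print(f"P(power > 196.19 MWe) = {(power > 196.19).mean():.2%}")
```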

  18. Demand response and energy efficiency in the capacity resource procurement: Case studies of forward capacity markets in ISO New England, PJM and Great Britain

    International Nuclear Information System (INIS)

    Liu, Yingqi

    2017-01-01

    Demand-side resources like demand response (DR) and energy efficiency (EE) can contribute to the capacity adequacy underpinning power system reliability. Forward capacity markets have been established in many liberalised markets to procure capacity, with a strong interest in procuring DR and EE. With case studies of ISO New England, PJM and Great Britain, this paper examines the process and trends of procuring DR and EE in forward capacity markets, and the design of integration mechanisms. It finds that the contribution of DR and EE varies widely across these three capacity markets, due to a set of factors regarding mechanism design, market conditions and regulatory provisions, and that the offering of EE is more heavily influenced by regulatory utility EE obligations. DR and EE are complementary in the end-uses and customers they target for capacity resources, thus highlighting the value of procuring them both. System needs and resources' market potential need to be considered in defining capacity products. Over the long term, it is important to ensure the removal of barriers for these demand-side resources and the capability of providers to address the risks of unstable funding and forward planning. For the EDR Pilot in the UK, better coordination with the forward capacity auction needs to be achieved. - Highlights: • Trends of demand response and energy efficiency in capacity markets are analysed. • Integration mechanisms, market conditions and regulatory provisions are key factors. • Participation of energy efficiency is influenced by regulatory utility obligations. • Procuring both demand response and energy efficiency in the capacity market is valuable. • Critical analysis of the design of capacity products and integration mechanisms.

  19. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines is pre-allocated to the job queues of the different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of the virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, the virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of the computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
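
    The dual-threshold idea can be sketched as a simple control loop (hypothetical thresholds and helper objects; the IHEP system's actual policies and quota service are not reproduced): expand the pool when too many jobs are idle, shrink it when too many nodes sit empty, and respect a per-queue quota.

```python
def rebalance(queue, pool, quota, expand_at=50, shrink_at=10, batch=5):
    """One pass of a dual-threshold elastic scaler (illustrative only).

    queue: object with .idle_jobs()  -> int
    pool:  object with .idle_nodes() -> int, .add(n), .remove(n), .size() -> int
    quota: maximum number of nodes this job queue may hold
    """
    if queue.idle_jobs() > expand_at and pool.size() < quota:
        pool.add(min(batch, quota - pool.size()))   # grow toward the quota
    elif queue.idle_jobs() == 0 and pool.idle_nodes() > shrink_at:
        pool.remove(batch)                          # release unused VMs
```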

  20. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of its waiting queue, CPU utilization and bandwidth. With these scores, resource nodes are classified into three levels. User requests, based on their time constraints, are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a higher-type request, it preempts a resource that is currently executing a lower-type request so that the higher-type request can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), whose users can then exploit convenient mobile network services and wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
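
    A sketch of the scoring-and-classification step (the weights, normalizations, and cut-offs are assumptions; the SPA preemption logic itself is omitted): each node gets a score from its CPU level, free memory, queue length, CPU utilization, and bandwidth, and the score maps to one of three levels.

```python
def node_score(cpu_level, free_mem_gb, queue_len, cpu_util, bandwidth_mbps,
               weights=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """Combine the five indicators from the abstract into one score.
    Normalizations and weights are illustrative assumptions."""
    w1, w2, w3, w4, w5 = weights
    return (w1 * cpu_level / 10 + w2 * min(free_mem_gb / 64, 1)
            + w3 / (1 + queue_len) + w4 * (1 - cpu_util)
            + w5 * min(bandwidth_mbps / 1000, 1))

def node_level(score):
    # Three resource levels, matched to the three request types.
    return 1 if score > 0.7 else 2 if score > 0.4 else 3

print(node_level(node_score(8, 32, 2, 0.3, 600)))
```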

  1. Adaptive resource allocation scheme using sliding window subchannel gain computation: context of OFDMA wireless mobiles systems

    International Nuclear Information System (INIS)

    Khelifa, F.; Samet, A.; Ben Hassen, W.; Afif, M.

    2011-01-01

    Multiuser diversity combined with Orthogonal Frequency Division Multiple Access (OFDMA) is a promising technique for achieving high downlink capacities in new generations of cellular and wireless network systems. The total capacity of an OFDMA-based system is maximized when each subchannel is assigned to the mobile station with the best channel-to-noise ratio for that subchannel and power is distributed uniformly among all subchannels. A contiguous method for subchannel construction is adopted in the IEEE 802.16m standard in order to reduce OFDMA system complexity. In this context, a new subchannel gain computation method can contribute, jointly with optimal subchannel assignment, to maximizing total system capacity. In this paper, two new methods are proposed in order to achieve a better trade-off between fairness and efficient use of resources. Numerical results show that the proposed algorithms provide low complexity, higher total system capacity and fairness among users compared to other recent methods.
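
    The baseline allocation the abstract starts from can be written compactly (synthetic gains; the bandwidth and power values are placeholders): give each subchannel to the user with the best channel-to-noise ratio, split power uniformly, and sum the Shannon capacities.

```python
import numpy as np

rng = np.random.default_rng(7)
n_users, n_sub = 4, 16
cnr = rng.exponential(1.0, size=(n_users, n_sub))  # synthetic channel-to-noise ratios

p_total, b_sub = 1.0, 1.0                # total power, per-subchannel bandwidth
p_sub = p_total / n_sub                  # uniform power allocation
best_user = cnr.argmax(axis=0)           # best-CNR assignment per subchannel

capacity = sum(b_sub * np.log2(1 + p_sub * cnr[best_user[k], k])
               for k in range(n_sub))
print(f"total capacity = {capacity:.2f} (normalized units)")
```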

  2. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.

  3. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past four years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses, and we have evidence that SOCR resources build students' intuition and enhance their learning.

  4. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has come into wide use; in particular, the success of object-oriented and multimedia technology has promoted graphics technology within computer software systems. Computer graphics theory and application have therefore become an important topic in the computer field, and the technology is being applied ever more extensively. In recent years, with the development of the social economy, and especially the rapid development of information technology, the traditional way of managing communication resources can no longer meet management needs effectively. Communication resource management still relies on the original tools and methods for equipment management and maintenance, which has caused many problems: it is very difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot obtain an accurate picture of resource conditions quickly. Aimed at these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  5. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for the online allotment of computing resources and the online storage of user data at a lower cost, where computing resources are available all the time, over the Internet, on a pay-per-use basis. Recently there is a growing need for resource management strategies in cloud computing environments that encompass both end-user satisfaction and high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines, i.e., matchmaking. The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment to satisfy user requirements and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a Dynamic Weight Active Monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm dramatically improves response time and data processing time and achieves better resource utilization compared with the Active Monitor and VM-assign algorithms.
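
    A hedged sketch of the matchmaking step in a DWAM-like scheme (the published algorithm's exact weight formula is not given in the abstract, so the weights here are assumptions): each VM carries a dynamically updated weight derived from its capacity and current load, and each incoming request goes to the currently best-weighted VM.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    cpu_capacity: float            # e.g., MIPS
    active_requests: int = 0

    def weight(self):
        # Higher capacity and fewer active requests -> more attractive.
        return self.cpu_capacity / (1 + self.active_requests)

def dispatch(request_id, vms):
    """Send the request to the VM with the best dynamic weight."""
    best = max(vms, key=VM.weight)
    best.active_requests += 1
    print(f"request {request_id} -> {best.name}")

vms = [VM("vm1", 1000), VM("vm2", 2000), VM("vm3", 1500)]
for r in range(5):
    dispatch(r, vms)
```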

  6. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Mobile crowdsourcing, as an emerging service paradigm, enables a computing resource requestor (CRR) to outsource computation tasks to computing resource providers (CRPs). Considering the importance of pricing as an essential incentive for coordinating the real-time interaction between the CRR and CRPs, in this paper we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the behavior of the CRR and CRPs in the form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm based on the exchange of control messages, which contain information on computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with system-wide optimality. Finally, we also take into account the interaction among CRPs and formulate the computing resource management as a game whose Nash equilibrium is achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and the CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
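
    A minimal sketch of the distributed price-update loop the abstract describes (the functional forms, step size, and tolerance are assumptions): providers report supply and the requestor reports demand at the current price, and a coordinator nudges the price until the two match.

```python
def clearing_price(demand, supply, p0=1.0, step=0.05, tol=1e-6, max_iter=10_000):
    """Iteratively adjust the real-time price until demand meets supply.
    demand(p) and supply(p) stand in for the utility-maximizing responses."""
    p = p0
    for _ in range(max_iter):
        gap = demand(p) - supply(p)
        if abs(gap) < tol:
            break
        p += step * gap        # raise the price when demand exceeds supply
    return p

# Illustrative responses: demand falls with price, supply rises with it.
print(clearing_price(lambda p: 10 / p, lambda p: 4 * p))
```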

  7. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  8. CLOUD COMPUTING OVERVIEW AND CHALLENGES: A REVIEW PAPER

    OpenAIRE

    Satish Kumar*, Vishal Thakur, Payal Thakur, Ashok Kumar Kashyap

    2017-01-01

    Cloud computing is the most resourceful, elastic and scalable era of Internet technology, making it possible to use computing resources over the Internet successfully. Cloud computing has brought not only speed, accuracy, storage capacity and efficiency to computing, but has also helped propagate green computing and better resource utilization. In this research paper, a brief description of cloud computing, cloud services and cloud security challenges is given. Also the literature review o...

  9. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnecting fronthaul network with high capacity and low delay. However, the interactions between RRHs and BBUs, and the resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network was already proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP architecture with the CSP scheme can effectively pull remote processing resources local to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  10. Load/resource matching for period-of-record computer simulation

    International Nuclear Information System (INIS)

    Lindsey, E.D. Jr.; Robbins, G.E. III

    1991-01-01

    The Southwestern Power Administration (Southwestern), an agency of the Department of Energy, is responsible for marketing the power and energy produced at Federal hydroelectric power projects developed by the U.S. Army Corps of Engineers in the southwestern United States. In order to maximize benefits from limited resources, to evaluate proposed changes in the operation of existing projects, and to determine the feasibility and marketability of proposed new projects, Southwestern utilizes a period-of-record computer simulation model created in the 1960s. Southwestern is constructing a new computer simulation model to take advantage of changes in computers, policy, and procedures. Within all hydroelectric power reservoir systems, the ability of the resources to match the load demand is critical and presents complex problems. Therefore, the method used to compare available energy resources to energy load demands is a very important aspect of the new model. Southwestern has developed an innovative method which compares a resource duration curve with a load duration curve, adjusting the resource duration curve to make the most efficient use of the available resources.
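
    A small sketch of the duration-curve comparison (synthetic data; Southwestern's adjustment procedure is not described in this abstract, so only the basic matching test is shown): sort hourly loads and available resources into descending duration curves and measure where the resource curve falls short of the load curve.

```python
import numpy as np

rng = np.random.default_rng(3)
load = rng.gamma(9.0, 100.0, 8760)       # synthetic hourly energy demand
resource = rng.gamma(9.5, 100.0, 8760)   # synthetic hourly available energy

load_dc = np.sort(load)[::-1]            # load duration curve (descending)
res_dc = np.sort(resource)[::-1]         # resource duration curve (descending)

shortfall = np.maximum(load_dc - res_dc, 0.0)
print(f"hours with deficit: {(shortfall > 0).sum()}")
print(f"total unserved energy: {shortfall.sum():.0f}")
```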

  11. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time, driven by meteorological events. Capacity value calculations for these resources are thus often reduced to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy, and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
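
    For concreteness, here is a hedged Monte Carlo sketch of one traditional adequacy metric, loss-of-load expectation (LOLE): sample thermal unit outages against hourly load plus a variable generation profile and count shortfall hours. The fleet, outage rate, and profiles are invented for illustration and do not reflect the paper's study system.

```python
import numpy as np

rng = np.random.default_rng(11)
units = np.array([200.0] * 10)      # thermal fleet: ten 200 MW units (assumed)
efor = 0.08                         # equivalent forced outage rate (assumed)
load = 1400 + 300 * rng.random(8760)
vg = 250 * rng.random(8760)         # variable generation profile (assumed)

n_draws, lole_hours = 50, 0.0
for _ in range(n_draws):
    up = rng.random((8760, units.size)) > efor   # unit availability per hour
    thermal = (up * units).sum(axis=1)
    lole_hours += ((thermal + vg) < load).mean() * 8760
print(f"LOLE ~ {lole_hours / n_draws:.1f} h/yr")
```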

  12. TIGER-NET – enabling an Earth Observation capacity for Integrated Water Resource Management in Africa

    DEFF Research Database (Denmark)

    Walli, A.; Tøttrup, C.; Naeimi, V.

    As part of the TIGER initiative [1], the TIGER-NET project aims to support the assessment and monitoring of water resources from watershed to transboundary basin level, delivering indispensable information for Integrated Water Resource Management in Africa through: 1. development of an open-source Water Observation and Information System (WOIS) for monitoring, assessing and inventorying water resources in a cost-effective manner; 2. capacity building and training of African water authorities and technical centers to fully exploit the increasing observation capacity offered by current and upcoming generations of satellites, including the Sentinel missions. Dedicated application case studies have been developed and demonstrated, covering all EO products required by and developed with the participating African water authorities for their water resource management tasks, such as water reservoir...

  13. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Interim report: Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication, by John Lyman and Carla J. Conaway, University of California at Los Angeles. Related work appeared in Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  14. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers, one located in each of four division laboratories, and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost-effectiveness of the shared-resource network and of multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  15. Towards minimal resources of measurement-based quantum computation

    International Nuclear Information System (INIS)

    Perdrix, Simon

    2007-01-01

    We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Lett. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for the experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to answer in the negative the open question posed by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC.

  16. A Tool and Process that Facilitate Community Capacity Building and Social Learning for Natural Resource Management

    Directory of Open Access Journals (Sweden)

    Christopher M. Raymond

    2013-03-01

    This study presents a self-assessment tool and process that facilitate community capacity building and social learning for natural resource management. The tool and process provide opportunities for rural landholders and project teams both to self-assess their capacity to plan and deliver natural resource management (NRM) programs and to reflect on their capacities relative to other organizations and institutions that operate in their region. We first outline the tool and process and then present a critical review of the pilot in the South Australian Arid Lands NRM region, South Australia. Results indicate that participants representing local, organizational, and institutional tiers of government were able to arrive at a group consensus position on the strength, importance, and confidence of a variety of capacities for NRM, categorized broadly as human, social, physical, and financial. During the process, participants learned a great deal about their current capacities as well as their capacity needs. Broad conclusions are discussed with reference to the iterative process for assessing and reflecting on community capacity.

  17. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller-scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post-processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations, and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07).

  18. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    International Nuclear Information System (INIS)

    Seager, M.

    2007-01-01

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller-scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations, and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as

  19. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than of formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  20. A Multi-Tiered Approach for Building Capacity in Hydrologic Modeling for Water Resource Management in Developing Regions

    Science.gov (United States)

    Markert, K. N.; Limaye, A. S.; Rushi, B. R.; Adams, E. C.; Anderson, E.; Ellenburg, W. L.; Mithieu, F.; Griffin, R.

    2017-12-01

    Water resource management is the process by which governments, businesses and/or individuals reach and implement decisions that are intended to address the future quantity and/or quality of water for societal benefit. The implementation of water resource management typically requires an understanding of the quantity and/or timing of a variety of hydrologic variables (e.g. discharge, soil moisture and evapotranspiration). Oftentimes these variables are simulated using hydrologic models, particularly in data-sparse regions. However, there are several large barriers to entry in learning how to use models, applying best practices during the modeling process, and selecting and understanding the most appropriate model for diverse applications. This presentation focuses on a multi-tiered approach to bringing state-of-the-art hydrologic modeling capabilities and methods to developing regions through the SERVIR program, a joint NASA and USAID initiative that builds the capacity of regional partners and their end users in the use of Earth observations for environmental decision making. The first tier is a series of trainings on the use of multiple hydrologic models, including the Variable Infiltration Capacity (VIC) model and the Ensemble Framework For Flash Flood Forecasting (EF5), which focus on model concepts and the steps needed to implement the models successfully. We present a case study in a pilot area, the Nyando Basin in Kenya. The second tier is focused on building a community of practice in applied hydrologic modeling, aimed at creating a support network for hydrologists in SERVIR regions and promoting best practices. The third tier is a hydrologic model inter-comparison project under development in the SERVIR regions. The objective of this step is to understand model performance under specific decision-making scenarios and to share knowledge among hydrologists in SERVIR regions. The results of these efforts include computer programs, training materials, and new

  1. Assessing employability capacities and career adaptability in a sample of human resource professionals

    Directory of Open Access Journals (Sweden)

    Melinde Coetzee

    2015-06-01

    Orientation: Employers have come to recognise graduates' employability capacities and their ability to adapt to new work demands as important human capital resources for sustaining a competitive business advantage. Research purpose: The study sought (1) to ascertain whether a significant relationship exists between a set of graduate employability capacities and a set of career adaptability capacities and (2) to identify the variables that contributed the most to this relationship. Motivation for the study: Global competitive markets and technological advances are increasingly driving the demand for graduate knowledge and skills in a wide variety of jobs. Contemporary career theory further emphasises career adaptability across the lifespan as a critical skill for career management agency. Despite the apparent importance attached to employees' employability and career adaptability, there seems to be a general lack of research investigating the association between these constructs. Research approach, design and method: A cross-sectional, quantitative research design approach was followed. Descriptive statistics, Pearson product-moment correlations and canonical correlation analysis were performed to achieve the objective of the study. The participants (N = 196) were employed in professional positions in the human resource field and were predominantly early-career black people and women. Main findings: The results indicated positive multivariate relationships between the variables and showed that lifelong learning capacities and problem-solving, decision-making and interactive skills contributed the most to explaining the participants' career confidence, career curiosity and career control. Practical/managerial implications: The study suggests that developing professional graduates' employability capacities may strengthen their career adaptability. These capacities were shown to explain graduates' active engagement in career management

  2. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by the availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, a file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS-specific software environment is provided via CVMFS and Parrot. Data are handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  3. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    The advancement of the computational biology field hinges on progress in three fundamental directions: the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools and web services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for the classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open-source project both in terms of its source code development and its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  4. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and their broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz is a system researchers can count on to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  5. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...
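
    For orientation, a minimal user-level job submission through the DIRAC Python API looks roughly like the sketch below; the executable, job name, and runtime setup are placeholders, and the HTCondor-CE wrapper classes discussed above operate behind this interface rather than in user code.

```python
# Rough sketch of job submission with the DIRAC API.
# Assumes a configured DIRAC client environment and a valid proxy.
from DIRAC.Core.Base.Script import Script
Script.parseCommandLine()  # initialize the DIRAC runtime

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("osg-test")                     # placeholder name
job.setExecutable("/bin/echo", arguments="hello from an OSG CE")

result = Dirac().submitJob(job)
print(result)  # S_OK/S_ERROR structure with the job ID on success
```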

  6. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  7. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments like KM3NeT are generating datasets that are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers, with several computing centres providing specific sets of services for the different steps of data processing, such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  8. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most importantly, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  9. Cloud Provider Capacity Augmentation Through Automated Resource Bartering

    OpenAIRE

    Gohera, Syeda ZarAfshan; Bloodsworth, Peter; Rasool, Raihan Ur; McClatchey, Richard

    2018-01-01

    Growing interest in Cloud Computing places a heavy workload on cloud providers, which is becoming increasingly difficult for them to manage with their primary datacenter infrastructures. Resource limitations can make providers vulnerable to significant reputational damage, and they often force customers to select services from the larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making con...

  10. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool provides a suitable solution by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both tasks: providing global monitoring and performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage-area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
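
    A hedged sketch of the kind of inference such a tool performs (the window size, thresholds, and test encoding are assumptions; SAAB's actual algorithm is not spelled out in this abstract): look at the recent history of monitoring test outcomes for a storage area and decide whether to blacklist it, whitelist it, or leave it unchanged.

```python
from collections import deque

def decide(history, window=12, fail_at=0.75, ok_at=0.95):
    """Blacklisting decision from a history of test outcomes (True = passed).
    Thresholds and window length are illustrative assumptions."""
    recent = list(deque(history, maxlen=window))
    if not recent:
        return "no-data"
    pass_rate = sum(recent) / len(recent)
    if pass_rate < 1 - fail_at:      # mostly failing -> take it out of production
        return "blacklist"
    if pass_rate >= ok_at:           # consistently healthy -> (re-)enable
        return "whitelist"
    return "unchanged"               # ambiguous: leave for human follow-up

print(decide([True] * 10 + [False] * 12))   # -> blacklist
```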

  11. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  12. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
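
    As a rough illustration of the on-the-fly scaling with a corresponding price change that both records describe (the SLA metric, scaling step, bounds and unit price are invented for the example, not taken from the paper):

        from dataclasses import dataclass

        @dataclass
        class Allocation:
            vcpus: int
            price_per_hour: float   # the client's bill tracks the allocation

        def rescale(alloc, response_time, sla_target, unit_price=0.05):
            """Grow the allocation when the SLA is violated, shrink it when
            there is ample headroom, and reprice accordingly."""
            if response_time > sla_target and alloc.vcpus < 32:
                vcpus = alloc.vcpus + 1     # scale up to restore QoS
            elif response_time < 0.5 * sla_target and alloc.vcpus > 1:
                vcpus = alloc.vcpus - 1     # scale down to cut cost
            else:
                vcpus = alloc.vcpus
            return Allocation(vcpus, vcpus * unit_price)

        alloc = Allocation(vcpus=4, price_per_hour=0.20)
        for observed in [1.8, 1.6, 1.1, 0.4, 0.3]:   # measured response times (s)
            alloc = rescale(alloc, observed, sla_target=1.0)
        print(alloc.vcpus, round(alloc.price_per_hour, 2))   # 5 0.25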

  13. Capacity of Distribution Feeders for Hosting Distributed Energy Resources

    DEFF Research Database (Denmark)

    Papathanassiou, S.; Hatziargyriou, N.; Anagnostopoulos, P.

    The last two decades have seen an unprecedented development of distributed energy resources (DER) all over the world. Several countries have adopted a variety of support schemes (feed-in tariffs, green certificates, direct subsidies, tax exemptions etc.) so as to promote distributed generation (DG...... standards of the networks. To address this need in a timely and effective manner, simplified methodologies and practical rules of thumb are often applied to assess the DER hosting capacity of existing distribution networks, thus avoiding detailed and time-consuming analytical studies. The scope...

  14. Development of Computer-Based Resources for Textile Education.

    Science.gov (United States)

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  15. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    Science.gov (United States)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
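
    Polyphony's implementation is not reproduced here, but its headline property, resilience to node failures even in the middle of a transaction, is essentially lease-based task re-delivery. A minimal in-memory sketch of that idea follows; a production system would use a durable queue service, and all names are invented.

        import time
        from collections import deque

        class ResilientQueue:
            """Tasks must be acknowledged; if a worker dies mid-task, its lease
            expires and the task becomes visible to other workers again."""

            def __init__(self, lease_seconds=30.0):
                self.pending = deque()
                self.in_flight = {}     # task_id -> (task, lease expiry)
                self.lease_seconds = lease_seconds

            def put(self, task_id, task):
                self.pending.append((task_id, task))

            def get(self):
                self._requeue_expired()
                if not self.pending:
                    return None
                task_id, task = self.pending.popleft()
                self.in_flight[task_id] = (task, time.monotonic() + self.lease_seconds)
                return task_id, task

            def ack(self, task_id):
                self.in_flight.pop(task_id, None)   # completed: drop the lease

            def _requeue_expired(self):
                now = time.monotonic()
                for task_id, (task, expiry) in list(self.in_flight.items()):
                    if now > expiry:                # worker presumed dead
                        del self.in_flight[task_id]
                        self.pending.append((task_id, task))

        queue = ResilientQueue(lease_seconds=0.01)
        queue.put("img-001", {"op": "debayer", "frame": "mars_pancam_42"})
        queue.get()          # a worker takes the task, then crashes
        time.sleep(0.02)     # the lease expires
        print(queue.get())   # the same task is re-delivered to another worker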

  16. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.gov [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way visualization is performed if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  17. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way visualization is performed if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  18. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  19. System dynamics model of Suzhou water resources carrying capacity and its application

    Directory of Open Access Journals (Sweden)

    Li Cheng

    2010-06-01

    Full Text Available A model of Suzhou water resources carrying capacity (WRCC) was set up using the method of system dynamics (SD). In the model, three different water resources utilization programs were adopted: (1) continuity of existing water utilization, (2) water conservation/saving, and (3) water exploitation. The dynamic variation of the Suzhou WRCC was simulated with the supply-decided principle for the time period of 2001 to 2030, and the results were characterized based on socio-economic factors. The corresponding Suzhou WRCC values for several target years were calculated by the model. Based on these results, proper ways to improve the Suzhou WRCC are proposed. The model also produced an optimized plan, which can provide a scientific basis for the sustainable utilization of Suzhou water resources and for the coordinated development of the society, economy, and water resources.
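
    The record names the three utilization programs but not the model equations; a toy supply-decided simulation in the same spirit (all coefficients invented) shows how a carrying capacity, here the population the water supply can support, is stepped through time under each program.

        def simulate_wrcc(years=30, supply=9.0e9, demand_per_capita=450.0,
                          demand_growth=0.01, supply_growth=0.0):
            """Supply-decided WRCC: supply is m^3/year, demand_per_capita is
            m^3 per person per year; growth rates mimic the three programs."""
            capacity = []
            for _ in range(years):
                capacity.append(supply / demand_per_capita)   # supportable population
                demand_per_capita *= 1 + demand_growth
                supply *= 1 + supply_growth
            return capacity

        continuity = simulate_wrcc()                          # program (1)
        saving = simulate_wrcc(demand_growth=-0.02)           # program (2)
        exploitation = simulate_wrcc(supply_growth=0.015)     # program (3)
        print(f"year-30 capacity (millions): "
              f"continuity {continuity[-1] / 1e6:.1f}, "
              f"saving {saving[-1] / 1e6:.1f}, "
              f"exploitation {exploitation[-1] / 1e6:.1f}")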

  20. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Modern day continued demand for resource hungry services and applications in IT sector has led to development of Cloud computing. Cloud computing environment involves high cost infrastructure on one hand and need high scale computational resources on the other hand. These resources need to be provisioned (allocation and scheduling) to the end users in most efficient manner so that the tremendous capabilities of cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  1. Lean computing for the cloud

    CERN Document Server

    Bauer, Eric

    2016-01-01

    Applies lean manufacturing principles across the cloud service delivery chain to enable application and infrastructure service providers to sustainably achieve the shortest lead time, best quality, and value. This book focuses on lean in the context of cloud computing capacity management of applications and the physical and virtual cloud resources that support them. Lean Computing for the Cloud considers business, architectural and operational aspects of efficiently delivering valuable services to end users via cloud-based applications hosted on shared cloud infrastructure. The work also focuses on overall optimization of the service delivery chain to enable both application service and infrastructure service providers to adopt leaner, demand driven operations to serve end users more efficiently. The book’s early chapters analyze how capacity management morphs with cloud computing into interlocked physical infrastructure capacity management, virtual resource capacity management, and application capacity ma...

  2. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.
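
    For context, the graph states mentioned in the abstract are the standard resource of measurement-based quantum computation: for a graph G = (V, E), each vertex qubit is prepared in the state |+> and a controlled-Z gate is applied across every edge,

        $|G\rangle = \prod_{(i,j) \in E} \mathrm{CZ}_{ij}\, |+\rangle^{\otimes |V|},$

    after which the computation proceeds by single-qubit measurements alone.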

  3. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large-scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  4. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  5. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2012 data-taking period, request of computing resource needs for 2013, and a first forecast of the 2014 needs, when the restart of data-taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations in the estimates from previously presented results are stressed.

  6. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data-taking period, request of computing resource needs for the 2012 data-taking period, and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations in the estimates from previously presented results are stressed.

  7. Multi-scale research of time and space differences about ecological footprint and ecological carrying capacity of the water resources

    Science.gov (United States)

    Li, Jiahong; Lei, Xiaohui; Fu, Qiang; Li, Tianxiao; Qiao, Yu; Chen, Lei; Liao, Weihong

    2018-03-01

    A multi-scale assessment framework for assessing and comparing water resource sustainability based on the ecological footprint (EF) is introduced. The study aims to inform water resource management in Heilongjiang Province from different perspectives. First, at the scale of individual cities, the water ecological carrying capacity (ECC) was calculated from 2000 to 2011, and the spatial distribution of the most recent 3 years was mapped, showing that the water ECC is unevenly distributed and has a downward trend year by year. Then, from the perspective of the five secondary partition basins in Heilongjiang Province, the ECC, the EF, and the ecological surplus and deficit (S&D) of water resources were calculated from 2000 to 2011, showing that the ecological deficit is most prominent in the Nenjiang and Suifenhe basins, which are in an unsustainable development state. Finally, at the scale of the whole province, the ECC, EF, and ecological S&D of water resources were calculated from 2000 to 2011, showing that the EF is on a rising trend and that the correlation coefficient between the ECC and precipitation is 0.8. Heilongjiang was in an unsustainable development state in 5 of those years. The proposed multi-scale assessment of the water ecological footprint (WEF) aims to evaluate the complex relationship between water resource supply and consumption at different spatial scales and over time series. It also provides more reasonable assessment results that can be used by managers and regulators.

  8. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    Science.gov (United States)

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  9. Building Capacity to Use NASA Earth Observations in the Water Resource Sector

    Science.gov (United States)

    Childs-Gleason, L. M.; Ross, K. W.; Crepps, G.; Clayton, A.; Ruiz, M. L.; Rogers, L.; Allsbrook, K. N.

    2017-12-01

    The NASA DEVELOP National Program builds capacity to use and apply NASA Earth observations to address environmental concerns around the globe. The DEVELOP model builds capacity in both participants (students, recent graduates, and early and transitioning career professionals) who conduct the projects and partners (decision and policy makers) who are recipients of project methodologies and results. Projects focus on a spectrum of thematic topics, including water resource management which made up 30% of the DEVELOP FY2017 portfolio. During this period, DEVELOP conducted water-focused feasibility studies in collaboration with 22 partners across 13 U.S. states and five countries. This presentation will provide an overview of needs identified, DEVELOP's response, data sources, challenges, and lessons learned.

  10. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
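
    As a flavor of the auction variant among the models listed (commodity market, posted price, tenders, auctions), here is a minimal first-price sealed-bid allocation of a provider's CPU-hour blocks; the bids, capacity and single-round design are invented for the example.

        def first_price_auction(bids, capacity):
            """Allocate blocks to the highest bidders until the provider's
            capacity runs out; winners pay what they bid."""
            winners = []
            for consumer, (blocks, price) in sorted(
                    bids.items(), key=lambda kv: kv[1][1], reverse=True):
                take = min(blocks, capacity)
                if take == 0:
                    break
                winners.append((consumer, take, price))
                capacity -= take
            return winners

        bids = {                       # consumer -> (blocks wanted, bid per block)
            "climate-run": (40, 0.12),
            "render-farm": (25, 0.20),
            "monte-carlo": (50, 0.08),
        }
        for consumer, blocks, price in first_price_auction(bids, capacity=60):
            print(f"{consumer}: {blocks} blocks at ${price:.2f}/block")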

  11. Resource-poor settings: infrastructure and capacity building: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement.

    Science.gov (United States)

    Geiling, James; Burkle, Frederick M; Amundson, Dennis; Dominguez-Cherit, Guillermo; Gomersall, Charles D; Lim, Matthew L; Luyckx, Valerie; Sarani, Babak; Uyeki, Timothy M; West, T Eoin; Christian, Michael D; Devereaux, Asha V; Dichter, Jeffrey R; Kissoon, Niranjan

    2014-10-01

    Planning for mass critical care (MCC) in resource-poor or constrained settings has been largely ignored, despite their large populations that are prone to suffer disproportionately from natural disasters. Addressing MCC in these settings has the potential to help vast numbers of people and also to inform planning for better-resourced areas. The Resource-Poor Settings panel developed five key question domains; defining the term resource poor and using the traditional phases of disaster (mitigation/preparedness/response/recovery), literature searches were conducted to identify evidence on which to answer the key questions in these areas. Given a lack of data upon which to develop evidence-based recommendations, expert-opinion suggestions were developed, and consensus was achieved using a modified Delphi process. The five key questions were then separated as follows: definition, infrastructure and capacity building, resources, response, and reconstitution/recovery of host nation critical care capabilities and research. Addressing these questions led the panel to offer 33 suggestions. Because of the large number of suggestions, the results have been separated into two sections: part 1, Infrastructure/Capacity in this article, and part 2, Response/Recovery/Research in the accompanying article. Lack of, or presence of, rudimentary ICU resources and limited capacity to enhance services further challenge resource-poor and constrained settings. Hence, capacity building entails preventative strategies and strengthening of primary health services. Assistance from other countries and organizations is needed to mount a surge response. Moreover, planning should include when to disengage and how the host nation can provide capacity beyond the mass casualty care event.

  12. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  13. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style, and the actor model of computation. The result is a new resource-based framework, which after its first cases of use appears useful and worthy of further research.

  14. LHCb Computing Resource usage in 2017

    CERN Document Server

    Bozzi, Concezio

    2018-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data is taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  15. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.

  16. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Nan Zhang

    Full Text Available Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
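
    The revenue split both records mention can be made concrete: the Shapley value averages each device's marginal contribution over every order in which the coalition could have formed. A brute-force sketch follows; the characteristic function and core counts are invented, and the factorial cost limits this approach to small coalitions.

        from itertools import permutations

        def shapley_values(players, value):
            """Average each player's marginal contribution over all join orders."""
            totals = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                coalition = set()
                for p in order:
                    before = value(frozenset(coalition))
                    coalition.add(p)
                    totals[p] += value(frozenset(coalition)) - before
            return {p: t / len(orders) for p, t in totals.items()}

        # Toy characteristic function: revenue earned by devices pooling spare
        # CPU cores; an application is assumed to need at least 4 cores.
        cores = {"phone": 1, "tablet": 2, "laptop": 4}

        def revenue(coalition):
            pooled = sum(cores[d] for d in coalition)
            return 10.0 * pooled if pooled >= 4 else 0.0

        print(shapley_values(list(cores), revenue))
        # {'phone': 5.0, 'tablet': 10.0, 'laptop': 55.0}; sums to v(all) = 70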

  17. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLAs) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for streaming applications is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
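
    A goal programming model of the kind the paper proposes can be posed as a linear program over deviation variables. A tiny two-goal sketch (one cost goal and one QoS goal for a single scaling dimension, with all numbers invented) using scipy, which is assumed to be available:

        from scipy.optimize import linprog

        # Decision variable x = number of allocated VMs. Two goals:
        #   cost:       2*x  should not exceed 40 ($/h)
        #   throughput: 10*x should reach 300 (requests/s)
        # Each goal becomes an equality with deviation variables d+ and d-;
        # the unwanted deviations (overspend, QoS shortfall) are minimized.
        # Variable order: [x, dc_plus, dc_minus, dq_plus, dq_minus]
        c = [0, 1, 0, 0, 1]
        A_eq = [[2, -1, 1, 0, 0],     # 2x  - dc+ + dc- = 40
                [10, 0, 0, -1, 1]]    # 10x - dq+ + dq- = 300
        b_eq = [40, 300]
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
        print(res.x[0])   # 30.0: the QoS goal is met at a tolerated cost overrun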

  18. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
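
    ROCED's actual decision logic is not given in the abstract; the core demand-driven reconciliation loop can be sketched as follows, with CloudClient standing in for whichever OpenStack API wrapper is used (its boot and terminate methods are hypothetical, not the OpenStack API).

        import math

        class CloudClient:
            """Stand-in for an OpenStack wrapper; methods are hypothetical."""
            def __init__(self):
                self.vms = []
            def boot(self, n):
                self.vms += [f"worker-{len(self.vms) + i}" for i in range(n)]
            def terminate(self, n):
                del self.vms[len(self.vms) - n:]

        def reconcile(cloud, queued_jobs, slots_per_vm=8, max_vms=50):
            """Match the number of virtual worker nodes to batch-queue demand."""
            needed = min(max_vms, math.ceil(queued_jobs / slots_per_vm))
            current = len(cloud.vms)
            if needed > current:
                cloud.boot(needed - current)        # demand exceeds capacity
            elif needed < current:
                cloud.terminate(current - needed)   # release idle resources
            return len(cloud.vms)

        cloud = CloudClient()
        for demand in [120, 300, 40, 0]:            # queued jobs over time
            print(reconcile(cloud, demand), "VMs")  # 15, 38, 5, 0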

  19. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier to data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from the file level to the event level, and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  20. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  1. THE VALUE OF CLOUD COMPUTING IN THE BUSINESS ENVIRONMENT

    OpenAIRE

    Mircea GEORGESCU; Marian MATEI

    2013-01-01

    Without any doubt, cloud computing has become one of the most significant trends in any enterprise, not only for IT businesses. Besides the fact that the cloud can offer access to low cost, considerably flexible computing resources, cloud computing also provides the capacity to create a new relationship between business entities and corporate IT departments. The value added to the business environment is given by the balanced use of resources, offered by cloud computing. The cloud mentality i...

  2. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  3. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the internet. In cloud computing there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total completion time, the shortest response time, efficient utilization of resources, etc. Hence, job scheduling is the most important concern; it aims to ensure that the user's requirements are ...

  4. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  5. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload execution from the Belle II DIRAC pilot, a customized pilot that pulls and processes jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed a prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not, in general, have outbound connectivity to interact with the DIRAC system.
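
    The essence of the approach, keeping the X.509 credential on a single trusted host while volunteers exchange payloads over an ordinary channel, can be sketched as below. Every DIRAC interaction is stubbed out; the method names and job fields are invented and are not the BelleII@home or DIRAC API.

        class Gateway:
            """Trusted server: the only component holding the X.509 credential.

            Volunteer hosts never see the grid credential; they talk to the
            gateway over a non-X.509 channel and only ship payloads back
            and forth."""

            def __init__(self, x509_proxy):
                self._proxy = x509_proxy   # stays on the trusted host
                self._fetched = []

            def fetch_payload(self):
                # Would contact the DIRAC job-matching service using self._proxy.
                job = {"id": len(self._fetched) + 1, "app": "basf2", "events": 1000}
                self._fetched.append(job["id"])
                return job                 # shipped to the volunteer

            def upload_result(self, job_id, output):
                # Would upload the output and set the job status via self._proxy.
                print(f"job {job_id}: {len(output)} bytes registered with DIRAC")

        gateway = Gateway(x509_proxy="/tmp/x509up_u1000")
        job = gateway.fetch_payload()      # volunteer side: no credential needed
        gateway.upload_result(job["id"], b"simulated events ...")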

  6. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  7. Managing resource capacity using hybrid simulation

    Science.gov (United States)

    Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat

    2014-12-01

    Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in the hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. An attempt to build a holistic model using discrete-event simulation (DES) alone would be too complex, while one using system dynamics (SD) alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to obtain a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources in ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate, and ward admission rate.

  8. a Framework for Capacity Building in Mapping Coastal Resources Using Remote Sensing in the Philippines

    Science.gov (United States)

    Tamondong, A.; Cruz, C.; Ticman, T.; Peralta, R.; Go, G. A.; Vergara, M.; Estabillo, M. S.; Cadalzo, I. E.; Jalbuena, R.; Blanco, A.

    2016-06-01

    Remote sensing has been an effective technology for mapping natural resources, reducing costs and field data gathering time while bringing in timely information. With the launch of several earth observation satellites, the increased availability of satellite imagery provides an immense selection of data for users. The Philippines has recently embarked on a program which will enable the gathering of LiDAR data over the whole country. However, the capacity of the Philippines to take advantage of these advancements and opportunities is lacking. There is a need to transfer knowledge of remote sensing technology to other institutions to better utilize the available data. Being an archipelagic country with approximately 36,000 kilometers of coastline, and with most of its people depending on its coastal resources, remote sensing is an optimal choice for mapping such resources. A project involving fifteen (15) state universities, colleges and higher education institutions all over the country, headed by the University of the Philippines Training Center for Applied Geodesy and Photogrammetry and funded by the Department of Science and Technology, was formed to carry out the task of capacity building in mapping the country's coastal resources using LiDAR and other remotely sensed datasets. This paper discusses the accomplishments and the future activities of the project.

  9. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources have different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multitask applications is a challenging problem. This paper realizes a programmable resource provision for heterogeneous energy-constrained computing environments, where a software defined controller is responsible for resource orchestration, offload, and migration. The resource orchestration is formulated as a multiobjective optimization problem that contains the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of our studied cases almost hit the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
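
    The paper formulates a true multiobjective problem; the sketch below scalarizes three invented objectives with fixed weights simply to show the particle swarm machinery (w, c1 and c2 are common textbook defaults, not the paper's settings).

        import random

        def pso_minimize(fitness, dim, bounds, particles=20, iters=60,
                         w=0.7, c1=1.5, c2=1.5):
            """Plain particle swarm optimization over a box-bounded space."""
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)]
                   for _ in range(particles)]
            vel = [[0.0] * dim for _ in range(particles)]
            pbest = [p[:] for p in pos]
            pbest_f = [fitness(p) for p in pos]
            g = pbest[min(range(particles), key=lambda i: pbest_f[i])][:]
            for _ in range(iters):
                for i in range(particles):
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * random.random() * (g[d] - pos[i][d]))
                        pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                    f = fitness(pos[i])
                    if f < pbest_f[i]:
                        pbest[i], pbest_f[i] = pos[i][:], f
                        if f < fitness(g):
                            g = pos[i][:]
            return g, fitness(g)

        # Scalarized stand-in for the paper's objectives; x = (cpu_share, bw_share).
        def cost(x):
            energy = 2.0 * x[0] ** 2 + x[1] ** 2        # grows with the allocation
            price = 0.5 * x[0] + 0.8 * x[1]
            unavailability = 1.0 / (0.1 + x[0] + x[1])  # improves with allocation
            return 0.4 * energy + 0.3 * price + 0.3 * unavailability

        best, value = pso_minimize(cost, dim=2, bounds=(0.0, 1.0))
        print([round(v, 3) for v in best], round(value, 3))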

  10. Participatory monitoring and evaluation to aid investment in natural resource manager capacity at a range of scales.

    Science.gov (United States)

    Brown, Peter R; Jacobs, Brent; Leith, Peat

    2012-12-01

    Natural resource (NR) outcomes at catchment scale rely heavily on the adoption of sustainable practices by private NR managers because they control the bulk of the NR assets. Public funds are invested in capacity building of private landholders to encourage adoption of more sustainable natural resource management (NRM) practices. However, prioritisation of NRM funding programmes has often been top-down with limited understanding of the multiple dimensions of landholder capacity leading to a failure to address the underlying capacity constraints of local communities. We argue that well-designed participatory monitoring and evaluation of landholder capacity can provide a mechanism to codify the tacit knowledge of landholders about the social-ecological systems in which they are embedded. This process enables tacit knowledge to be used by regional NRM bodies and government agencies to guide NRM investment in the Australian state of New South Wales. This paper details the collective actions to remove constraints to improved NRM that were identified by discrete groups of landholders through this process. The actions spanned geographical and temporal scales, and responsibility for them ranged across levels of governance.

  11. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  12. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
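
    The savings reported above come from avoiding runtime partitioning, but the underlying offload question is always a cost comparison. A standard back-of-the-envelope model (not the paper's) offloads when both the time and the energy of shipping the input and running remotely beat local execution; every device parameter below is illustrative.

        def should_offload(cycles, data_bytes, cpu_hz_local, cpu_hz_cloud,
                           uplink_bps, watt_compute, watt_radio):
            """Compare local execution against offloading on time and energy."""
            t_local = cycles / cpu_hz_local
            e_local = watt_compute * t_local
            t_tx = data_bytes * 8 / uplink_bps
            t_offload = t_tx + cycles / cpu_hz_cloud
            e_offload = watt_radio * t_tx   # device mostly idles while remote runs
            return (t_offload < t_local) and (e_offload < e_local)

        # 2 Gcycles of work on a 10 MB input: 1 GHz phone vs an 8 GHz cloud
        # share, over a 100 Mbit/s uplink.
        print(should_offload(cycles=2e9, data_bytes=10e6, cpu_hz_local=1e9,
                             cpu_hz_cloud=8e9, uplink_bps=100e6,
                             watt_compute=0.9, watt_radio=1.3))   # True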

  13. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  14. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).

  15. Building sustainable organizational capacity to deliver HIV programs in resource-constrained settings: stakeholder perspectives.

    Science.gov (United States)

    Sharma, Anjali; Chiliade, Philippe; Michael Reyes, E; Thomas, Kate K; Collens, Stephen R; Rafael Morales, José

    2013-12-13

    In 2008, the US government mandated that HIV/AIDS care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR) should shift from US-based international partners (IPs) to registered locally owned organizations (local partners, or LPs). The US Health Resources and Services Administration (HRSA) developed the Clinical Assessment for Systems Strengthening (ClASS) framework for technical assistance in resource-constrained settings. The ClASS framework involves all stakeholders in the identification of LPs' strengths and needs for technical assistance. This article examines the role of ClASS in building capacity of LPs that can endure and adapt to changing financial and policy environments. All stakeholders (n=68) in Kenya, Zambia, and Nigeria who had participated in the ClASS from LPs and IPs, the US Centers for Disease Control and Prevention (CDC), and, in Nigeria, HIV/AIDS treatment facilities (TFs) were interviewed individually or in groups (n=42) using an open-ended interview guide. Thematic analysis revealed stakeholder perspectives on ClASS-initiated changes and their sustainability. Local organizations were motivated to make changes in internal operations with the ClASS approach, PEPFAR's competitive funding climate, organizational goals, and desired patient health outcomes. Local organizations drew on internal resources and, if needed, technical assistance from IPs. Reportedly, ClASS-initiated changes and remedial action plans made LPs more competitive for PEPFAR funding. LPs also attributed their successful funding applications to their preexisting systems and reputation. Bureaucracy, complex and competing tasks, and staff attrition impeded progress toward the desired changes. Although CDC continues to provide technical assistance through IPs, declining PEPFAR funds threaten the consolidation of gains, smooth program transition, and continuity of treatment services. The well-timed adaptation and implementation of Cl

  16. Building sustainable organizational capacity to deliver HIV programs in resource-constrained settings: stakeholder perspectives

    Directory of Open Access Journals (Sweden)

    Anjali Sharma

    2013-12-01

    Full Text Available Background: In 2008, the US government mandated that HIV/AIDS care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR) should shift from US-based international partners (IPs) to registered locally owned organizations (local partners, or LPs). The US Health Resources and Services Administration (HRSA) developed the Clinical Assessment for Systems Strengthening (ClASS) framework for technical assistance in resource-constrained settings. The ClASS framework involves all stakeholders in the identification of LPs' strengths and needs for technical assistance. Objective: This article examines the role of ClASS in building capacity of LPs that can endure and adapt to changing financial and policy environments. Design: All stakeholders (n=68) in Kenya, Zambia, and Nigeria who had participated in the ClASS from LPs and IPs, the US Centers for Disease Control and Prevention (CDC), and, in Nigeria, HIV/AIDS treatment facilities (TFs) were interviewed individually or in groups (n=42) using an open-ended interview guide. Thematic analysis revealed stakeholder perspectives on ClASS-initiated changes and their sustainability. Results: Local organizations were motivated to make changes in internal operations with the ClASS approach, PEPFAR's competitive funding climate, organizational goals, and desired patient health outcomes. Local organizations drew on internal resources and, if needed, technical assistance from IPs. Reportedly, ClASS-initiated changes and remedial action plans made LPs more competitive for PEPFAR funding. LPs also attributed their successful funding applications to their preexisting systems and reputation. Bureaucracy, complex and competing tasks, and staff attrition impeded progress toward the desired changes. Although CDC continues to provide technical assistance through IPs, declining PEPFAR funds threaten the consolidation of gains, smooth program transition, and continuity of treatment services

  17. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. The grid resources are diverse in terms of their underlying attributes. The majority of state-of-the-art resource discovery techniques rely on static resource attributes during resource selection. However, resources matched on static attributes alone may not be the most appropriate for the execution of user applications, because they may have heavy job loads, less storage space, or less working memory (RAM). Hence, there is a need to consider the current state of the resources in order to find the most suitable ones. In this paper, we have proposed a two-phased multi-attribute decision making (MADM) approach for discovery of grid resources using P2P formalism. The proposed approach considers multiple resource attributes when deciding on resource selection and provides the most suitable resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW (Simple Additive Weighting) method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase of our proposed methodology applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from the different super-peers. The resources are compared pairwise with respect to their attributes and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. Our proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces the search complexity by filtering out less suitable resources during resource discovery.
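
    The first-phase ranking lends itself to a compact illustration. The sketch below implements plain SAW (Simple Additive Weighting) over a handful of made-up resource attributes; the attribute names, weights, and values are hypothetical, and the paper's P2P discovery and super-peer messaging are not modeled.

        # Simple Additive Weighting (SAW): rank resources by a weighted sum of
        # normalized attribute values. Benefit attributes are normalized against
        # the column maximum, cost attributes against the column minimum.
        resources = {
            "R1": {"cpu_mips": 4000, "ram_gb": 16, "load": 0.7},
            "R2": {"cpu_mips": 2500, "ram_gb": 32, "load": 0.2},
            "R3": {"cpu_mips": 3000, "ram_gb": 8,  "load": 0.4},
        }
        weights = {"cpu_mips": 0.5, "ram_gb": 0.3, "load": 0.2}
        benefit = {"cpu_mips": True, "ram_gb": True, "load": False}  # low load is better

        def saw_ranking(resources, weights, benefit):
            scores = {}
            for name, vals in resources.items():
                score = 0.0
                for attr, w in weights.items():
                    column = [r[attr] for r in resources.values()]
                    if benefit[attr]:
                        norm = vals[attr] / max(column)   # benefit attribute
                    else:
                        norm = min(column) / vals[attr]   # cost attribute
                    score += w * norm
                scores[name] = score
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(saw_ranking(resources, weights, benefit))  # top-ranked resource first

    In a full implementation, each super-peer would run such a ranking over its locally matched resources and forward only the shortlist, which is what keeps the second (AHP/PROMETHEE-II) phase small.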

  18. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  19. Impact of changing computer technology on hydrologic and water resource modeling

    OpenAIRE

    Loucks, D.P.; Fedra, K.

    1987-01-01

    The increasing availability of substantial computer power at relatively low costs and the increasing ease of using computer graphics, of communicating with other computers and data bases, and of programming using high-level problem-oriented computer languages, is providing new opportunities and challenges for those developing and using hydrologic and water resources models. This paper reviews some of the progress made towards the development and application of computer support systems designe...

  20. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast™

  1. On library information resources construction under network environment

    International Nuclear Information System (INIS)

    Guo Huifang; Wang Jingjing

    2014-01-01

    Information resources construction is a primary task and a critical measure for libraries. In the 21st century, the era of the knowledge economy, with the continuous development of computer network technology, information resources have become an important part of libraries and a significant indicator of their capacity building. The development of information socialization, digitization, and internationalization has put forward new requirements for library information resources construction. This paper describes the impact of the network environment on the construction of library information resources and proposes measures for building them. (authors)

  2. CDMA systems capacity engineering

    CERN Document Server

    Kim, Kiseon

    2004-01-01

    This new hands-on resource tackles capacity planning and engineering issues that are crucial to optimizing wireless communication systems performance. Going beyond the physical level to investigate CDMA system capacity at the service level, this volume is a single source for engineering and analyzing systems capacity and resources.

  3. Building Human Resources Management Capacity for University Research: The Case at Four Leading Vietnamese Universities

    Science.gov (United States)

    Nguyen, T. L.

    2016-01-01

    At research-intensive universities, building human resources management (HRM) capacity has become a key approach to enhancing a university's research performance. However, despite aspiring to become a research-intensive university, many teaching-intensive universities in developing countries may not have created effective research-promoted HRM…

  4. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    Full Text Available This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. Input-output prediction models based on the nonlinear Hammerstein-Wiener (NHW) model and on the adaptive neurofuzzy inference system with delay inputs (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least squares support vector machine techniques for the same case study. In situ data collection and statistical performance measures are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model is shown to perform better than the other models in predicting the pullout capacity of ground anchors.

  5. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    In order to replace traditional Internet software usage patterns and enterprise management modes, this paper considers cloud computing, a new business computing model in which the resource scheduling strategy is a key technology. Based on a study of cloud computing system structure and modes of operation, the work focuses on the job scheduling and resource allocation problems in cloud computing, approached with the ant colony optimization algorithm, and presents a detailed analysis and design of the...
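
    The abstract is truncated, but the core idea, ant colony optimization for mapping jobs onto resources, can be sketched generically. The following toy scheduler assigns tasks to virtual machines so as to minimize makespan; the execution-time matrix and all ACO parameters are illustrative, not taken from the paper.

        # Minimal ant colony optimization (ACO) sketch for cloud task scheduling.
        # Ants build task->VM assignments guided by pheromone and a greedy
        # heuristic (1 / execution time); the best assignment reinforces pheromone.
        import random

        exec_time = [[3, 5, 4], [2, 6, 3], [4, 4, 5], [6, 3, 2]]  # task x VM (illustrative)
        n_tasks, n_vms = len(exec_time), len(exec_time[0])
        tau = [[1.0] * n_vms for _ in range(n_tasks)]             # pheromone trails
        alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.1, 10, 50

        def makespan(assign):
            loads = [0.0] * n_vms
            for task, vm in enumerate(assign):
                loads[vm] += exec_time[task][vm]
            return max(loads)

        best, best_cost = None, float("inf")
        for _ in range(n_iter):
            for _ in range(n_ants):
                assign = []
                for task in range(n_tasks):
                    # selection probability ~ pheromone^alpha * heuristic^beta
                    w = [tau[task][vm] ** alpha * (1.0 / exec_time[task][vm]) ** beta
                         for vm in range(n_vms)]
                    assign.append(random.choices(range(n_vms), weights=w)[0])
                cost = makespan(assign)
                if cost < best_cost:
                    best, best_cost = assign, cost
            # evaporate, then reinforce the best-so-far assignment
            tau = [[(1 - rho) * p for p in row] for row in tau]
            for task, vm in enumerate(best):
                tau[task][vm] += 1.0 / best_cost

        print(best, best_cost)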

  6. CALCULATION OF IDLE CAPACITY USING THE CAM-I CAPACITY MODEL FOR COST EFFICIENCY AT PT X

    Directory of Open Access Journals (Sweden)

    Muammar Aditya

    2015-09-01

    Full Text Available The aim of this research is to analyze the capacity costs arising from the company's production machines and from the human resources who operate them, using the CAM-I capacity model. The CAM-I capacity model is an approach that focuses on how to manage company resources. The research was conducted at PT X and focuses on production activity using a small mixer machine, an extruder machine, an oven drying machine, an enrober machine, pan coating machines (hot and cold), and packing machines (vertical and horizontal), as well as the human resources that operate these machines. The research examines rated capacity, productive capacity, idle capacity, and nonproductive capacity to measure capacity cost. The results show that most of the capacity of both the production machines and the human resources is not utilized to its full potential. Capacity costs need to be reduced in order to increase product sales; where that is not achievable, the efficiency of the production machines and human resources should be increased by reducing their quantity. DOI: 10.15408/ess.v4i1.1961
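
    The CAM-I decomposition itself is simple arithmetic: rated capacity splits into productive, nonproductive, and idle portions, and the idle portion is costed. A minimal worked example with made-up figures (none of these numbers come from the study):

        # CAM-I capacity model: idle capacity = rated - productive - nonproductive,
        # costed at the capacity cost rate. All figures are illustrative.
        rated_hours = 8760          # one machine, 24 h x 365 days
        productive_hours = 5200     # hours producing good output
        nonproductive_hours = 1400  # setups, maintenance, scrap, training
        cost_rate = 12.0            # capacity cost per machine-hour

        idle_hours = rated_hours - productive_hours - nonproductive_hours
        print(f"idle capacity: {idle_hours} h "
              f"({idle_hours / rated_hours:.0%} of rated capacity)")
        print(f"idle capacity cost: {idle_hours * cost_rate:,.0f} per year")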

  7. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    Science.gov (United States)

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software was at first a utilitarian interest; now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains also motivates sharing of modeling resources, as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate

  8. Computer-aided resource planning and scheduling for radiological services

    Science.gov (United States)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  9. Water Resource Management Mechanisms for Intrastate Violent Conflict Resolution: the Capacity Gap and What To Do About It.

    Science.gov (United States)

    Workman, M.; Veilleux, J. C.

    2014-12-01

    Violent conflict and issues surrounding available water resources are both global problems, and they are connected. Violent conflict is increasingly intrastate in nature; coupled with increased hydrological variability as a function of climate change, this will increase pressure on water resource use. Most mechanisms designed to secure water resources are based on the presence of a governance framework or another type of institutional capacity, such as that offered through a supra- or sub-national organization like the United Nations or a river basin organization. However, institutional frameworks are absent or lose functionality during violent conflict. Therefore, it will likely be extremely difficult to secure water resources for a significant proportion of populations in Fragile and Conflict Affected States. Moreover, the capacity in Organisation for Economic Co-operation and Development nations for the appropriate interventions to address this problem is reduced by an increasing reluctance to participate in interventionist operations following a decade of expeditionary warfighting mainly in Iraq and Afghanistan, and related defence cuts. Therefore, future interventions in violent conflict and securing water resources may be more indirect in nature. This paper assesses the state of understanding in key areas of the present literature and highlights the gap in securing water resources during violent conflict in the absence of institutional capacity. There is a need to close this gap as a matter of urgency by formulating frameworks to assess the lack of institutional oversight of water resources in areas where violent conflict is prevalent; developing inclusive resource management platforms through transparency and reconciliation mechanisms; and developing endogenous confidence-building measures and evaluating how these may be encouraged by exogenous initiatives, including those facilitated by the international community. This effort

  10. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  11. Can the Teachers' Creativity Overcome Limited Computer Resources?

    Science.gov (United States)

    Nikolov, Rumen; Sendova, Evgenia

    1988-01-01

    Describes experiences of the Research Group on Education (RGE) at the Bulgarian Academy of Sciences and the Ministry of Education in using limited computer resources when teaching informatics. Topics discussed include group projects; the use of Logo; ability grouping; and out-of-class activities, including publishing a pupils' magazine. (13…

  12. Virtual partitioning for robust resource sharing: computational techniques for heterogeneous traffic

    NARCIS (Netherlands)

    Borst, S.C.; Mitra, D.

    1998-01-01

    We consider virtual partitioning (VP), which is a scheme for sharing a resource among several traffic classes in an efficient, fair, and robust manner. In the preliminary design stage, each traffic class is allocated a nominal capacity, which is based on expected offered traffic and required quality

  13. Patient flow based allocation of hospital resources.

    Science.gov (United States)

    Vissers, J M

    1995-01-01

    The current practice of allocating resources within a hospital introduces peaks and troughs in the workloads of departments and therefore leads to loss of capacity. This happens when requirements for capacity coordination are not adequately taken into account in the decision making process of allocating resources to specialties. The first part of this research involved an analysis of the hospital's production system on dependencies between resources, resulting in a number of capacity coordination requirements that need to be fulfilled for optimized resource utilization. The second, modelling, part of the study involved the development of a framework for resource management decision making, of a set of computer models to support hospital managerial decision making on resource allocation issues in various parts of the hospital, and of an implementation strategy for the application of the models to concrete hospital settings. The third part of the study was devoted to a number of case-studies, illustrating the use of the models when applied in various resource management projects, such as a reorganization of an operating theatre timetable, or the development of a master plan for activities of a group of general surgeons serving two locations of a merged hospital system. The paper summarizes the main findings of the study and concludes with a discussion of results obtained with the new allocation procedure and with recommendations for future research.

  14. Weakly and strongly polynomial algorithms for computing the maximum decrease in uniform arc capacities

    Directory of Open Access Journals (Sweden)

    Ghiyasvand Mehdi

    2016-01-01

    Full Text Available In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
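
    The weakly polynomial idea can be illustrated as a binary search over t in which each probe runs one maximum flow computation. The sketch below simplifies feasibility to "an s-t flow of at least demand exists" (the paper treats general feasible networks), searches integer t only, and assumes the networkx library; the network and demand are made up.

        # Largest uniform capacity decrease t such that the t-network stays feasible,
        # found with O(log U) max-flow probes (binary search over integer t).
        import networkx as nx

        U, demand = 10, 12
        arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]

        def feasible(cut):
            """Is the network with uniform arc capacity U - cut still feasible?"""
            G = nx.DiGraph()
            G.add_edges_from(arcs, capacity=U - cut)
            value, _ = nx.maximum_flow(G, "s", "t")
            return value >= demand

        lo, hi = 0, U              # assumes feasible(0) holds
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if feasible(mid):      # one max-flow computation per probe
                lo = mid
            else:
                hi = mid - 1
        print("largest t:", lo)    # -> 4 for this toy network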

  15. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started in March 2015 a collaboration to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thus pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, from the beginning, this activity has been driven by the end users. The identified user communities are

  16. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  17. Radiologic total lung capacity measurement. Development and evaluation of a computer-based system

    Energy Technology Data Exchange (ETDEWEB)

    Seeley, G.W.; Mazzeo, J.; Borgstrom, M.; Hunter, T.B.; Newell, J.D.; Bjelland, J.C.

    1986-11-01

    The development of a computer-based radiologic total lung capacity (TLC) measurement system designed to be used by non-physician personnel is detailed. Four operators tested the reliability and validity of the system by measuring inspiratory PA and lateral pediatric chest radiographs with a Graf spark pen interfaced to a DEC VAX 11/780 computer. First results suggest that the ultimate goal of developing an accurate and easy to use TLC measurement system for non-physician personnel is attainable.

  18. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  19. Relationship between human resource ability and market access capacity on business performance. (case study of wood craft micro- and small-scale industries in Gianyar Regency, Bali)

    Science.gov (United States)

    Sukartini, N. W.; Sudarmini, N. M.; Lasmini, N. K.

    2018-01-01

    The aims of this research are to: (1) analyze the influence of human resource ability on market access capacity in the wood craft micro- and small-scale industry; (2) analyze the effect of market access capacity on business performance; (3) analyze the influence of human resource ability on business performance. Data were collected using questionnaires, interviews, observations, and literature studies. The resulting data were analyzed using Structural Equation Modeling (SEM). The results of the analysis show that (1) there is a positive and significant influence of human resource ability on market access capacity in wood craft micro- and small-scale industries in Gianyar; (2) there is a positive and significant influence of market access capacity on business performance; and (3) there is a positive and significant influence of human resource ability on business performance. To improve the ability to access the market and business performance, it is recommended that human resource ability be improved through training; government and higher education institutions are expected to play a role in improving the ability of human resources (craftsmen) through the provision of training programs.

  20. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Abstract—Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud

  1. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand.

  2. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  3. Uranium from Coal Ash: Resource Assessment and Outlook on Production Capacities

    International Nuclear Information System (INIS)

    Monnet, Antoine

    2014-01-01

    Conclusion: Uranium production from coal ash is technically feasible; in some situations it could reach commercial development, in which case a fast lead time will be a plus. Technically accessible resources are significant (1.1 to 4.5 MtU), yet most of them are low grade. Potential reserves do not exceed 200 ktU (cut-off grade = 200 ppm). Because uranium would be produced as a by-product, production capacities are constrained: the realistic production potential is below 700 tU/year, about 1% of current needs. Coal ash will therefore not be a significant source of uranium for the 21st century, even if production constraints are relaxed (increase in coal consumption

  4. Telemedicine Based on Mobile Devices and Mobile Cloud Computing

    OpenAIRE

    Lidong Wang; Cheryl Ann Alexander

    2014-01-01

    Mobile devices such as smartphones and tablets support various kinds of mobile computing and services. They can access the cloud or offload their computation-intensive parts to cloud computing resources. Mobile cloud computing (MCC) integrates cloud computing into the mobile environment, which extends mobile devices’ battery lifetime, improves their data storage capacity and processing power, and improves their reliability and information security. In this paper, the applications of smartphon...

  5. Function Package for Computing Quantum Resource Measures

    Science.gov (United States)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package to calculate quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently-used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information, and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.
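
    As a flavor of what such functions compute, here is a generic NumPy implementation of one standard resource measure, the l1-norm of coherence (the sum of the absolute values of a density matrix's off-diagonal entries); this is a self-contained sketch, not the interface of the package described above.

        # l1-norm of coherence: C_l1(rho) = sum_{i != j} |rho_ij|.
        import numpy as np

        def l1_coherence(rho):
            """Sum of absolute values of the off-diagonal entries of rho."""
            rho = np.asarray(rho, dtype=complex)
            return np.abs(rho).sum() - np.trace(np.abs(rho)).real

        # Maximally coherent single-qubit state |+><+|: C_l1 = 1.
        plus = np.array([[0.5, 0.5], [0.5, 0.5]])
        print(l1_coherence(plus))  # -> 1.0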

  6. The Fermilab computing farms in 2000

    International Nuclear Information System (INIS)

    Troy Dawson

    2001-01-01

    The year 2000 was a year of evolutionary change for the Fermilab computer farms. Additional compute capacity was acquired by the addition of PCs for the CDF, D0 and CMS farms. This was done in preparation for Run 2 production and for CMS Monte Carlo production. Additional I/O capacity was added for all the farms. This continues the trend to standardize the I/O systems on the SGI O2x00 architecture. Strong authentication was installed on the CDF and D0 farms. The farms continue to provide large CPU resources for experiments and those users whose calculations benefit from large CPU/low IO resources. The user community will change in 2001 now that the 1999 fixed-target experiments have almost finished processing and Run 2, SDSS, miniBooNE, MINOS, BTeV, and other future experiments and projects will be the major users in the future

  7. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously

  8. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  9. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    Science.gov (United States)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  10. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  11. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  12. The Trope Tank: A Laboratory with Material Resources for Creative Computing

    Directory of Open Access Journals (Sweden)

    Nick Montfort

    2014-12-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2014v10n2p53 Principles for organizing and making use of a laboratory with material computing resources are articulated. This laboratory, the Trope Tank, is a facility for teaching, research, and creative collaboration that offers hardware (in working condition and set up for use) from the 1970s, 1980s, and 1990s, including videogame systems, home computers, and an arcade cabinet. To aid in investigating the material history of texts, the lab has a small 19th century letterpress, a typewriter, a print terminal, and dot-matrix printers. Other resources include controllers, peripherals, manuals, books, and software on physical media. These resources are used for teaching, loaned for local exhibitions and presentations, and accessed by researchers and artists. The space is primarily a laboratory (rather than a library, studio, or museum), so materials are organized by platform and intended use. Textual information about the historical contexts of the available systems is provided, and resources are set up to allow easy operation, and even casual use, by researchers, teachers, students, and artists.

  13. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users.

  14. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that the quantum capacity based on the semi-computability concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  15. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  16. Assessing water resources adaptive capacity to climate change impacts in the Pacific Northwest Region of North America

    Directory of Open Access Journals (Sweden)

    A. F. Hamlet

    2011-05-01

    Full Text Available Climate change impacts in the Pacific Northwest Region of North America (PNW) are projected to include increasing temperatures and changes in the seasonality of precipitation (increasing precipitation in winter, decreasing precipitation in summer). Changes in precipitation are also spatially varying, with the northwestern parts of the region generally experiencing greater increases in cool season precipitation than the southeastern parts. These changes in climate are projected to cause loss of snowpack and associated streamflow timing shifts which will increase cool season (October–March) flows and decrease warm season (April–September) flows and water availability. Hydrologic extremes such as the 100 yr flood and extreme low flows are also expected to change, although these impacts are not spatially homogeneous and vary with mid-winter temperatures and other factors. These changes have important implications for natural ecosystems affected by water, and for human systems.

    The PNW is endowed with extensive water resources infrastructure and well-established and well-funded management agencies responsible for ensuring that water resources objectives (such as water supply, water quality, flood control, hydropower production, environmental services, etc.) are met. Likewise, access to observed hydrological, meteorological, and climatic data and forecasts is in general exceptionally good in the United States and Canada, and is often supported by federally funded programs that ensure that these resources are freely available to water resources practitioners, policy makers, and the general public.

    Access to these extensive resources support the argument that at a technical level the PNW has high capacity to deal with the potential impacts of natural climate variability on water resources. To the extent that climate change will manifest itself as moderate changes in variability or extremes, we argue that existing water resources

  17. Assessing water resources adaptive capacity to climate change impacts in the Pacific Northwest Region of North America

    Science.gov (United States)

    Hamlet, A. F.

    2011-05-01

    Climate change impacts in Pacific Northwest Region of North America (PNW) are projected to include increasing temperatures and changes in the seasonality of precipitation (increasing precipitation in winter, decreasing precipitation in summer). Changes in precipitation are also spatially varying, with the northwestern parts of the region generally experiencing greater increases in cool season precipitation than the southeastern parts. These changes in climate are projected to cause loss of snowpack and associated streamflow timing shifts which will increase cool season (October-March) flows and decrease warm season (April-September) flows and water availability. Hydrologic extremes such as the 100 yr flood and extreme low flows are also expected to change, although these impacts are not spatially homogeneous and vary with mid-winter temperatures and other factors. These changes have important implications for natural ecosystems affected by water, and for human systems. The PNW is endowed with extensive water resources infrastructure and well-established and well-funded management agencies responsible for ensuring that water resources objectives (such as water supply, water quality, flood control, hydropower production, environmental services, etc.) are met. Likewise, access to observed hydrological, meteorological, and climatic data and forecasts is in general exceptionally good in the United States and Canada, and is often supported by federally funded programs that ensure that these resources are freely available to water resources practitioners, policy makers, and the general public. Access to these extensive resources support the argument that at a technical level the PNW has high capacity to deal with the potential impacts of natural climate variability on water resources. To the extent that climate change will manifest itself as moderate changes in variability or extremes, we argue that existing water resources infrastructure and institutional arrangements

  18. An entropy theorem for computing the capacity of weakly (d, k)-constrained sequences

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Schouhamer Immink, K.A.

    2000-01-01

    We find an analytic expression for the maximum of the normalized entropy $-\sum_{i\in T} p_i \ln p_i \big/ \sum_{i\in T} i\,p_i$, where the set T is the disjoint union of sets $S_n$ of positive integers that are assigned probabilities $P_n$, with $\sum_n P_n = 1$. This result is applied to the computation of the capacity of weakly (d,k)-constrained sequences.
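
    For the standard (non-weak) case, the maximization has a closed form that is easy to check numerically: with phrase lengths i = d+1, ..., k+1, the optimal distribution is p_i = x^i, where x solves sum_i x^i = 1, and the maximum normalized entropy equals ln(1/x), the Shannon capacity in nats. The sketch below reproduces this classical value; the weakly constrained generalization treated in the paper is more involved.

        # Capacity of a (d,k) runlength constraint via the normalized-entropy
        # maximum: solve sum_{i=d+1}^{k+1} x**i = 1 by bisection, capacity = ln(1/x).
        import math

        def capacity_nats(d, k):
            lengths = range(d + 1, k + 2)
            lo, hi = 0.0, 1.0            # the left side is increasing in x
            for _ in range(100):
                x = (lo + hi) / 2
                if sum(x ** i for i in lengths) > 1.0:
                    hi = x
                else:
                    lo = x
            return math.log(1.0 / x)

        print(capacity_nats(1, 3) / math.log(2))  # (1,3) constraint: ~0.5515 bits/symbol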

  19. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique
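
    The paper's dataflow analysis is not reproduced here, but the flavor of conservative buffer-capacity reasoning can be shown with a classical textbook bound for a single synchronous dataflow edge (not the throughput-aware technique of the paper): a producer firing p tokens and a consumer firing c tokens per execution can run without deadlock with a buffer of p + c - gcd(p, c) tokens.

        # Classical conservative buffer bound for one SDF edge.
        from math import gcd

        def min_buffer(p, c):
            """Smallest deadlock-free buffer for an edge with rates p (produce) and c (consume)."""
            return p + c - gcd(p, c)

        print(min_buffer(3, 2))  # -> 4 tokens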

  20. Why Are We Talking About Capacity Markets?

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany

    2017-06-28

    Revenue sufficiency or 'missing money' concerns in wholesale electricity markets are important because they could lead to resource (or capacity) adequacy shortfalls. Capacity markets or other capacity-based payments are among the proposed solutions to remedy these challenges. This presentation provides a high-level overview of the importance of and process for ensuring resource adequacy, and then discusses considerations for capacity markets under futures with high penetrations of variable resources such as wind and solar.

  1. A novel resource management method of providing operating system as a service for mobile transparent computing.

    Science.gov (United States)

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  2. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    Full Text Available This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user’s requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  3. Focused attention improves working memory: implications for flexible-resource and discrete-capacity models.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Lin, Hsuan-Yu; Oberauer, Klaus

    2014-10-01

    Performance in working memory (WM) tasks depends on the capacity for storing objects and on the allocation of attention to these objects. Here, we explored how capacity models need to be augmented to account for the benefit of focusing attention on the target of recall. Participants encoded six colored disks (Experiment 1) or a set of one to eight colored disks (Experiment 2) and were cued to recall the color of a target on a color wheel. In the no-delay condition, the recall-cue was presented after a 1,000-ms retention interval, and participants could report the retrieved color immediately. In the delay condition, the recall-cue was presented at the same time as in the no-delay condition, but the opportunity to report the color was delayed. During this delay, participants could focus attention exclusively on the target. Responses deviated less from the target's color in the delay than in the no-delay condition. Mixture modeling assigned this benefit to a reduction in guessing (Experiments 1 and 2) and transposition errors (Experiment 2). We tested several computational models implementing flexible or discrete capacity allocation, aiming to explain both the effect of set size, reflecting the limited capacity of WM, and the effect of delay, reflecting the role of attention to WM representations. Both classes of models fit the data better when a spatially graded source of transposition error was added to their assumptions. The benefits of focusing attention could be explained by allocating to this object a higher proportion of the capacity to represent color.
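
    The mixture modeling referred to here is commonly done with a two-component model: responses come from memory (a von Mises distribution around the target) with probability 1 - g, or from guessing (uniform on the circle) with probability g (cf. Zhang & Luck, 2008). A minimal fitting sketch on simulated data follows; the paper's full model additionally includes transposition (swap) errors, which are omitted here.

        # Fit guess rate g and memory precision kappa by maximum likelihood.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(0)
        errors = np.concatenate([stats.vonmises.rvs(8.0, size=400, random_state=rng),
                                 rng.uniform(-np.pi, np.pi, size=100)])  # 20% guesses

        def neg_log_lik(params):
            g, kappa = params
            pdf = (1 - g) * stats.vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.log(pdf).sum()

        fit = optimize.minimize(neg_log_lik, x0=[0.3, 4.0],
                                bounds=[(1e-3, 0.999), (0.1, 100.0)])
        print("guess rate g = %.2f, precision kappa = %.1f" % tuple(fit.x))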

  4. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  5. Resource Planning in Glaucoma: A Tool to Evaluate Glaucoma Service Capacity.

    Science.gov (United States)

    Batra, Ruchika; Sharma, Hannah E; Elaraoud, Ibrahim; Mohamed, Shabbir

    2017-12-28

The National Patient Safety Agency (2009) publication advising timely follow-up of patients with established glaucoma followed several reported instances of visual loss due to postponed appointments and patients lost to follow-up. The Royal College of Ophthalmologists Quality Standards Development Group stated that all hospital appointments should occur within 15% of the intended follow-up period. The aims were to determine whether: (1) glaucoma follow-up appointments at a teaching hospital occur within the requested time; (2) appointments are requested at appropriate intervals based on the NICE guidelines; and (3) the capacity of the glaucoma service is adequate. Methods: A two-part audit was undertaken of 98 and 99 consecutive patients, respectively, attending specialist glaucoma clinics. In the first part, the reasons for delayed appointments were recorded. In the second part, the requested follow-up was compared with the NICE guidelines where applicable. Based on the findings, changes were implemented and a re-audit of 100 patients was carried out. The initial audit found that although clinical decisions regarding follow-up intervals were 100% compliant with the NICE guidelines where applicable, 24% of appointments were delayed beyond 15% of the requested period, due to administrative errors and inadequate capacity, leading to significant clinical deterioration in two patients. Following the introduction of an electronic appointment tracker and increased clinical capacity created by extra clinics and clinicians, the re-audit found a marked decrease in the percentage of appointments being delayed (9%). This audit is a useful tool to evaluate glaucoma service provision, assist in resource planning for the service, and bring about change in a non-confrontational way. It can be widely applied and adapted for use in other medical specialities.

  6. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered as limited computing devices. Today, users become more demanding and expect to execute computational intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC integrates mobile computing and Cloud Computing (CC in order to extend capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs such as limited battery lifetime, limited processing capabilities, and limited storage capacity by offloading the execution and workload to other rich systems with better performance and resources. This paper presents the current offloading frameworks, computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that requires further research.

  7. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Science.gov (United States)

    Anupama, Jigisha; Francescatto, Margherita; Rahman, Farzana; Fatima, Nazeefa; DeBlasio, Dan; Shanmugam, Avinash Kumar; Satagopam, Venkata; Santos, Alberto; Kolekar, Pandurang; Michaut, Magali; Guney, Emre

    2018-01-01

    Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  8. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Directory of Open Access Journals (Sweden)

    Jigisha Anupama

    2018-01-01

    Full Text Available Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  9. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

The availability and quality of computer resources during the software development process were speculated to have a measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data were extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product, as exemplified by the subject NASA data, was examined. Based upon the results, a number of computer resource-related implications are provided.

  10. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  11. Alluvial Diamond Resource Potential and Production Capacity Assessment of Ghana

    Science.gov (United States)

    Chirico, Peter G.; Malpeli, Katherine C.; Anum, Solomon; Phillips, Emily C.

    2010-01-01

In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflict concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by both diamond-producing and diamond-importing countries. Over 70 countries were included as members at the end of 2007. To prevent trade in 'conflict' diamonds while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was to assess the alluvial diamond resource endowment and current production capacity of the alluvial diamond-mining sector in Ghana. A modified volume and grade methodology was used to estimate the remaining diamond reserves within the Birim and Bonsa diamond fields. The production capacity of the sector was estimated using a formulaic expression of the number of workers reported in the sector, their productivity, and the average grade of deposits mined. This study estimates that there are approximately 91,600,000 carats of alluvial diamonds remaining in both the Birim and Bonsa diamond fields: 89,000,000 carats in the Birim and 2,600,000 carats in the Bonsa. Production capacity is calculated to be 765,000 carats per year, based on the formula used and available data on the number of workers and worker productivity. Annual production is highly dependent on the international diamond market and prices, the numbers of seasonal workers actively mining in the sector, and
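The formulaic capacity expression described above lends itself to a back-of-the-envelope calculation: worker count times per-worker gravel throughput times average deposit grade. The sketch below illustrates this; the input figures are hypothetical placeholders chosen only so the product lands on the reported order of magnitude (765,000 carats per year), not values taken from the USGS assessment.

```python
# Hypothetical illustration of a volume-and-grade style capacity estimate:
# annual capacity = workers x gravel washed per worker per year x grade.

def production_capacity(workers: int,
                        m3_per_worker_year: float,
                        carats_per_m3: float) -> float:
    """Annual alluvial diamond production capacity in carats."""
    return workers * m3_per_worker_year * carats_per_m3

# Placeholder inputs (not from the assessment): 17,000 workers, each
# washing 300 m^3 of gravel a year at 0.15 carats per cubic metre.
print(f"{production_capacity(17_000, 300.0, 0.15):,.0f} carats/year")
```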

  12. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  13. Hospitals Capability in Response to Disasters Considering Surge Capacity Approach

    Directory of Open Access Journals (Sweden)

    Gholamreza Khademipour

    2016-01-01

Full Text Available Background: Man-made and natural disasters have adverse effects, with both apparent and unknown consequences. Among the various components of disaster management in the health sector, the most important role is performed by health-treatment systems, especially hospitals. Therefore, the present study aimed to evaluate the surge capacity of hospitals of Kerman Province in disaster in 2015. Materials and Methods: This is a quantitative study, conducted on the private, military, and medical sciences hospitals of Kerman Province. The sampling method was total count, and data collection was done by questionnaire. The first section of the questionnaire included demographic information of the studied hospitals, and the second part examined hospital capacity in response to disasters in 4 fields: equipment, physical space, human resources, and applied programs. The extracted data were analyzed by descriptive statistics. Results: The mean capability of implementing the surge capacity programs by hospitals of Kerman Province in disasters, across the 4 fields of equipment, physical space, human resources, and applied programs, was evaluated as 7.33% (weak). The surge capacity capability of state hospitals in disasters was computed as 8%, a more suitable condition compared to private hospitals (6.07%). Conclusion: Based on the results of the study and the significance of hospital preparedness in response to disasters, it is proposed that managers of the studied hospitals take measures to promote hospital response capacity to disasters based on the 4 components of increasing hospital capacity.

  14. Surgical resource utilization in urban terrorist bombing: a computer simulation.

    Science.gov (United States)

    Hirshberg, A; Stein, M; Walden, R

    1999-09-01

The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulative data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
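For readers unfamiliar with the technique, the fragment below is a minimal discrete-event sketch in the spirit of the model described above, written with the simpy library: critical casualties compete for a limited pool of surgeons and shock rooms, so admitting capacity is bounded by whichever resource saturates first. Staff counts, arrival rates, and treatment times are invented, not the paper's calibrated parameters.

```python
import random
import simpy

def casualty(env, name, surgeons, shock_rooms):
    arrived = env.now
    with surgeons.request() as s, shock_rooms.request() as r:
        yield s & r                                    # need surgeon AND room
        yield env.timeout(random.expovariate(1 / 40))  # ~40 min of treatment
        print(f"{name}: waited {env.now - arrived:5.1f} min")

def arrivals(env, surgeons, shock_rooms):
    for i in range(20):                                # a surge of casualties
        env.process(casualty(env, f"casualty-{i:02d}", surgeons, shock_rooms))
        yield env.timeout(random.expovariate(1 / 2))   # ~one every 2 minutes

random.seed(1)
env = simpy.Environment()
surgeons = simpy.Resource(env, capacity=4)             # invented staff profile
shock_rooms = simpy.Resource(env, capacity=2)          # the likely bottleneck
env.process(arrivals(env, surgeons, shock_rooms))
env.run()
```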

  15. Resource planning for gas utilities: Using a model to analyze pivotal issues

    Energy Technology Data Exchange (ETDEWEB)

    Busch, J.F.; Comnes, G.A.

    1995-11-01

With the advent of wellhead price decontrols that began in the late 1970s and the development of open-access pipelines in the 1980s and 90s, gas local distribution companies (LDCs) now have increased responsibility for their gas supplies and face an increasingly complex array of supply and capacity choices. Heretofore this responsibility had been shared with the interstate pipelines that provide bundled firm gas supplies. Moreover, gas supply and deliverability (capacity) options have multiplied as the pipeline network becomes increasingly interconnected and as new storage projects are developed. There is now a fully-functioning financial market for commodity price hedging instruments and, on interstate pipelines, a secondary market (called capacity release) now exists. As a result of these changes in the natural gas industry, interest in resource planning and computer modeling tools for LDCs is increasing. Although in some ways the planning time horizon has become shorter for the gas LDC, the responsibility conferred on the LDC and the complexity of the planning problem have increased. We examine current gas resource planning issues in the wake of the Federal Energy Regulatory Commission's (FERC) Order 636. Our goal is twofold: (1) to illustrate the types of resource planning methods and models used in the industry and (2) to illustrate some of the key tradeoffs among types of resources, reliability, and system costs. To assist us, we utilize a commercially-available dispatch and resource planning model and examine four types of resource planning problems: the evaluation of new storage resources, the evaluation of buyback contracts, the computation of avoided costs, and the optimal tradeoff between reliability and system costs. To make the illustration of methods meaningful yet tractable, we developed a prototype LDC and used it for the majority of our analysis.
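As a flavour of what such dispatch models do internally, here is a toy linear program (solved with scipy) that meets a single peak-day demand at least cost from firm pipeline capacity, storage withdrawals, and spot purchases. The prices and deliverability limits are invented, and a real LDC model would add many periods, contracts, and reliability constraints.

```python
from scipy.optimize import linprog

# Decision variables: deliveries [pipeline, storage, spot] in MMcf/day.
cost = [2.0, 1.2, 3.5]          # invented variable cost of each source, $/Mcf
demand = 120.0                  # invented peak-day demand, MMcf
bounds = [(0, 80),              # firm pipeline deliverability limit
          (0, 50),              # storage withdrawal limit
          (0, None)]            # spot supply treated as unconstrained

res = linprog(c=cost,
              A_eq=[[1, 1, 1]], b_eq=[demand],  # supplies must meet demand
              bounds=bounds, method="highs")
print(dict(zip(["pipeline", "storage", "spot"], res.x)))
```

With these numbers the cheapest plan draws storage to its limit, fills the rest from the pipeline, and would touch spot gas only if demand exceeded the other two sources combined.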

  16. School nutritional capacity, resources and practices are associated with availability of food/beverage items in schools.

    Science.gov (United States)

    Mâsse, Louise C; de Niet, Judith E

    2013-02-19

    The school food environment is important to target as less healthful food and beverages are widely available at schools. This study examined whether the availability of specific food/beverage items was associated with a number of school environmental factors. Principals from elementary (n=369) and middle/high schools (n=118) in British Columbia (BC), Canada completed a survey measuring characteristics of the school environment. Our measurement framework integrated constructs from the Theories of Organizational Change and elements from Stillman's Tobacco Policy Framework adapted for obesity prevention. Our measurement framework included assessment of policy institutionalization of nutritional guidelines at the district and school levels, climate, nutritional capacity and resources (nutritional resources and participation in nutritional programs), nutritional practices, and school community support for enacting stricter nutritional guidelines. We used hierarchical mixed-effects logistic regression analyses to examine associations with the availability of fruit, vegetables, pizza/hamburgers/hot dogs, chocolate candy, sugar-sweetened beverages, and french fried potatoes. In elementary schools, fruit and vegetable availability was more likely among schools that have more nutritional resources (OR=6.74 and 5.23, respectively). In addition, fruit availability in elementary schools was highest in schools that participated in the BC School Fruit and Vegetable Nutritional Program and the BC Milk program (OR=4.54 and OR=3.05, respectively). In middle/high schools, having more nutritional resources was associated with vegetable availability only (OR=5.78). Finally, middle/high schools that have healthier nutritional practices (i.e., which align with upcoming provincial/state guidelines) were less likely to have the following food/beverage items available at school: chocolate candy (OR= .80) and sugar-sweetened beverages (OR= .76). School nutritional capacity, resources

  17. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

Cloud computing, first proposed by Google in the United States, is an Internet-centred approach that provides a standard, open network sharing service. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of the actual needs of teaching. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital educational resources in current higher education. Against a cloud computing background, this paper analyses the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. Drawing on cloud computing's characteristics of mass storage, efficient operation, and low cost, the author explores the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  18. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  19. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

The goal of this project was to build an application that calculates the computing resources needed by the LHCb experiment for data processing and analysis, and predicts their evolution in future years. The source code was developed in the Python programming language, and the application was built and developed in CERN GitLab. This application will facilitate the calculation of the resources required by LHCb in both qualitative and quantitative aspects. The granularity of the computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
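The abstract does not include code, but the underlying arithmetic is straightforward; a hypothetical sketch of weekly-granularity bookkeeping might look as follows, where the class name, rates, and event sizes are placeholders rather than LHCb computing-model parameters.

```python
from dataclasses import dataclass

@dataclass
class Week:
    events: float            # events to process in the week
    cpu_s_per_event: float   # CPU seconds needed per event
    kb_per_event: float      # storage footprint per event, kB

def weekly_needs(week: Week) -> dict:
    """Translate one week's workload into CPU-hours and disk volume."""
    return {
        "cpu_hours": week.events * week.cpu_s_per_event / 3600,
        "disk_tb": week.events * week.kb_per_event / 1e9,
    }

# Placeholder workload: 2e9 events, 50 CPU-s and 60 kB each.
print(weekly_needs(Week(events=2e9, cpu_s_per_event=50, kb_per_event=60)))
```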

  20. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    Science.gov (United States)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  1. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year of acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post-test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived mean knowledge scores averaged 3.8; after viewing the resource program, post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post-test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post-testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  2. Alluvial diamond resource potential and production capacity assessment of the Central African Republic

    Science.gov (United States)

    Chirico, Peter G.; Barthelemy, Francis; Ngbokoto, Francois A.

    2010-01-01

In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflict concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by diamond-producing and diamond-importing countries. Over 70 countries were included as members of the KPCS at the end of 2007. To prevent trade in "conflict diamonds" while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was (1) to assess the naturally occurring endowment of diamonds in the Central African Republic (potential resources) based on geological evidence, previous studies, and recent field data and (2) to assess the diamond-production capacity and measure the intensity of mining activity. Several possible methods can be used to estimate the potential diamond resource. However, because there is generally a lack of sufficient and consistent data recording all diamond mining in the Central African Republic and because time to conduct fieldwork and accessibility to the diamond mining areas are limited, two different methodologies were used: the volume and grade approach and the content per kilometer approach. Estimates are that approximately 39,000,000 carats of alluvial diamonds remain in the eastern and western zones of the CAR combined. This amount is roughly twice the total amount of diamonds reportedly exported from the Central African Republic since 1931. Production capacity is

  3. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. For instance, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency, and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns for designing a cooperative D2D computational resource sharing system, since mutual communication may not be stably available due to user mobility. A previous endeavour has demonstrated and proven how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower average task execution time. There are doubts about whether this type of task scheduling scheme, though effective, is fair among users. In other words, it can be unfair for users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users' computational task cooperation will be recorded on the public blockchain ledger in the system as transactions, and each user's credit balance is easily accessible from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that with a minor sacrifice of average task execution time, the level of fairness can obtain a major enhancement.
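A minimal, non-cryptographic sketch of the credit idea is given below: completed cooperative tasks are appended to a ledger as transactions, a user's balance is credits earned as a helper minus credits spent as a requester, and the supernode's policy (here simply "serve the highest balance first") can read balances straight off the ledger. Field names and the scheduling rule are illustrative assumptions, not the paper's protocol.

```python
ledger = []   # append-only list standing in for the public blockchain ledger

def record_task(helper: str, requester: str, credit: float) -> None:
    """Record one completed cooperative computation as a transaction."""
    ledger.append({"helper": helper, "requester": requester, "credit": credit})

def balance(user: str) -> float:
    """Credits earned as a helper minus credits spent as a requester."""
    return sum(t["credit"] if t["helper"] == user else -t["credit"]
               for t in ledger if user in (t["helper"], t["requester"]))

def pick_next_requester(candidates: list[str]) -> str:
    # Supernode policy sketch: users who contributed most get served first.
    return max(candidates, key=balance)

record_task("alice", "bob", 3.0)     # alice computed a task for bob
record_task("bob", "carol", 1.0)     # bob computed a task for carol
print(pick_next_requester(["alice", "bob", "carol"]))   # -> alice
```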

  4. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  5. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  6. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a resource at the level of other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study incorporating the learning game "Global Conflicts: Latin America" as a resource into the teaching and learning of a course involving the two subjects "English language learning" and "Social studies" in the final year of a Danish high school. The study adopts an explorative research design approach and investigates...

  7. Development of urbanization in arid and semi arid regions based on the water resource carrying capacity -- a case study of Changji, Xinjiang

    Science.gov (United States)

    Xiao, H.; Zhang, L.; Chai, Z.

    2017-07-01

The arid and semi-arid regions of China have a relatively weak economic foundation, limited independent development capacity, and a low level of urbanization. New urbanization within these regions faces severe challenges brought by resource constraints. In this paper, we selected the Changji Hui Autonomous Prefecture, Xinjiang Uyghur Autonomous Region, as the study area. Based on research into the main water demands of households, agriculture, and industry, we found that the agricultural planting structure is the key water-consumption factor. Finally, we suggest that more attention should be paid to the rational utilization of water resources and the population carrying capacity, and to adjusting and upgrading the industrial structure, in coordination with the Silk Road Economic Belt.

  8. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvements via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges, and community. In this talk, the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the specific material developed for the projects.

  9. Tactical resource allocation and elective patient admission planning in care processes.

    Science.gov (United States)

    Hulshof, Peter J H; Boucherie, Richard J; Hans, Erwin W; Hurink, Johann L

    2013-06-01

    Tactical planning of resources in hospitals concerns elective patient admission planning and the intermediate term allocation of resource capacities. Its main objectives are to achieve equitable access for patients, to meet production targets/to serve the strategically agreed number of patients, and to use resources efficiently. This paper proposes a method to develop a tactical resource allocation and elective patient admission plan. These tactical plans allocate available resources to various care processes and determine the selection of patients to be served that are at a particular stage of their care process. Our method is developed in a Mixed Integer Linear Programming (MILP) framework and copes with multiple resources, multiple time periods and multiple patient groups with various uncertain treatment paths through the hospital, thereby integrating decision making for a chain of hospital resources. Computational results indicate that our method leads to a more equitable distribution of resources and provides control of patient access times, the number of patients served and the fraction of allocated resource capacity. Our approach is generic, as the base MILP and the solution approach allow for including various extensions to both the objective criteria and the constraints. Consequently, the proposed method is applicable in various settings of tactical hospital management.
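As a concrete, stripped-down illustration of such a formulation, the sketch below (using the PuLP modelling library) admits integer numbers of patients from two invented groups over two periods, maximising throughput subject to a shared operating-room capacity and strategic targets. It is a sketch under stated assumptions, not the paper's full MILP with uncertain care paths.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

groups, periods = ["hip", "cataract"], [1, 2]
or_hours = {"hip": 4, "cataract": 1}      # invented OR-hours per admission
target = {"hip": 30, "cataract": 80}      # invented strategic targets
capacity = 70                             # invented OR-hours per period

x = {(g, t): LpVariable(f"admit_{g}_{t}", lowBound=0, cat="Integer")
     for g in groups for t in periods}

prob = LpProblem("tactical_admission_plan", LpMaximize)
prob += lpSum(x.values())                               # serve many patients
for t in periods:                                       # shared capacity
    prob += lpSum(or_hours[g] * x[g, t] for g in groups) <= capacity
for g in groups:                                        # respect targets
    prob += lpSum(x[g, t] for t in periods) <= target[g]

prob.solve()
print({k: v.value() for k, v in x.items()})
```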

  10. Application of microarray analysis on computer cluster and cloud platforms.

    Science.gov (United States)

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
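The computational independence the authors exploit is easy to see in miniature: each permutation below is a self-contained task, so the same map can be fanned out to cluster nodes or cloud instances. This local sketch uses the standard library's process pool and an invented two-sample permutation test rather than the authors' microarray procedures.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)            # toy 'control' expression values
b = rng.normal(0.3, 1.0, 500)            # toy 'treatment' expression values
observed = b.mean() - a.mean()

def permuted_diff(seed: int) -> float:
    """One independent permutation iteration; trivially parallelisable."""
    r = np.random.default_rng(seed)
    pooled = r.permutation(np.concatenate([a, b]))
    return pooled[500:].mean() - pooled[:500].mean()

if __name__ == "__main__":
    with Pool() as pool:                 # swap for a cluster/cloud scheduler
        null = pool.map(permuted_diff, range(10_000))
    print("permutation p-value:", np.mean(np.abs(null) >= abs(observed)))
```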

  11. Negative quasi-probability as a resource for quantum computation

    International Nuclear Information System (INIS)

    Veitch, Victor; Ferrie, Christopher; Emerson, Joseph; Gross, David

    2012-01-01

    A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. (paper)

  12. School nutritional capacity, resources and practices are associated with availability of food/beverage items in schools

    Science.gov (United States)

    2013-01-01

    Background The school food environment is important to target as less healthful food and beverages are widely available at schools. This study examined whether the availability of specific food/beverage items was associated with a number of school environmental factors. Methods Principals from elementary (n = 369) and middle/high schools (n = 118) in British Columbia (BC), Canada completed a survey measuring characteristics of the school environment. Our measurement framework integrated constructs from the Theories of Organizational Change and elements from Stillman’s Tobacco Policy Framework adapted for obesity prevention. Our measurement framework included assessment of policy institutionalization of nutritional guidelines at the district and school levels, climate, nutritional capacity and resources (nutritional resources and participation in nutritional programs), nutritional practices, and school community support for enacting stricter nutritional guidelines. We used hierarchical mixed-effects logistic regression analyses to examine associations with the availability of fruit, vegetables, pizza/hamburgers/hot dogs, chocolate candy, sugar-sweetened beverages, and french fried potatoes. Results In elementary schools, fruit and vegetable availability was more likely among schools that have more nutritional resources (OR = 6.74 and 5.23, respectively). In addition, fruit availability in elementary schools was highest in schools that participated in the BC School Fruit and Vegetable Nutritional Program and the BC Milk program (OR = 4.54 and OR = 3.05, respectively). In middle/high schools, having more nutritional resources was associated with vegetable availability only (OR = 5.78). Finally, middle/high schools that have healthier nutritional practices (i.e., which align with upcoming provincial/state guidelines) were less likely to have the following food/beverage items available at school: chocolate candy (OR = .80) and sugar

  13. Estimating social carrying capacity through computer simulation modeling: an application to Arches National Park, Utah

    Science.gov (United States)

    Benjamin Wang; Robert E. Manning; Steven R. Lawson; William A. Valliere

    2001-01-01

    Recent research and management experience has led to several frameworks for defining and managing carrying capacity of national parks and related areas. These frameworks rely on monitoring indicator variables to ensure that standards of quality are maintained. The objective of this study was to develop a computer simulation model to estimate the relationships between...

  14. A Constraint programming-based genetic algorithm for capacity output optimization

    Directory of Open Access Journals (Sweden)

    Kate Ean Nee Goh

    2014-10-01

Full Text Available Purpose: The manuscript presents an investigation into a constraint programming-based genetic algorithm (CPGA) for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both the machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study. The constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple mutually influencing constraints in resource availability, process flow, and product demand. The findings prove that CPGA is a practical and efficient alternative for optimizing capacity output, allowing the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.
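To make the hybrid concrete, here is a toy version of the idea under invented data: a genetic algorithm searches machine-to-product assignments, while a hard constraint check (standing in for the constraint-programming layer) zeroes the fitness of any plan that misses contractual demand minima. It illustrates the mechanism only, not the paper's CPGA.

```python
import random

machines, hours = 10, 160                  # identical machines, monthly hours
uph = {"A": 50, "B": 80}                   # invented units per hour
min_demand = {"A": 20_000, "B": 30_000}    # invented demand minima

def output(plan):                          # plan[i] = product run on machine i
    return {p: sum(uph[p] * hours for m in plan if m == p) for p in uph}

def fitness(plan):
    out = output(plan)
    if any(out[p] < min_demand[p] for p in uph):   # constraint violated
        return 0
    return sum(out.values())                       # total capacity output

random.seed(7)
pop = [[random.choice("AB") for _ in range(machines)] for _ in range(40)]
for _ in range(200):                               # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                               # keep the best plans
    pop = elite + [[g if random.random() > 0.1 else random.choice("AB")
                    for g in random.choice(elite)] for _ in range(30)]
pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]), output(pop[0]))
```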

  15. Expanding Capacity and Promoting Inclusion in Introductory Computer Science: A Focus on Near-Peer Mentor Preparation and Code Review

    Science.gov (United States)

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…

  16. Computer calculation of heat capacity of natural gases over a wide range of pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Dranchuk, P.M. (Alberta Univ., Edmonton, AB (Canada)); Abou-Kassem, J.H. (Pennsylvania State Univ., University Park, PA (USA))

    1992-04-01

A method is presented whereby specific heats or heat capacities of natural gases, both sweet and sour, at elevated pressures and temperatures may be made amenable to modern-day machine calculation. The method involves developing a correlation for the ideal isobaric heat capacity as a function of gas gravity and pseudo-reduced temperature over the temperature range of 300 to 1500 K, and a mathematical equation for the isobaric heat capacity departure based on accepted thermodynamic principles applied to an equation of state that adequately describes the behavior of gases to which the Standing and Katz Z-factor correlation applies. The heat capacity departure equation is applicable over the range 0.2 ≤ Pr ≤ 15 and 1.05 ≤ Tr ≤ 3, where Pr and Tr refer to the reduced pressure and temperature, respectively. The significance of the method presented lies in its utility and adaptability to computer applications. 25 refs., 2 figs., 4 tabs.
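In symbols (mine, not the paper's notation), the decomposition being computed is the ideal-gas isobaric heat capacity, correlated from gas gravity and pseudo-reduced temperature, plus an equation-of-state departure term valid over the stated reduced-property ranges:

```latex
\[
  c_p(P,T) \;=\; c_p^{\mathrm{ideal}}(\gamma_g,\, T_{pr})
           \;+\; \Delta c_p(P_{pr},\, T_{pr}),
  \qquad 0.2 \le P_{pr} \le 15, \quad 1.05 \le T_{pr} \le 3 .
\]
```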

  17. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    Science.gov (United States)

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

  18. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment that shows interesting and valuable results.
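For reference only, these are the classical stretched-exponential relaxation law that the paper's modified model builds on, and the Tsallis q-exponential underlying the non-extensive-entropy connection; the paper's specific modification is not reproduced here:

```latex
\[
  f(t) \;=\; \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
  \qquad 0 < \beta \le 1,
\]
\[
  e_q(x) \;=\; \bigl[\, 1 + (1-q)\, x \,\bigr]^{\frac{1}{1-q}},
  \qquad e_q(x) \to e^{x} \ \text{as}\ q \to 1 .
\]
```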

  19. 78 FR 77161 - Grant Program To Build Tribal Energy Development Capacity

    Science.gov (United States)

    2013-12-20

    ... Feasibility studies and energy resource assessments; Purchase of resource assessment data; Research and... used to eliminate capacity gaps or obtain the development of energy resource development capacity... eliminate any identified capacity gaps; (f) Objectives of the proposal describing how the proposed project...

  20. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  1. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rates of future LHC operation, together with high-pileup interactions, improved usage of the current computing facilities and new technologies became necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs, using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  2. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for Japanese, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners' resources for the Japanese language. We particularly concentrate on a few collocations at the beginner's level, where we demonstrate their treatment across the various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a bigger foreign-language learning burden than predictable ones.

  3. Capacity market design and renewable energy: Performance incentives, qualifying capacity, and demand curves

    Energy Technology Data Exchange (ETDEWEB)

    Botterud, Audun; Levin, Todd; Byers, Conleigh

    2018-01-01

    A review of capacity markets in the United States in the context of increasing levels of variable renewable energy finds substantial differences with respect to incentives for operational performance, methods to calculate qualifying capacity for variable renewable energy and energy storage, and demand curves for capacity. The review also reveals large differences in historical capacity market clearing prices. The authors conclude that electricity market design must continue to evolve to achieve cost-effective policies for resource adequacy.

  4. The Development of Educational and/or Training Computer Games for Students with Disabilities

    Science.gov (United States)

    Kwon, Jungmin

    2012-01-01

    Computer and video games have much in common with the strategies used in special education. Free resources for game development are becoming more widely available, so lay computer users, such as teachers and other practitioners, now have the capacity to develop games using a low budget and a little self-teaching. This article provides a guideline…

  5. AN INVESTIGATION OF RELATIONSHIP BETWEEN LEADERSHIP STYLES OF HUMAN RESOURCES MANAGER, CREATIVE PROBLEM SOLVING CAPACITY AND CAREER SATISFACTION: AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Hüseyin YILMAZ

    2016-12-01

    Full Text Available The aim of this study is to examine empirically the relationships between the leadership behaviors of human resources managers, the organization's creative problem-solving capacity, and employees' career satisfaction. The data required for the research were obtained through a structured questionnaire from 130 employees working in five-star hotels in the province of Aydin. Factor analysis identified democratic, easygoing, participative-transformational, laissez-faire and autocratic leadership dimensions. The analysis found significant positive relationships between leadership style and the dependent variables. Regression analysis revealed that the relationship between leadership and creative problem-solving capacity was stronger for the democratic leadership style than for the other styles, while the organization's creative problem-solving capacity explained the largest share of the variation in employees' career satisfaction. Studies analyzing the relationships between leadership behavior, creative problem-solving capacity and career satisfaction, although these variables are very important for organizations in the context of human resources, appear to be quite limited. By analyzing the relationships between the aforementioned variables, the study is expected to make significant contributions to the literature and to form a basis for future research.

  6. The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.

    Science.gov (United States)

    Gordon, Anne C. L.; Olson, David R.

    1998-01-01

    Tested hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r=.64, suggesting that changes in working memory capacity allow the expression of, and arguably the…

  7. An Investigation of the Relationship between College Chinese EFL Students' Autonomous Learning Capacity and Motivation in Using Computer-Assisted Language Learning

    Science.gov (United States)

    Pu, Minran

    2009-01-01

    The purpose of the study was to investigate the relationship between college EFL students' autonomous learning capacity and motivation in using web-based Computer-Assisted Language Learning (CALL) in China. This study included three questionnaires: the student background questionnaire, the questionnaire on student autonomous learning capacity, and…

  8. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
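
    The cost/benefit trade-off this record describes is easy to reproduce in miniature. The sketch below, a toy illustration on synthetic data, measures calibration time against goodness of fit as the number of sampled calibration points grows; a small neural-network regressor stands in for the report's probabilistic neural network (an assumption), so the diminishing-returns shape, not the absolute numbers, is the point.

    ```python
    # Toy cost/benefit sweep: calibration cost (wall-clock fit time) versus
    # goodness of fit (R^2) as the number of calibration samples grows.
    # An MLP regressor stands in for the report's PNN; data are synthetic.
    import time

    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(20000, 2))            # e.g. spatial coordinates
    y = np.sin(6 * X[:, 0]) * np.cos(4 * X[:, 1])     # synthetic spatial pattern
    X_test, y_test = X[15000:], y[15000:]             # held-out evaluation set

    for n in (100, 500, 2000, 5000, 10000):           # growing calibration samples
        idx = rng.choice(15000, size=n, replace=False)
        t0 = time.perf_counter()
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=300,
                             random_state=0).fit(X[idx], y[idx])
        cost = time.perf_counter() - t0               # "cost" of calibration
        fit = r2_score(y_test, model.predict(X_test)) # "benefit" of calibration
        print(f"n={n:6d}  cost={cost:7.2f}s  R^2={fit:.4f}")
    ```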

  9. Capacity Maximizing Constellations

    Science.gov (United States)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
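
    As a rough illustration of the capacity gap mentioned in this record, the sketch below estimates the constellation-constrained mutual information of QPSK and 16-QAM over AWGN by Monte Carlo and compares it with the Gaussian capacity log2(1 + SNR). The "parallel decoding capacity" of the record is the related bit-interleaved (BICM) quantity; this simpler joint-decoding version, with assumed sample sizes, only conveys the idea.

    ```python
    # Monte Carlo estimate of the mutual information of an equiprobable
    # constellation on the AWGN channel, compared with log2(1 + SNR).
    import numpy as np

    rng = np.random.default_rng(1)

    def cm_capacity(points, snr_db, n=20000):
        points = points / np.sqrt(np.mean(np.abs(points) ** 2))  # unit energy
        n0 = 10.0 ** (-snr_db / 10.0)                            # noise power (Es = 1)
        x = rng.choice(points, size=n)                           # transmitted symbols
        w = np.sqrt(n0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        y = x + w
        d2 = np.abs(y[:, None] - points[None, :]) ** 2           # |y - x_j|^2
        ratios = np.exp((np.abs(w)[:, None] ** 2 - d2) / n0)     # p(y|x_j) / p(y|x)
        return np.log2(len(points)) - np.mean(np.log2(ratios.sum(axis=1)))

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
    amp = np.array([-3.0, -1.0, 1.0, 3.0])
    qam16 = (amp[:, None] + 1j * amp[None, :]).ravel()

    for snr_db in (0, 5, 10):
        shannon = np.log2(1 + 10 ** (snr_db / 10))
        print(f"SNR {snr_db:2d} dB: Gaussian {shannon:.2f} b/s/Hz, "
              f"QPSK {cm_capacity(qpsk, snr_db):.2f}, "
              f"16-QAM {cm_capacity(qam16, snr_db):.2f}")
    ```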

  10. Photonic entanglement as a resource in quantum computation and quantum communication

    OpenAIRE

    Prevedel, Robert; Aspelmeyer, Markus; Brukner, Caslav; Jennewein, Thomas; Zeilinger, Anton

    2008-01-01

    Entanglement is an essential resource in current experimental implementations for quantum information processing. We review a class of experiments exploiting photonic entanglement, ranging from one-way quantum computing through quantum communication complexity to long-distance quantum communication. We then propose a set of feasible experiments that will underline the advantages of photonic entanglement for quantum information processing.

  11. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
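
    A minimal version of the Metropolis exploration described above fits in a few lines. The sketch below anneals a random task-to-node assignment on a toy connected graph under an assumed latency model (quadratic node congestion plus hop counts to origin and destination); the graph, task counts, latency model and temperatures are all illustrative assumptions, not the paper's model.

    ```python
    # Metropolis exploration of task-to-node assignments at temperature T:
    # lower T approaches the optimized allocation, higher T a suboptimal one.
    import math
    import random

    import networkx as nx

    random.seed(0)
    G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=0)  # toy topology
    nodes = list(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))          # hop counts
    tasks = [(random.choice(nodes), random.choice(nodes)) for _ in range(120)]

    def latency(assign):
        load = {v: 0 for v in nodes}
        for n in assign:
            load[n] += 1
        # congestion grows quadratically with node load; network delay with hops
        return (sum(l * l for l in load.values())
                + sum(dist[o][n] + dist[n][d] for (o, d), n in zip(tasks, assign)))

    def anneal(T, steps=20000):
        assign = [random.choice(nodes) for _ in tasks]
        cur = latency(assign)
        for _ in range(steps):
            i, new = random.randrange(len(tasks)), random.choice(nodes)
            old, assign[i] = assign[i], new
            cand = latency(assign)
            if cand <= cur or random.random() < math.exp((cur - cand) / T):
                cur = cand                  # accept the move
            else:
                assign[i] = old             # reject and restore
        return cur

    for T in (0.1, 1.0, 10.0, 100.0):
        print(f"T={T:6.1f}  global latency ~ {anneal(T)}")
    ```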

  12. A Conceptual Model for the Sustainable Governance of Integrated Management of National Water Resources with a Focus on Training and Capacity Building

    Directory of Open Access Journals (Sweden)

    Alaleh Ghaemi

    2017-09-01

    Full Text Available The instabilities over the past two decades in governing water resources have led to the need for an integrated approach to the problem. Moreover, the decent and sustainable governance of water resources has come to be recognized as the complement to the integrated management of water resources. The present study strives to develop a conceptual model of sustainable water resources governance with emphasis on training and capacity-building. For this purpose, expert views presented to different international meetings and world conferences on water were reviewed to develop a comprehensive and all-embracing conceptual model of sustainable governance for the integrated management of water resources with a focus on training and capacity-building. In a second stage of the study, both internationally published literature and the regulatory documents on water management approved at the national level were consulted to derive appropriate standards, criteria, and indicators for the implementation of the proposed conceptual model. The relevance of these indicators was validated by soliciting expert views, while their reliability was calculated via Cronbach's alpha to be 0.94. The third stage of the study involved the ranking and gradation of the indicators using the relevant software in a fuzzy decision-making environment, based on interviews with 110 senior water executives, academics working in the field, senior agricultural managers, water experts in local communities, and NGO activists. The emerging model finally consisted of 9 criteria and 52 indicators, amongst which the criterion of public participation and the indicator of training and capacity-building won the highest scores. It may be claimed that the proposed conceptual model is quite relevant and adapted to the sustainable governance presently sought. The key roles in this model are played by public participation as well as training and capacity building, which must be given priority.

  13. Life satisfaction in 6 European countries: the relationship to health, self-esteem, and social and financial resources among people (Aged 65-89) with reduced functional capacity.

    Science.gov (United States)

    Borg, Christel; Fagerström, Cecilia; Balducci, Cristian; Burholt, Vanessa; Ferring, Dieter; Weber, Germain; Wenger, Clare; Holst, Göran; Hallberg, Ingalill R

    2008-01-01

    The aim of this study was to investigate how overall health, participation in physical activities, self-esteem, and social and financial resources are related to life satisfaction among people aged 65 and older with reduced activities of daily living (ADL) capacity in 6 European countries. A subsample of the European Study of Adults' Well-Being (ESAW), consisting of 2,195 people with reduced ADL capacity from Sweden, the United Kingdom, the Netherlands, Luxembourg, Austria, and Italy, was included. The Older Americans' Resources Schedule (OARS), the Life Satisfaction Index Z, and the Self-Esteem Scale were used. In all national samples, overall health, self-esteem, and feeling worried, rather than ADL capacity, were significantly associated with life satisfaction. The findings indicate the importance of taking not only the reduction in functional capacity into account but also the individual's perception of health and self-esteem when outlining health care and nursing aimed at improving life satisfaction. The study thus suggests that personal rather than environmental factors are important for life satisfaction among people with reduced ADL capacity living in Europe.

  14. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent

  15. Representation of Solar Capacity Value in the ReEDS Capacity Expansion Model

    Energy Technology Data Exchange (ETDEWEB)

    Sigrin, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, P. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ibanez, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2014-03-01

    An important issue for electricity system operators is the estimation of renewables' capacity contributions to reliably meeting system demand, or their capacity value. While the capacity value of thermal generation can be estimated easily, assessment of wind and solar requires a more nuanced approach due to resource variability. Reliability-based methods, particularly assessment of the Effective Load-Carrying Capability (ELCC), are considered to be the most robust and widely accepted techniques for addressing this resource variability. This report compares estimates of solar PV capacity value by the Regional Energy Deployment System (ReEDS) capacity expansion model against two sources. The first comparison is against values published by utilities or other entities for known electrical systems at existing solar penetration levels. The second comparison is against a time-series ELCC simulation tool for high renewable penetration scenarios in the Western Interconnection. Results from the ReEDS model are found to compare well in both cases, despite being resolved at a super-hourly temporal resolution. Two results are relevant for other capacity-based models that use a super-hourly resolution to model solar capacity value. First, solar capacity value should not be parameterized as a static value, but must decay with increasing penetration. This is because, for an afternoon-peaking system, as solar penetration increases the system's peak net load shifts to later in the day, when solar output is lower. Second, long-term planning models should determine system adequacy requirements in each time period in order to approximate LOLP calculations. Within the ReEDS model we resolve these issues by using a capacity value estimate that varies by time slice. Within each time period the net load and the shadow price on ReEDS's planning reserve constraint signal the relative importance of additional firm capacity.
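
    The first of the two results above, the decay of solar capacity value with penetration, can be illustrated with a toy net-load calculation. In the sketch below all profiles and sizes are synthetic assumptions, not ReEDS data: as PV pushes the net-load peak into hours with little sun, the marginal capacity credit of each nameplate increment falls.

    ```python
    # Toy net-load peak calculation: marginal capacity credit of successive
    # PV additions on a synthetic afternoon-peaking system.
    import numpy as np

    hours = np.arange(24)
    load = 800 + 300 * np.exp(-((hours - 15) ** 2) / 18.0)           # MW, mid-afternoon peak
    solar_cf = np.clip(np.cos((hours - 12) / 12.0 * np.pi), 0, None) # clear-sky PV shape

    prev_peak, prev_mw = load.max(), 0
    for mw in (100, 200, 400, 800):                   # cumulative PV nameplate, MW
        net_peak = (load - mw * solar_cf).max()       # peak of load minus PV output
        credit = (prev_peak - net_peak) / (mw - prev_mw)
        print(f"{mw:4d} MW PV: marginal capacity credit ~ {credit:.2f}")
        prev_peak, prev_mw = net_peak, mw
    ```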

  16. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-12-01

    The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ N denotes the order of the statistics, has received relatively little attention in the literature, due in part to the intractability of its analysis. In this letter, we propose a novel and unified analysis, which is based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on the correlated generalized fading environments. © 2012 IEEE.
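
    For a quick numerical handle on the quantity μ_n defined above, the sketch below estimates the first few higher-order capacity statistics by Monte Carlo for the special case of L-branch MRC over i.i.d. Rayleigh fading, where the combined SNR is Gamma-distributed. The letter's contribution is the exact MGF-based analysis; this simulation, with assumed parameters, merely illustrates what is being computed.

    ```python
    # Monte Carlo estimate of mu_n = E[log^n(1 + gamma_end)] for MRC over
    # i.i.d. Rayleigh branches (gamma_end ~ Gamma(L, mean branch SNR)).
    import numpy as np

    rng = np.random.default_rng(2)
    L, gamma_bar, n_samples = 4, 10.0, 1_000_000      # branches, mean branch SNR
    gamma_end = rng.gamma(shape=L, scale=gamma_bar, size=n_samples)

    for n in (1, 2, 3):
        mu_n = np.mean(np.log(1 + gamma_end) ** n)    # capacity moments in nats^n
        print(f"mu_{n} = E[log^{n}(1+gamma)] ~ {mu_n:.4f}")
    ```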

  17. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2012-01-01

    The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ N denotes the order of the statistics, has received relatively little attention in the literature, due in part to the intractability of its analysis. In this letter, we propose a novel and unified analysis, which is based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on the correlated generalized fading environments. © 2012 IEEE.

  18. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) becomes an important component of the smart city toward safer roads, better traffic control, and on-demand services by utilizing and processing the information collected from sensors of vehicles and roadside infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirements of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between the vehicle and the Vehicular Cloud (VC) while the vehicle is computing for a service. More importantly, connection faults seriously disturb the normal services of VCC and impact the safety functions of the transportation system. In this paper, a safety resource allocation mechanism is proposed against connection faults in VCC by using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high-dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  19. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of resources allocated can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  20. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  1. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  2. Alluvial diamond resource potential and production capacity assessment of Mali

    Science.gov (United States)

    Chirico, Peter G.; Barthelemy, Francis; Kone, Fatiaga

    2010-01-01

    In May of 2000, a meeting was convened in Kimberley, South Africa, and attended by representatives of the diamond industry and leaders of African governments to develop a certification process intended to assure that rough, exported diamonds were free of conflictual concerns. This meeting was supported later in 2000 by the United Nations in a resolution adopted by the General Assembly. By 2002, the Kimberley Process Certification Scheme (KPCS) was ratified and signed by diamond-producing and diamond-importing countries. Over 70 countries were included as members of the KPCS at the end of 2007. To prevent trade in "conflict diamonds" while protecting legitimate trade, the KPCS requires that each country set up an internal system of controls to prevent conflict diamonds from entering any imported or exported shipments of rough diamonds. Every diamond or diamond shipment must be accompanied by a Kimberley Process (KP) certificate and be contained in tamper-proof packaging. The objective of this study was (1) to assess the naturally occurring endowment of diamonds in Mali (potential resources) based on geological evidence, previous studies, and recent field data and (2) to assess the diamond-production capacity and measure the intensity of mining activity. Several possible methods can be used to estimate the potential diamond resource. However, because there is generally a lack of sufficient and consistent data recording all diamond mining in Mali and because time to conduct fieldwork and accessibility to the diamond mining areas are limited, four different methodologies were used: the cylindrical calculation of the primary kimberlitic deposits, the surface area methodology, the volume and grade approach, and the content per kilometer approach. Approximately 700,000 carats are estimated to be in the alluvial deposits of the Kenieba region, with 540,000 carats calculated to lie within the concentration grade deposits. Additionally, 580,000 carats are estimated to have

  3. Open Educational Resources: The Role of OCW, Blogs and Videos in Computer Networks Classroom

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2012-09-01

    Full Text Available This paper analyzes the learning experiences and opinions obtained from a group of undergraduate students in their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. These new educational resources are based on the Web 2.0 approach, such as blogs, videos and virtual labs, and have been added to a web-site for distance self-learning.

  4. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  5. Evaluation of Cloud Computing Hidden Benefits by Using Real Options Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Náplava

    2016-12-01

    Full Text Available Cloud computing technologies have brought new attributes to the IT world. One of them is the flexibility of IT resources: it enables the capacity of IT resources to be both downsized and upsized effectively in real time. Requirements for changes in IT capacity are defined by the business strategy and the actual state of the market, so IT costs are not stable but dynamic in this case. Standard investment valuation methods (both static and dynamic) are not able to include the flexibility attribute in the evaluation of IT projects. This article describes the application of the Real Options Analysis method for the valuation of cloud computing flexibility. The method compares the costs of on-premise and cloud computing solutions by combining put and call option valuation. Cloud computing providers can use the method as an advanced tool that explains the hidden benefits of cloud computing. Inexperienced cloud computing customers can simulate the market behavior and better plan the necessary IT investments.
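
    One way to make the valuation idea concrete: treat the flexibility to shed cloud capacity as a European put on the uncertain future resource requirement and value it with Black-Scholes. The sketch below is a toy calculation under assumed figures (demand value, volatility, rate); it illustrates the flavour of real options valuation, not the article's actual method or case data.

    ```python
    # Toy real-options calculation: value of the right to downsize capacity,
    # modelled as a Black-Scholes European put on future resource demand.
    from math import exp, log, sqrt
    from statistics import NormalDist

    N = NormalDist().cdf

    def put_value(s0, k, sigma, r, t):
        """Black-Scholes value of a put with spot s0, strike k, vol sigma."""
        d1 = (log(s0 / k) + (r + sigma ** 2 / 2) * t) / (sigma * sqrt(t))
        d2 = d1 - sigma * sqrt(t)
        return k * exp(-r * t) * N(-d2) - s0 * N(-d1)

    demand_value = 100_000   # present value of expected IT resource usage (assumed)
    committed = 100_000      # capacity an on-premise solution would lock in (assumed)
    flexibility = put_value(demand_value, committed, sigma=0.4, r=0.03, t=1.0)
    print(f"Value of the option to downsize ~ {flexibility:,.0f} per year")
    ```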

  6. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  7. Anesthesia Capacity in Ghana: A Teaching Hospital's Resources, and the National Workforce and Education.

    Science.gov (United States)

    Brouillette, Mark A; Aidoo, Alfred J; Hondras, Maria A; Boateng, Nana A; Antwi-Kusi, Akwasi; Addison, William; Hermanson, Alec R

    2017-12-01

    Quality anesthetic care is lacking in low- and middle-income countries (LMICs). Global health leaders call for perioperative capacity reports in limited-resource settings to guide improved health care initiatives. We describe a teaching hospital's resources and the national workforce and education in this LMIC capacity report. A prospective observational study was conducted at Komfo Anokye Teaching Hospital (KATH) in Kumasi, Ghana, during 4 weeks in August 2016. Teaching hospital data were generated from observations of hospital facilities and patient care, review of archival records, and interviews with KATH personnel. National data were obtained from interviews with KATH personnel, correspondence with Ghana's anesthesia society, and review of public records. The practice of anesthesia at KATH incorporated preanesthesia clinics, intraoperative management, and critical care. However, there were not enough physicians to consistently supervise care, especially in postanesthesia care units (PACUs) and the critical care unit (CCU). Clean water and electricity were usually reliable in all 16 operating rooms (ORs) and throughout the hospital. Equipment and drugs were inventoried in detail. While much basic infrastructure, equipment, and medications were present in ORs, patient safety was hindered by hospital-wide oxygen supply failures and shortage of vital signs monitors and working ventilators in PACUs and the CCU. In 2015, there were 10,319 anesthetics administered, with obstetric and gynecologic, general, and orthopedic procedures comprising 62% of surgeries. From 2011 to 2015, all-cause perioperative mortality rate in ORs and PACUs was 0.65% or 1 death per 154 anesthetics, with 99% of deaths occurring in PACUs. Workforce and education data at KATH revealed 10 anesthesia attending physicians, 61 nurse anesthetists (NAs), and 7 anesthesia resident physicians in training. At the national level, 70 anesthesia attending physicians and 565 NAs cared for Ghana's population

  8. Supporting Capacity Development for Sustainable Land Administration Infrastructures

    DEFF Research Database (Denmark)

    Enemark, Stig

    2005-01-01

    Land management is the process by which the resources of land are put into good effect. Land management encompasses all activities associated with the management of land and natural resources that are required to achieve sustainable development. Land Administration Systems are institutional ...; the national capacity to manage land rights, restrictions and responsibilities is not well developed in terms of mature institutions and the necessary human resources and skills. In this regard, the capacity building concept offers some guidance for analysing and assessing the capacity needs and for identifying an adequate response to these needs at societal, organisational and individual levels. The paper examines the capacity building concept and underpins the need for institutional development to facilitate the design and implementation of efficient Land Administration Models and to support good ...

  9. Capacitated Dynamic Lot Sizing with Capacity Acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    One of the fundamental problems in operations management is to determine the optimal investment in capacity. Capacity investment consumes resources and the decision is often irreversible. Moreover, the available capacity level affects the action space for production and inventory planning decisions...

  10. Capacitated dynamic lot sizing with capacity acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    2011-01-01

    One of the fundamental problems in operations management is determining the optimal investment in capacity. Capacity investment consumes resources and the decision, once made, is often irreversible. Moreover, the available capacity level affects the action space for production and inventory...

  11. New computational methodology for large 3D neutron transport problems

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    We present a new computational methodology, based on the 3D characteristics method, dedicated to solving very large 3D problems without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we set up a new computing scheme that requires more CPU resources than the usual scheme, which is based on sweeps over large tracking files. The huge storage capacity needed for some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be used efficiently. (authors)

  12. Research on the Resource and Environmental Carrying Capacity of a Karst Region: A Case Study of Guizhou Province

    Institute of Scientific and Technical Information of China (English)

    王金凤; 代稳; 马士彬; 王立威

    2017-01-01

    With the rapid development of the social economy, the contradiction between population, resources and environment is becoming more and more serious, and the carrying capacity of resources and environment is under great pressure, especially in karst areas. Using the analytic hierarchy process and the state space method, this paper constructs an evaluation index system for the resource and environmental carrying capacity of the karst area from three aspects: resource carrying capacity, environmental carrying capacity and social-economic coordination. Selecting water resources, land resources, tourism resources, the water environment, the atmospheric environment, population, economy and society as the evaluation index layer, it quantitatively evaluates the resource and environmental carrying capacity of Guizhou Province at four points in time: 2000, 2004, 2008 and 2013. The results show that the carrying capacity index in 2000, 2004, 2008 and 2013 was 0.09364, 0.08957, 0.09230 and 0.0113, respectively, so the resource and environmental carrying capacity of Guizhou Province remained at a low level, with a trend towards the middle level in 2013. Among the nine administrative regions, Zunyi City, Qiandongnan Prefecture and Qianxinan Prefecture are in a medium state of resource and environmental carrying capacity, while the remaining six administrative regions are at a lower level.

  13. Linear-programming-based heuristics for project capacity planning

    NARCIS (Netherlands)

    Gademann, A.J.R.M.; Schutten, J.M.J.

    2005-01-01

    Many multi-project organizations are capacity driven, which means that their operations are constrained by various scarce resources. An important planning aspect in a capacity driven multi-project organization is capacity planning. By capacity planning, we mean the problem of matching demand for

  14. Laboratory capacity building for the International Health Regulations (IHR[2005]) in resource-poor countries: the experience of the African Field Epidemiology Network (AFENET).

    Science.gov (United States)

    Masanza, Monica Musenero; Nqobile, Ndlovu; Mukanga, David; Gitta, Sheba Nakacubo

    2010-12-03

    The laboratory is one of the core capacities that countries must develop for the implementation of the International Health Regulations (IHR[2005]), since laboratory services play a major role in all the key processes of detection, assessment, response, notification, and monitoring of events. While developed countries easily adapt their well-organized routine laboratory services, resource-limited countries need considerable capacity building as many gaps still exist. In this paper, we discuss some of the efforts made by the African Field Epidemiology Network (AFENET) in supporting laboratory capacity development in the Africa region. The efforts range from promoting graduate-level training programs to building advanced technical, managerial and leadership skills to in-service short-course training for peripheral laboratory staff. A number of specific projects focus on external quality assurance, basic laboratory information systems, strengthening laboratory management towards accreditation, equipment calibration, harmonization of training materials, networking and provision of pre-packaged laboratory kits to support outbreak investigation. Available evidence indicates a positive effect of these efforts on laboratory capacity in the region. However, many opportunities exist, especially to support the roll-out of these projects as well as to attend to some additional critical areas such as biosafety and biosecurity. We conclude that AFENET's approach of strengthening national and sub-national systems provides a model that could be adopted in resource-limited settings such as sub-Saharan Africa.

  15. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: how do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  16. Exploratory Experimentation and Computation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-02-25

    We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

  17. Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data

    Energy Technology Data Exchange (ETDEWEB)

    Getman, Dan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-09-01

    In this report, we introduce a methodology to achieve multiple levels of spatial resolution reduction of solar resource data, with minimal impact on data variability, for use in energy systems modeling. The selection of an appropriate clustering algorithm, parameter selection including cluster size, methods of temporal data segmentation, and methods of cluster evaluation are explored in the context of a repeatable process. In describing this process, we illustrate the steps in creating a reduced resolution, but still viable, dataset to support energy systems modeling, e.g. capacity expansion or production cost modeling. This process is demonstrated through the use of a solar resource dataset; however, the methods are applicable to other resource data represented through spatiotemporal grids, including wind data. In addition to energy modeling, the techniques demonstrated in this paper can be used in a novel top-down approach to assess renewable resources within many other contexts that leverage variability in resource data but require reduction in spatial resolution to accommodate modeling or computing constraints.
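
    A minimal instance of the resolution-reduction process described above might look like the following: cluster a grid of hourly solar profiles with k-means and let each centroid profile represent its member cells, weighted by cluster size. The profiles, cluster count and error check below are all illustrative assumptions, not the report's method or data.

    ```python
    # k-means reduction of a fine spatial grid of solar profiles to a small
    # set of representative centroid profiles, with a residual-error check.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    n_cells, n_hours = 1000, 168                       # grid cells, one week hourly
    diurnal = np.clip(np.sin(np.linspace(0, 7 * 2 * np.pi, n_hours,
                                         endpoint=False)), 0, None)
    profiles = np.clip(diurnal * rng.uniform(0.4, 1.0, (n_cells, 1))
                       + rng.normal(0, 0.05, (n_cells, n_hours)), 0, None)

    km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(profiles)
    centroids = km.cluster_centers_                    # 12 profiles replace 1000 cells
    weights = np.bincount(km.labels_, minlength=12) / n_cells  # weight per cluster

    # Sanity check: how much hour-to-hour variability the reduction keeps
    rmse = np.sqrt(km.inertia_ / (n_cells * n_hours))
    print(f"12 clusters, per-hour RMSE vs original profiles: {rmse:.3f}")
    ```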

  18. Generation and exploration of aggregation abstractions for scheduling and resource allocation

    Science.gov (United States)

    Lowry, Michael R.; Linden, Theodore A.

    1993-01-01

    This paper presents research on the abstraction of computational theories for scheduling and resource allocation. The paper describes both theory and methods for the automated generation of aggregation abstractions and approximations in which detailed resource allocation constraints are replaced by constraints between aggregate demand and capacity. The interaction of aggregation abstraction generation with the more thoroughly investigated abstractions of weakening operator preconditions is briefly discussed. The purpose of generating abstract theories for aggregated demand and resources includes: answering queries about aggregate properties, such as gross feasibility; reducing computational costs by using the solution of aggregate problems to guide the solution of detailed problems; facilitating reformulating theories to approximate problems for which there are efficient problem-solving methods; and reducing computational costs of scheduling by providing more opportunities for variable and value-ordering heuristics to be effective. Experiments are being developed to characterize the properties of aggregations that make them cost effective. Both abstract and concrete theories are represented in a variant of first-order predicate calculus, which is a parameterized multi-sorted logic that facilitates specification of large problems. A particular problem is conceptually represented as a set of ground sentences that is consistent with a quantified theory.
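
    The gross-feasibility query mentioned above can be illustrated with a tiny aggregate check: any activity whose entire time window falls inside an interval must fit into that interval's aggregate capacity, so a violated interval proves infeasibility without solving the detailed allocation. The sketch below uses assumed toy data and is only a simple instance of the aggregation idea, not the paper's generation machinery.

    ```python
    # Aggregate demand-vs-capacity check over every period interval.
    # Each activity: (earliest period, latest period, resource units required).
    activities = [(0, 2, 4), (0, 1, 3), (1, 3, 5), (2, 3, 2), (0, 0, 6)]
    CAPACITY_PER_PERIOD = 5
    HORIZON = 4

    feasible = True
    for lo in range(HORIZON):
        for hi in range(lo, HORIZON):
            window_capacity = (hi - lo + 1) * CAPACITY_PER_PERIOD
            # work that must be done entirely inside [lo, hi]
            committed = sum(u for e, l, u in activities if e >= lo and l <= hi)
            if committed > window_capacity:
                feasible = False
                print(f"periods [{lo},{hi}]: demand {committed} "
                      f"> capacity {window_capacity}")
    print("gross feasibility" + (" not excluded" if feasible else " fails"))
    ```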

  19. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Full Text Available Contractual relations involving the use of another's property are quite common. Yet, the use of the computer resources of others over the Internet, and the legal transactions arising thereof, certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform provided as high-tech services) are highly unlikely to be adequately described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of the contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gains by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  20. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  1. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

    Full Text Available Modern-day advancements are increasingly digitizing our lives, which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations to plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing large datasets, there are a number of other computing infrastructures available to use in various application domains. The primary focus of the study is how to classify major big data resource management systems in the context of a cloud computing environment. We identify some key features which characterize big data frameworks, as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study came up with some interesting findings which are in contradiction with the available literature on the Internet.

  2. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One solution is to use cloud computing. However, there is an optimization problem of mobile device and computing cloud resource allocation. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution subject to a constraint on execution time.
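
    As a concrete, much-simplified instance of the allocation problem above, the sketch below places each task either on the device or in the cloud with a greedy heuristic: offload the largest energy savers first while the whole batch still meets a deadline. All task sizes, energy and time coefficients are assumed toy values; this is not the paper's algorithm.

    ```python
    # Greedy local-vs-cloud placement minimising device energy under a
    # total execution-time budget. All coefficients are toy assumptions.
    import random

    random.seed(4)
    # (compute work units, data volume to ship) per task: toy values
    tasks = [(random.uniform(1, 10), random.uniform(0.5, 5)) for _ in range(20)]
    E_CPU, E_TX = 2.0, 1.5                 # device energy per work / data unit
    T_CPU, T_TX, T_CLOUD = 1.0, 3.0, 0.2   # seconds per work / data unit
    BUDGET = 150.0                         # deadline for the whole batch

    def cost(w, d, where):
        """Return (device energy, elapsed time) for one task."""
        if where == "local":
            return w * E_CPU, w * T_CPU
        return d * E_TX, d * T_TX + w * T_CLOUD   # ship data, compute remotely

    place = ["local"] * len(tasks)
    energy = sum(cost(w, d, "local")[0] for w, d in tasks)
    time_used = sum(cost(w, d, "local")[1] for w, d in tasks)

    # Offload tasks in order of largest energy saving while time permits.
    order = sorted(range(len(tasks)),
                   key=lambda i: cost(*tasks[i], "cloud")[0]
                                 - cost(*tasks[i], "local")[0])
    for i in order:
        (e_l, t_l), (e_c, t_c) = cost(*tasks[i], "local"), cost(*tasks[i], "cloud")
        if e_c < e_l and time_used - t_l + t_c <= BUDGET:
            place[i] = "cloud"
            energy += e_c - e_l
            time_used += t_c - t_l

    print(f"{place.count('cloud')}/{len(tasks)} tasks offloaded, "
          f"energy={energy:.0f}, time={time_used:.0f}s (budget {BUDGET:.0f}s)")
    ```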

  3. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  4. Market-Based Resource Allocation in a Wirelessly Integrated Naval Engineering Plant

    Science.gov (United States)

    2009-12-01

    ... available wireless nodes will be developed. A multi-agent approach based on free-market economics (termed market-based control) will be explored. Resources (such as battery power, data storage capacity, MPU time, wireless bandwidth, etc.) required to perform complex computational tasks are available only in a ... network. One approach to this problem is to apply free-market economics to help allocate these resources. Free-market economies can be thought of as ...

  5. Reliability-oriented multi-resource allocation in a stochastic-flow network

    International Nuclear Information System (INIS)

    Hsieh, C.-C.; Lin, M.-H.

    2003-01-01

    A stochastic-flow network consists of a set of nodes, including source nodes which supply various resources and sink nodes at which resource demands take place, and a collection of arcs whose capacities have multiple operational states. The network reliability of such a stochastic-flow network is the probability that resources can be successfully transmitted from source nodes through multi-capacitated arcs to sink nodes. Although evaluation schemes for network reliability in stochastic-flow networks have been extensively studied in the literature, how to allocate various resources at source nodes in a reliable manner remains unanswered. In this study, a resource allocation problem in a stochastic-flow network is formulated that aims to determine the optimal resource allocation policy at source nodes, subject to given resource demands at sink nodes, such that the network reliability of the stochastic-flow network is maximized; an algorithm for computing the optimal resource allocation is proposed that incorporates the principle of minimal path vectors. A numerical example is given to illustrate the proposed algorithm.
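
    For a feel of the reliability measure defined in this record, the sketch below brute-forces a tiny two-terminal example: every combination of multi-state arc capacities is enumerated, and the probabilities of the states whose maximum flow meets the demand are summed. The bridge topology and capacity distributions are assumptions; the record's algorithm works with minimal path vectors rather than full enumeration.

    ```python
    # Brute-force two-terminal network reliability: probability that the
    # random arc capacities support a max flow of at least `demand`.
    import itertools

    import networkx as nx

    # arcs of a small bridge network; each arc's capacity takes the value
    # 0, 1 or 2 units with the given probabilities (independent arcs)
    arcs = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
    cap_dist = {0: 0.1, 1: 0.3, 2: 0.6}
    demand = 3                                     # units required at the sink

    def max_flow(caps):
        G = nx.DiGraph()
        for (u, v), c in zip(arcs, caps):
            G.add_edge(u, v, capacity=c)
        return nx.maximum_flow_value(G, "s", "t")

    reliability = 0.0
    for caps in itertools.product(cap_dist, repeat=len(arcs)):  # 3^5 states
        if max_flow(caps) >= demand:
            p = 1.0
            for c in caps:
                p *= cap_dist[c]                   # arcs assumed independent
            reliability += p
    print(f"P(network can deliver {demand} units s->t) = {reliability:.4f}")
    ```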

  6. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Science.gov (United States)

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  7. Human Resources Capacity Building as a Strategy in Strengthening Nuclear Knowledge Sustainability in the Experimental Fuel Element Installation of BATAN-Indonesia

    International Nuclear Information System (INIS)

    Ratih Langenati; Bambang, Herutomo; Arief Sasongko Adhi

    2014-01-01

    Strategy in Strengthening Nuclear Knowledge Sustainability: • In order to maintain human resources capacity related to nuclear fuel production technology, a nuclear knowledge preservation program is implemented in the EFEI. • The program includes coaching/training, mentoring and documenting important knowledge. • The program activities are monitored and evaluated quarterly for its improvement in the following year

  8. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  9. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  10. Stochastic Resource Allocation for Energy-Constrained Systems

    Directory of Open Access Journals (Sweden)

    Sachs DanielGrobe

    2009-01-01

    Full Text Available Battery-powered wireless systems running media applications have tight constraints on energy, CPU, and network capacity, and therefore require the careful allocation of these limited resources to maximize the system's performance while avoiding resource overruns. Usually, resource-allocation problems are solved using standard knapsack-solving techniques. However, when allocating conservable resources like energy (which, unlike CPU and network, remain available for later use if they are not used immediately), knapsack solutions suffer from excessive computational complexity, leading to the use of suboptimal heuristics. We show that use of Lagrangian optimization provides a fast, elegant, and, for convex problems, optimal solution to the allocation of energy across applications as they enter and leave the system, even if the exact sequence and timing of their entrances and exits is not known. This permits significant increases in achieved utility compared to heuristics in common use. As our framework requires only a stochastic description of future workloads, and not a full schedule, we also significantly expand the scope of systems that can be optimized.
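
    The Lagrangian idea in the abstract can be sketched in a few lines: for concave utilities, pricing energy with a multiplier decouples the per-application allocations, and bisecting on the multiplier enforces the budget. The logarithmic utilities and weights below are illustrative assumptions, not the paper's stochastic model.

        import math

        # Hypothetical concave utilities u_i(e) = w_i * log(1 + e).
        weights = [3.0, 2.0, 1.0]
        energy_budget = 10.0

        def alloc_for_lambda(lam):
            # Maximizing w*log(1+e) - lam*e over e >= 0 gives e = max(0, w/lam - 1).
            return [max(0.0, w / lam - 1.0) for w in weights]

        def lagrangian_allocation(budget, lo=1e-9, hi=1e9, iters=200):
            """Geometric bisection on the Lagrange multiplier until the budget binds."""
            for _ in range(iters):
                lam = math.sqrt(lo * hi)
                if sum(alloc_for_lambda(lam)) > budget:
                    lo = lam   # over budget -> raise the price of energy
                else:
                    hi = lam
            return alloc_for_lambda(hi)   # the hi side always respects the budget

        e = lagrangian_allocation(energy_budget)
        print([round(x, 3) for x in e], "total:", round(sum(e), 3))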

  11. Combining performance measures to investigate capacity changes in fisheries

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Pascoe, Sean

    2014-01-01

    The Common Fisheries Policy (CFP) aims to achieve a balance between the European fleet capacity and the resources available. This can be realized either by temporarily reducing the fishing effort (i.e. capacity utilization) or quotas in the hope of increasing the resources available, or by reducing… the actual fishing capacity. In both cases, the relationship between effort indicators and capacity needs to be resolved in order for the manager to introduce the right interventions. Previous studies have estimated these relationships in multi-species fisheries using either a multi-output distance function… catches of cod, plaice and Nephrops, and that gross tonnage is a more consistent indicator of fishing capacity than engine power.

  12. Developing nursing and midwifery students' capacity for coping with bullying and aggression in clinical settings: Students' evaluation of a learning resource.

    Science.gov (United States)

    Hogan, Rosemarie; Orr, Fiona; Fox, Deborah; Cummins, Allison; Foureur, Maralyn

    2018-03-01

    An innovative blended learning resource for undergraduate nursing and midwifery students was developed in a large urban Australian university, following a number of concerning reports by students on their experiences of bullying and aggression in clinical settings. The blended learning resource included interactive online learning modules, comprising film clips of realistic clinical scenarios, related readings, and reflective questions, followed by in-class role-play practice of effective responses to bullying and aggression. On completion of the blended learning resource, 210 participants completed an anonymous survey (65.2% response rate). Qualitative data were collected, and a thematic analysis of the participants' responses revealed the following themes: 'Engaging with the blended learning resource'; 'Responding to bullying' and 'Responding to aggression'. We assert that developing nursing and midwifery students' capacity to effectively respond to aggression and bullying, using a self-paced blended learning resource, provides a solution to managing some of the demands of the clinical setting. The blended learning resource, whereby nursing and midwifery students were introduced to realistic portrayals of bullying and aggression in clinical settings, developed their repertoire of effective responding and coping skills for use in their professional practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-03

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
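
    One common form of the 8760-based simplification reads directly off the hourly data: approximate the capacity value of a variable-generation (VG) fleet as its average capacity factor during the highest-load hours of the year. The synthetic data and the top-100-hour convention below are assumptions for illustration; this is not the ReEDS implementation itself.

        import numpy as np

        rng = np.random.default_rng(0)
        load = 1000 + 300 * rng.random(8760)   # synthetic hourly system load, MW
        vg_output = 100 * rng.random(8760)     # synthetic hourly VG output, MW
        vg_nameplate = 100.0                   # MW of installed VG capacity

        def capacity_value_top_hours(load, vg, nameplate, n_hours=100):
            """CV approximated as the fleet's mean capacity factor over the
            n highest-load hours of the 8760-hour year."""
            top_hours = np.argsort(load)[-n_hours:]
            return vg[top_hours].mean() / nameplate

        cv = capacity_value_top_hours(load, vg_output, vg_nameplate)
        print(f"Approximate capacity value: {cv:.1%}")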

  14. SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    B. Muni Lavanya; C. Shoba Bindu

    2016-01-01

    The objective of this work is to highlight the key features and offer the finest future directions to the research community of Resource Allocation, Resource Scheduling and Resource Management from 2009 to 2016, exemplifying how research on Resource Allocation, Resource Scheduling and Resource Management has progressively increased in the past decade by inspecting articles and papers from scientific and standard publications. The survey materialized in a three-fold process. Firstly, investigate on t...

  15. Resource management for device-to-device underlay communication

    CERN Document Server

    Song, Lingyang; Xu, Chen

    2013-01-01

    Device-to-Device (D2D) communication will become a key feature supported by next generation cellular networks, a topic of enormous importance to modern communication. Currently, D2D serves as an underlay to the cellular network as a means to increase spectral efficiency. Although D2D communication brings large benefits in terms of system capacity, it also causes interference as well as increased computation complexity to cellular networks as a result of spectrum sharing. Thus, efficient resource management must be performed to guarantee a target performance level of cellular communication. This…

  16. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Full Text Available Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentration in groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely the multi-layer perceptron (MLP) and radial basis function (RBF), for forecasting of heavy metals concentration was investigated. In addition, the Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software. The MLP performs better than the other models for heavy metals concentration estimation. The simulation results revealed that the MLP model was able to model heavy metals concentration in groundwater resources favorably. It generally is effectively utilized in environmental applications and in water quality estimations. In addition, out of the three algorithms, Levenberg-Marquardt performed better than the others. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metals concentration in groundwater resources of Asadabad Plain. Based on collected data from the plain, MLP and RBF models were developed for each heavy metal. MLP can be utilized effectively in applications of prediction of heavy metals concentration in groundwater resources of Asadabad Plain.
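
    As a rough illustration of the MLP workflow described above (synthetic data, not the Asadabad measurements): the sketch fits a small multi-layer perceptron to stand-in hydro-chemical predictors. Note that scikit-learn does not offer the Levenberg-Marquardt trainer used in the study (that is MATLAB's trainlm); its default Adam solver is used here instead.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.random((120, 3))  # stand-ins for predictors such as pH, EC, well depth
        # Synthetic Cd concentration (mg/L) with a mild nonlinearity plus noise.
        y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.random(120)

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1),
        )
        model.fit(X[:100], y[:100])                 # train on the first 100 samples
        print(model.predict(X[100:105]).round(3))   # predict held-out concentrations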

  17. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy

  18. Capacity Building in Land Management

    DEFF Research Database (Denmark)

    Enemark, Stig; Ahene, Rexford

    2003-01-01

    There is a significant need for capacity building in the interdisciplinary area of land management especially in developing countries and countries in transition, to deal with the complex issues of building efficient land information systems and sustainable institutional infrastructures. Capacity building in land management is not only a question of establishing a sufficient technological level or sufficient economic resources. It is mainly a question of understanding the interdisciplinary and cross-sectoral nature of land administration systems, and understanding the need for human resource… and professionals for implementing the new land policy. The curriculum combines the diploma and the bachelor level and it combines the key areas of land surveying, land management and physical planning.

  19. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
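
    In the spirit of the utility model, the sketch below allocates tasks with multidimensional resource costs greedily by utility density. This is a standard heuristic for the 0-1 multidimensional knapsack, shown only to fix ideas; the paper's policy machinery and task-option utilities are richer, and all task data here are hypothetical.

        # Hypothetical tasks: each consumes (cpu, network) and yields a utility.
        tasks = [
            {"name": "t1", "cost": (4, 2), "utility": 10.0},
            {"name": "t2", "cost": (3, 5), "utility": 9.0},
            {"name": "t3", "cost": (2, 1), "utility": 4.5},
            {"name": "t4", "cost": (5, 4), "utility": 8.0},
        ]
        capacity = (8, 7)  # total cpu and network available

        def greedy_allocate(tasks, capacity):
            """Pick tasks by utility per unit of aggregate resource use,
            skipping any task that no longer fits in the remaining capacity."""
            remaining = list(capacity)
            chosen = []
            for t in sorted(tasks, key=lambda t: t["utility"] / sum(t["cost"]),
                            reverse=True):
                if all(c <= r for c, r in zip(t["cost"], remaining)):
                    chosen.append(t["name"])
                    remaining = [r - c for c, r in zip(t["cost"], remaining)]
            return chosen, remaining

        chosen, left = greedy_allocate(tasks, capacity)
        print("Allocated:", chosen, "remaining capacity:", left)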

  20. Demonstration and evaluation of the 20-ton-capacity load-cell-based weighing system, Eldorado Resources, Ltd., Port Hope, Ontario, September 3-4, 1986

    International Nuclear Information System (INIS)

    Cooley, J.N.; Huxford, T.J.

    1986-01-01

    On September 3 and 4, 1986, the prototype 20-ton-capacity load-cell-based weighing system (LCBWS) developed by the US Enrichment Safeguards Program (ESP) at Martin Marietta Energy Systems, Inc., was field tested at the Eldorado Resources, Ltd., (ERL) facility in Port Hope, Ontario. The 20-ton-capacity LCBWS has been designed and fabricated for use by the International Atomic Energy Agency (IAEA) for verifying the masses of large-capacity UF6 cylinders during IAEA safeguards inspections at UF6 handling facilities. The purpose of the Canadian field test was to demonstrate and to evaluate with IAEA inspectorates and with UF6 bulk handling facility operators at Eldorado the principles, procedures, and hardware associated with using the 20-ton-capacity LCBWS as a portable means for verifying the masses of 10- and 14-ton UF6 cylinders. Session participants included representatives from the IAEA, Martin Marietta Energy Systems, Inc., Eldorado Resources, Ltd., the Atomic Energy Control Board (AECB), and the International Safeguards Project Office (ISPO) at Brookhaven National Laboratory (BNL). Appendix A presents the list of participants and their organization affiliation. The two-day field test involved a formal briefing by ESP staff, two cylinder weighing sessions, IAEA critiques of the LCBWS hardware and software, and concluding discussions on the field performance of the system. Appendix B cites the meeting agenda. Summarized in this report are (1) the technical information presented by the system developers, (2) results from the weighing sessions, and (3) observations, suggestions, and concluding statements from meeting participants.

  1. Economics and Design of Capacity Markets for the Power Sector

    OpenAIRE

    Peter Cramton; Axel Ockenfels

    2012-01-01

    Capacity markets are a means to assure resource adequacy. The need for a capacity market stems from several market failures the most prominent of which is the absence of a robust demand-side. Limited demand response makes market clearing problematic in times of scarcity. We present the economic motivation for a capacity market, present one specific market design that utilizes the best design features from various resource adequacy approaches analyzed in the literature, and we discuss other in...

  2. On the computation of the higher order statistics of the channel capacity for amplify-and-forward multihop transmission

    KAUST Repository

    Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim

    2014-01-01

    Higher order statistics (HOS) of the channel capacity provide useful information regarding the level of reliability of signal transmission at a particular rate. In this paper, we propose a novel and unified analysis, which is based on the moment-generating function (MGF) approach, to efficiently and accurately compute the HOS of the channel capacity for amplify-and-forward (AF) multihop transmission over generalized fading channels. More precisely, our easy-to-use and tractable mathematical formalism requires only the reciprocal MGFs of the transmission hop signal-to-noise ratio (SNR). Numerical and simulation results, which are performed to exemplify the usefulness of the proposed MGF-based analysis, are shown to be in perfect agreement. © 2013 IEEE.
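
    For reference, the higher order statistics in question are the moments of the end-to-end capacity; the definition below is standard (with f denoting the PDF of the end-to-end SNR), while the paper's contribution is evaluating such integrals through the reciprocal MGFs of the per-hop SNRs rather than through the PDF directly.

        % n-th moment of the capacity C = \log_2(1 + \gamma_{\mathrm{end}})
        % of the AF multihop link:
        \mathbb{E}\!\left[C^{n}\right]
          = \int_{0}^{\infty} \bigl(\log_{2}(1+\gamma)\bigr)^{n}
            f_{\gamma_{\mathrm{end}}}(\gamma)\,\mathrm{d}\gamma .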

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  4. Exploring the impact of reduced hydro capacity and lignite resources on the Macedonian power sector development

    Directory of Open Access Journals (Sweden)

    Taseska-Gjorgievskaa Verica

    2014-01-01

    Full Text Available The reference development pathway of the Macedonian energy sector highlights the important role that lignite and hydro power play in the power sector, each accounting for 40% of total capacity in 2021. In 2030, this dominance continues, although hydro has a higher share due to the retirement of some of the existing lignite plants. Three sensitivity runs of the MARKAL-Macedonia energy system model have been undertaken to explore the importance of these technologies to the system, considering that their resource may be reduced with time: (1) reducing the availability of lignite from domestic mines by 50% in 2030 (with limited capacity of imports), (2) removing three large hydro options, which account for 310 MW in the business-as-usual case, and (3) both of the above restrictions. The reduction in lignite availability is estimated to lead to additional overall system costs of 0.7%, compared to hydro restrictions at only 0.1%. With both restrictions applied, the additional costs rise to over 1%, amounting to 348 M€ over the 25 year planning horizon. In particular, costs are driven up by an increasing reliance on electricity imports. In all cases, the total electricity generation decreases, but import increases, which leads to a drop in capacity requirements. In both the lignite and the hydro restricted cases, it is primarily gas-fired generation and imports that “fill the gap”. This highlights the importance of an increasingly diversified and efficient supply, which should be promoted through initiatives on renewables, energy efficiency, and lower carbon emissions.

  5. The influence of working memory capacity on experimental heat pain.

    Science.gov (United States)

    Nakae, Aya; Endo, Kaori; Adachi, Tomonori; Ikeda, Takashi; Hagihira, Satoshi; Mashimo, Takashi; Osaka, Mariko

    2013-10-01

    Pain processing and attention have a bidirectional interaction that depends upon one's relative ability to use limited-capacity resources. However, correlations between the size of limited-capacity resources and pain have not been evaluated. Working memory capacity, which is a cognitive resource, can be measured using the reading span task (RST). In this study, we hypothesized that an individual's potential working memory capacity and subjective pain intensity are related. To test this hypothesis, we evaluated 31 healthy participants' potential working memory capacity using the RST, and then applied continuous experimental heat stimulation using the listening span test (LST), which is a modified version of the RST. Subjective pain intensities were significantly lower during the challenging parts of the RST. The pain intensity under conditions where memorizing tasks were performed was compared with that under the control condition, and it showed a correlation with potential working memory capacity. These results indicate that working memory capacity reflects the ability to process information, including precise evaluations of changes in pain perception. In this work, we present data suggesting that changes in subjective pain intensity are related to individual potential working memory capacities. Individual working memory capacity may be a phenotype that reflects sensitivity to changes in pain perception. Copyright © 2013 American Pain Society. Published by Elsevier Inc. All rights reserved.

  6. Uncertainty in adaptive capacity

    International Nuclear Information System (INIS)

    Neil Adger, W.; Vincent, K.

    2005-01-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  7. Human resource capacity for information management in selected ...

    African Journals Online (AJOL)

    Results: it was established that capacity building was usually undertaken through on-the-job training; i.e., 85.1% (103) of health workers had on-the-job training on filling of data collection tools and only 10% (13) had received formal classroom training on the same. Further, only 9.1% (11) of health workers had received information ...

  8. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which basic computing resources are packaged, accessed through the Internet on demand, and placed in the data center. Data center architecture in cloud computing environments is heterogeneous and distributed, composed of a cluster of network servers with different computing resource capacities in different physical servers. The problem of fluctuating demand for and availability of cloud services in the data center can be solved through abstraction with virtualization technology. A virtual machine (VM) is a representation of the available computing resources that can be dynamically allocated and reallocated on demand. This study considers VM consolidation as an energy conservation measure in private cloud computing systems, targeting the optimization of the VM selection policy and VM migration within the consolidation procedure. In a cloud data center, each VM instance hosting a particular type of application service requires a different level of computing resources. Unbalanced use of computing resources across physical servers can be reduced by live VM migration to achieve workload balancing. A practical approach is applied in developing the OpenStack-based cloud computing environment by integrating VM selection and VM placement procedures using OpenStack Neat VM consolidation. The CPU time value is used to obtain the average CPU utilization in MHz within a specific time period: the average CPU utilization of a VM is computed as the current CPU_time minus the CPU_time from the previous data retrieval, multiplied by the maximum frequency of the CPU, and divided by the elapsed time in milliseconds between the current and the previous retrieval.
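
    A direct transcription of that utilization formula, with hypothetical counter values; units follow the abstract (CPU times and sampling instants in milliseconds, frequency in MHz).

        def avg_cpu_utilization_mhz(cpu_time_now, cpu_time_prev,
                                    t_now, t_prev, max_freq_mhz):
            """Average CPU utilization in MHz over one sampling interval:
            (delta CPU_time * max frequency) / elapsed wall-clock time."""
            busy_ms = cpu_time_now - cpu_time_prev   # CPU time consumed by the VM
            elapsed_ms = t_now - t_prev              # wall-clock sampling interval
            return busy_ms * max_freq_mhz / elapsed_ms

        # Example: 500 ms of CPU consumed over a 1000 ms interval on a 2600 MHz core.
        print(avg_cpu_utilization_mhz(1500, 1000, 2000, 1000, 2600.0))  # -> 1300.0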

  9. Strengthening Research Capacity to Enhance Natural Resources ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... to Enhance Natural Resources Management and Improve Rural Livelihoods ... and contribute to the food and income security of the rural poor by enhancing the ...

  10. Secondary Power Resources of the Fuel and Energy Complex in Ukraine

    Directory of Open Access Journals (Sweden)

    Shkrabets F.P.

    2016-04-01

    Full Text Available This article describes the types of secondary energy resources that occur during, or as a result of, mining or technological processes at metallurgical, coke and chemical enterprises. Opportunities to use them directly at industrial enterprises, in cases when the energy resource or the energy generated “is not a commodity”, were researched. To generate electricity from secondary sources, the use of diesel power plants and gas-turbine facilities was offered. The values of investments in the construction of thermal power plants (TPP) based on different types of secondary energy resources were calculated. Tentative capacities of power plants which utilize the energy of secondary sources were also computed. The figures used for assessing the release and use of secondary energy resources were given. The necessity of using secondary sources of energy to reduce harmful effects on the environment was emphasized.

  11. Institutional capacity for health systems research in East and Central African Schools of Public Health: strengthening human and financial resources

    Science.gov (United States)

    2014-01-01

    Background Despite its importance in providing evidence for health-related policy and decision-making, an insufficient amount of health systems research (HSR) is conducted in low-income countries (LICs). Schools of public health (SPHs) are key stakeholders in HSR. This paper, one in a series of four, examines human and financial resources capacities, policies and organizational support for HSR in seven Africa Hub SPHs in East and Central Africa. Methods The capacity assessment included document analysis to establish staff numbers, qualifications and publications; self-assessment using a tool developed to capture individual perceptions on the capacity for HSR; and institutional dialogues. Key informant interviews (KIIs) were held with Deans from each SPH and Ministry of Health and non-governmental officials, focusing on perceptions on capacity of SPHs to engage in HSR, access to funding, and organizational support for HSR. Results A total of 123 people participated in the self-assessment and 73 KIIs were conducted. Except for the National University of Rwanda and the University of Nairobi SPH, most respondents expressed confidence in the adequacy of staffing levels and HSR-related skills at their SPH. However, most of the researchers operate at individual level with low outputs. The average number of HSR-related publications was only … capacity. This study underscores the need to form effective multidisciplinary teams to enhance research of immediate and local relevance. Capacity strengthening in the SPH needs to focus on knowledge translation and communication of findings to relevant audiences. Advocacy is needed to influence respective governments to allocate adequate funding for HSR to avoid donor dependency that distorts local research agenda. PMID:24888371

  12. Developing capacity in health informatics in a resource poor setting: lessons from Peru.

    Science.gov (United States)

    Kimball, Ann Marie; Curioso, Walter H; Arima, Yuzo; Fuller, Sherrilynne; Garcia, Patricia J; Segovia-Juarez, Jose; Castagnetto, Jesus M; Leon-Velarde, Fabiola; Holmes, King K

    2009-10-27

    The public sectors of developing countries require strengthened capacity in health informatics. In Peru, where formal university graduate degrees in biomedical and health informatics were lacking until recently, the AMAUTA Global Informatics Research and Training Program has provided research and training for health professionals in the region since 1999. The Fogarty International Center supports the program as a collaborative partnership between Universidad Peruana Cayetano Heredia in Peru and the University of Washington in the United States of America. The program aims to train core professionals in health informatics and to strengthen the health information resource capabilities and accessibility in Peru. The program has achieved considerable success in the development and institutionalization of informatics research and training programs in Peru. Projects supported by this program are leading to the development of sustainable training opportunities for informatics and eight of ten Peruvian fellows trained at the University of Washington are now developing informatics programs and an information infrastructure in Peru. In 2007, Universidad Peruana Cayetano Heredia started offering the first graduate diploma program in biomedical informatics in Peru.

  13. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic Computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organizations Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time informatio...

  15. Free energy and heat capacity

    International Nuclear Information System (INIS)

    Kurata, M.; Devanathan, R.

    2015-01-01

    Free energy and heat capacity of actinide elements and compounds are important properties for the evaluation of the safety and reliable performance of nuclear fuel. They are essential inputs for models that describe complex phenomena that govern the behaviour of actinide compounds during nuclear fuels fabrication and irradiation. This chapter introduces various experimental methods to measure free energy and heat capacity to serve as inputs for models and to validate computer simulations. This is followed by a discussion of computer simulation of these properties, and recent simulations of thermophysical properties of nuclear fuel are briefly reviewed. (authors)
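
    For concreteness, the standard thermodynamic relations linking these quantities (textbook identities, not specific to this chapter) are:

        % Gibbs free energy, constant-pressure heat capacity, and the entropy
        % integral that ties heat-capacity data to free-energy evaluation:
        G(T) = H(T) - T\,S(T), \qquad
        C_{p}(T) = \left(\frac{\partial H}{\partial T}\right)_{p}, \qquad
        S(T) = S(T_{0}) + \int_{T_{0}}^{T} \frac{C_{p}(T')}{T'}\,\mathrm{d}T' .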

  16. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  17. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  18. ADAPTATION OF JOHNSON SEQUENCING ALGORITHM FOR JOB SCHEDULING TO MINIMISE THE AVERAGE WAITING TIME IN CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    SOUVIK PAL

    2016-09-01

    Full Text Available Cloud computing is an emerging paradigm of Internet-centric business computing where Cloud Service Providers (CSPs) provide services to customers according to their needs. The key concept behind cloud computing is on-demand sharing of the resources available in the resource pool provided by the CSP, which implies a new emerging business model. The resources are provisioned when jobs arrive. Job scheduling and the minimization of waiting time are challenging issues in cloud computing. When a large number of jobs are requested, they have to wait to be allocated to servers, which in turn may increase the queue length and also the waiting time. This paper includes a system design for an implementation built around the Johnson Scheduling Algorithm, which provides the optimal sequence; with that sequence, service times can be obtained. The waiting time and queue length can be reduced using a queuing model with multiple servers and finite capacity, which improves the job scheduling model.
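
    Johnson's rule itself is compact enough to sketch; each job below carries two hypothetical stage times (for instance, provisioning time followed by service time; mapping cloud job phases onto the two stages is an assumption for illustration, not the paper's exact system design).

        def johnson_sequence(jobs):
            """Johnson's rule for two-stage flow-shop sequencing.
            jobs: {name: (stage1_time, stage2_time)} -> makespan-optimal order."""
            front, back = [], []
            for name, (t1, t2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
                if t1 <= t2:
                    front.append(name)    # short first-stage time: schedule early
                else:
                    back.insert(0, name)  # short second-stage time: schedule late
            return front + back

        jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (7, 5), "J5": (6, 6)}
        print(johnson_sequence(jobs))  # -> ['J3', 'J1', 'J5', 'J4', 'J2']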

  19. Financial accounting as a method of household finance capacity valuation

    Directory of Open Access Journals (Sweden)

    A. B. Untanov

    2017-01-01

    Full Text Available The article presents existing investigations of household finance capacity. A comparison allowed determining the collisions and flaws of previous works, which substantiates the necessity of finding a new approach to household finance capacity valuation. The article contains theoretical research on the fundamental categories of household finance. In particular, it notes a significant difference between domestic and foreign experience of household finance determination, although emphasizing key similarities allows identifying the composition of household finance capacity. Moreover, the article draws on experience from the public and corporate finance sectors, which contain a large body of finance capacity investigations. This research allows classifying finance capacity not only as a resource valuation, but also as an economic entity's ability to generate a financial result. In terms of resource valuation, the paper suggests assessing both financial resources in the classical meaning and any other property which participates in household economic activity and can be evaluated. The author's position on household finance capacity valuation is suggested. A broad definition of finance capacity calls for a conceptually different approach in this paper: the comparative analysis method is used to substantiate the similarities between households and corporate firms. This method allows forming household financial accounting, which leads to a clear determination of the composition and structure of household finance capacity. The specifics of forming household financial accounting are considered, and the author's position regarding contradictions with earlier research is suggested.

  20. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  1. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Critria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  2. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system has been given. The scheme has been found to be satisfactory for all common node services provided so far

  3. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the consequent experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network implying the dispatch and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computer centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1s. A Tier 1 is typically a national centre; it is responsible for making a copy of the raw data and for processing it in order to recover relevant data with a physical meaning and to transfer the results to the 150 Tier 2s. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of the simulations. Tier 3s are at the level of the laboratories; they provide a complementary and local resource to Tier 2s in terms of data analysis. (A.C.)

  4. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises extended benefits, shifting from traditional learning-centred approaches to collaborative learning that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  5. Capacity management of nursing staff as a vehicle for organizational improvement

    NARCIS (Netherlands)

    Elkhuizen, Sylvia G.; Bor, Gert; Smeenk, Marjolein; Klazinga, Niek S.; Bakker, Piet J. M.

    2007-01-01

    Background: Capacity management systems create insight into required resources like staff and equipment. For inpatient hospital care, capacity management requires information on beds and nursing staff capacity, on a daily as well as annual basis. This paper presents a comprehensive capacity model

  6. Capacity Markets and Market Stability

    International Nuclear Information System (INIS)

    Stauffer, Hoff

    2006-01-01

    The good news is that market stability can be achieved through a combination of longer-term contracts, auctions for far enough in the future to permit new entry, a capacity management system, and a demand curve. The bad news is that if and when stable capacity markets are designed, the markets may seem to be relatively close to where we started - with integrated resource planning. Market ideologues will find this anathema. (author)

  7. AN INVESTIGATION OF RELATIONSHIP BETWEEN LEADERSHIP STYLES OF HUMAN RESOURCES MANAGER, CREATIVE PROBLEM SOLVING CAPACITY AND CAREER SATISFACTION: AN EMPIRICAL STUDY

    OpenAIRE

    Hüseyin YILMAZ

    2016-01-01

    The aim of this study is to examine empirically the relationships between the leadership behaviors of human resources managers, the creative problem-solving capacity of the organization, and employees' career satisfaction. The data required within the scope of the research were obtained, using a structured questionnaire, from 130 employees working in five-star hotels operating in the province of Aydin. According to the factor analysis, democratic leadership style, easygoing, participants c...

  8. Controlling user access to electronic resources without password

    Science.gov (United States)

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.

  9. Self managing experiment resources

    International Nuclear Information System (INIS)

    Stagni, F; Ubeda, M; Charpentier, P; Tsaregorodtsev, A; Romanovskiy, V; Roiser, S; Graciani, R

    2014-01-01

    Within this paper we present an autonomic Computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organizations Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques against all possible information sources available and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  10. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    Science.gov (United States)

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.

  11. A framework for self-assessment of capacity needs in land administration

    DEFF Research Database (Denmark)

    Enemark, Stig; van der Molen, Paul

    2006-01-01

    This paper is facing the widely stated problem of poor institutional capacity of land administration agencies in many developing and transition countries. Responding to this problem has not been simple. The challenges of building capacity in land administration are immense and not similar to just...... human resource development. Capacity building addresses the broader concept of the ability of organisations and individuals to perform functions effectively, efficiently and sustainable. The guidelines presented in this paper address the ability/capacity of land administration systems at the societal...... processes; to needed human resources and training programs. For each step the capacity of the system can be assessed and possible or needed improvement can be identified. The guidelines aim to function as a basis for in-country self-assessment of the capacity needs in land administration. The government may...

  12. Building the Capacity to Innovate: The Role of Human Capital. Research Report

    Science.gov (United States)

    Smith, Andrew; Courvisanos, Jerry; Tuck, Jacqueline; McEachern, Steven

    2012-01-01

    This report examines the link between human resource management practices and innovation. It is based on a conceptual framework in which "human resource stimuli measures"--work organisation, working time, areas of training and creativity--feed into innovative capacity or innovation. Of course, having innovative capacity does not…

  13. Efficient Computation of Buffer Capacities for Cyclo-Static Dataflow Graphs

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Bekooij, Marco J.G.; Smit, Gerardus Johannes Maria

    A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multi-processor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently buffer capacities affect the throughput. This requires the derivation of

  14. Efficient Computation of Buffer Capacities for Cyclo-Static Dataflow Graphs

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2006-01-01

    A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multi-processor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently buffer capacities affect the throughput. This requires the derivation of

  15. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    Science.gov (United States)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources for shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments using an offloading scheme. It is vital to MCC to decide which task should be offloaded and how to offload it efficiently. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
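
    The core offloading test behind such schemes can be sketched generically (this is not the MOTM/METC pseudocode, and all device, link and workload numbers are hypothetical): offload when transfer time plus remote execution beats local execution.

        def should_offload(input_bits, cycles, f_local_hz, f_cloud_hz,
                           uplink_bps, rtt_s=0.05):
            """Offload iff uploading the input and computing remotely is faster
            than computing locally (energy left aside for simplicity)."""
            t_local = cycles / f_local_hz
            t_remote = input_bits / uplink_bps + rtt_s + cycles / f_cloud_hz
            return t_remote < t_local, t_local, t_remote

        # 5 Mb input, 10^10 CPU cycles, 1 GHz device vs. a 16 GHz-equivalent
        # cloud slice, 20 Mb/s uplink.
        offload, t_l, t_r = should_offload(5e6, 1e10, 1e9, 16e9, 20e6)
        print(f"offload={offload}  local={t_l:.2f}s  remote={t_r:.2f}s")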

  16. On the cost of using capacity flexibility - a dynamic programming approach

    NARCIS (Netherlands)

    Wijngaard, J; Miltenburg, GJ

    1997-01-01

    This paper considers the problem of how to evaluate the resource use for sales opportunities in a production situation that is dually constrained: in operator capacity and machine capacity. The operator capacity is flexible, while the machine capacity is not. Therefore, there is a certain machine

  17. International Conference on Human Resource Development for Nuclear Power Programmes: Building and Sustaining Capacity. Presentations

    International Nuclear Information System (INIS)

    2014-01-01

    The objectives of the conference are to: • Review developments in the global status of HRD since the 2010 international conference; • Emphasize the role of human resources and capacity building programmes at the national and organizational level for achieving safe, secure and sustainable nuclear power programmes; • Discuss the importance of building competence in nuclear safety and security; • Provide a forum for information exchange on national, as well as international, policies and practices; • Share key elements and best practices related to the experience of Member States that are introducing, operating or expanding nuclear power programmes; • Highlight the practices and issues regarding HRD at the organizational and national level; • Highlight education and training programmes and practices; • Emphasize the role of nuclear knowledge management for knowledge transfer and HRD; and • Elaborate on the role and scope of various knowledge networks

  18. [Ecological carrying capacity and Chongming Island's ecological construction].

    Science.gov (United States)

    Wang, Kaiyun; Zou, Chunjing; Kong, Zhenghong; Wang, Tianhou; Chen, Xiaoyong

    2005-12-01

    This paper overviewed the goals of Chongming Island's ecological construction and its background, analyzed the current eco-economic status and constraints of the Island, and put forward some scientific issues on its ecological construction. It was suggested that, for the resource-saving and sustainable development of the Island, research on its ecological construction should be based on its ecological carrying capacity, fully take the regional characteristics into consideration, and draw on successful development modes at home and abroad. The carrying capacity study should be grounded in systemic and dynamic views, give a thorough evaluation of the Island's present carrying capacity, simulate its possible changes, and forecast its demands and risks. Operable countermeasures to promote the Island's carrying capacity should be worked out; a new industrial structure, population scale, and optimized distribution projects conforming to the regional carrying capacity should be formulated; and an effective ecological security early-warning and control system should be built, with the aim of providing suggestions and strategic evidence for decision-making on economic development and sustainable use of environmental resources in the region.

  19. Economics and design of capacity markets for the power sector

    Energy Technology Data Exchange (ETDEWEB)

    Cramton, Peter [Maryland Univ., College Park, MD (United States). Dept. of Economics; Ockenfels, Axel [Koeln Univ. (Germany). Dept. of Economics

    2012-06-15

    Capacity markets are a means to assure resource adequacy. The need for a capacity market stems from several market failures, the most prominent of which is the absence of a robust demand side. Limited demand response makes market clearing problematic in times of scarcity. We present the economic motivation for a capacity market, describe one specific market design that draws on the best design features of the resource adequacy approaches analyzed in the literature, and discuss other instruments to deal with these problems. We then discuss the suitability of the market design for Europe, and Germany in particular. (orig.)

  20. DrugSig: A resource for computational drug repositioning utilizing gene expression signatures.

    Directory of Open Access Journals (Sweden)

    Hongyu Wu

    Full Text Available Computational drug repositioning has been proven an effective approach to developing new drug uses. However, existing strategies rely strongly on drug-response gene signatures that are scattered across separate experimental datasets, which results in inefficient outputs. A comprehensive database of drug-response gene signatures would therefore be very helpful to these methods. We collected drug-response microarray data and annotated the related drug and target information from public databases and the scientific literature. Selecting the top 500 up-regulated and down-regulated genes as drug signatures, we manually established the DrugSig database. Currently DrugSig contains more than 1300 drugs, 7000 microarrays and 800 targets. Moreover, we developed signature-based and target-based functions to aid drug repositioning. The constructed database can serve as a resource to accelerate computational drug repositioning. Database URL: http://biotechlab.fudan.edu.cn/database/drugsig/.
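    Signature-based matching of the kind DrugSig supports is typically scored by how strongly a drug signature overlaps with, or reverses, a query signature. A minimal sketch under that assumption (gene names, the cutoff of 500 and the scoring rule are illustrative, not DrugSig's internals):

        import numpy as np

        # A drug signature: the n most up- and down-regulated genes by
        # log fold change. A reversal score: signed overlap with a query
        # (disease) signature. Illustrative only, not DrugSig's internals.

        def make_signature(genes, log_fold_change, n=500):
            order = np.argsort(log_fold_change)
            down = {genes[i] for i in order[:n]}    # most down-regulated
            up = {genes[i] for i in order[-n:]}     # most up-regulated
            return up, down

        def reversal_score(drug_sig, query_sig):
            d_up, d_down = drug_sig
            q_up, q_down = query_sig
            # Positive when the drug moves genes opposite to the query.
            return (len(d_down & q_up) + len(d_up & q_down)
                    - len(d_up & q_up) - len(d_down & q_down))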

  1. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models; previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented, including a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for the transfer and translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing the Cray to become a shared co-processor to the workstation application. 5 refs., 6 figs
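    The RPC paradigm described here predates modern toolkits, but the client/server shape is unchanged. A minimal present-day analogue using Python's standard-library xmlrpc (my choice of illustration; the paper used Sun RPC with XDR on a UNICOS Cray, and all names below are hypothetical):

        # server.py -- expose a function as a remote service, standing in
        # for a generated server stub in the RPC paradigm.
        from xmlrpc.server import SimpleXMLRPCServer

        def translate_results(run_id):
            # Placeholder for work done on the remote host, near the data.
            return "translated output for " + run_id

        server = SimpleXMLRPCServer(("localhost", 8000))
        server.register_function(translate_results)
        server.serve_forever()

    A client then invokes the remote service as if it were a local function: xmlrpc.client.ServerProxy("http://localhost:8000").translate_results("case42").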

  2. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    International Nuclear Information System (INIS)

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered

  3. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Coton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  4. 农业生产与水资源承载力评价%Agricultural production and evaluation in terms of water resources carrying capacity

    Institute of Scientific and Technical Information of China (English)

    虞祎; 张晖; 胡浩

    2016-01-01

    Based on an evaluation of water resources carrying capacity, and in particular taking into account the impact of agricultural pollution on the sustainable use of water resources, a comprehensive analysis was conducted of the strain on water resources due to farming and animal production in different regions of China, to provide a reference for rational estimation of potential agricultural growth and correct approaches to structural adjustment in agriculture. Excess nitrogen and grey water were calculated as indicators to quantify the impact of agricultural pollution on water resources. Following nutrient balance theory, excess nitrogen is the difference between the sum of nitrogen provided by chemical fertilizer, livestock manure and soil, and the total nitrogen needed by farming. Grey water is the amount of water required to dilute an excessively high concentration of nitrogen in water to a more environmentally friendly level. The agricultural water footprint is the sum of agricultural water used and grey water. The huge quantity of excess nitrogen produced by farming and livestock consequently led to an excessive amount of grey water, which more than doubled the amount of water used in agriculture. There is therefore a need to reserve enough environmental space for diluting pollution when estimating water resources carrying capacity on the basis of water sustainability and healthy development. A water surplus indicator was constructed to reflect the potential of water resources to support agricultural production with detailed environmental consideration; water surplus is the difference between water resources and the agricultural water footprint. Using 2003-2012 nationwide samples, a panel data model was constructed to analyze the impact of changes in sown area and livestock head on water surplus. The results suggested that the nationwide water resources of China could carry a maximum of 168.89 million hm2 or 3.57 billion pigs. The water resources carrying capacity model results also showed that the
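    The chain of indicators the abstract describes can be written compactly in LaTeX; the symbols are mine, not the paper's (c_max is the allowable nitrogen concentration, c_nat the natural background concentration):

        N_{\text{excess}} = N_{\text{fertilizer}} + N_{\text{manure}} + N_{\text{soil}} - N_{\text{crop}}, \qquad
        W_{\text{grey}} = \frac{N_{\text{excess}}}{c_{\max} - c_{\text{nat}}}

        F_{\text{agri}} = W_{\text{used}} + W_{\text{grey}}, \qquad
        S = W_{\text{available}} - F_{\text{agri}}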

  5. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background: Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling exceeding the computing capacity of a single institution. Results: This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life-science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation, so that tacit knowledge can be shared within a community. Conclusion: Extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as the time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  6. 76 FR 39470 - Integrated Resource Plan

    Science.gov (United States)

    2011-07-06

    ... in the form of hydro-electric pump storage capacity. Increased load demands above the capacity of..., biomass, and wind energy, and energy storage resources. Each portfolio was optimized for the lowest net...

  7. Future computing needs for Fermilab

    International Nuclear Information System (INIS)

    1983-12-01

    The following recommendations are made: (1) Significant additional computing capacity and capability beyond the present procurement should be provided by 1986; a working group with representation from the principal computer user community should be formed immediately to develop the technical specifications, and high priority should be assigned to providing a large user memory, software portability and a productive computing environment. (2) A networked system of VAX-equivalent super-minicomputers should be established, with at least one such computer dedicated to each reasonably large experiment for both online and offline analysis; the laboratory staff responsible for minicomputers should be augmented in order to handle the additional work of establishing, maintaining and coordinating this system. (3) The laboratory should move decisively to a more fully interactive environment. (4) A plan for networking both inside and outside the laboratory should be developed over the next year. (5) The laboratory resources devoted to computing, including manpower, should be increased over the next two to five years; a reasonable increase would be 50% over the next two years, rising thereafter to about twice the present level. (6) A standing computer coordinating group, with membership of experts from all the principal computer user constituencies of the laboratory, should be appointed by and report to the director; this group should meet on a regularly scheduled basis and be charged with continually reviewing all aspects of the laboratory computing environment

  8. Uranium supply/demand projections to 2030 in the OECD/NEA-IAEA "Red Book". Nuclear growth projections, global uranium exploration, uranium resources, uranium production and production capacity

    International Nuclear Information System (INIS)

    Vance, Robert

    2009-01-01

    World demand for electricity is expected to continue to grow rapidly over the next several decades to meet the needs of an increasing population and economic growth. The recognition by many governments that nuclear power can produce competitively priced, base-load electricity that is essentially free of greenhouse gas emissions, combined with the role that nuclear power can play in enhancing security of energy supplies, has increased the prospects for growth in nuclear generating capacity. Since the mid-1960s, with the co-operation of their member countries and states, the OECD Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA) have jointly prepared periodic updates (currently every 2 years) on world uranium resources, production and demand. These updates have been published by the OECD/NEA in what is commonly known as the "Red Book". The 2007 edition replaces the 2005 edition and reflects information current as of 1 January 2007. Uranium 2007: Resources, Production and Demand presents, in addition to updated resource figures, the results of a recent review of world uranium market fundamentals and provides a statistical profile of the world uranium industry. It contains official data provided by 40 countries (and one country report prepared by the IAEA Secretariat) on uranium exploration, resources, production and reactor-related requirements. Projections of nuclear generating capacity and reactor-related uranium requirements to 2030, as well as a discussion of long-term uranium supply and demand issues, are also presented. (orig.)

  9. Communicative Planning As Institutional Capacity Building: From Discourse/Network To Opportunity

    Directory of Open Access Journals (Sweden)

    Delik Hudalah

    2013-05-01

    Full Text Available The paper redefines communicative planning as not only participatory and democratic practice but also capacity building oriented toward the improvement of governance styles and consciousness. So far, capacity building has focused on the exploitation of social resources internal to actors. These internal resources include knowledge (argumentation, debate, discourse formation, etc.) and relational (network, coalition, alliance, etc.) building. The paper argues that in dealing with very complex planning problems characterized by fragmented and uncertain institutional systems, the internal resources need to be coupled with the exploration of resources external to actors, namely the political opportunity structure and the moment of opportunity. The analysis implies that the performance of a communicative decision-making process as capacity building can be assessed in three aspects: strategic and inclusive involvement of actors, the building of actors' awareness of neglected but important planning issues and agendas, and consistency and deliberation in realizing and delivering agreed planning ideas, frameworks and decisions.

  10. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  11. Methods and measures of enhancing production capacity of uranium mines

    International Nuclear Information System (INIS)

    Ni Yuhui

    2013-01-01

    Limited by resource conditions and mining conditions, the production capacity of uranium mines is generally small. The main factors to affect the production capacity determination of uranium mines are analyzed, the ways and measures to enhance the production capacity of uranium mines are explored from the innovations of technology and management mode. (author)

  12. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)
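    The reduction rests on block diagonality: a Hamiltonian that conserves particle number never mixes particle-number sectors, so each sector can be simulated on a smaller qubit register. A toy numpy sketch of that bookkeeping (the random-block Hamiltonian and the 4-orbital system are illustrative, not the paper's reduction scheme):

        import numpy as np
        from itertools import combinations

        n_orbitals = 4
        dim = 2 ** n_orbitals
        rng = np.random.default_rng(0)

        def sector_states(n_particles):
            # Bitmask basis states with exactly n_particles occupied orbitals.
            return [sum(1 << i for i in occ)
                    for occ in combinations(range(n_orbitals), n_particles)]

        # A number-conserving Hamiltonian is block diagonal across sectors;
        # build a toy one that way (random Hermitian block per sector).
        H = np.zeros((dim, dim))
        for n in range(n_orbitals + 1):
            idx = sector_states(n)
            b = rng.normal(size=(len(idx), len(idx)))
            H[np.ix_(idx, idx)] = (b + b.T) / 2

        # Simulating only the half-filled sector needs a smaller register.
        idx = sector_states(2)
        block = H[np.ix_(idx, idx)]        # 6x6 block of the 16x16 matrix
        print(f"{len(idx)} states -> {int(np.ceil(np.log2(len(idx))))} "
              f"qubits instead of {n_orbitals}")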

  13. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case of study, investigates the optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c

  14. Competition under capacitated dynamic lot-sizing with capacity acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    2011-01-01

    Lot-sizing and capacity planning are important supply chain decisions, and competition and cooperation affect the performance of these decisions. In this paper, we look into the dynamic lot-sizing and resource competition problem of an industry consisting of multiple firms. A capacity competition...... production setup, along with inventory carrying costs. The individual production lots of each firm are limited by a constant capacity restriction, which is purchased up front for the planning horizon. The capacity can be purchased from a spot market, and the capacity acquisition cost fluctuates...

  15. The Usage of informal computer based communication in the context of organization’s technological resources

    OpenAIRE

    Raišienė, Agota Giedrė; Jonušauskas, Steponas

    2011-01-01

    The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology - meta-analysis, survey and descriptive analysis. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture, and a feeling of interdependence and affinity. Also, informal communication widens the ...

  16. Installed capacity in New York

    International Nuclear Information System (INIS)

    Charlton, J.

    2006-01-01

    This presentation discussed capacity issues related to the New York Independent System Operator (NYISO). The NYISO's market volume was approximately $11 billion in 2005, and it was responsible for providing 32,075 MW of electricity at peak load to its users. Regulatory uncertainty is currently discouraging investment in new generating resources. All load-serving entities are required to contract for sufficient capacity in order to meet their capacity obligations. Market participants currently determine capacity and energy revenues. The NYISO market allows suppliers to recover variable costs for providing ancillary services, and the economic value of the revenue source governs decisions made in the wholesale electricity market. The installed capacity market was designed as a spot deficiency auction. Phased-in demand curves are used to modify the installed capacity market's design. A sloped demand curve mechanism is used to value capacity above the minimum requirement for both reliability and competition. Participation in the day-ahead market enhances competition and exerts downward pressure on energy and ancillary service market prices. It was concluded that the market structures and design features of the installed capacity markets recognize the need for system reliability in addition to encouraging robust competition, while recognizing energy price caps and regulatory oversight. tabs., figs

  17. Results of a Nationwide Capacity Survey of Hospitals Providing Trauma Care in War-Affected Syria.

    Science.gov (United States)

    Mowafi, Hani; Hariri, Mahmoud; Alnahhas, Houssam; Ludwig, Elizabeth; Allodami, Tammam; Mahameed, Bahaa; Koly, Jamal Kaby; Aldbis, Ahmed; Saqqur, Maher; Zhang, Baobao; Al-Kassem, Anas

    2016-09-01

    The Syrian civil war has resulted in large-scale devastation of Syria's health infrastructure along with widespread injuries and death from trauma. The capacity of Syrian trauma hospitals is not well characterized. Data are needed to allocate resources for trauma care to the population remaining in Syria. To identify the number of trauma hospitals operating in Syria and to delineate their capacities. From February 1 to March 31, 2015, a nationwide survey of 94 trauma hospitals was conducted inside Syria, representing a coverage rate of 69% to 93% of reported hospitals in nongovernment controlled areas. Identification and geocoding of trauma and essential surgical services in Syria. Although 86 hospitals (91%) reported capacity to perform emergency surgery, 1 in 6 hospitals (16%) reported having no inpatient ward for patients after surgery. Sixty-three hospitals (70%) could transfuse whole blood but only 7 (7.4%) could separate and bank blood products. Seventy-one hospitals (76%) had any pharmacy services. Only 10 (11%) could provide renal replacement therapy, and only 18 (20%) provided any form of rehabilitative services. Syrian hospitals are isolated, with 24 (26%) relying on smuggling routes to refer patients to other hospitals and 47 hospitals (50%) reporting domestic supply lines that were never open or open less than daily. There were 538 surgeons, 378 physicians, and 1444 nurses identified in this survey, yielding a nurse to physician ratio of 1.8:1. Only 74 hospitals (79%) reported any salary support for staff, and 84 (89%) reported material support. There is an unmet need for biomedical engineering support in Syrian trauma hospitals, with 12 fixed x-ray machines (23%), 11 portable x-ray machines (13%), 13 computed tomographic scanners (22%), 21 adult (21%) and 5 pediatric (19%) ventilators, 14 anesthesia machines (10%), and 116 oxygen cylinders (15%) not functional. No functioning computed tomographic scanners remain in Aleppo, and 95 oxygen cylinders (42

  18. Higher education and capacity building in Africa

    DEFF Research Database (Denmark)

    Higher education has recently been recognised as a key driver for societal growth in the Global South, and capacity building of African universities is now widely included in donor policies. The question is: how do capacity-building projects affect African universities, researchers and students? U...... is a valuable resource for researchers and postgraduate students in education, development studies, African studies and human geography, as well as anthropology and history.......? Universities and their scientific knowledges are often seen to have universal qualities; therefore, capacity building may appear straightforward. Higher Education and Capacity Building in Africa contests such universalistic notions. Inspired by ideas about the ‘geography of scientific knowledge’ it explores

  19. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  20. New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era

    CERN Document Server

    Adamova, Dagmar

    2017-01-01

    The performance of the Large Hadron Collider (LHC) during the ongoing Run 2 is above expectations, both concerning the delivered luminosity and the LHC live time. This has resulted in a volume of data much larger than originally anticipated. Based on the current data production levels and the structure of the LHC experiment computing models, the estimates of the data production rates and resource needs were re-evaluated for the era leading into the High Luminosity LHC (HL-LHC), the Run 3 and Run 4 phases of LHC operation. It turns out that the raw data volume will grow 10 times by the HL-LHC era and the processing capacity needs will grow more than 60 times. While the growth of storage requirements might in principle be satisfied with a 20 per cent budget increase and technology advancements, there is a gap of a factor 6 to 10 between the needed and available computing resources. The threat of a lack of computing and storage resources was present already in the beginning of Run 2, but could still be mitigated, e.g....

  1. National hydroelectric power resources study. Preliminary inventory of hydropower resources. Volume 6. Northeast region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-07-01

    In the Northeast region, the physical potential for all sites exceeds 33,000 MW of capacity with an estimated average annual energy of some 153,000 GWH. By comparison, the available data represent about 6% of the total capacity and 11% of the hydroelectric energy potential estimated for the entire US. Of the total capacity estimated for the region, 6100 MW has been installed. The remainder (27,200 MW, excluding the undeveloped capacity in the New England States) is the maximum which could be developed by upgrading and expanding existing projects (18,700 MW), and by installing new hydroelectric power capacity at all potentially feasible, undeveloped sites (8500 MW). Small-scale facilities account for about 15% of the region's total installed capacity, but another 1800 MW could be added to these and other small water-resource projects. In addition, 500 MW could be installed at potentially feasible, undeveloped small-scale sites. The small-scale resource varies considerably, with the states of New York, Maine, and New Hampshire having the largest potential for incremental development at existing projects in the Northeast region. West Virginia, Maryland, Delaware, New Jersey, Pennsylvania, New York, Connecticut, Massachusetts, Rhode Island, New Hampshire, Vermont, and Maine comprise the Northeast region.

  2. Use of Google Earth to strengthen public health capacity and facilitate management of vector-borne diseases in resource-poor environments.

    Science.gov (United States)

    Lozano-Fuentes, Saul; Elizondo-Quiroga, Darwin; Farfan-Ale, Jose Arturo; Loroño-Pino, Maria Alba; Garcia-Rejon, Julian; Gomez-Carro, Salvador; Lira-Zumbardo, Victor; Najera-Vazquez, Rosario; Fernandez-Salas, Ildefonso; Calderon-Martinez, Joaquin; Dominguez-Galera, Marco; Mis-Avila, Pedro; Morris, Natashia; Coleman, Michael; Moore, Chester G; Beaty, Barry J; Eisen, Lars

    2008-09-01

    Novel, inexpensive solutions are needed for improved management of vector-borne and other diseases in resource-poor environments. Emerging free software providing access to satellite imagery and simple editing tools (e.g. Google Earth) complements existing geographic information system (GIS) software and provides new opportunities for: (i) strengthening overall public health capacity through development of information for city infrastructures; and (ii) display of public health data directly on an image of the physical environment. We used freely accessible satellite imagery and a set of feature-making tools included in the software (allowing for production of polygons, lines and points) to generate information for city infrastructure and to display disease data in a dengue decision support system (DDSS) framework. Two cities in Mexico (Chetumal and Merida) were used to demonstrate that a basic representation of city infrastructure useful as a spatial backbone in a DDSS can be rapidly developed at minimal cost. Data layers generated included labelled polygons representing city blocks, lines representing streets, and points showing the locations of schools and health clinics. City blocks were colour-coded to show the presence of dengue cases. The data layers were successfully imported into GIS software in the shapefile format. The combination of Google Earth and free GIS software (e.g. HealthMapper, developed by WHO, and SIGEpi, developed by PAHO) has tremendous potential to strengthen overall public health capacity and facilitate decision support system approaches to prevention and control of vector-borne diseases in resource-poor environments.
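    As a concrete illustration of the digitize-then-import workflow the abstract describes, the sketch below writes a point layer of clinics as a shapefile, assuming the third-party pyshp package; the file names, fields and coordinates are hypothetical:

        import shapefile  # pyshp, a common pure-Python shapefile writer

        # Write a point layer of health clinics digitized from imagery.
        w = shapefile.Writer("clinics", shapeType=shapefile.POINT)
        w.field("NAME", "C", size=40)
        w.field("DENGUE", "N")         # e.g. reported cases at the location

        w.point(-88.3059, 18.5001)     # hypothetical lon/lat in Chetumal
        w.record("Clinic A", 3)
        w.point(-88.2961, 18.5122)
        w.record("Clinic B", 0)
        w.close()                      # produces clinics.shp/.shx/.dbf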

  3. Development of Resource Sharing System Components for AliEn Grid Infrastructure

    CERN Document Server

    Harutyunyan, Artem

    2010-01-01

    The problem of the resource provision, sharing, accounting and use represents a principal issue in the contemporary scientific cyberinfrastructures. For example, collaborations in physics, astrophysics, Earth science, biology and medicine need to store huge amounts of data (of the order of several petabytes) as well as to conduct highly intensive computations. The appropriate computing and storage capacities cannot be ensured by one (even very large) research center. The modern approach to the solution of this problem suggests exploitation of computational and data storage facilities of the centers participating in collaborations. The most advanced implementation of this approach is based on Grid technologies, which enable effective work of the members of collaborations regardless of their geographical location. Currently there are several tens of Grid infrastructures deployed all over the world. The Grid infrastructures of CERN Large Hadron Collider experiments - ALICE, ATLAS, CMS, and LHCb which are exploi...

  4. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, from Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is, however, preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC, and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.

  5. Practical methodologies for the calculation of capacity in electricity markets for wind energy

    International Nuclear Information System (INIS)

    Botero B, Sergio; Giraldo V, Luis Alfonso; Isaza C, Felipe

    2008-01-01

    Determining the real capacity of the generators in a power market is an essential task in order to estimate actual system reliability and to estimate the reward due to generators for their capacity in the firm energy market. In the case of wind power, an intermittent resource, several methodologies have been proposed to estimate the capacity of a wind power emplacement, not only for planning but also for firm energy remuneration purposes. This paper presents some methodologies that have been proposed or implemented around the world to calculate the capacity of this energy resource.
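    One simple methodology in this family approximates capacity value as the wind plant's capacity factor computed over peak-load hours only. A sketch with synthetic data (real studies use multi-year measured series; the 90th-percentile peak definition is one common choice, not necessarily the paper's):

        import numpy as np

        # Peak-period capacity-factor approximation of capacity credit.
        # The hourly series below are synthetic stand-ins.
        rng = np.random.default_rng(1)
        hours = 8760
        wind_mw = rng.uniform(0, 100, hours)        # output of a 100 MW farm
        load_mw = 800 + 200 * rng.random(hours)     # hourly system load

        peak = load_mw >= np.quantile(load_mw, 0.9) # top 10% load hours
        capacity_credit = wind_mw[peak].mean() / 100
        print(f"peak-period capacity credit: {capacity_credit:.1%}")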

  6. Critical analysis of world uranium resources

    Science.gov (United States)

    Hall, Susan; Coleman, Margaret

    2013-01-01

    report’s analysis of 141 mines that are operating or are being actively developed identifies 2.7 million tU of in-situ uranium resources worldwide, approximately 2.1 million tU recoverable after mining and milling losses were deducted. Sixty-four operating mines report a total of 1.4 million tU of in-situ RAR (about 1 million tU recoverable). Seventy-seven developing mines/production centers report 1.3 million tU in-situ Reasonably Assured Resources (RAR) (about 1.1 million tU recoverable), which have a reasonable chance of producing uranium within 5 years. Most of the production is projected to come from conventional underground or open pit mines as opposed to in-situ leach mines. Production capacity in operating mines is about 76,000 tU/yr, and in developing mines is estimated at greater than 52,000 tU/yr. Production capacity in operating mines should be considered a maximum as mines seldom produce up to licensed capacity due to operational difficulties. In 2010, worldwide mines operated at 70 percent of licensed capacity, and production has never exceeded 89 percent of capacity. The capacity in developing mines is not always reported. In this study 35 percent of developing mines did not report a target licensed capacity, so estimates of future capacity may be too low. The Organisation for Economic Co-operation and Development’s Nuclear Energy Agency (NEA) and International Atomic Energy Agency (IAEA) estimate an additional 1.4 million tU economically recoverable resources, beyond that identified in operating or developing mines identified in this report. As well, 0.5 million tU in subeconomic resources, and 2.3 million tU in the geologically less certain inferred category are identified worldwide. These agencies estimate 2.2 million tU in secondary sources such as government and commercial stockpiles and re-enriched uranium tails. They also estimate that unconventional uranium supplies (uraniferous phosphate and black shale deposits) may contain up to 7.6 million t

  7. The impact of rationing of health resources on capacity of Australian public sector nurses to deliver nursing care after-hours: a qualitative study.

    Science.gov (United States)

    Henderson, Julie; Willis, Eileen; Toffoli, Luisa; Hamilton, Patricia; Blackman, Ian

    2016-12-01

    Australia, along with other countries, has introduced New Public Management (NPM) into public sector hospitals in an effort to contain healthcare costs. NPM is associated with outsourcing of service provision, the meeting of government performance indicators, workforce flexibility and rationing of resources. This study explores the impact of rationing of staffing and other resources upon delivery of care outside of business hours. Data was collected through semistructured interviews conducted with 21 nurses working in 2 large Australian metropolitan hospitals. Participants identified four strategies associated with NPM which add to workload after-hours and impacted on the capacity to deliver nursing care. These were functional flexibility, vertical substitution of staff, meeting externally established performance indicators and outsourcing. We conclude that cost containment alongside of the meeting of performance indicators has extended work traditionally performed during business hours beyond those hours when less staffing and material resources are available. This adds to nursing workload and potentially contributes to incomplete nursing care. © 2016 John Wiley & Sons Ltd.

  8. Technical and institutional capacity in local organisations to manage ...

    African Journals Online (AJOL)

    Technical and institutional capacity in local organisations to manage decentralised forest resources in Uganda. ... Southern Forests: a Journal of Forest Science ... to implement decentralised forest governance exists in local organisations through partnerships with other actors in the productive use of the available resources.

  9. Using Mosix for Wide-Area Computational Resources

    Science.gov (United States)

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  10. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available Purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization’s technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture, and a feeling of interdependence and affinity. Also, informal communication widens individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations, because it helps to ensure the efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  11. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available Purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization’s technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture, and a feeling of interdependence and affinity. Also, informal communication widens individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations, because it helps to ensure the efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  12. Evaluation of carrying capacity and territorial environmental sustainability

    Directory of Open Access Journals (Sweden)

    Giuseppe Ruggiero

    2012-09-01

    Full Text Available Land use has a great impact on environmental quality, use of resources, the state of ecosystems and socio-economic development. Land use can be considered sustainable if the environmental pressures of human activities do not exceed the ecological carrying capacity. Scientific knowledge of the capability of ecosystems to provide resources and absorb waste is a useful and innovative means of supporting territorial planning. This study examines the area of the Province of Bari to estimate the ecosystems’ carrying capacity, and compares it with the current environmental pressures exerted by human activities. The adapted methodology identified the environmentally sustainable level for the province.

  13. A scenario based approach for flexible resource loading under uncertainty

    NARCIS (Netherlands)

    Wullink, Gerhard; Gademann, Noud; Hans, Elias W.; van Harten, Aart

    2003-01-01

    Order acceptance decisions in manufacture-to-order environments are often made based on incomplete or uncertain information. To promise reliable due dates and to manage resource capacity adequately, resource capacity loading is an indispensable supporting tool. We propose a scenario-based approach

  14. Cost of wind energy: comparing distant wind resources to local resources in the midwestern United States.

    Science.gov (United States)

    Hoppock, David C; Patiño-Echeverri, Dalia

    2010-11-15

    The best wind sites in the United States are often located far from electricity demand centers and lack transmission access. Local sites that have lower quality wind resources but do not require as much power transmission capacity are an alternative to distant wind resources. In this paper, we explore the trade-offs between developing new wind generation at local sites and installing wind farms at remote sites. We first examine the general relationship between the high capital costs required for local wind development and the relatively lower capital costs required to install a wind farm capable of generating the same electrical output at a remote site,with the results representing the maximum amount an investor should be willing to pay for transmission access. We suggest that this analysis can be used as a first step in comparing potential wind resources to meet a state renewable portfolio standard (RPS). To illustrate, we compare the cost of local wind (∼50 km from the load) to the cost of distant wind requiring new transmission (∼550-750 km from the load) to meet the Illinois RPS. We find that local, lower capacity factor wind sites are the lowest cost option for meeting the Illinois RPS if new long distance transmission is required to access distant, higher capacity factor wind resources. If higher capacity wind sites can be connected to the existing grid at minimal cost, in many cases they will have lower costs.
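    The break-even logic is easy to sketch: size a local and a remote farm to deliver the same annual energy, and the capital-cost gap between them is the most one should be willing to pay for transmission access. All numbers below are hypothetical, not the paper's Illinois data:

        # Maximum willingness to pay for transmission = cost of a local farm
        # minus cost of a remote farm delivering the same annual energy.
        # Capacity factors and capex are hypothetical placeholders.

        def farm_cost(energy_gwh_yr, capacity_factor, capex_per_mw=1.8e6):
            mw = energy_gwh_yr * 1e3 / (8760 * capacity_factor)  # nameplate
            return mw * capex_per_mw

        target = 500   # GWh/yr needed to meet the RPS obligation
        local = farm_cost(target, capacity_factor=0.28)
        remote = farm_cost(target, capacity_factor=0.42)
        print(f"max willingness to pay for transmission: "
              f"${local - remote:,.0f}")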

  15. From transistor to trapped-ion computers for quantum chemistry.

    Science.gov (United States)

    Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E

    2014-01-07

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.

  16. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time, optimal control of dynamic systems, while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
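    The event-driven idea can be sketched in a few lines: re-solve the finite-horizon problem only when the measured state drifts from the last prediction, otherwise keep consuming the stored input sequence. The plant, horizon, weights and trigger threshold below are hypothetical, not the paper's method:

        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
        B = np.array([[0.005], [0.1]])
        H, r = 20, 0.1                            # horizon and input weight

        # Stacked prediction X = Phi @ x0 + Gamma @ U (unconstrained LQ-MPC).
        Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
        Gamma = np.zeros((2 * H, H))
        for k in range(H):
            for j in range(k + 1):
                Gamma[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
        K = np.linalg.solve(Gamma.T @ Gamma + r * np.eye(H), Gamma.T)

        rng = np.random.default_rng(0)
        x = x_pred = np.array([1.0, 0.0])
        U, solves = [], 0
        for t in range(100):
            if not U or np.linalg.norm(x - x_pred) > 5e-3:
                U = list(-K @ (Phi @ x))          # event: recompute the plan
                solves += 1
            u = U.pop(0)
            x_pred = A @ x + B[:, 0] * u          # what the model expects
            x = x_pred + rng.normal(scale=1e-3, size=2)  # disturbance drift
        print(f"{solves} solves in 100 steps instead of 100")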

  17. 18 CFR 287.101 - Determination of powerplant design capacity.

    Science.gov (United States)

    2010-04-01

    ... powerplant design capacity. For the purpose of section 103 of the Powerplant and Industrial Fuel Use Act of... powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE POWERPLANT AND INDUSTRIAL FUEL USE ACT OF...

  18. Adaptive resource allocation for efficient patient scheduling

    NARCIS (Netherlands)

    Vermeulen, Ivan B.; Bohte, Sander M.; Elkhuizen, Sylvia G.; Lameris, Han; Bakker, Piet J. M.; La Poutré, Han

    2009-01-01

    Efficient scheduling of patient appointments on expensive resources is a complex and dynamic task. A resource is typically used by several patient groups. To service these groups, resource capacity is often allocated per group, explicitly or implicitly. Importantly, due to fluctuations in demand,

  19. Effects of Manufacturing Firm’s Capacity Planning on Performance of the Firm

    Directory of Open Access Journals (Sweden)

    Kifordu Anthony

    2017-12-01

    Full Text Available This study investigated capacity planning and performance in the manufacturing sector in the south-eastern states of Nigeria, based on selected brewing industries. The population of the study was 740 staff of the brewing industry in South-Eastern Nigeria. A sample size of 509 was obtained using Taro Yamane's statistical formula. The study used a questionnaire and an oral interview guide for data gathering. A test-retest check was completed using Spearman's rank correlation, giving a coefficient of 0.9. Findings revealed that capacity planning significantly enriched the economic performance of the industry studied. There existed a strong positive relationship between capacity requirements planning and resource requirements planning. The paper suggested the use of capacity planning as a technique to improve all performance factors. Similarly, a performance advantage subsists in the correlation between capacity requirements plans and resource requirements planning. The paper summarily held the position that capacity planning improved economic performance in the industry under review, which inferred that goal achievement is possible. Likewise, the finding of a substantial positive association between capacity requirements planning and resource requirements planning inferred a positive interaction between the variables. This meant that resource requirements planning, a method of organizing detailed production plans, could lead to an improvement in capacity requirements planning; that is to say, it informs future decisions on the materials required for the production capability of the brewing facility.
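    Taro Yamane's sample-size formula, which the study cites, is n = N / (1 + N e^2) for population N and margin of error e; back-solving from the reported N = 740 and n = 509 implies e of roughly 2.5% (my calculation, not stated in the abstract):

        n = \frac{N}{1 + N e^{2}}, \qquad
        e = \sqrt{\frac{N/n - 1}{N}} = \sqrt{\frac{740/509 - 1}{740}} \approx 0.025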

  20. The use of Minilabs to improve the testing capacity of regulatory authorities in resource limited settings: Tanzanian experience.

    Science.gov (United States)

    Risha, Peter Gasper; Msuya, Zera; Clark, Malcolm; Johnson, Keith; Ndomondo-Sigonda, Margareth; Layloff, Thomas

    2008-08-01

    The Tanzania Food and Drugs Authority piloted the use of Minilab kits, a thin-layer-chromatography-based drug quality testing technique, in a two-tier quality assurance program. The program is intended to improve testing capacity with timely screening of the quality of medicines as they enter the market. After one week of training in Minilab screening techniques, inspectors were stationed at key Ports-of-Entry (POE) to screen the quality of imported medicines. In addition, three non-Port-of-Entry centres were established to screen samples collected during post-marketing surveillance. Standard operating procedures (SOPs) were developed to structure and standardize the implementation process. Over 1200 samples were tested using the Minilab outside the central quality control laboratory (QCL), almost doubling the previous testing capacity. The program contributed to increased regulatory reach and visibility of the Authority throughout the country, serving as a deterrent against the entry of substandard medicines into the market. The use of the Minilab for quality screening was inexpensive and provided a high sample throughput. However, it suffers from the limitation that it can reliably detect only grossly substandard or wrong drug samples; therefore, it should not be used as an independent testing resource but in conjunction with a full-service quality control laboratory capable of auditing reported substandard results.

  1. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot limit the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  2. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    Science.gov (United States)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  3. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives, and traditionally they are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in every way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  4. Pediatric emergency care capacity in a low-resource setting: An assessment of district hospitals in Rwanda.

    Directory of Open Access Journals (Sweden)

    Celestin Hategeka

    Full Text Available Health system strengthening is crucial to improving infant and child health outcomes in low-resource countries. While the knowledge related to improving newborn and child survival has advanced remarkably over the past few decades, many healthcare systems in such settings remain unable to effectively deliver pediatric advanced life support management. With the introduction of the Emergency Triage, Assessment and Treatment plus Admission care (ETAT+)-a locally adapted pediatric advanced life support management program-in Rwandan district hospitals, we undertook this study to assess the extent to which these hospitals are prepared to provide this pediatric advanced life support management. The results of the study will shed light on the resources and support that are currently available to implement ETAT+, which aims to improve care for severely ill infants and children. A cross-sectional survey was undertaken in eight district hospitals across Rwanda focusing on the availability of physical and human resources, as well as the organization of hospital services to provide emergency triage, assessment and treatment plus admission care for severely ill infants and children. Many of the essential resources deemed necessary for the provision of emergency care for severely ill infants and children were readily available (e.g. drugs and laboratory services). However, only 4/8 hospitals had BVM for newborns, while nebulizers and MDI were not available in 2/8 hospitals. Only 3/8 hospitals had F-75 and ReSoMal. Moreover, there was no adequate triage system across any of the hospitals evaluated. Further, guidelines for neonatal resuscitation and management of malaria were available in 5/8 and in 7/8 hospitals, respectively; while those for child resuscitation and management of sepsis, pneumonia, dehydration and severe malnutrition were available in less than half of the hospitals evaluated. Our assessment provides evidence to inform new strategies to enhance the capacity of

  5. Pediatric emergency care capacity in a low-resource setting: An assessment of district hospitals in Rwanda

    Science.gov (United States)

    Shoveller, Jean; Tuyisenge, Lisine; Kenyon, Cynthia; Cechetto, David F.; Lynd, Larry D.

    2017-01-01

    Background Health system strengthening is crucial to improving infant and child health outcomes in low-resource countries. While the knowledge related to improving newborn and child survival has advanced remarkably over the past few decades, many healthcare systems in such settings remain unable to effectively deliver pediatric advanced life support management. With the introduction of the Emergency Triage, Assessment and Treatment plus Admission care (ETAT+)–a locally adapted pediatric advanced life support management program–in Rwandan district hospitals, we undertook this study to assess the extent to which these hospitals are prepared to provide this pediatric advanced life support management. The results of the study will shed light on the resources and support that are currently available to implement ETAT+, which aims to improve care for severely ill infants and children. Methods A cross-sectional survey was undertaken in eight district hospitals across Rwanda focusing on the availability of physical and human resources, as well as the organization of hospital services to provide emergency triage, assessment and treatment plus admission care for severely ill infants and children. Results Many of the essential resources deemed necessary for the provision of emergency care for severely ill infants and children were readily available (e.g. drugs and laboratory services). However, only 4/8 hospitals had BVM for newborns, while nebulizers and MDI were not available in 2/8 hospitals. Only 3/8 hospitals had F-75 and ReSoMal. Moreover, there was no adequate triage system across any of the hospitals evaluated. Further, guidelines for neonatal resuscitation and management of malaria were available in 5/8 and in 7/8 hospitals, respectively; while those for child resuscitation and management of sepsis, pneumonia, dehydration and severe malnutrition were available in less than half of the hospitals evaluated. Conclusions Our assessment provides evidence to inform new strategies

  6. The study on human resources toward industrialization in Madura

    International Nuclear Information System (INIS)

    Aziz Jakfar; Mochamad Nasrullah; Sriyana; Moch Djoko Birmano

    2007-01-01

    This research aims to arrive at a rich description of human resources readiness for industrialization by 1) determining the direction of industrialization development, 2) discovering supporting as well as interfering factors, 3) identifying alternative solutions to the problems, 4) analyzing human resources capacity in terms of the Human Development Index, 5) recognizing labor development strategy, 6) noting the role of education in developing human resources, and 7) formulating a human resources development agenda. The goal of industrialization development in the Madura region is to create circumstances conducive for investors, which is likely to trigger optimal industries based on the region's potency and expansion. Some supporting factors associated with the industrial development scenario in Madura are the Suramadu bridge, the expansion of Gerbang Kertosusila into Germa Kertosusila, and the availability of facilities and infrastructure. In addition, there are some interfering factors to be considered, such as the low perception of the local community on the importance of industrialization as well as the shortage of electricity and water intake. The alternative solutions to the obstacles above are to promote a socialization program on the importance of industrialization for the advancement of the Madura region by all related stakeholders, while considering the use of PLTN desalination to address the water and electricity problems. However, the human resources development capacity of the Madurese is considered far below the average capacity of the whole population in East Java. Nevertheless, the Madurese have already attained relatively sufficient purchasing power, which is above the average of East Java as a whole. Labor development strategy policy can be carried out through: 1) improving accessibility to Madura to speed up the flow of outside investment, production as well as business, 2) promoting the local labor force, 3) improving the prevailing economic activities

  7. Assessing the components of adaptive capacity to improve conservation and management efforts under global change.

    Science.gov (United States)

    Nicotra, Adrienne B; Beever, Erik A; Robertson, Amanda L; Hofmann, Gretchen E; O'Leary, John

    2015-10-01

    Natural-resource managers and other conservation practitioners are under unprecedented pressure to categorize and quantify the vulnerability of natural systems based on assessment of the exposure, sensitivity, and adaptive capacity of species to climate change. Despite the urgent need for these assessments, neither the theoretical basis of adaptive capacity nor the practical issues underlying its quantification has been articulated in a manner that is directly applicable to natural-resource management. Both are critical for researchers, managers, and other conservation practitioners to develop reliable strategies for assessing adaptive capacity. Drawing from principles of classical and contemporary research and examples from terrestrial, marine, plant, and animal systems, we examined broadly the theory behind the concept of adaptive capacity. We then considered how interdisciplinary, trait- and triage-based approaches encompassing the oft-overlooked interactions among components of adaptive capacity can be used to identify species and populations likely to have higher (or lower) adaptive capacity. We identified the challenges and value of such endeavors and argue for a concerted interdisciplinary research approach that combines ecology, ecological genetics, and eco-physiology to reflect the interacting components of adaptive capacity. We aimed to provide a basis for constructive discussion between natural-resource managers and researchers, discussions urgently needed to identify research directions that will deliver answers to real-world questions facing resource managers, other conservation practitioners, and policy makers. Directing research both to seek general patterns and to identify ways to facilitate adaptive capacity of key species and populations within species will enable conservation ecologists and resource managers to maximize returns on research and management investment and arrive at novel and dynamic management and policy decisions. © 2015 Society for

  8. Assessing the components of adaptive capacity to improve conservation and management efforts under global change

    Science.gov (United States)

    Nicotra, Adrienne; Beever, Erik; Robertson, Amanda; Hofmann, Gretchen; O’Leary, John

    2015-01-01

    Natural-resource managers and other conservation practitioners are under unprecedented pressure to categorize and quantify the vulnerability of natural systems based on assessment of the exposure, sensitivity, and adaptive capacity of species to climate change. Despite the urgent need for these assessments, neither the theoretical basis of adaptive capacity nor the practical issues underlying its quantification has been articulated in a manner that is directly applicable to natural-resource management. Both are critical for researchers, managers, and other conservation practitioners to develop reliable strategies for assessing adaptive capacity. Drawing from principles of classical and contemporary research and examples from terrestrial, marine, plant, and animal systems, we examined broadly the theory behind the concept of adaptive capacity. We then considered how interdisciplinary, trait- and triage-based approaches encompassing the oft-overlooked interactions among components of adaptive capacity can be used to identify species and populations likely to have higher (or lower) adaptive capacity. We identified the challenges and value of such endeavors and argue for a concerted interdisciplinary research approach that combines ecology, ecological genetics, and eco-physiology to reflect the interacting components of adaptive capacity. We aimed to provide a basis for constructive discussion between natural-resource managers and researchers, discussions urgently needed to identify research directions that will deliver answers to real-world questions facing resource managers, other conservation practitioners, and policy makers. Directing research both to seek general patterns and to identify ways to facilitate adaptive capacity of key species and populations within species will enable conservation ecologists and resource managers to maximize returns on research and management investment and arrive at novel and dynamic management and policy decisions.

  9. Challenges and opportunities in building health research capacity in ...

    African Journals Online (AJOL)

    Capacity building is considered a priority for health research institutions in developing countries to achieve the Millennium Development Goals by 2015. However, in many countries including Tanzania, much emphasis has been directed towards human resources for health with the total exclusion of human resources for ...

  10. Offshore Wind Energy Resource Assessment for Alaska

    Energy Technology Data Exchange (ETDEWEB)

    Doubrawa Moreira, Paula [National Renewable Energy Lab. (NREL), Golden, CO (United States); Scott, George N. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Musial, Walter D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kilcher, Levi F. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Draxl, Caroline [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lantz, Eric J. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-02

    This report quantifies Alaska's offshore wind resource capacity while focusing on its unique nature. It is a supplement to the existing U.S. Offshore Wind Resource Assessment, which evaluated the offshore wind resource for all other U.S. states. Together, these reports provide the foundation for the nation's offshore wind value proposition. Both studies were developed by the National Renewable Energy Laboratory. The analysis presented herein represents the first quantitative evidence of the offshore wind energy potential of Alaska. The technical offshore wind resource area in Alaska is larger than the technical offshore resource area of all other coastal U.S. states combined. Despite the abundant wind resource available, significant challenges inhibit large-scale offshore wind deployment in Alaska, such as the remoteness of the resource, its distance from load centers, and the wealth of land available for onshore wind development. Throughout this report, the energy landscape of Alaska is reviewed and a resource assessment analysis is performed in terms of gross and technical offshore capacity and energy potential.

  11. Reinforcement learning techniques for controlling resources in power networks

    Science.gov (United States)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
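
    As a hedged illustration of this line of work (not the dissertation's actual algorithms), the sketch below uses toy tabular Q-learning to dispatch a single generator against a discretized, randomly evolving demand; the cost function, penalty, and discretization are all invented.

```python
# Minimal tabular Q-learning sketch for a toy economic-dispatch problem.
# Illustrative only: the dissertation's methods (approximate dynamic
# programming with ramping/state-of-charge constraints) are far richer.
import numpy as np

rng = np.random.default_rng(0)
demand_levels = np.arange(0, 5)        # discretized net demand states
actions = np.arange(0, 5)              # discretized generation setpoints
Q = np.zeros((len(demand_levels), len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def reward(demand, gen):
    fuel_cost = 1.0 * gen + 0.2 * gen**2      # convex generation cost
    shortfall = max(demand - gen, 0)
    return -(fuel_cost + 50.0 * shortfall)    # heavy penalty for unmet load

s = rng.integers(len(demand_levels))
for step in range(50_000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    r = reward(demand_levels[s], actions[a])
    s_next = rng.integers(len(demand_levels))  # demand evolves stochastically
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("learned dispatch per demand level:", actions[Q.argmax(axis=1)])
```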

  12. Piping data bank and erection system of Angra 2: structure, computational resources and systems

    International Nuclear Information System (INIS)

    Abud, P.R.; Court, E.G.; Rosette, A.C.

    1992-01-01

    The Piping Data Bank of Angra 2, called the Erection Management System, was developed to manage the piping erection of the Nuclear Power Plant of Angra 2. Beyond the erection follow-up of piping and supports, it manages the piping design, the material procurement, the flow of fabrication documents, the testing of welds, and the material stocks at the warehouse. The work carried out to define the structure of the data bank, the computational resources and the systems is described here. (author)

  13. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
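
    The resource computation described above is, at its core, an isopach-grid summation. A minimal sketch follows, assuming the conversion factor of 1,770 short tons per acre-foot commonly used for subbituminous coal; the grid-cell areas and thicknesses are invented placeholders, not the Cache bed data.

```python
# Sketch of an isopach-grid coal-tonnage computation, in the spirit of the
# method described above. Cell data are invented placeholders.
SUBBITUMINOUS_TONS_PER_ACRE_FOOT = 1_770   # common factor for subbituminous coal

# (cell_area_acres, average_thickness_ft) for each cell of the isopach grid
cells = [(40.0, 12.5), (40.0, 0.0), (40.0, 28.0), (40.0, 31.0)]

tons = sum(area * thickness * SUBBITUMINOUS_TONS_PER_ACRE_FOOT
           for area, thickness in cells)
print(f"resource: {tons / 1e6:.2f} million short tons")
```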

  14. Estimating aquifer transmissivity from specific capacity using MATLAB.

    Science.gov (United States)

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
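
    One standard way to estimate transmissivity from specific capacity is to iterate on the Cooper-Jacob approximation, since T appears on both sides of the equation. The sketch below shows that core iteration only; the paper's MATLAB program additionally corrects for partial penetration and well efficiency, which are omitted here, and all input values are invented.

```python
# Iterative estimate of transmissivity T from specific capacity Q/s using the
# Cooper-Jacob approximation: s = (Q / (4*pi*T)) * ln(2.25*T*t / (r**2 * S)).
import math

def transmissivity(Q, s, t, r, S, tol=1e-8):
    """Q [m3/d], drawdown s [m], pumping time t [d], well radius r [m], storativity S."""
    T = Q / (4.0 * math.pi * s)          # initial guess, then fixed-point iterate
    for _ in range(100):
        T_new = Q / (4.0 * math.pi * s) * math.log(2.25 * T * t / (r**2 * S))
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

print(transmissivity(Q=500.0, s=5.0, t=1.0, r=0.15, S=1e-4))  # m2/d, ~150
```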

  15. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    Science.gov (United States)

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  16. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators, working cooperatively in their respective areas of expertise, on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  17. Groundwater environmental capacity and its evaluation index.

    Science.gov (United States)

    Xing, Li Ting; Wu, Qiang; Ye, Chun He; Ye, Nan

    2010-10-01

    To date, no unified and acknowledged definition or well-developed evaluation index system of groundwater environmental capacity can be found in the academic literature, either in China or abroad. The article explores the meaning of water environment capacity, and analyzes the environmental effects caused by the exploitation of groundwater resources. This research defines groundwater environmental capacity as a critical value in terms of time and space, according to which the groundwater system responds to external influences within a certain goal constraint. On the basis of observing the principles of being scientific, dominant, measurable, and applicable, six level-1 evaluation indexes and 11 constraint factors are established. Taking the Jinan spring region as a case study, this research adopts groundwater level and spring flow as constraint factors and the allowable groundwater yield as the critical value of groundwater environmental capacity, demonstrates through calculation the dynamic changeability of groundwater environmental capacity and its indicating function, and finally points out the development trends of research on groundwater environmental capacity.

  18. Capacity shortfalls hinder the performance of marine protected areas globally

    Science.gov (United States)

    Gill, David A.; Mascia, Michael B.; Ahmadia, Gabby N.; Glew, Louise; Lester, Sarah E.; Barnes, Megan; Craigie, Ian; Darling, Emily S.; Free, Christopher M.; Geldmann, Jonas; Holst, Susie; Jensen, Olaf P.; White, Alan T.; Basurto, Xavier; Coad, Lauren; Gates, Ruth D.; Guannel, Greg; Mumby, Peter J.; Thomas, Hannah; Whitmee, Sarah; Woodley, Stephen; Fox, Helen E.

    2017-03-01

    Marine protected areas (MPAs) are increasingly being used globally to conserve marine resources. However, whether many MPAs are being effectively and equitably managed, and how MPA management influences substantive outcomes remain unknown. We developed a global database of management and fish population data (433 and 218 MPAs, respectively) to assess: MPA management processes; the effects of MPAs on fish populations; and relationships between management processes and ecological effects. Here we report that many MPAs failed to meet thresholds for effective and equitable management processes, with widespread shortfalls in staff and financial resources. Although 71% of MPAs positively influenced fish populations, these conservation impacts were highly variable. Staff and budget capacity were the strongest predictors of conservation impact: MPAs with adequate staff capacity had ecological effects 2.9 times greater than MPAs with inadequate capacity. Thus, continued global expansion of MPAs without adequate investment in human and financial capacity is likely to lead to sub-optimal conservation outcomes.

  19. Research status of geothermal resources in China

    Science.gov (United States)

    Zhang, Lincheng; Li, Guang

    2017-08-01

    As a representative new green energy source, geothermal resources are characterized by large reserves, wide distribution, cleanness and environmental protection, good stability, a high utilization factor and other advantages. According to the characteristics of exploitation and utilization, they can be divided into high-temperature, medium-temperature and low-temperature geothermal resources. The abundant and widely distributed geothermal resources in China have a broad prospect for development. The medium- and low-temperature geothermal resources are broadly distributed in the continental crustal uplift and subsidence areas inside the plate, represented by the geothermal belt on the southeast coast, while the high-temperature geothermal resources concentrate on the Southern Tibet-Western Sichuan-Western Yunnan Geothermal Belt and the Taiwan Geothermal Belt. Currently, the geothermal resources in China are mainly used for bathing, recuperation, heating and power generation. China makes the greatest direct use of geothermal energy in the world. However, China's geothermal power generation, including installed generating capacity and power generation capacity, lags far behind that of Western European countries and the USA. Studies on the exploitation and development of geothermal resources are still weak.

  20. Transformational capacity and the influence of place and identity

    International Nuclear Information System (INIS)

    Marshall, N A; Park, S E; Howden, S M; Adger, W N; Brown, K

    2012-01-01

    Climate change is altering the productivity of natural resources with far-reaching implications for those who depend on them. Resource-dependent industries and communities need the capacity to adapt to a range of climate risks if they are to remain viable. In some instances, the scale and nature of the likely impacts means that transformations of function or structure will be required. Transformations represent a switch to a distinct new system where a different suite of factors become important in the design and implementation of response strategies. There is a critical gap in knowledge on understanding transformational capacity and its influences. On the basis of current knowledge on adaptive capacity we propose four foundations for measuring transformational capacity: (1) how risks and uncertainty are managed, (2) the extent of skills in planning, learning and reorganizing, (3) the level of financial and psychological flexibility to undertake change and (4) the willingness to undertake change. We test the influence of place attachment and occupational identity on transformational capacity using the Australian peanut industry, which is presently assessing significant structural change in response to predicted climatic changes. Survey data from 88% of peanut farmers in Queensland show a strong negative correlation between transformational capacity and both place attachment and occupational attachment, suggesting that whilst these factors may be important positive influences on the capacity to adapt to incremental change, they act as barriers to transformational change. (letter)

  1. Transformational capacity and the influence of place and identity

    Science.gov (United States)

    Marshall, N. A.; Park, S. E.; Adger, W. N.; Brown, K.; Howden, S. M.

    2012-09-01

    Climate change is altering the productivity of natural resources with far-reaching implications for those who depend on them. Resource-dependent industries and communities need the capacity to adapt to a range of climate risks if they are to remain viable. In some instances, the scale and nature of the likely impacts means that transformations of function or structure will be required. Transformations represent a switch to a distinct new system where a different suite of factors become important in the design and implementation of response strategies. There is a critical gap in knowledge on understanding transformational capacity and its influences. On the basis of current knowledge on adaptive capacity we propose four foundations for measuring transformational capacity: (1) how risks and uncertainty are managed, (2) the extent of skills in planning, learning and reorganizing, (3) the level of financial and psychological flexibility to undertake change and (4) the willingness to undertake change. We test the influence of place attachment and occupational identity on transformational capacity using the Australian peanut industry, which is presently assessing significant structural change in response to predicted climatic changes. Survey data from 88% of peanut farmers in Queensland show a strong negative correlation between transformational capacity and both place attachment and occupational attachment, suggesting that whilst these factors may be important positive influences on the capacity to adapt to incremental change, they act as barriers to transformational change.

  2. Lithium reserves and resources

    International Nuclear Information System (INIS)

    Evans, R.K.

    1978-01-01

    As a result of accelerating research efforts in the fields of secondary batteries and thermonuclear power generation, concern has been expressed in certain quarters regarding the availability, in sufficient quantities, of lithium. As part of a recent study by the National Research Council on behalf of the Energy Research and Development Administration, a subpanel was formed to consider the outlook for lithium. Principal areas of concern were reserves, resources and the 'surplus' available for energy applications after allowing for the growth in current lithium applications. Reserves and resources were categorized into four classes ranging from fully proved reserves to resources which are probably dependent upon the marketing of co-products to become economically attractive. Because of the proprietary nature of data on beneficiation and processing recoveries, the tonnages of available lithium are expressed in terms of plant feed. However, highly conservative assumptions have been made concerning mining recoveries and these go a considerable way to accounting for total losses. Western World reserves and resources of all classes are estimated at 10.6 million tonnes Li, of which 3.5 million tonnes Li are located in the United States. Current United States capacity, virtually equivalent to Western World capacity, is 4700 tonnes Li, and production in 1976 approximated 3500 tonnes Li. Production for current applications is expected to grow to approximately 10,000 tonnes in the year 2000 and 13,000 tonnes a decade later. The massive excess of reserves and resources over that necessary to support conventional requirements has limited the amount of justifiable exploration expenditure; on the last occasion that there was a major increase in demand (by the USAEA), reserves and capacity were increased rapidly. There are no foreseeable reasons why this should not happen again when the need is clear. (author)

  3. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  4. A New Resource Allocation Protocol for the Backhaul of Underwater Cellular Wireless Networks

    Directory of Open Access Journals (Sweden)

    Changho Yun

    2018-01-01

    Full Text Available In this paper, an underwater base station initiating (UBSI) resource allocation is proposed for underwater cellular wireless networks (UCWNs), which is a new approach to determine the backhaul capacity of underwater base stations (UBSs). This backhaul is a communication link from a UBS to a UBS controller (UBSC). Contrary to conventional resource allocation protocols, a UBS re-determines its backhaul capacity for itself according to its queue status; it releases a portion of its backhaul capacity when it experiences resource under-utilization, and requests additional backhaul capacity from the UBSC if packet drops are caused by queue overflow. This protocol can be appropriate and efficient for the underwater backhaul link, where the transmission rate is quite low and the latency is not negligible. In order to investigate the applicability of the UBSI resource allocation protocol to the UCWN, its performance is extensively analyzed via system-level simulations. In our analysis, the considered performance measures include average packet drop rate, average resource utilization, average message overhead, and the reserved capacity of the UBSC. In particular, the simulation results show that our proposed protocol not only utilizes most of the given backhaul capacity (more than 90 percent resource utilization on average), but also reduces the controlling message overheads induced by resource allocation (fewer than 2 controlling messages on average). It is expected that the simulation results and analysis in this paper can be used as operating guidelines to apply our new resource allocation protocol for the UCWN.
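
    A minimal discrete-time sketch of the queue-driven idea described above follows: the UBS releases backhaul capacity when under-utilized and requests more from the UBSC when its queue overflows. The thresholds, step size, and traffic model are invented, not the paper's parameters.

```python
# Toy discrete-time simulation: a UBS adjusts its own backhaul capacity from
# its queue status (release on under-utilization, request more on overflow).
import random

random.seed(1)
capacity, queue = 4, 0                  # packets served per slot, queued packets
QUEUE_LIMIT, LOW_UTIL, STEP = 20, 0.5, 1
drops = 0

for slot in range(1_000):
    queue += random.randint(0, 8)           # stochastic arrivals
    served = min(queue, capacity)
    queue -= served
    if queue > QUEUE_LIMIT:                 # overflow: drop and request more
        drops += queue - QUEUE_LIMIT
        queue = QUEUE_LIMIT
        capacity += STEP                    # request additional capacity
    elif served < LOW_UTIL * capacity:      # under-utilization: release some
        capacity = max(1, capacity - STEP)

print(f"final capacity={capacity}, dropped packets={drops}")
```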

  5. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and function, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  6. Capacity Expansion Modeling for Storage Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Elaine; Stoll, Brady; Mai, Trieu

    2017-04-03

    The Resource Planning Model (RPM) is a capacity expansion model designed for regional power systems and high levels of renewable generation. Recent extensions capture value-stacking for storage technologies, including batteries and concentrating solar power with storage. After estimating per-unit capacity value and curtailment reduction potential, RPM co-optimizes investment decisions and reduced-form dispatch, accounting for planning reserves; energy value, including arbitrage and curtailment reduction; and three types of operating reserves. Multiple technology cost scenarios are analyzed to determine level of deployment in the Western Interconnection under various conditions.
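
    In the same spirit as such capacity-expansion models, the toy linear program below co-optimizes investment (built capacity) and a two-period reduced-form dispatch subject to a planning-reserve constraint. All costs, capacity factors, and the wind capacity credit are invented; RPM itself is far more detailed.

```python
# Toy two-period capacity-expansion LP: co-optimize build and dispatch.
from scipy.optimize import linprog

cap_g, cap_w, fuel = 80.0, 60.0, 30.0   # $/MW-period and $/MWh (invented)
demand = [90.0, 120.0]                  # MW per period
cf = [0.8, 0.3]                         # wind availability per period
reserve, credit = 0.15, 0.3             # planning reserve, wind capacity credit

# x = [Cg, Cw, g1, g2, w1, w2]
c = [cap_g, cap_w, fuel, fuel, 0.0, 0.0]
A_ub = [
    [-1, 0, 1, 0, 0, 0],          # g1 <= Cg
    [-1, 0, 0, 1, 0, 0],          # g2 <= Cg
    [0, -cf[0], 0, 0, 1, 0],      # w1 <= cf1 * Cw
    [0, -cf[1], 0, 0, 0, 1],      # w2 <= cf2 * Cw
    [0, 0, -1, 0, -1, 0],         # g1 + w1 >= demand 1
    [0, 0, 0, -1, 0, -1],         # g2 + w2 >= demand 2
    [-1, -credit, 0, 0, 0, 0],    # Cg + credit*Cw >= (1 + reserve) * peak
]
b_ub = [0, 0, 0, 0, -demand[0], -demand[1], -(1 + reserve) * max(demand)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
print("built MW (gas, wind):", res.x[:2].round(1), "cost:", round(res.fun, 1))
```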

  7. Thermodynamic properties of xanthone: Heat capacities, phase-transition properties, and thermodynamic-consistency analyses using computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 520) K. • The enthalpy of combustion was measured and the enthalpy of formation was derived. • Thermodynamic-consistency analysis resolved inconsistencies in literature enthalpies of sublimation. • An inconsistency in literature enthalpies of combustion was resolved. • Application of computational chemistry in consistency analysis was demonstrated successfully.
    Abstract: Heat capacities and phase-transition properties for xanthone (IUPAC name 9H-xanthen-9-one and Chemical Abstracts registry number [90-47-1]) are reported for the temperature range 5 < T/K < 524. Statistical calculations were performed and thermodynamic properties for the ideal gas were derived based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. These results are combined with sublimation pressures from the literature to allow critical evaluation of inconsistent enthalpies of sublimation for xanthone, also reported in the literature. Literature values for the enthalpy of combustion of xanthone are re-assessed, a revision is recommended for one result, and a new value for the enthalpy of formation of the ideal gas is derived. Comparisons with thermophysical properties reported in the literature are made for all other reported and derived properties, where possible.
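
    The statistical-calculation step mentioned above typically follows the rigid-rotor/harmonic-oscillator model, in which each vibrational mode contributes R·x²eˣ/(eˣ−1)² to the heat capacity, with x = θ/T. A sketch under that assumption follows; the three frequencies are placeholders, not the computed B3LYP/6-31+G(d,p) values.

```python
# Rigid-rotor/harmonic-oscillator sketch: ideal-gas heat capacity from scaled
# vibrational frequencies. Frequencies below are placeholders only.
import math

R = 8.314462618          # J/(mol K)
C2 = 1.438776877         # cm*K, second radiation constant h*c/k

def cp_ideal_gas(freqs_cm1, T):
    """Cp for a nonlinear molecule: translation + rotation + vibration + R."""
    cv_vib = 0.0
    for nu in freqs_cm1:
        x = C2 * nu / T                       # theta_vib / T
        cv_vib += R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2
    return 3.0 * R + cv_vib + R               # 3R trans+rot, plus R for Cp-Cv

print(cp_ideal_gas([100.0, 500.0, 1600.0], 298.15))   # J/(mol K)
```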

  8. Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact

    Science.gov (United States)

    Frank, Jeremy

    2004-01-01

    We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds at increased computational cost.
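
    To make the bounding problem concrete, the sketch below computes a simple optimistic (upper) bound on the resource level at each event: producers that could have occurred by that event are counted, consumers only if they must have. This is only the basic envelope idea; the paper's Flow Balance Constraint tightens it further. Events, impacts, and the (transitively closed) order are invented.

```python
# Optimistic resource-level bound for partially ordered events with constant
# impacts. must_precede is assumed transitively closed.
impact = {"a": +3, "b": -2, "c": +1, "d": -1}
must_precede = {("a", "b"), ("a", "c"), ("c", "d"), ("a", "d")}

def optimistic_level(e, init=0):
    """Upper bound on the resource level just after event e."""
    level = init + impact[e]
    for o in set(impact) - {e}:
        if (e, o) in must_precede:        # o is strictly after e: excluded
            continue
        if (o, e) in must_precede:        # o must precede e: always counted
            level += impact[o]
        elif impact[o] > 0:               # unordered with e: only producers
            level += impact[o]
    return level

for e in sorted(impact):
    print(e, optimistic_level(e))
```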

  9. Transportation Energy Futures Series: Alternative Fuel Infrastructure Expansion: Costs, Resources, Production Capacity, and Retail Availability for Low-Carbon Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, W. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heath, Garvin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Steward, Darlene [National Renewable Energy Lab. (NREL), Golden, CO (United States); Vimmerstedt, Laura [National Renewable Energy Lab. (NREL), Golden, CO (United States); Warner, Ethan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Webster, Karen W. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-04-01

    The petroleum-based transportation fuel system is complex and highly developed, in contrast to the nascent low-petroleum, low-carbon alternative fuel system. This report examines how expansion of the low-carbon transportation fuel infrastructure could contribute to deep reductions in petroleum use and greenhouse gas (GHG) emissions across the U.S. transportation sector. Three low-carbon scenarios, each using a different combination of low-carbon fuels, were developed to explore infrastructure expansion trends consistent with a study goal of reducing transportation sector GHG emissions to 80% less than 2005 levels by 2050. These scenarios were compared to a business-as-usual (BAU) scenario and were evaluated with respect to four criteria: fuel cost estimates, resource availability, fuel production capacity expansion, and retail infrastructure expansion.

  10. Computer modelling of the UK wind energy resource. Phase 2. Application of the methodology

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Makari, M; Newton, K; Ravenscroft, F; Whittaker, J

    1993-12-31

    This report presents the results of the second phase of a programme to estimate the UK wind energy resource. The overall objective of the programme is to provide quantitative resource estimates using a mesoscale (resolution about 1km) numerical model for the prediction of wind flow over complex terrain, in conjunction with digitised terrain data and wind data from surface meteorological stations. A network of suitable meteorological stations has been established and long term wind data obtained. Digitised terrain data for the whole UK were obtained, and wind flow modelling using the NOABL computer program has been performed. Maps of extractable wind power have been derived for various assumptions about wind turbine characteristics. Validation of the methodology indicates that the results are internally consistent, and in good agreement with available comparison data. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicates that 28% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. The results will be useful for broad resource studies and initial site screening. Detailed resource evaluation for local sites will require more detailed local modelling or ideally long term field measurements. (12 figures, 14 tables, 21 references). (Author)
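
    For a rough sense of why the 6 m/s threshold matters, wind power density scales with the cube of wind speed; under a Rayleigh speed distribution the mean of v³ is (6/π) times the cube of the mean speed. The back-of-the-envelope sketch below applies that relation; it is illustrative only and not taken from the report.

```python
# Wind power density from a 10 m annual mean speed, assuming a Rayleigh
# speed distribution (mean of v**3 = (6/pi) * vbar**3). Illustrative only.
import math

def power_density(v_mean, rho=1.225):
    """W/m^2 of swept area for Rayleigh-distributed wind speeds."""
    return 0.5 * rho * (6.0 / math.pi) * v_mean**3

for v in (4.5, 6.0, 7.0, 10.0):
    print(f"{v:4.1f} m/s -> {power_density(v):7.1f} W/m^2")
```

    The cubic scaling is why a site at 7 m/s offers roughly 1.6 times the power density of one at 6 m/s, and a 10 m/s hill site more than four times as much.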

  11. Linear optical quantum computing in a single spatial mode.

    Science.gov (United States)

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  12. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  13. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Full Text Available Today in Brazil there are many cases of incompatibility between the use of water and its availability. Due to the increase in the required variety and volume, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs with several restrictions (qualitative and quantitative) creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on the MOHID software - Water Modeling System) as a tool for water resources management.

  14. National hydroelectric power resources study. Preliminary inventory of hydropower resources. Volume 2. Pacific Southwest region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-07-01

    The estimates of existing, incremental, and the undeveloped hydropower potential for all states in the various regions of the country are presented. In the Pacific Southwest region, the maximum physical potential for all sites exceeds 33,000 MW of capacity with an estimated average annual energy greater than 85,000 GWH. By comparison, these values represent about 6% of the total potential capacity and hydroelectric energy generation estimated for the entire US. Of the total capacity estimated for the region, 9900 MW has been installed. The remainder (23,200 MW) is the maximum which could be developed by upgrading and expanding existing projects (6000 MW) and by installing new hydroelectric power capacity at all potentially feasible, undeveloped sites (17,200 MW). Small-scale facilities account for less than 4% of the region's total installed capacity, but another 600 MW could be added to these and other small water resource projects. In addition, 600 MW could be installed at potentially feasible, undeveloped small-scale sites. The small-scale resource varies considerably, with the states of California and Utah having the largest potential for incremental development at existing projects in the Pacific Southwest region. States comprising the Southwest are Arizona, California, Hawaii, Nevada, and Utah.
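
    As a quick consistency check on the figures quoted above, the implied average capacity factor of the full regional potential can be computed directly (a worked example, not a number from the study):

```python
# Implied average capacity factor for the regional potential quoted above.
potential_mw = 33_000
annual_gwh = 85_000
capacity_factor = annual_gwh * 1_000 / (potential_mw * 8_760)  # MWh / (MW * h)
print(f"implied capacity factor: {capacity_factor:.1%}")       # ~29%
```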

  15. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    Science.gov (United States)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments by using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud - Amazon Web Services - allowing resource synchronizing and bursting between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications by using base virtual machine images or customized virtual machines, analyze big datasets by using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and participant projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their codes, documents or data without worrying about the heterogeneity in structure and operations among different cloud platforms.

  16. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
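
    Models of the kind SAAM fits are systems of linear compartmental ODEs. The minimal two-compartment sketch below only simulates tracer curves for invented rate constants; SAAM/CONSAM additionally perform parameter estimation against kinetic data.

```python
# Two-compartment kinetic model (plasma/tissue) simulated with SciPy.
# Rate constants and the 100-unit bolus are invented for illustration.
import numpy as np
from scipy.integrate import odeint

k12, k21, k01 = 0.3, 0.1, 0.2   # /h: plasma->tissue, tissue->plasma, elimination

def model(q, t):
    q1, q2 = q                  # tracer mass in plasma (1) and tissue (2)
    dq1 = -(k01 + k12) * q1 + k21 * q2
    dq2 = k12 * q1 - k21 * q2
    return [dq1, dq2]

t = np.linspace(0.0, 24.0, 97)          # hours, 15-minute steps
q = odeint(model, [100.0, 0.0], t)      # 100-unit bolus into plasma
print("plasma at 1, 6, 24 h:", q[[4, 24, 96], 0].round(2))
```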

  17. Promoting integrated water resources management in South West

    African Journals Online (AJOL)

    SOUTH WEST REGIONAL CENTRE FOR NATIONAL WATER RESOURCES CAPACITY BUILDING NETWORK, FEDERAL UNIVERSITY OF ... that an integrated approach to water resource development and management offers the best ...

  18. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    Science.gov (United States)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years. This is due to the gradual increase of internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed in order to be put into practice in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM and CPU capacity. These amounts may be even larger in an educational MMORPG game that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall game's resource load is essential, so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provided without overloading the system.
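
    A tiny sketch in the direction the paper proposes: sample server CPU, memory, and network counters so that load-balancing decisions can be based on measured headroom. It uses the psutil package, and the thresholds and shedding action are invented.

```python
# Sample server resource counters for load-balancing decisions (invented
# thresholds; the paper's actual methodology is more elaborate).
import time
import psutil

for _ in range(3):
    cpu = psutil.cpu_percent(interval=1.0)        # % over the last second
    mem = psutil.virtual_memory().percent
    net = psutil.net_io_counters()
    print(f"cpu={cpu:.0f}% mem={mem:.0f}% "
          f"sent={net.bytes_sent} recv={net.bytes_recv}")
    if cpu > 85 or mem > 90:
        print("-> shed load: e.g. cap new client connections")
    time.sleep(1)
```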

  19. Simulation analysis of resource flexibility on healthcare processes.

    Science.gov (United States)

    Simwita, Yusta W; Helgheim, Berit I

    2016-01-01

    This paper uses discrete event simulation to explore the best resource flexibility scenario and examine the effect of implementing resource flexibility on different stages of patient treatment process. Specifically we investigate the effect of resource flexibility on patient waiting time and throughput in an orthopedic care process. We further seek to explore on how implementation of resource flexibility on patient treatment processes affects patient access to healthcare services. We focus on two resources, namely, orthopedic surgeon and operating room. The observational approach was used to collect process data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. We developed different scenarios to identify the best resource flexibility scenario and explore the effect of resource flexibility on patient waiting time, throughput, and future changes in demand. The developed scenarios focused on creating flexibility on service capacity of this care process by altering the amount of additional human resource capacity at different stages of patient care process and extending the use of operating room capacity. The study found that resource flexibility can improve responsiveness to patient demand in the treatment process. Testing different scenarios showed that the introduction of resource flexibility reduces patient waiting time and improves throughput. The simulation results show that patient access to health services can be improved by implementing resource flexibility at different stages of the patient treatment process. This study contributes to the current health care literature by explaining how implementing resource flexibility at different stages of patient care processes can improve ability to respond to increasing patients demands. This study was limited to a single patient process; studies focusing on additional processes are recommended.
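
    As a hedged miniature of the study's approach, the SimPy sketch below compares mean patient waiting time under two surgeon-capacity levels in a simple queueing model. All rates and the process structure are invented stand-ins for the real orthopedic care process.

```python
# Discrete-event comparison of waiting time under two staffing levels (SimPy).
import random
import simpy

random.seed(7)
waits = []

def patient(env, surgeons):
    arrived = env.now
    with surgeons.request() as req:
        yield req                                        # wait for a surgeon
        waits.append(env.now - arrived)
        yield env.timeout(random.expovariate(1 / 2.0))   # 2 h mean surgery

def arrivals(env, surgeons):
    while True:
        yield env.timeout(random.expovariate(1 / 1.2))   # ~1 patient / 1.2 h
        env.process(patient(env, surgeons))

for capacity in (2, 3):                    # inflexible vs flexible staffing
    waits.clear()
    env = simpy.Environment()
    surgeons = simpy.Resource(env, capacity=capacity)
    env.process(arrivals(env, surgeons))
    env.run(until=1_000)
    print(f"{capacity} surgeons: mean wait {sum(waits)/len(waits):.2f} h")
```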

  20. Data mining techniques used to analyze students’ opinions about computerization in the educational system

    Directory of Open Access Journals (Sweden)

    Nicoleta PETCU

    2015-06-01

    Full Text Available Both the educational process and the research process, together with institutional management, are unthinkable without information technologies. Through them, one can harness the work capacity and creativity of both students and professors. The aim of this paper is to present the results of a quantitative research study regarding: the scope of computer use, the importance of using computers, faculty activities that involve computer usage, the number of hours students work with computers at university, Internet and web-site usage, e-learning platforms, investments in technology in the faculty, and access to computers and other IT resources. The major conclusions of this research allow us to propose strategies for increasing the quality, efficiency and transparency of didactic, scientific, administrative and communication processes.

  1. Locational electricity capacity markets: Alternatives to restore the missing signals

    Energy Technology Data Exchange (ETDEWEB)

    Nieto, Amparo D.; Fraser, Hamish

    2007-03-15

    In the absence of a properly functioning electricity demand side, well-designed capacity payment mechanisms hold more promise for signaling the value of capacity than non-CPM alternatives. Locational CPMs that rely on market-based principles, such as forward capacity auctions, are superior to cost-based payments directed to specific must-run generators, as CPMs at least provide a meaningful price signal about the economic value of resources to potential investors. (author)

  2. Analysis on natural circulation capacity of the CARR

    Institute of Scientific and Technical Information of China (English)

    TIAN Wenxi; QIU Suizheng; WANG Jiaqiang; SU Guanghui; JIA Dounan; ZHANG Jianwei

    2007-01-01

    The investigation of natural circulation (NC) characteristics of the China Advanced Research Reactor (CARR) is very valuable for practical engineering application and is also a key subject for the CARR. In this study, a computer code was developed to calculate the NC capacity of the CARR under different pool water temperatures. The effects of the pool water temperature on NC characteristics were analyzed. The results show that with increasing pool water temperature, the NC flow rate increases while the NC capacity decreases. Based on the computational results and theoretical deduction, a correlation was proposed for predicting the relationship between the NC mass flow and the core power under different conditions. The correlation prediction agrees well with the computational results, with a maximal deviation within ±10%. This work is instructive for the actual operation of the CARR.
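
    A correlation of the kind described is commonly obtained by least-squares fitting a power law m = a·Q^b in log space. The sketch below shows that step with invented data points standing in for the paper's computed results.

```python
# Fit m = a * Q**b to (power, mass-flow) pairs in log space. Data invented.
import numpy as np

Q = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # core power, MW (invented)
m = np.array([6.1, 8.0, 10.4, 13.7, 18.0])    # NC mass flow, kg/s (invented)

b, ln_a = np.polyfit(np.log(Q), np.log(m), 1)  # slope, intercept
a = np.exp(ln_a)
print(f"m = {a:.2f} * Q^{b:.3f}")
print("max deviation: {:.1%}".format(np.max(np.abs(a * Q**b - m) / m)))
```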

  3. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to distribute the computation across a great number of distributed computers, rather than local computer ...

  4. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Tom; Yang, Xi

    2018-01-16

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) the Resource Computation Engine (RCE), and iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the "RAINS Computation Engine (RCE)". The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate
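
    A hypothetical mini-model in the spirit of MRML's three element categories (Resources, Services, and Relationships between elements) is sketched below; the class and field names are invented for illustration and are not the actual MRS ontology.

```python
# Hypothetical Resources/Services/Relationships model, MRML-flavored.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str                      # e.g. "compute", "storage", "network"

@dataclass
class Service:
    name: str
    provided_by: Resource

@dataclass
class Relationship:
    subject: object
    predicate: str                 # e.g. "connectedTo", "hasService"
    obj: object

cluster = Resource("hpc-cluster", "compute")
link = Resource("100g-link", "network")
xfer = Service("bulk-transfer", provided_by=link)
model = [Relationship(cluster, "connectedTo", link),
         Relationship(link, "hasService", xfer)]
for rel in model:
    print(rel.subject.name, rel.predicate, getattr(rel.obj, "name", rel.obj))
```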

  5. Enhancing capacity among faith-based organizations to implement evidence-based cancer control programs: a community-engaged approach.

    Science.gov (United States)

    Leyva, Bryan; Allen, Jennifer D; Ospino, Hosffman; Tom, Laura S; Negrón, Rosalyn; Buesa, Richard; Torres, Maria Idalí

    2017-09-01

    Evidence-based interventions (EBIs) to promote cancer control among Latinos have proliferated in recent years, though adoption and implementation of these interventions by faith-based organizations (FBOs) is limited. Capacity building may be one strategy to promote implementation. In this qualitative study, 18 community key informants were interviewed to (a) understand existing capacity for health programming among Catholic parishes, (b) characterize parishes' resource gaps and capacity-building needs for implementing cancer control EBIs, and (c) elucidate strategies for delivering capacity-building assistance to parishes to facilitate implementation of EBIs. Semi-structured qualitative interviews were conducted. Key informants concurred about the capacity of Catholic parishes to deliver health programs, and described attributes of parishes that make them strong partners in health promotion initiatives, including a mission to address physical and mental health, outreach to marginalized groups, altruism among members, and existing engagement in health programming. However, resource gaps and capacity-building needs were also identified. Participants' specific recommendations for leveraging existing resources to address these challenges include: establishing parish wellness committees; providing "hands-on" learning opportunities for parishioners to gain program planning skills; offering continuous, tailored, on-site technical assistance; facilitating relationships between parishes and community resources; and providing financial support for parishes. Leveraging parishes' existing resources and addressing their implementation needs may improve adoption of cancer control EBIs.

  6. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  7. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  8. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  9. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  10. Interference and memory capacity limitations.

    Science.gov (United States)

    Endress, Ansgar D; Szabó, Szilárd

    2017-10-01

    Working memory (WM) is thought to have a fixed and limited capacity. However, the origins of these capacity limitations are debated, and generally attributed to active, attentional processes. Here, we show that the existence of interference among items in memory mathematically guarantees fixed and limited capacity limits under very general conditions, irrespective of any processing assumptions. Assuming that interference (a) increases with the number of interfering items and (b) brings memory performance to chance levels for large numbers of interfering items, capacity limits are a simple function of the relative influence of memorization and interference. In contrast, we show that time-based memory limitations do not lead to fixed memory capacity limitations that are independent of the timing properties of an experiment. We show that interference can mimic both slot-like and continuous resource-like memory limitations, suggesting that these types of memory performance might not be as different as commonly believed. We speculate that slot-like WM limitations might arise from crowding-like phenomena in memory when participants have to retrieve items. Further, based on earlier research on parallel attention and enumeration, we suggest that crowding-like phenomena might be a common reason for the 3 major cognitive capacity limitations. As suggested by Miller (1956) and Cowan (2001), these capacity limitations might arise because of a common reason, even though they likely rely on distinct processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
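
    As a toy illustration (our sketch, not the authors' derivation) of why assumptions (a) and (b) force a fixed capacity, let per-item memory evidence be m and let interference i(n) from the other items grow without bound in list length n, so that performance decays toward chance for long lists:

        % illustrative only: m, i(n) and theta are assumed symbols, not the paper's
        \[ p(n) = \frac{m}{m + i(n)}, \qquad i'(n) > 0, \qquad i(n) \to \infty \]
        % capacity = longest list still recalled above a criterion theta
        \[ k = \max\{\, n : p(n) \ge \theta \,\}
             = \max\{\, n : i(n) \le m\,(1-\theta)/\theta \,\} \]

    Here k is fixed by the relative strength of memorization (m) and interference (i(n)) alone, with no reference to the timing parameters of the experiment, mirroring the contrast the authors draw with time-based limitations.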

  11. Energy Resource Planning. Optimal utilization of energy resources

    International Nuclear Information System (INIS)

    Miclescu, T.; Domschke, W.; Bazacliu, G.; Dumbrava, V.

    1996-01-01

    For a system of thermal power plants, the cost of primary energy resources constitutes a significant percentage of the total system operational cost. Therefore, a small percentage saving in long-term primary energy resource allocation cost often turns out to be of significant monetary value. In recent years, with a rapidly changing fuel supply situation, including the impact of changing energy policies, this area has become extremely sensitive. Natural gas availability has been restricted in many areas, coal production and transportation costs have risen while productivity has decreased, oil imports have increased, and refinery capacity has failed to meet demand. The paper presents a mathematical model and a practical procedure to solve the primary energy resource allocation problem. The objective is to minimise the total energy cost over the planning period, subject to constraints with regard to primary energy resources, transportation and energy consumption. Various aspects of the proposed approach are discussed, and its application to a power system is illustrated.(author) 2 figs., 1 tab., 3 refs
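
    The allocation described is a cost-minimizing program under availability and demand constraints; a minimal linear-programming sketch of that kind of model (toy numbers and resource names, not the paper's formulation) is:

        # Toy sketch of a primary-energy allocation LP: minimize total fuel cost
        # subject to per-resource availability and a demand constraint.
        # All names and numbers are illustrative, not the paper's model.
        from scipy.optimize import linprog

        cost = [2.0, 3.5, 5.0]           # unit cost for coal, gas, oil (illustrative)
        demand = 100.0                   # energy to be supplied
        avail = [60.0, 50.0, 40.0]       # availability limit per resource

        # linprog minimizes cost @ x subject to A_eq @ x == b_eq and bounds.
        res = linprog(c=cost,
                      A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
                      bounds=[(0, a) for a in avail])
        print(res.x, res.fun)            # cheapest resources first: [60, 40, 0], 260.0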

  12. Asynchrony of wind and hydropower resources in Australia

    KAUST Repository

    Gunturu, Udaya

    2017-08-14

    Wind and hydropower together constitute nearly 80% of the renewable capacity in Australia and their resources are collocated. We show that wind and hydro generation capacity factors covary negatively at the interannual time scales. Thus, the technology diversity mitigates the variability of renewable power generation at the interannual scales. The asynchrony of wind and hydropower resources is explained by the differential impact of the two modes of the El Niño Southern Oscillation – canonical and Modoki – on the wind and hydro resources. Also, the Modoki El Niño and the Modoki La Niña phases have greater impact. The seasonal impact patterns corroborate these results. As the proportion of wind power increases in Australia’s energy mix, this negative covariation has implications for storage capacity of excess wind generation at short time scales and for generation system adequacy at the longer time scales.

  13. Asynchrony of wind and hydropower resources in Australia

    KAUST Repository

    Gunturu, Udaya; Hallgren, Willow

    2017-01-01

    Wind and hydropower together constitute nearly 80% of the renewable capacity in Australia and their resources are collocated. We show that wind and hydro generation capacity factors covary negatively at the interannual time scales. Thus, the technology diversity mitigates the variability of renewable power generation at the interannual scales. The asynchrony of wind and hydropower resources is explained by the differential impact of the two modes of the El Niño Southern Oscillation – canonical and Modoki – on the wind and hydro resources. Also, the Modoki El Niño and the Modoki La Niña phases have greater impact. The seasonal impact patterns corroborate these results. As the proportion of wind power increases in Australia’s energy mix, this negative covariation has implications for storage capacity of excess wind generation at short time scales and for generation system adequacy at the longer time scales.

  14. Asynchrony of wind and hydropower resources in Australia.

    Science.gov (United States)

    Gunturu, Udaya Bhaskar; Hallgren, Willow

    2017-08-18

    Wind and hydropower together constitute nearly 80% of the renewable capacity in Australia and their resources are collocated. We show that wind and hydro generation capacity factors covary negatively at the interannual time scales. Thus, the technology diversity mitigates the variability of renewable power generation at the interannual scales. The asynchrony of wind and hydropower resources is explained by the differential impact of the two modes of the El Niño Southern Oscillation - canonical and Modoki - on the wind and hydro resources. Also, the Modoki El Niño and the Modoki La Niña phases have greater impact. The seasonal impact patterns corroborate these results. As the proportion of wind power increases in Australia's energy mix, this negative covariation has implications for storage capacity of excess wind generation at short time scales and for generation system adequacy at the longer time scales.

  15. CO2 sequestration: Storage capacity guideline needed

    Science.gov (United States)

    Frailey, S.M.; Finley, R.J.; Hickman, T.S.

    2006-01-01

    Petroleum reserves are classified for the assessment of available supplies by governmental agencies, for the management of business processes to achieve exploration and production efficiency, and for the documentation of the value of reserves and resources in financial statements. To date, however, the storage capacity determinations made by some organizations in initial CO2 resource assessments have been technically incorrect. New publications should thus cover differences in mineral adsorption of CO2 and dissolution of CO2 in various brine waters.

  16. A computational fluid dynamics analysis on stratified scavenging system of medium capacity two-stroke internal combustion engines

    Directory of Open Access Journals (Sweden)

    Pitta Srinivasa Rao

    2008-01-01

    Full Text Available The main objective of the present work is a computational study of the stratified scavenging system in medium-capacity two-stroke engines, aimed at reducing or curbing the emissions from two-stroke engines. The 3-D flow within the cylinder is simulated using computational fluid dynamics with the code Fluent 6. Flow structures in the transfer ports and the exhaust port are predicted, and are well captured both without and with stratification. The total pressure and velocity maps from the computation provide comprehensive information on the scavenging and stratification phenomena. The analysis covers the flow through the transfer ports and the extra port in the transfer port, along with the exhaust port, as the piston moves from top dead center to bottom dead center, with the ports closed, half open, three-fourths open, and fully open. An unstructured mesh is adopted for the geometry created in CATIA software. The flow is simulated by solving the governing equations, namely conservation of mass, momentum and energy, using the SIMPLE algorithm. Turbulence is modeled by the high-Reynolds-number version of the k-ε model. Experimental measurements were made to validate the numerical predictions. Good agreement is observed between the predicted results and the experimental data; the stratification significantly reduced the emissions and improved fuel economy.

  17. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results, is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and we present the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  18. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results, is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and we present the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  19. Developing a Personnel Capacity Indicator for a high turnover Cartographic Production Sector

    Science.gov (United States)

    Mandarino, Flávia; Pessôa, Leonardo A. M.

    2018-05-01

    This paper describes a framework for the development of an indicator for human resources capacity management in a military organization responsible for nautical chart production. A graphic chart of the results of the COPPE-COSENZA model (Cosenza et al., 2015) is used to present personnel capacity within a high-turnover environment. The specific skills required for nautical chart production, together with the turnover rate, call for continuous and adequate personnel incorporation and capacity building through education and on-the-job training. The adopted approach establishes quantitative values for fulfilling quality requirements, and also presents graphically a profile of the human resources on a specific job to facilitate diagnosis and corrective actions.

  20. Assessing institutional capacities to adapt to climate change - integrating psychological dimensions in the Adaptive Capacity Wheel

    Science.gov (United States)

    Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

    2013-03-01

    Several case studies show that "soft social factors" (e.g. institutions, perceptions, social capital) strongly affect social capacities to adapt to climate change. Many soft social factors can probably be changed faster than "hard social factors" (e.g. economic and technological development) and are therefore particularly important for building social capacities. However, there are almost no methodologies for the systematic assessment of soft social factors. Gupta et al. (2010) have developed the Adaptive Capacity Wheel (ACW) for assessing the adaptive capacity of institutions. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, and fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of the realisability and effectiveness of adaptation measures. We applied the extended ACW to assess the adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in north-western Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.

  1. Assessing institutional capacities to adapt to climate change: integrating psychological dimensions in the Adaptive Capacity Wheel

    Science.gov (United States)

    Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

    2013-12-01

    Several case studies show that social factors like institutions, perceptions and social capital strongly affect social capacities to adapt to climate change. Together with economic and technological development they are important for building social capacities. However, there are almost no methodologies for the systematic assessment of social factors. After reviewing existing methodologies we identify the Adaptive Capacity Wheel (ACW) by Gupta et al. (2010), developed for assessing the adaptive capacity of institutions, as the most comprehensive and operationalised framework to assess social factors. The ACW differentiates 22 criteria to assess 6 dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, and fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of the realisability and effectiveness of adaptation measures. We applied the extended ACW to assess the adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.

  2. ECONOMICS OF HUMAN RESOURCES

    Directory of Open Access Journals (Sweden)

    IOANA - JULIETA JOSAN

    2011-04-01

    Full Text Available The purpose of this paper is to analyze human resources in quantitative and qualitative terms, with special focus on the influence of human capital accumulation. The paper examines human resources through human capital accumulation in terms of the modern theory of human resources, educational capital, health, unemployment and migration. The findings presented in this work are based on theoretical economics publications and data collected from research materials. Sources of information include documents from organizations (EUROSTAT, INSSE), studies from publications, books, periodicals, and the Internet. The paper describes and analyzes human resources characteristics, human resource capacities, and the social and economic benefits of human capital accumulation, as well as government plans and policies on health, education and the labor market.

  3. Volunteered Cloud Computing for Disaster Management

    Science.gov (United States)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models, which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however, some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern/trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects ...

  4. Adaptive Capacity Management in Bluetooth Networks

    DEFF Research Database (Denmark)

    Son, L.T.

    ... such as limited wireless bandwidth, routing, scheduling, network control, etc. The current Bluetooth specification does not describe in detail how to implement Quality of Service and Resource Management in Bluetooth protocol stacks. These issues become significant as the number ... of Bluetooth devices increases, a larger-scale ad hoc network (scatternet) is formed, and the booming Internet creates demand for large-bandwidth, low-delay mobile access. This dissertation addresses the capacity management issues in Bluetooth networks. The main goals of the network capacity ... capacity allocation, network traffic control, inter-piconet scheduling, and buffer management. First, after a short presentation of Bluetooth technology and QoS issues, queueing models and a simulation-based buffer management scheme are constructed. Then, by using analysis and simulation, it shows some ...

  5. Cloud Computing:Strategies for Cloud Computing Adoption

    OpenAIRE

    Shimba, Faith

    2010-01-01

    The advent of cloud computing in recent years has sparked an interest from different organisations, institutions and users to take advantage of web applications. This is a result of the new economic model for the Information Technology (IT) department that cloud computing promises. The model promises a shift from an organisation required to invest heavily for limited IT resources that are internally managed, to a model where the organisation can buy or rent resources that are managed by a clo...

  6. Storage capacity of ultrametric committee machines

    International Nuclear Information System (INIS)

    Neirotti, J P

    2014-01-01

    The problem of computing the storage capacity of a feed-forward network, with L hidden layers, N inputs, and K units in the first hidden layer, is analyzed using techniques from statistical mechanics. We found that the storage capacity strongly depends on the network architecture, $\hat{\alpha}_c \sim (\log K)^{1-1/2^L}$, and that the number of units $K$ limits the number of possible hidden layers $L$ through the relationship $2^{L-1} < 2\log K$. (paper)
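
    A quick consequence of the reported bound: for a given K, the admissible depths L can be enumerated directly (the base of the logarithm is assumed natural here, which the abstract does not specify):

        # Largest depth L consistent with the reported bound 2**(L-1) < 2*log(K),
        # assuming the natural logarithm; valid for K >= 2.
        import math

        def max_layers(K: int) -> int:
            L = 1
            while 2 ** L < 2 * math.log(K):  # depth L+1 needs 2**L < 2 log K
                L += 1
            return L

        for K in (10, 100, 10_000, 10**9):
            print(K, max_layers(K))          # 3, 4, 5, 6: depth grows very slowly in K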

  7. Leadership and managerial capacity strengthening for quality ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Success in the implementation of maternal and newborn health interventions at the country level has been generally attributed to three main interlinked factors: leadership and management, resources, and end-user-related factors. Leadership and managerial capacities are critical for transformational change and ...

  8. Information resource management concepts for records managers

    Energy Technology Data Exchange (ETDEWEB)

    Seesing, P.R.

    1992-10-01

    Information Resource Management (IRM) is the label given to the various approaches used to foster greater accountability for the use of computing resources. It is a corporate philosophy that treats information as it would treat its other resources. There is a reorientation from considering simply expenditures to considering the value of the data stored on that hardware. Accountability for computing resources is expanding beyond just the data processing (DP) or management information systems (MIS) manager to include senior organization management and user management. Management's goal for office automation is being refocused from saving money to improving productivity. A model developed by Richard Nolan (1982) illustrates the basic evolution of computer use in organizations. Computer Era: (1) Initiation (computer acquisition), (2) Contagion (intense system development), (3) Control (proliferation of management controls). Data Resource Era: (4) Integration (user service orientation), (5) Data Administration (corporate value of information), (6) Maturity (strategic approach to information technology). The first three stages mark the growth of traditional data processing and management information systems departments. The development of the IRM philosophy in an organization involves the restructuring of the DP organization and new management techniques. The three stages of the Data Resource Era represent the evolution of IRM. This paper examines each of them in greater detail.

  9. Information resource management concepts for records managers

    Energy Technology Data Exchange (ETDEWEB)

    Seesing, P.R.

    1992-10-01

    Information Resource Management (IRM) is the label given to the various approaches used to foster greater accountability for the use of computing resources. It is a corporate philosophy that treats information as it would treat its other resources. There is a reorientation from considering simply expenditures to considering the value of the data stored on that hardware. Accountability for computing resources is expanding beyond just the data processing (DP) or management information systems (MIS) manager to include senior organization management and user management. Management's goal for office automation is being refocused from saving money to improving productivity. A model developed by Richard Nolan (1982) illustrates the basic evolution of computer use in organizations. Computer Era: (1) Initiation (computer acquisition), (2) Contagion (intense system development), (3) Control (proliferation of management controls). Data Resource Era: (4) Integration (user service orientation), (5) Data Administration (corporate value of information), (6) Maturity (strategic approach to information technology). The first three stages mark the growth of traditional data processing and management information systems departments. The development of the IRM philosophy in an organization involves the restructuring of the DP organization and new management techniques. The three stages of the Data Resource Era represent the evolution of IRM. This paper examines each of them in greater detail.

  10. computational chemistry capacity building in an underprivileged ...

    African Journals Online (AJOL)

    dell

    ABSTRACT. Computational chemistry is a fast-developing branch of modern chemistry, focusing on the study of molecules to enable a better understanding of the properties of substances. Its applications span a variety of fields, from drug design to the design of compounds with desired properties (e.g., catalysts with ...

  11. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

    OpenAIRE

    Buyya, Rajkumar; Yeo, Chee Shin; Venugopal, Srikumar

    2008-01-01

    This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents...

  12. Cost-effectiveness Assessment of 5G Systems with Cooperative Radio Resource Sharing

    Directory of Open Access Journals (Sweden)

    V. Nikolikj

    2015-11-01

    Full Text Available Using techno-economic analysis of heterogeneous hierarchical cell structures and the spectral efficiencies of forthcoming advanced radio access technologies, this paper proposes various cost-efficient capacity enlargement strategies, evaluated through the production cost per transferred data unit and the achievable profit margins. For the purpose of maximizing aggregate performance (capacity or profit), we also assess cooperative modes of radio resource sharing between mobile network operators, especially in cases of capacity over-provisioning, where we also determine the principles for providing guaranteed data rates to a particular number of users. The results show that, for heavily loaded office environments, future 5G pico base stations could be a preferable deployment solution. Also, we confirm that radio resource management with dynamic resource allocation can significantly improve the capacity of two comparably loaded operators which share resources and aim to increase their cost effectiveness.

  13. Volunteer Computing for Science Gateways

    OpenAIRE

    Anderson, David

    2017-01-01

    This poster offers information about volunteer computing for science gateways that provide high-throughput computing services. Volunteer computing can be used to obtain computing power, increasing both the visibility of the gateway to the general public and its computing capacity, at little cost.

  14. Workforce capacity to address obesity: a Western Australian cross-sectional study identifies the gap between health priority and human resources needed.

    Science.gov (United States)

    Begley, Andrea; Pollard, Christina Mary

    2016-08-25

    The disease burden due to poor nutrition, physical inactivity and obesity is high and increasing. An adequately sized and skilled workforce is required to respond to this issue. This study describes the public health nutrition and physical activity (NAPA) practice priorities and explores health managers' and practitioners' beliefs regarding workforce capacity to deliver on these priorities. A workforce audit was conducted, including a telephone survey of all managers and a postal survey of practitioners working in the area of NAPA promotion in Western Australia in 2004. Managers gave their perspective on workforce priorities, current competencies and future needs, with a 70 % response rate. Practitioners reported on public health workforce priorities, qualifications and needs, with a 56 % response rate. The top practice priorities for managers were diabetes (35 %), alcohol and other drugs (33 %), and cardiovascular disease (27 %). Obesity (19 %), poor nutrition (15 %) and inadequate physical activity (10 %) were of lower priority. For nutrition, managers identified lack of staff (60.4 %), organisational and management factors (39.5 %) and insufficient financial resources (30.2 %) as the major barriers to adequate service delivery. For physical activity services, insufficient financial resources (41.7 %), staffing (35.4 %) and a lack of specific physical activity service specifications (25.0 %) were the main barriers. Practitioners identified inadequate staffing as the main barrier to service delivery for nutrition (42.3 %) and physical activity (23.3 %). Ideally, managers said they required 152 % more specialist nutritionists and 131 % more physical activity specialists in the workforce to meet health outcomes, in addition to other generalist staff. Human and financial resources and organisational factors were the main barriers to meeting obesity, public health nutrition and physical activity outcomes. Services were being delivered by ...

  15. Carrying capacity in a heterogeneous environment with habitat connectivity.

    Science.gov (United States)

    Zhang, Bo; Kula, Alex; Mack, Keenan M L; Zhai, Lu; Ryce, Arrix L; Ni, Wei-Ming; DeAngelis, Donald L; Van Dyken, J David

    2017-09-01

    A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments. © 2017 John Wiley & Sons Ltd/CNRS.
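
    The classic two-patch version of this prediction can be checked numerically. In the sketch below, growth takes the form n(r_i - n), so patch quality r_i sets both the growth rate and the local equilibrium (one instance of the "relationship between r and K" the authors refer to); parameters are illustrative, not the paper's yeast experiment:

        # Two-patch model with diffusion D: with growth n*(r_i - n), diffusion
        # raises total equilibrium abundance above r1 + r2. Toy parameters only.
        import numpy as np
        from scipy.integrate import solve_ivp

        r = np.array([5.0, 1.0])             # heterogeneous patch qualities
        D = 0.5                              # diffusion (migration) rate

        def dndt(t, n):
            return n * (r - n) + D * (n[::-1] - n)   # growth plus symmetric exchange

        sol = solve_ivp(dndt, (0, 100), [0.1, 0.1], rtol=1e-9)
        n_eq = sol.y[:, -1]
        print(n_eq, "total:", n_eq.sum(), "vs r1+r2 =", r.sum())  # total ~6.5 > 6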

  16. Neural markers of individual and age differences in TVA attention capacity parameters

    DEFF Research Database (Denmark)

    Wiegand, Iris

    2013-01-01

    The ‘Theory of Visual Attention’ (TVA) quantifies an individual's attentional capacity in terms of the parameters visual processing speed C and vSTM storage capacity K. Distinct neural markers of interindividual differences in these functions were identified by combining TVA-based assessment ...

  17. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Wurthwein, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all of the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept and a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
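
    Conceptually, a pilot is a placeholder job that first validates the node it lands on and then pulls real user jobs from a central queue until none remain. The loop below is a minimal sketch of that idea only, not glideinWMS's actual code or protocol:

        # Conceptual pilot-job loop: validate the node, then drain user jobs
        # from a central queue. Illustrative only; not the glideinWMS internals.
        import queue
        import subprocess
        import sys

        def node_is_usable() -> bool:
            # Stand-in validation; a real pilot probes disk, software, network.
            return subprocess.run([sys.executable, "-c", "pass"]).returncode == 0

        def run_pilot(central_queue: "queue.Queue[list[str]]") -> None:
            if not node_is_usable():
                return                       # bad nodes never receive user jobs
            while True:
                try:
                    job_cmd = central_queue.get_nowait()
                except queue.Empty:
                    break                    # queue drained: pilot exits, slot freed
                subprocess.run(job_cmd)      # execute the user job in this slot

        jobs: "queue.Queue[list[str]]" = queue.Queue()
        for i in range(3):
            jobs.put([sys.executable, "-c", f"print('user job {i}')"])
        run_pilot(jobs)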

  18. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  19. Carrying Capacity Model Applied to Coastal Ecotourism of Baluran National Park, Indonesia

    Science.gov (United States)

    Armono, H. D.; Rosyid, D. M.; Nuzula, N. I.

    2017-07-01

    The resources of Baluran National Park have been used for marine and coastal ecotourism. The increasing number of visitors has led to an increase in tourists and related activities. This condition will degrade resources and the welfare of local communities. This research aims to determine the sustainability of coastal ecotourism management by calculating the effective number of tourists who can be accepted. The study uses the concept of tourism carrying capacity, comprising ecological, economic, social and physical carrying capacity. The results of the combined carrying capacity analysis for ecotourism in Baluran National Park show that 3,288 people per day (151,248 tourists per year) is the maximum number of accepted tourists. The current number of tourist arrivals is only 241 people per day (87,990 tourists per year), which is far below the carrying capacity.

  20. When High-Capacity Readers Slow Down and Low-Capacity Readers Speed Up: Working Memory and Locality Effects.

    Science.gov (United States)

    Nicenboim, Bruno; Logačev, Pavel; Gattei, Carolina; Vasishth, Shravan

    2016-01-01

    We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.

  1. When high-capacity readers slow down and low-capacity readers speed up: Working memory and locality effects

    Directory of Open Access Journals (Sweden)

    Bruno eNicenboim

    2016-03-01

    Full Text Available We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers’ working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slow-down produced by increased dependency distance (Gibson, 2000; Lewis & Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.

  2. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    Science.gov (United States)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.

  3. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    Full Text Available The inherently limited processing power and battery lifetime of mobile phones hinder the execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading computationally intensive application parts from the mobile platform into a remote cloud infrastructure or onto nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.
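
    The underlying offloading decision weighs local execution against shipping the input and computing remotely. The sketch below captures that generic trade-off with a toy cost model; it is not the MACS partitioning algorithm, and all parameter names and numbers are assumptions:

        # Generic offloading trade-off: offload when transfer time plus remote
        # execution beats local execution. Toy cost model, illustrative numbers.
        def should_offload(cycles: float, data_bytes: float,
                           local_speed: float = 1e9,     # CPU cycles/s on the phone
                           remote_speed: float = 10e9,   # cycles/s in the cloud
                           bandwidth: float = 1e6) -> bool:  # bytes/s uplink
            t_local = cycles / local_speed
            t_remote = data_bytes / bandwidth + cycles / remote_speed
            return t_remote < t_local

        # A compute-heavy task with a small input benefits from offloading:
        print(should_offload(cycles=5e10, data_bytes=2e6))   # True  (7 s vs 50 s)
        # A light task with a large input does not:
        print(should_offload(cycles=1e8, data_bytes=50e6))   # False (50 s vs 0.1 s)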

  4. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and we find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...

  5. Regional Resource Planning Study

    International Nuclear Information System (INIS)

    2001-01-01

    Natural gas and electricity commodities are among the most volatile commodities in the world. Spurred on by the recent significant increases in the price of natural gas, the BC Utilities Commission initiated an investigation into factors impacting natural gas prices, and into the validity of the Sumas index (a market trading point, or interchange where multiple pipelines interconnect, allowing the purchase and sale of gas among market participants) as a price-setting mechanism. The Commission also sought the opinions and perspectives of the province's natural gas industry regarding the high volatility of Sumas gas prices, and as to what could be done to alleviate the wild fluctuations. Following review of the responses from stakeholders, the Commission issued a directive to BC Gas to undertake discussions on regional resource planning with full representation from all stakeholders. This study is the result of the Commission's directive, and is intended to address the issues contained in the directives. Accordingly, the study examined gas demand in the region; demand growth, including power generation; the natural gas resource balance in the region; the California impacts on demand and on supply to the region; supply shortfalls on a peak-day, seasonal and annual basis; near-term remedies; possible resource additions in the longer term; the economic justification for adding major resources; and proposed actions to develop needed resource additions. The study confirmed the existence of a growing capacity deficit, which limits the supply of natural gas to the region. Near-term options to alleviate the regional capacity deficit were found to be limited to discouraging power generation from serving export markets, demand-side management efforts, and expansion of WEI's systems by 105 mmcf/d. Longer-term solutions would involve larger-scale expansion of WEI's T-South capacity, the BC Gas Inland Pacific Connector Project and the Washington Lateral proposed by ...

  6. 76 FR 41297 - Grant Program To Build Tribal Energy Development Capacity

    Science.gov (United States)

    2011-07-13

    .... Determine what process(es) and/or procedure(s) may be used to eliminate capacity gaps or sustain the ... Ineligible for TEDC Grant Funding: feasibility studies and energy resource assessments; purchase of resource assessment data; research and development of speculative or unproven technologies; purchase or lease of ...

  7. Risk Analysis of Volume Cheat Strategy in a Competitive Capacity Market

    DEFF Research Database (Denmark)

    Feng, Donghan; Xu, Zhao

    2009-01-01

    Capacity markets provide an additional revenue stream for power suppliers. In a combined capacity-energy market environment, suppliers have an incentive to deliberately over-offer their capacities in the capacity market while bidding very high prices in the energy and ancillary markets to avoid operation. ... This paper analyzes the risks and profits of this capacity-over-offer behavior, and develops a method for computing the non-operable penalty level which can prevent it. It is found that the effective penalty level is highly correlated with the stochastic characteristics ... capacity-energy market environment.
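
    The deterrence logic can be sketched as a back-of-envelope expected-value calculation (our simplification, not the paper's stochastic model): over-offering earns the capacity price but risks a penalty whenever the phantom capacity is called and cannot run. All numbers below are illustrative:

        # Expected profit per unit of phantom capacity; the offer is deterred
        # once the expected penalty outweighs the capacity revenue. Toy numbers.
        def over_offer_profit(cap_price: float, penalty: float, p_called: float) -> float:
            return cap_price - p_called * penalty

        cap_price, p_called = 40.0, 0.1          # capacity price; probability of call
        min_penalty = cap_price / p_called       # penalty level removing the incentive
        print(over_offer_profit(cap_price, 300.0, p_called) > 0)   # True: cheating pays
        print(over_offer_profit(cap_price, min_penalty, p_called)) # 0.0: deterred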

  8. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels ...
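
    In the unconstrained baseline, reordering n labelled resources can carry log2(n!) bits by ranking permutations, e.g. via the Lehmer code. The sketch below illustrates only that baseline; the paper's contribution is precisely the constrained case, which this toy does not capture:

        # Toy protocol-coding baseline: an integer in [0, n!) is encoded in the
        # transmission order of n labelled packets via the Lehmer code.
        from math import factorial

        def encode(msg: int, n: int) -> list[int]:
            """Map msg in [0, n!) to a permutation of packet labels 0..n-1."""
            packets = list(range(n))
            perm = []
            for i in range(n - 1, -1, -1):
                d, msg = divmod(msg, factorial(i))
                perm.append(packets.pop(d))
            return perm

        def decode(perm: list[int]) -> int:
            """Recover the integer message from the observed packet order."""
            packets = sorted(perm)
            msg = 0
            for i, p in enumerate(perm):
                d = packets.index(p)
                msg += d * factorial(len(perm) - 1 - i)
                packets.pop(d)
            return msg

        order = encode(1000, 7)          # 7 packets carry log2(7!) ~ 12.3 bits
        assert decode(order) == 1000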

  9. Distributed generation, storage, demand response and energy efficiency as alternatives to grid capacity enhancement

    International Nuclear Information System (INIS)

    Poudineh, Rahmatallah; Jamasb, Tooraj

    2014-01-01

    The need for investment in capital intensive electricity networks is on the rise in many countries. A major advantage of distributed resources is their potential for deferring investments in distribution network capacity. However, utilizing the full benefits of these resources requires addressing several technical, economic and regulatory challenges. A significant barrier pertains to the lack of an efficient market mechanism that enables this concept and is also consistent with the business model of distribution companies under an unbundled power sector paradigm. This paper proposes a market-oriented approach termed the "contract for deferral scheme" (CDS). The scheme outlines how an economically efficient portfolio of distributed generation, storage, demand response and energy efficiency can be integrated as network resources to reduce the need for grid capacity and defer demand-driven network investments. - Highlights: • The paper explores a practical framework for smart electricity distribution grids. • The aim is to defer large capital investments in the network by utilizing and incentivising distributed generation, demand response, energy efficiency and storage as network resources. • The paper discusses a possible new market model that enables integration of distributed resources as an alternative to grid capacity enhancement

  10. A comprehensive measure of the energy resource: Wind power potential (WPP)

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille

    2014-01-01

    Highlights: • A more comprehensive metric is developed to accurately assess the quality of wind resources at a site. • WPP exploits the joint distribution of wind speed and direction, and yields more credible estimates. • WPP investigates the effect of the wind distribution on the optimal net power generation of a farm. • The results show that WPD and WPP follow different trends. - Abstract: Currently, the quality of available wind energy at a site is assessed using the wind power density (WPD). This paper proposes a more comprehensive metric: the wind power potential (WPP). While the former accounts only for wind speed information, the latter exploits the joint distribution of wind speed and wind direction and yields more credible estimates. The WPP captures the effect of the wind velocity distribution on the optimal net power generation of a farm. A joint distribution of wind speed and direction is used to characterize the stochastic variation of wind conditions; two joint distribution methods are adopted in this paper: the bivariate normal distribution and the anisotropic lognormal method. The net power generation for a particular farmland size and installed capacity is maximized for different distributions of wind speed and wind direction, using the Unrestricted Wind Farm Layout Optimization (UWFLO) framework. A response surface is then constructed to represent the computed maximum wind farm capacity factor as a function of the parameters of the wind distribution. Two response surface methods are adopted: (i) adaptive hybrid functions (AHF), and (ii) the quadratic response surface method (QRSM). Hence, for any farm site, we can (i) estimate the parameters of the joint distribution using recorded wind data (for bivariate normal or anisotropic lognormal distributions) and (ii) predict the maximum capacity factor for a specified farm size and capacity using this response surface. The WPP metric is illustrated using recorded wind...
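
    The quadratic response surface (QRSM) step above is easy to illustrate. The sketch below fits a quadratic surrogate that maps two wind-distribution parameters to a capacity factor and then predicts the capacity factor for a new site; the parameter names, training data, and noise level are invented for illustration and are not taken from the UWFLO study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: wind-distribution parameters -> capacity factor.
# Column 0: mean wind speed (m/s); column 1: spread of wind direction (rad).
X = rng.uniform([4.0, 0.5], [12.0, 3.0], size=(40, 2))
cf = 0.20 + 0.03 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0.0, 0.01, 40)

def quad_design(X):
    """Full quadratic design matrix: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(quad_design(X), cf, rcond=None)  # least-squares fit
site = np.array([[8.0, 1.5]])            # a new site's distribution parameters
print("predicted capacity factor:", (quad_design(site) @ beta)[0])
```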

  11. Colonoscopy resource availability and colonoscopy utilization in Ontario, Canada

    Directory of Open Access Journals (Sweden)

    Colleen Webber

    2017-04-01

    The availability of colonoscopy resources improved in Ontario between 2007 and 2013. However, the geographic variation in resource availability and findings that higher colonoscopy resource availability is associated with higher colonoscopy utilization suggest that certain areas of the province may be under-resourced. These areas may be appropriate targets for efforts to improve colonoscopy capacity in Ontario.

  12. Lung diffusing capacity for nitric oxide and carbon monoxide in relation to morphological changes as assessed by computed tomography in patients with cystic fibrosis

    Directory of Open Access Journals (Sweden)

    Nowak Dennis

    2009-06-01

    Background: Due to large-scale destruction, changes in membrane diffusion (Dm) may occur in cystic fibrosis (CF), in correspondence to alterations observed by computed tomography (CT). Dm can be easily quantified via the diffusing capacity for nitric oxide (DLNO), as opposed to the conventional diffusing capacity for carbon monoxide (DLCO). We thus studied the relationship between DLNO as well as DLCO and a CF-specific CT score in patients with stable CF. Methods: Simultaneous single-breath determinations of DLNO and DLCO were performed in 21 CF patients (mean ± SD age 35 ± 9 y, FEV1 66 ± 28% pred). Patients also underwent spirometry and body plethysmography. CT scans were evaluated via the Brody score, and rank correlations (rS) with z-scores of the functional measures were computed. Results: CT scores correlated best with DLNO (rS = -0.83), followed by DLCO (rS = -0.79) and a further diffusing-capacity index (rS = -0.63); z-scores for DLNO were significantly lower than for DLCO. Correlations of CT scores with spirometric (e.g., FEV1, IVC) or body plethysmographic (e.g., sRaw, RV/TLC) indices were weaker than those for DLNO or DLCO, but most of them were also significant. Conclusion: In this cross-sectional study in patients with CF, DLNO and DLCO reflected CT-morphological alterations of the lung better than other measures. Thus the combined diffusing capacity for NO and CO may play a future role in the non-invasive, functional assessment of structural alterations of the lung in CF.

  13. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other, enabling a computer system to exchange data with other computers and to use their resources. With the...

  14. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation-shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole-modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, a short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.

  15. Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology.

    Science.gov (United States)

    Robbins, Reuben N; Mellins, Claude A; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J; Witte, Susan; Stein, Dan J; Remien, Robert H

    2015-06-01

    Multimedia technologies offer powerful tools to increase the capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on antiretroviral therapy (ART) for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5-6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches.

  16. Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology

    Science.gov (United States)

    Robbins, Reuben N.; Mellins, Claude A.; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J.; Witte, Susan; Stein, Dan J.; Remien, Robert H.

    2015-01-01

    Multimedia technologies offer powerful tools to increase capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on ART for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5–6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches. PMID:25566763

  17. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, to meet their huge computing demands. Computing and storage are already provided on the exponentially expanding cluster for several other running physics experiments. (orig.)

  18. Theoretical study of the magnetic heat capacity of praseodymium metal

    International Nuclear Information System (INIS)

    Glenn, R.L.

    1976-01-01

    The heat capacity of praseodymium metal at low temperatures is calculated using a valence-change model. The effect of the presence of a small temperature-dependent and field-dependent percentage of 4+ ions is computed using crystal-field techniques. Good agreement with the experimentally determined values is obtained for polycrystalline and single-crystal praseodymium in zero field and various other fields up to 30 kOe. In addition, the effects of selected exchange models on the heat capacity and susceptibility are computed. The model is shown to be compatible with both the parallel and perpendicular susceptibilities.
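
    The crystal-field contribution computed in such models is essentially a Schottky heat capacity: in units of k_B, C = (⟨E²⟩ − ⟨E⟩²) / T², with Boltzmann averages taken over the discrete level scheme and energies expressed as E/k_B. A minimal sketch with an invented two-level splitting (the real praseodymium spectrum has more levels, with field-dependent energies):

```python
import numpy as np

def schottky_heat_capacity(levels_K, T):
    """Heat capacity per ion in units of k_B for discrete crystal-field
    levels: C = (<E^2> - <E>^2) / T^2, with energies given as E/k_B in kelvin."""
    E = np.asarray(levels_K, dtype=float)[:, None]
    w = np.exp(-E / T)                 # Boltzmann factors, shape (levels, temps)
    Z = w.sum(axis=0)                  # partition function
    e1 = (E * w).sum(axis=0) / Z       # <E>
    e2 = (E ** 2 * w).sum(axis=0) / Z  # <E^2>
    return (e2 - e1 ** 2) / T ** 2

T = np.linspace(1.0, 50.0, 500)
C = schottky_heat_capacity([0.0, 20.0], T)   # hypothetical 20 K splitting
print(f"peak C = {C.max():.3f} k_B at T = {T[C.argmax()]:.1f} K")  # ~0.44 k_B
```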

  19. Switching Brains: Cloud-based Intelligent Resources Management for the Internet of Cognitive Things

    Directory of Open Access Journals (Sweden)

    R. Francisco

    2014-05-01

    Cognitive technologies can bring important benefits to our everyday life, enabling connected devices to do tasks that in the past only humans could do, leading to the Cognitive Internet of Things. Wireless Sensor and Actuator Networks (WSAN) are often employed for communication between Internet objects. However, WSANs face some problems, namely the sensors' energy and CPU load consumption, which are common to other networked devices, such as mobile devices or robotic platforms. Additionally, cognitive functionalities often require large processing power for running machine learning algorithms, computer vision processing, or behavioral and emotional architectures. The cloud's massive storage capacity, large processing speeds and elasticity are appropriate for addressing these problems. This paper proposes a middleware that transfers flows of execution between devices and the cloud for computationally demanding applications (such as those integrating a robotic brain), to efficiently manage devices' resources.

  20. Supply security and short-run capacity markets for electricity

    International Nuclear Information System (INIS)

    Creti, Anna; Fabra, Natalia

    2007-01-01

    The creation of electricity markets has raised the fundamental question as to whether markets create the right incentives for the provision of the reserves needed to maintain supply security in the short-run, or whether some form of regulation is required. In some states in the US, electricity distributors have been made responsible for providing such reserves by contracting capacity in excess of their forecasted peak demand. The so-called Installed Capacity Markets provide one means of contracting reserves, and are the subject of this paper. Under monopoly as well as under perfect competition, we identify firms' short-run opportunity costs of committing resources in the capacity market and the costs of inducing full capacity commitment. The long-run investment problem is not considered. From a welfare viewpoint, we also compare the desirability of providing reserves either through capacity markets or through the demand side (i.e. power curtailments). At the optimum, capacity obligations equal peak demand (plus expected outages) and the capacity deficiency rate (which serves as a price cap) is set at firms' opportunity costs of providing full capacity commitment. (Author)

  1. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    Energy Technology Data Exchange (ETDEWEB)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V. [Institute of Informatics Problems, Russian Academy of Sciences (Russian Federation); Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S. [Telecommunication Systems Department, Peoples’ Friendship University of Russia (Russian Federation)

    2015-03-10

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, so they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.

  2. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    International Nuclear Information System (INIS)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-01-01

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, so they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
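
    The hysteresis idea in these two records can be shown with a toy discrete-time simulation (not the papers' analytic queuing model): servers are added only when the queue exceeds an upper threshold and removed only below a lower one, so the dead band between the thresholds absorbs momentary load fluctuations. All rates and thresholds below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

LO, HI = 20, 60          # hysteresis thresholds on queue length (illustrative)
S_MIN, S_MAX = 1, 10     # allowed number of active servers
RATE = 5                 # jobs served per server per time step
servers, queue, history = 2, 0, []

for step in range(10_000):
    queue += rng.poisson(8)                   # stochastic arrivals
    queue = max(0, queue - servers * RATE)    # service
    if queue > HI and servers < S_MAX:
        servers += 1                          # scale out above the upper threshold
    elif queue < LO and servers > S_MIN:
        servers -= 1                          # scale in below the lower threshold
    history.append((queue, servers))          # inside the band: do nothing

q, s = np.array(history).T
print(f"mean queue {q.mean():.1f}, mean active servers {s.mean():.2f}")
```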

  3. Exact capacity analysis of multihop transmission over amplify-and-forward relay fading channels

    KAUST Repository

    Yilmaz, Ferkan; Kucur, Oǧuz; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we propose an analytical framework for the exact computation of the average capacity of multihop transmission over amplify-and-forward relay fading channels. Our approach relies on the algebraic combination of Mellin and Laplace transforms to obtain exact single-integral expressions which can be easily computed by the Gauss-Chebyshev Quadrature (GCQ) rule. As such, the derived results are a convenient tool for analyzing the average capacity of multihop transmission over amplify-and-forward relay fading channels. As an application of the framework, some examples are presented for generalized Nakagami-m fading channels. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. ©2010 IEEE.

  4. Exact capacity analysis of multihop transmission over amplify-and-forward relay fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2010-09-01

    In this paper, we propose an analytical framework for the exact computation of the average capacity of multihop transmission over amplify-and-forward relay fading channels. Our approach relies on the algebraic combination of Mellin and Laplace transforms to obtain exact single-integral expressions which can be easily computed by the Gauss-Chebyshev Quadrature (GCQ) rule. As such, the derived results are a convenient tool for analyzing the average capacity of multihop transmission over amplify-and-forward relay fading channels. As an application of the framework, some examples are presented for generalized Nakagami-m fading channels. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. ©2010 IEEE.
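
    For the single-hop Rayleigh special case, the GCQ evaluation is easy to reproduce, since the ergodic capacity also has the closed form e^{1/g} E1(1/g) / ln 2 for mean SNR g, giving an independent check. A sketch (the node count and mean SNR are arbitrary choices, and this is not the multihop formulation of the paper):

```python
import numpy as np
from scipy.special import exp1

def avg_capacity_gcq(mean_snr, n=64):
    """E[log2(1 + g)], g ~ Exp(mean_snr), via the Gauss-Chebyshev rule.
    g = (1 + x) / (1 - x) maps the Chebyshev nodes x in (-1, 1) onto (0, inf)."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))     # GCQ abscissas
    g = (1 + x) / (1 - x)                         # SNR sample points
    jac = 2.0 / (1 - x) ** 2                      # dg/dx
    pdf = np.exp(-g / mean_snr) / mean_snr        # exponential SNR density
    f = np.log2(1 + g) * pdf * jac                # integrand in x
    return (np.pi / n) * np.sum(f * np.sqrt(1 - x ** 2))

mean_snr = 10.0                                   # linear scale, i.e. 10 dB
exact = np.exp(1 / mean_snr) * exp1(1 / mean_snr) / np.log(2)
print(avg_capacity_gcq(mean_snr), exact)          # the two should agree closely
```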

  5. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  6. High capacity photonic integrated switching circuits

    NARCIS (Netherlands)

    Albores Mejia, A.

    2011-01-01

    As the demand for high-capacity data transfer keeps increasing in high performance computing and in a broader range of system area networking environments; reconfiguring the strained networks at ever faster speeds with larger volumes of traffic has become a huge challenge. Formidable bottlenecks

  7. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest-quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor the battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter transmitting videos directly to a TV audience with varying video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.

  8. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    Science.gov (United States)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.

  9. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Science.gov (United States)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  10. A resource management architecture for metacomputing systems.

    Energy Technology Data Exchange (ETDEWEB)

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.

  11. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    Science.gov (United States)

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  12. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is one of Information and Communication Technology (ICT), and a number of research efforts are ongoing in Cloud Computing and Mobile Cloud Computing, addressing security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing model are resource sharing and pooling among end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  13. Canadian gas resource

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    Canadian exports of gas to the United States are a critical component of EMF-9 (North American Gas Supplies). However, it has been noted that there are differences between US expectations for imports and Canadian forecasts of export supply capacity. Recent studies by the National Petroleum Council (NPC) and the US Department of Energy (DOE) indicate that 1.8 to 2.4 Tcf of imports may be required in the mid-to-late 1990s; a recent study by Canada's National Energy Board (NEB) indicates that the conventional resource base may not be able to support continued gas exports to the US after the mid-1990s and that frontier sources would need to be developed to meet US expectations. The discrepancies between US expectations and Canadian estimates of capacity are of great concern to US policymakers because they call into question the availability of secure supplies of natural gas and suggest that the cost of imports (if available) will be high. By implication, if shortages are to be averted, massive investment may be required to bring these higher-cost sources to market. Since the long-term supply picture will be determined by the underlying resource base, EMF-9 participants have been asked to provide estimates of critical components of the Canadian resource base. This paper provides a summary of ICF-Lewin's recent investigation of both the conventional and tight gas resource in Canada's Western Sedimentary Basin, which includes both quantitative estimates and a brief sketch of the analysis methodology.

  14. Using Multiple Seasonal Holt-Winters Exponential Smoothing to Predict Cloud Resource Provisioning

    OpenAIRE

    Ashraf A. Shahin

    2016-01-01

    Elasticity is one of the key features of cloud computing that attract many SaaS providers seeking to minimize the cost of their services. Cost is minimized by automatically provisioning and releasing computational resources depending on actual computational needs. However, the delay in starting up new virtual resources can cause Service Level Agreement violations. Consequently, predicting cloud resource provisioning has gained a lot of attention as a way to scale computational resources in advance. However, most current approac...
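
    As a simplified stand-in for the multiple seasonal variant studied here, the sketch below fits a single-seasonality additive Holt-Winters model to a synthetic hourly workload and provisions capacity against the forecast peak. The workload shape, seasonal period, and 20% headroom factor are assumptions for illustration:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                       # two weeks of hourly history
# synthetic demand with a daily cycle plus noise (hypothetical workload)
demand = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

fit = ExponentialSmoothing(demand, trend="add", seasonal="add",
                           seasonal_periods=24).fit()
forecast = fit.forecast(24)                      # next day's expected demand
capacity = 1.2 * forecast.max()                  # provision 20% headroom
print(f"pre-provision for ~{int(np.ceil(capacity))} request units/hour")
```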

  15. Multiple governance and fisheries commons: Investigating the performance of local capacities in rural Bangladesh

    Directory of Open Access Journals (Sweden)

    Abdullah Al Mamun

    2016-03-01

    This study presents a post-facto evaluation of the local capacity development processes used under co-management of fisheries and other resources in southern Bangladesh. It answers the question of how supportive the capacity development tools used in implementing co-management were. An 18-month study was conducted, and six cases were investigated to understand the approaches used by co-management programs to develop local capacity. Founded in pragmatism and viewing co-management through a governance lens, a comparative case study method was used that combined both qualitative and quantitative research approaches for data collection and subsequent analysis. This study provides empirical evidence that co-management programs have applied a number of strategies (e.g. human resource and economic development) to enhance local capacities. However, these strategies have achieved mixed results with regard to developing governance that supports livelihoods. Training provided to develop human resources and economic capacity was not useful for fishers, or had little lasting effect on fisheries development, due to poor monitoring and a disconnection from the needs of local users. This study concludes that co-management can facilitate local capacity, but in order to realize the full potential of this approach we must address the issues of inappropriate technologies for training, the financial barriers to fishers with low cash income, and uneven power relationships among stakeholders, to create an enabling environment for effective modern governance of the fisheries commons. Our findings indicate that a needs-based approach to capacity building is required in order to support the livelihoods of local users through co-management.

  16. Building Capacity for Protected Area Management in Lao PDR

    Science.gov (United States)

    Rao, Madhu; Johnson, Arlyne; Spence, Kelly; Sypasong, Ahnsany; Bynum, Nora; Sterling, Eleanor; Phimminith, Thavy; Praxaysombath, Bounthob

    2014-04-01

    Declining biodiversity in protected areas in Laos is attributed to unsustainable exploitation of natural resources. At a basic level, an important need is to develop capacity in academic and professional training institutions to provide relevant training to conservation professionals. The paper (a) describes the capacity building approach undertaken to achieve this goal, (b) evaluates the effectiveness of the approach in building capacity for implementing conservation and (c) reviews implementation outcomes. Strong linkages between organizations implementing field conservation, professional training institutions, and relevant Government agencies are central to enhancing the effectiveness of capacity building initiatives aimed at improving the practice of conservation. Protected area management technical capacity needs will need to directly influence curriculum design to ensure both the relevance and the effectiveness of training in improving protected area management. Sustainability of capacity building initiatives is largely dependent on the level of interest and commitment by host-country institutions within a supportive Government policy framework, in addition to the engagement of organizations implementing conservation.

  17. Primer on gas integrated resource planning

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, C.; Comnes, G.A.; Busch, J.; Wiel, S. [Lawrence Berkeley Lab., CA (United States)

    1993-12-01

    This report discusses the following topics: gas resource planning: need for IRP; gas integrated resource planning: methods and models; supply and capacity planning for gas utilities; methods for estimating gas avoided costs; economic analysis of gas utility DSM programs: benefit-cost tests; gas DSM technologies and programs; end-use fuel substitution; and financial aspects of gas demand-side management programs.

  18. A summary of resource theories from a behavioral perspective

    NARCIS (Netherlands)

    Sanders, A.F.

    1997-01-01

    This paper presents a concise review of the development of limited capacity metaphors for explaining performance limits. Capacity views have been mainly inspired by limited capacity computers of the sixties and seventies, which are now replaced by parallel distributed processors with a virtually

  19. A summary of resource theories from a behavioral perspective

    NARCIS (Netherlands)

    Sanders, A.F.

    1996-01-01

    This paper presents a concise review of the development of limited capacity metaphors for explaining performance limits. Capacity views have been mainly inspired by limited capacity computers of the sixties and seventies, which are now replaced by parallel distributed processors with a virtually

  20. 1993 Pacific Northwest Loads and Resources Study.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1993-12-01

    The Loads and Resources Study is presented in three documents: (1) this summary of Federal system and Pacific Northwest region loads and resources; (2) a technical appendix detailing forecasted Pacific Northwest economic trends and loads; and (3) a technical appendix detailing the loads and resources for each major Pacific Northwest generating utility. In this loads and resources study, resource availability is compared with a range of forecasted electricity consumption. The forecasted future electricity demands -- firm loads -- are subtracted from the projected capability of existing and "contracted for" resources to determine whether Bonneville Power Administration (BPA) and the region will be surplus or deficit. If resources are greater than loads in any particular year or month, there is a surplus of energy and/or capacity, which BPA can sell to increase revenues. Conversely, if firm loads exceed available resources, there is a deficit of energy and/or capacity, and additional conservation, contract purchases, or generating resources will be needed to meet load growth. The Pacific Northwest Loads and Resources Study analyzes the Pacific Northwest's projected loads and available generating resources in two parts: (1) the loads and resources of the Federal system, for which BPA is the marketing agency; and (2) the larger Pacific Northwest regional power system, which includes loads and resources in addition to the Federal system. The loads and resources analysis in this study simulates the operation of the power system under the Pacific Northwest Coordination Agreement (PNCA) produced by the Pacific Northwest Coordinating Group. This study presents the Federal system and regional analyses for five load forecasts: high, medium-high, medium, medium-low, and low. This analysis projects the yearly average energy consumption and resource availability for Operating Years (OY) 1994-95 through 2003-04.

  1. SOCR: Statistics Online Computational Resource

    OpenAIRE

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...

  2. Options on capacity imbalance

    International Nuclear Information System (INIS)

    Roggen, M.

    2002-01-01

    Since the start of this year, the Dutch energy company Nuon has been using a computer system to formulate real-time responses to national capacity imbalances in the electricity supply market. The work earns Nuon a fixed fee from TenneT (Dutch Transmission System Operator) and ensures a more stable imbalance price for everyone. The key to success has been the decision to start the project from scratch.

  3. An Adaptive Procedure for Task Scheduling Optimization in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Pham Phuoc Hung

    2015-01-01

    Nowadays, mobile cloud computing (MCC) has emerged as a new paradigm which enables offloading computation-intensive, resource-consuming tasks to a powerful computing platform in the cloud, leaving only simple jobs to capacity-limited thin-client devices such as smartphones, tablets, Apple's iWatch, and Google Glass. However, it still faces many challenges due to the inherent problems of thin clients, especially slow processing and poor network connectivity. So far, a number of research studies have been carried out trying to eliminate these problems, yet few have proved efficient. In this paper, we present an enhanced architecture that takes advantage of the collaboration of thin clients and conventional desktop or laptop computers, known as thick clients, aiming particularly at improving cloud access. Additionally, we introduce an innovative genetic approach for task scheduling such that the processing time is minimized while network contention and cloud cost are taken into account. Our simulation shows that the proposed approach is more cost-effective and achieves better performance compared with others.
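
    A genetic approach to such task scheduling can be sketched compactly: the chromosome is a task-to-machine assignment vector and the fitness is the makespan. The task costs, machine speeds, and GA settings below are invented, and the paper's actual formulation additionally accounts for network contention and cloud cost:

```python
import numpy as np

rng = np.random.default_rng(3)
tasks = rng.uniform(1.0, 10.0, 20)       # hypothetical task compute costs
speeds = np.array([1.0, 1.0, 4.0])       # two thin clients and one cloud node

def makespan(assign):
    """Completion time of the slowest machine under a given assignment."""
    loads = np.zeros(speeds.size)
    for cost, m in zip(tasks, assign):
        loads[m] += cost / speeds[m]
    return loads.max()

pop = rng.integers(0, speeds.size, (40, tasks.size))   # random initial population
for _ in range(200):                                   # simple generational GA
    pop = pop[np.argsort([makespan(ind) for ind in pop])]
    elite = pop[:10]                                   # keep the 10 best
    children = []
    while len(children) < 30:
        a, b = elite[rng.integers(0, 10, 2)]           # pick two elite parents
        cut = rng.integers(1, tasks.size)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        mut = rng.random(tasks.size) < 0.05            # per-gene mutation
        child[mut] = rng.integers(0, speeds.size, mut.sum())
        children.append(child)
    pop = np.vstack([elite, children])

print("best makespan found:", round(makespan(pop[0]), 2))
```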

  4. Estimation of economic parameters of U.S. hydropower resources

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2003-06-01

    Tools for estimating the cost of developing and operating and maintaining hydropower resources in the form of regression curves were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”

  5. The Fermilab computing farms in 1997

    International Nuclear Information System (INIS)

    Wolbers, S.

    1998-01-01

    The farms in 1997 went through a variety of changes. First, the farms expansion, begun in 1996, was completed. This boosted the computing capacity to something like 20,000 MIPS (where a MIP is a unit defined by running a program, TINY, on the machine and comparing the machine performance to a VAX 11/780). In SpecInt92, it would probably rate close to 40,000. The use of the farms was not all that large. The fixed target experiments were not generally in full production in 1997, but spent time tuning up code. Other users processed on the farms, but tended to come and go and not saturate the resource. Some of the old farms were retired, saving the lab money on maintenance and saving the farms support staff effort

  6. Evaluating the capacity value of wind power considering transmission and operational constraints

    International Nuclear Information System (INIS)

    Gil, Esteban; Aravena, Ignacio

    2014-01-01

    Highlights: • Discussion of power system adequacy and the capacity value of wind power. • A method for estimating the capacity value of wind power is proposed. • Monte Carlo simulation is used to consider transmission and operational constraints. • Application of the method to the Chilean Northern Interconnected System (SING). - Abstract: This paper presents a method for estimating the capacity value of wind considering transmission and operational constraints. The method starts by calculating a metric for system adequacy by repeatedly simulating market operations in a Monte Carlo scheme that accounts for forced generator outages, wind resource variability, and operational conditions. A capacity value calculation that uses the simulation results is then proposed, and its application to the Chilean Northern Interconnected System (SING) is discussed. A comparison of the capacity value of two different types of wind farms is performed using the proposed method, and the results are compared with the method currently used in Chile and the method recommended by the IEEE. The method proposed in the paper captures the contribution of variable generation resources to power system adequacy more accurately than the method currently employed in the SING, and proved capable of taking transmission and operational constraints into account.

  7. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

    The National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  8. Sustainable Development of Research Capacity in West Africa

    Science.gov (United States)

    Liebe, J. R.; Rogmann, A.; Falk, U.; Nyarko, B. K.; Amisigo, B.; Barry, B.; Vlek, P. L.

    2010-12-01

    In West Africa, the management and efficient use of natural resources is becoming ever more important. This is largely due to steeply increasing demand through population growth and economic development, and to the effects of greater uncertainty due to climate and environmental change. Developing research capacity in these countries is an essential step in enabling them to assess their natural resources independently and to develop national strategies and policies to manage their natural resources in the light of growing demand and increasing climatic uncertainty. The project “Sustainable Development of Research Capacity in West Africa based on the GLOWA Volta Project” (SDRC) is an 18-month project, funded by the German Ministry of Education and Research, to strengthen research capacity in West Africa. The SDRC is based on three columns: I. knowledge transfer and strengthening of human capacity; II. strengthening of infrastructural research capacity; and III. strengthening of institutional capacity. The SDRC makes use of the wide range of research results and decision support tools developed in the GLOWA Volta Project (GVP), a nine-year, interdisciplinary research project (2000-2009) with a regional focus on the Volta Basin. The tools and models that have been transferred and trained in the framework of the GVP and SDRC cover a range of topics, such as modeling the onset of the rainy season, hydrological, economic and hydro-economic modeling, GIS and remote sensing, and the training of database managers, to name a few. Infrastructural capacity is developed through the transfer of a micro-meteorological research network to the Meteorological Service of Burkina Faso, the joint operation of a tele-transmitted hydrological gauging network with the Hydrological Service of Ghana, and the provision of hardware and software capacity to use the trained models. At the center of the SDRC effort is the strengthening of the Volta Basin Authority, a newly established river basin...

  9. Evaluation of Marine Resource Carrying Capacity in the Construction of Qingdao Blue Economy Zone

    Institute of Scientific and Technical Information of China (English)

    李京梅; 许玲

    2013-01-01

    From the standpoints of marine resource supply and marine industry demand, this article builds a comprehensive evaluation indicator system and measures the marine resource carrying capacity of Qingdao from 2001 to 2010 using the fuzzy comprehensive evaluation method. The results show that the development of the marine industry was beyond the marine resource carrying capacity from 2001 to 2006 and in 2008, and was within the carrying capacity in the remaining three years. The construction of the seaports has increased the supply capability of marine resources to some extent, but the pollution caused by traditional marine industries is still large and is a major cause of the poor performance of the marine resource carrying capacity. It is suggested that traditional aquaculture should be reformed, and that environment-friendly industries like recreational fishery and tourism should be developed in the process of constructing the blue economy zone.

  10. Absorptive capacity and smart companies

    Directory of Open Access Journals (Sweden)

    Patricia Moro González

    2014-12-01

    Purpose: The current competitive environment is substantially modifying organizations’ learning processes due to a global increase in available information that can be transformed into knowledge. This opportunity has been exploited since the nineties by the tools of “Business Analytics” and “Business Intelligence”; nevertheless, integrating these tools into the study of the new organizational capacities involved in creating intelligence inside organizations is still an outstanding task. Reviewing the concept of absorptive capacity and studying it in detail from the perspective of this new reality is the main objective of this paper. Design/methodology/approach: By comparing classical absorptive capacity with absorptive capacity from the point of view of information management tools in each of the three stages of the organizational learning cycle, some gaps in the former are overcome. The academic and bibliographical references provided in this paper were obtained from the ISI Web of Knowledge, Scopus and Dialnet databases, supporting the state of the art on absorptive capacity, and thereafter filtered by “Business Intelligence” and “Business Analytics”. Specialized websites and business schools’ publications have also been included, covering the information management tools currently used in strategic consulting. Findings: Our contribution to the literature is the development of “smart absorptive capacity”. This is a new capacity emerging from the reformulation of the classical concept of absorptive capacity, emphasizing some aspects of its definition that might have been omitted. The result of this new approach is the creation of a new Theoretical Model of Organizational Intelligence, which aims to explain, within the framework of the Resources and Capabilities Theory, the competitive advantage achieved by the so-called smart companies.

  11. Patient mix optimisation for inpatient planning with multiple resources

    NARCIS (Netherlands)

    Vissers, J.M.H.; Adan, I.J.B.F.; Dellaert, N.P.; Jeunet, J.; Bekkers, J.A.; Tanfani, E.; Testi, A.

    2012-01-01

    This contribution addresses the planning of admissions of surgical patients requiring different resources, such as beds and nursing capacity at wards, operating rooms and operating theatre personnel at an operating theatre, and intensive care beds and intensive care nursing capacity at an intensive care unit.

  12. Assessment of Stochastic Capacity Consumption in Railway Networks

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex; Nielsen, Otto Anker

    2015-01-01

    The railway industry continuously strives to reduce costs and utilise resources optimally. Thus, there is a demand for tools that can quickly and efficiently provide decision-makers with solutions that help them achieve their goals. In strategic planning of capacity, this translates...

  13. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate of the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009), including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of the oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient...

  14. Strategic process strengthens and orientates the regional R&D capacity

    Energy Technology Data Exchange (ETDEWEB)

    Knuuttila, Kirsi [JAMK Univ. of Applied Sciences, Jyvaeskylae (Finland)], e-mail: kirsi.knuuttila@jamk.fi; Krissakova, Ingrid [National Forest Centre-Forest Research Inst., Zvolen (Slovakia); Barbena, Goizeder [Centro Nacional de Energias Renovables/ Departamento de Biomasa, CENER, Pamplona (Spain); Hryniewicz, Marek [Inst. of Technology and Life Sciences, ITP, Raszyn (Poland); Ketikidis, Chrysovalantis [Centre for Research and Technology Hellas, CERTH/ Inst. for Solid Fuels Technology and Applications, Thessaloniki (Greece); Wihersaari, Margareta [Jyvaeskylae Univ., Jyvaeskylae (Finland)

    2012-11-01

    The use of biomass and competition for biomass resources are rapidly increasing in Europe due to the environmental advantages this energy source offers, including climate change mitigation. Central Finland, Navarra (Spain), Western Macedonia (Greece), Slovakia and Wielkopolska (Poland) have taken a joint initiative to strengthen regional expertise, cooperation capacities and the innovation environment in the field of sustainable use of biomass resources. The initiative to develop the regional research-driven clusters is supported by the BIOCLUS project (www.bioclus.eu), co-financed by the FP7 Regions of Knowledge Programme. The biomass research orientated clusters have built up Regional Strategic R&D Agendas (SRA) and Joint Action Plans (JAP) based on the SRAs. The starting point for an SRA is a comprehensive understanding of regional biomass resources. The SRA and JAP process orientates and strengthens research activities and capacity building in the selected research fields related to the sustainable use of biomass. The agenda supports expertise development and cooperation in the regional research-driven cluster, identifies the focus of future research activities, and supports the authorities in directing the use of human and financial resources.

  15. Capacity expansion model of wind power generation based on ELCC

    Science.gov (United States)

    Yuan, Bo; Zong, Jin; Wu, Shengyu

    2018-02-01

    Capacity expansion is an indispensable prerequisite for power system planning and construction. A reasonable, efficient and accurate capacity expansion model (CEM) is crucial to power system planning. In most current CEMs, the capacity of wind power generation is treated as a boundary condition instead of a decision variable, which may lead to curtailment or to overbuilding of flexible resources, especially in high renewable energy penetration scenarios. This paper proposes a wind power generation capacity value (CV) calculation method based on effective load-carrying capability (ELCC), and a CEM that co-optimizes wind power generation and conventional power sources. Wind power generation is treated as a decision variable in this model, and the model accurately reflects the uncertain nature of wind power.
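
    The abstract does not give the authors' exact procedure, but the standard ELCC calculation finds the constant load increase a renewable fleet can carry while keeping reliability at its original level. The following is a stylized, deterministic Python sketch of that idea using synthetic hourly data; a production study would use probabilistic generator-outage models rather than a fixed conventional capacity, and all numbers below are illustrative assumptions.

```python
import numpy as np

def lole(load, capacity):
    """Loss-of-load expectation: number of hours where load exceeds capacity."""
    return np.sum(load > capacity)

def elcc(load, conv_capacity, wind_profile, step=1.0):
    """Effective load-carrying capability (MW) of a wind profile, found by
    raising the load until the original reliability level is just restored."""
    base_lole = lole(load, conv_capacity)
    delta = 0.0
    while lole(load + delta, conv_capacity + wind_profile) <= base_lole:
        delta += step
    return delta - step  # last load increase that kept reliability intact

rng = np.random.default_rng(0)
hours = 8760
load = 800 + 150 * rng.random(hours)  # MW, synthetic hourly load
wind = 100 * rng.beta(2, 5, hours)    # MW, synthetic hourly wind output

print(f"ELCC of the wind fleet: {elcc(load, 900.0, wind):.0f} MW")
```

    The ELCC is typically well below the wind fleet's nameplate capacity, which is exactly why treating wind as a decision variable with an explicit CV matters in a CEM.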

  16. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high-performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  17. Mutual research capacity strengthening: a qualitative study of two-way partnerships in public health research

    Directory of Open Access Journals (Sweden)

    Redman-MacLaren Michelle

    2012-12-01

    Introduction: Capacity building has been employed in the international health and development sectors to describe the process of 'experts' from more-resourced countries training people in less-resourced countries. The concept thus carries an implicit power imbalance based on 'expert' knowledge. In 2011, a health research strengthening workshop was undertaken at Atoifi Adventist Hospital, Solomon Islands, to further strengthen the research skills of the Hospital and College of Nursing staff and East Kwaio community leaders through partnering in practical research projects. The workshop was based on participatory research frameworks underpinned by decolonising methodologies, which sought to challenge historical power imbalances and inequities. Our research question was, "Is research capacity strengthening a two-way process?"

    Methods: In this qualitative study, five Solomon Islanders and five Australians each responded to four open-ended questions about their experience of the research capacity strengthening workshop and activities; five chose a face-to-face interview and five chose to provide written responses. Written responses and interview transcripts were inductively analysed in NVivo 9.

    Results: Six major themes emerged: respectful relationships; increased knowledge of and experience with the research process; participation at all stages of the research process; contribution to public health action; support for and sustainment of research opportunities; and managing the challenges of capacity strengthening. All researchers identified benefits for themselves, their institution and/or community, regardless of their role or country of origin, indicating that the capacity strengthening had been a two-way process.

    Conclusions: The flexible and responsive process we used to strengthen research capacity was identified as mutually beneficial. Using community-based participatory frameworks underpinned by decolonising methodologies is assisting to redress...

  18. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and to compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computing the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey, and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRT units, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculating staff requirements due to anticipated changes in future radiotherapy practice has been proposed; this model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting Switzerland's future radiotherapy needs.
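
    A simplified sketch of this kind of projection follows: resource needs are driven by cancer incidence times the radiotherapy utilization rate, divided by the number of patients each resource can handle per year. The throughput benchmarks below are illustrative QUARTS/IAEA-style values assumed for the example; the study's exact coefficients are not given in the abstract.

```python
import math

# Illustrative QUARTS/IAEA-style annual throughput benchmarks (assumptions).
PATIENTS_PER_TRT = 450  # patients treated per teleradiotherapy unit per year
PATIENTS_PER_RO  = 250  # patients per radiation oncologist per year
PATIENTS_PER_MP  = 500  # patients per medical physicist per year
PATIENTS_PER_RTT = 110  # patients per radiotherapy technologist per year

def required_resources(cancer_incidence, rtu_rate):
    """Derive radiotherapy resource needs from incidence and utilization rate."""
    rt_patients = cancer_incidence * rtu_rate
    return {
        "patients": round(rt_patients),
        "TRT": math.ceil(rt_patients / PATIENTS_PER_TRT),
        "RO":  math.ceil(rt_patients / PATIENTS_PER_RO),
        "MP":  math.ceil(rt_patients / PATIENTS_PER_MP),
        "RTT": math.ceil(rt_patients / PATIENTS_PER_RTT),
    }

# 2020 projection from the abstract: 34,041 of 50,427 patients need radiotherapy.
print(required_resources(50_427, rtu_rate=34_041 / 50_427))
```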

  19. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Computed tomography (CT) of the sinuses ...

  20. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    Science.gov (United States)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge that has resulted from new instrumentation gathering data at greater rates and higher resolutions. Combined, the new computational environments should enable the earth sciences to be transformed.

    However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments and will also need to scale as the compute systems grow.

    Data access will have to improve as well: the earth science community needs to move from the file discovery, display and local download paradigm to self-describing data cubes and data arrays that are available as online resources, either from major data repositories or in the cloud. In this transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying...
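
    As a toy illustration of the self-describing, sensor-agnostic data cube idea, the sketch below builds a small labelled cube with xarray (a commonly used Python library for labelled arrays, chosen here as an assumption rather than named by the authors) and slices across region, sensors and time in a single expression, with entirely synthetic data.

```python
import numpy as np
import xarray as xr

# Toy sensor-agnostic data cube: calibrated reflectance indexed by time,
# latitude, longitude and sensor, so analysis slices across sensors rather
# than handling scene files one by one. All values are synthetic.
rng = np.random.default_rng(42)
cube = xr.DataArray(
    rng.random((4, 10, 10, 2)),
    dims=("time", "lat", "lon", "sensor"),
    coords={
        "time": ["2011-01", "2011-04", "2011-07", "2011-10"],
        "lat": np.linspace(-35.0, -34.1, 10),
        "lon": np.linspace(149.0, 149.9, 10),
        "sensor": ["landsat5", "modis"],
    },
    name="reflectance",
)

# One expression selects a sub-region and averages across both sensors and
# all time steps, with no per-scene file discovery or download.
regional_mean = cube.sel(lat=slice(-35.0, -34.5)).mean(dim=("sensor", "time"))
print(regional_mean.shape)  # (lat subset, lon) grid of mean reflectance
```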